Opcode/Instruction | Op/En | 64/32-bit Mode Support | CPUID Feature Flag | Description |
---|---|---|---|---|
F3 0F 5E /r DIVSS xmm1, xmm2/m32 | A | V/V | SSE | Divide low single precision floating-point value in xmm1 by low single precision floating-point value in xmm2/m32. |
VEX.LIG.F3.0F.WIG 5E /r VDIVSS xmm1, xmm2, xmm3/m32 | B | V/V | AVX | Divide low single precision floating-point value in xmm2 by low single precision floating-point value in xmm3/m32. |
EVEX.LLIG.F3.0F.W0 5E /r VDIVSS xmm1 {k1}{z}, xmm2, xmm3/m32{er} | C | V/V | AVX512F | Divide low single precision floating-point value in xmm2 by low single precision floating-point value in xmm3/m32. |
Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4 |
---|---|---|---|---|---|
A | N/A | ModRM:reg (r, w) | ModRM:r/m (r) | N/A | N/A |
B | N/A | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A |
C | Tuple1 Scalar | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A |
Divides the low single precision floating-point value in the first source operand by the low single precision floating-point value in the second source operand, and stores the single precision floating-point result in the destination operand. The second source operand can be an XMM register or a 32-bit memory location.
128-bit Legacy SSE version: The first source operand and the destination operand are the same. Bits (MAXVL-1:32) of the corresponding YMM destination register remain unchanged.
VEX.128 encoded version: The first source operand is an xmm register encoded by VEX.vvvv. The three high-order doublewords of the destination operand are copied from the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.
EVEX.128 encoded version: The first source operand is an xmm register encoded by EVEX.vvvv. The doubleword elements of the destination operand at bits 127:32 are copied from the first source operand. Bits (MAXVL-1:128) of the destination register are zeroed.
EVEX version: The low doubleword element of the destination is updated according to the writemask.
Software should ensure VDIVSS is encoded with VEX.L=0. Encoding VDIVSS with VEX.L=1 may encounter unpredictable behavior across different processor generations.
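As a concrete illustration of the scalar semantics described above, the short C sketch below uses the legacy _mm_div_ss intrinsic (which typically maps to DIVSS): only the low element is divided, while the upper three elements of the result come from the first operand. This is a minimal sketch assuming an SSE-capable x86 compiler such as gcc or clang; it is not part of the instruction definition.

```c
/*
 * Minimal sketch of the scalar divide behavior, assuming an SSE-capable
 * x86 compiler (gcc/clang). _mm_div_ss typically maps to DIVSS: only
 * element 0 is divided; elements 1..3 of the result come from the first
 * operand.
 */
#include <stdio.h>
#include <xmmintrin.h>

int main(void) {
    __m128 a = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f); /* elements (low..high): 10, 20, 30, 40 */
    __m128 b = _mm_set_ps(4.0f, 3.0f, 2.0f, 5.0f);      /* elements (low..high): 5, 2, 3, 4 */
    __m128 r = _mm_div_ss(a, b);                        /* r[0] = 10/5; r[1..3] copied from a */

    float out[4];
    _mm_storeu_ps(out, r);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]); /* expected: 2 20 30 40 */
    return 0;
}
```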
Operation

VDIVSS (EVEX Encoded Version)
IF (EVEX.b = 1) AND SRC2 *is a register*
    THEN SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
    ELSE SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
FI;
IF k1[0] or *no writemask*
    THEN DEST[31:0] := SRC1[31:0] / SRC2[31:0]
    ELSE
        IF *merging-masking* ; merging-masking
            THEN *DEST[31:0] remains unchanged*
            ELSE ; zeroing-masking
                THEN DEST[31:0] := 0
        FI;
FI;
DEST[127:32] := SRC1[127:32]
DEST[MAXVL-1:128] := 0
VDIVSS (VEX.128 Encoded Version)
DEST[31:0] := SRC1[31:0] / SRC2[31:0]
DEST[127:32] := SRC1[127:32]
DEST[MAXVL-1:128] := 0
DIVSS (128-bit Legacy SSE Version)
DEST[31:0] := DEST[31:0] / SRC[31:0]
DEST[MAXVL-1:32] (Unmodified)
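For readers who find the pseudocode above terse, the following plain-C sketch models the EVEX writemask handling (merging vs. zeroing) for the low element. The names xmm_t and vdivss_model are hypothetical illustrations, not part of any API, and rounding-mode selection (EVEX.RC vs. MXCSR.RC) is not modeled.

```c
/*
 * Plain-C model of the EVEX writemask handling for the low element.
 * The names xmm_t and vdivss_model are hypothetical; rounding-mode
 * selection (EVEX.RC vs. MXCSR.RC) is not modeled.
 */
typedef struct { float e[4]; } xmm_t;    /* models one 128-bit XMM register */

static xmm_t vdivss_model(xmm_t old_dest, xmm_t src1, xmm_t src2,
                          unsigned k1, int zeroing) {
    xmm_t dest;

    if (k1 & 1u) {                       /* k1[0] set, or "no writemask" (k1 = all ones) */
        dest.e[0] = src1.e[0] / src2.e[0];
    } else if (zeroing) {                /* zeroing-masking: low element becomes 0 */
        dest.e[0] = 0.0f;
    } else {                             /* merging-masking: low element of DEST unchanged */
        dest.e[0] = old_dest.e[0];
    }

    for (int i = 1; i < 4; i++)          /* DEST[127:32] := SRC1[127:32] */
        dest.e[i] = src1.e[i];

    return dest;                         /* DEST[MAXVL-1:128] := 0 lies outside this 128-bit model */
}
```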
Intel C/C++ Compiler Intrinsic Equivalent

VDIVSS __m128 _mm_mask_div_ss(__m128 s, __mmask8 k, __m128 a, __m128 b);
VDIVSS __m128 _mm_maskz_div_ss( __mmask8 k, __m128 a, __m128 b);
VDIVSS __m128 _mm_div_round_ss( __m128 a, __m128 b, int);
VDIVSS __m128 _mm_mask_div_round_ss(__m128 s, __mmask8 k, __m128 a, __m128 b, int);
VDIVSS __m128 _mm_maskz_div_round_ss( __mmask8 k, __m128 a, __m128 b, int);
DIVSS __m128 _mm_div_ss(__m128 a, __m128 b);
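A brief usage sketch of the masked form follows, assuming an AVX-512F target (e.g. compiled with -mavx512f). When mask bit 0 is clear, the low element of the result is merged from the s operand instead of being computed.

```c
/*
 * Usage sketch for the masked scalar divide, assuming an AVX-512F target
 * (e.g. gcc/clang with -mavx512f). With mask bit 0 clear, the low element
 * of the result is merged from s instead of being computed.
 */
#include <stdio.h>
#include <immintrin.h>

int main(void) {
    __m128 s = _mm_set_ps(0.0f, 0.0f, 0.0f, -1.0f); /* low element serves as the merge source */
    __m128 a = _mm_set_ps(8.0f, 6.0f, 4.0f, 12.0f);  /* low element 12 is the dividend */
    __m128 b = _mm_set_ss(3.0f);                     /* low element 3 is the divisor */

    __m128 hit  = _mm_mask_div_ss(s, 1, a, b);       /* low = 12/3 = 4;  upper copied from a */
    __m128 miss = _mm_mask_div_ss(s, 0, a, b);       /* low = s[0] = -1; upper copied from a */

    printf("%g %g\n", _mm_cvtss_f32(hit), _mm_cvtss_f32(miss)); /* expected: 4 -1 */
    return 0;
}
```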
SIMD Floating-Point Exceptions

Overflow, Underflow, Invalid, Divide-by-Zero, Precision, Denormal.
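These exceptions are reported through the sticky flag bits in MXCSR. The hedged sketch below shows one way to observe the Divide-by-Zero flag after a scalar divide using the standard xmmintrin.h helpers; the volatile inputs keep an aggressive optimizer from folding the divide away at compile time.

```c
/*
 * Sketch of observing the Divide-by-Zero flag after a scalar divide,
 * using the MXCSR exception-state helpers from xmmintrin.h. The flags
 * are sticky, so clear them first.
 */
#include <stdio.h>
#include <xmmintrin.h>

int main(void) {
    _MM_SET_EXCEPTION_STATE(0);                  /* clear sticky exception flags in MXCSR */

    volatile float num = 1.0f, den = 0.0f;
    __m128 r = _mm_div_ss(_mm_set_ss(num), _mm_set_ss(den)); /* 1.0 / 0.0 -> +inf, sets #Z */

    unsigned flags = _MM_GET_EXCEPTION_STATE();
    printf("result = %g, divide-by-zero flag: %s\n", _mm_cvtss_f32(r),
           (flags & _MM_EXCEPT_DIV_ZERO) ? "set" : "clear");
    return 0;
}
```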
Other Exceptions

VEX-encoded instructions, see Table 2-20, “Type 3 Class Exception Conditions.”
EVEX-encoded instructions, see Table 2-47, “Type E3 Class Exception Conditions.”