Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description |
---|---|---|---|---|
66 0F 59 /r MULPD xmm1, xmm2/m128 | A | V/V | SSE2 | Multiply packed double precision floating-point values in xmm2/m128 with xmm1 and store result in xmm1. |
VEX.128.66.0F.WIG 59 /r VMULPD xmm1, xmm2, xmm3/m128 | B | V/V | AVX | Multiply packed double precision floating-point values in xmm3/m128 with xmm2 and store result in xmm1. |
VEX.256.66.0F.WIG 59 /r VMULPD ymm1, ymm2, ymm3/m256 | B | V/V | AVX | Multiply packed double precision floating-point values in ymm3/m256 with ymm2 and store result in ymm1. |
EVEX.128.66.0F.W1 59 /r VMULPD xmm1 {k1}{z}, xmm2, xmm3/m128/m64bcst | C | V/V | AVX512VL AVX512F | Multiply packed double precision floating-point values from xmm3/m128/m64bcst to xmm2 and store result in xmm1. |
EVEX.256.66.0F.W1 59 /r VMULPD ymm1 {k1}{z}, ymm2, ymm3/m256/m64bcst | C | V/V | AVX512VL AVX512F | Multiply packed double precision floating-point values from ymm3/m256/m64bcst to ymm2 and store result in ymm1. |
EVEX.512.66.0F.W1 59 /r VMULPD zmm1 {k1}{z}, zmm2, zmm3/m512/m64bcst{er} | C | V/V | AVX512F | Multiply packed double precision floating-point values in zmm3/m512/m64bcst with zmm2 and store result in zmm1. |
Instruction Operand Encoding

Op/En | Tuple Type | Operand 1 | Operand 2 | Operand 3 | Operand 4 |
---|---|---|---|---|---|
A | N/A | ModRM:reg (r, w) | ModRM:r/m (r) | N/A | N/A |
B | N/A | ModRM:reg (w) | VEX.vvvv (r) | ModRM:r/m (r) | N/A |
C | Full | ModRM:reg (w) | EVEX.vvvv (r) | ModRM:r/m (r) | N/A |
Description

Multiply packed double precision floating-point values from the first source operand with corresponding values in the second source operand, and store the packed double precision floating-point results in the destination operand.
EVEX encoded versions: The first source operand (the second operand) is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcasted from a 64-bit memory location. The destination operand is a ZMM/YMM/XMM register conditionally updated with writemask k1.
VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register. Bits (MAXVL-1:256) of the corresponding destination ZMM register are zeroed.
VEX.128 encoded version: The first source operand is an XMM register. The second source operand can be an XMM register or a 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding destination ZMM register are zeroed.
128-bit Legacy SSE version: The second source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register, and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified.
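For illustration, a minimal C sketch of the 128-bit elementwise behavior using the _mm_mul_pd intrinsic listed below (variable names and data are illustrative only; compile with -msse2 on GCC/Clang):

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m128d a = _mm_set_pd(3.0, 2.0);   /* lanes (low, high) = 2.0, 3.0 */
    __m128d b = _mm_set_pd(0.5, 4.0);   /* lanes (low, high) = 4.0, 0.5 */
    __m128d p = _mm_mul_pd(a, b);       /* MULPD: lanes = 8.0, 1.5 */

    double out[2];
    _mm_storeu_pd(out, p);
    printf("%g %g\n", out[0], out[1]);  /* prints: 8 1.5 */
    return 0;
}
```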
Operation

VMULPD (EVEX Encoded Versions)

(KL, VL) = (2, 128), (4, 256), (8, 512)
IF (VL = 512) AND (EVEX.b = 1) AND SRC2 *is a register*
    THEN SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(EVEX.RC);
    ELSE SET_ROUNDING_MODE_FOR_THIS_INSTRUCTION(MXCSR.RC);
FI;
FOR j := 0 TO KL-1
    i := j * 64
    IF k1[j] OR *no writemask*
        THEN
            IF (EVEX.b = 1) AND (SRC2 *is memory*)
                THEN DEST[i+63:i] := SRC1[i+63:i] * SRC2[63:0]
                ELSE DEST[i+63:i] := SRC1[i+63:i] * SRC2[i+63:i]
            FI;
        ELSE
            IF *merging-masking* ; merging-masking
                THEN *DEST[i+63:i] remains unchanged*
                ELSE ; zeroing-masking
                    DEST[i+63:i] := 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] := 0
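A C sketch of the two masking branches in this pseudocode, using the AVX-512 intrinsics listed below (the mask value 0xA5 and all data are arbitrary examples; requires AVX512F hardware, compile with -mavx512f):

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m512d src = _mm512_set1_pd(-1.0);  /* merge source s */
    __m512d a   = _mm512_set1_pd(3.0);
    __m512d b   = _mm512_set1_pd(2.0);
    __mmask8 k  = 0xA5;                  /* lanes 0, 2, 5, 7 active */

    /* Merging-masking: inactive lanes keep the value from src. */
    __m512d merged = _mm512_mask_mul_pd(src, k, a, b);
    /* Zeroing-masking: inactive lanes are set to 0.0. */
    __m512d zeroed = _mm512_maskz_mul_pd(k, a, b);

    double m[8], z[8];
    _mm512_storeu_pd(m, merged);
    _mm512_storeu_pd(z, zeroed);
    for (int i = 0; i < 8; i++)
        printf("lane %d: merged=%5.1f zeroed=%5.1f\n", i, m[i], z[i]);
    return 0;
}
```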
VMULPD (VEX.256 Encoded Version)

DEST[63:0] := SRC1[63:0] * SRC2[63:0]
DEST[127:64] := SRC1[127:64] * SRC2[127:64]
DEST[191:128] := SRC1[191:128] * SRC2[191:128]
DEST[255:192] := SRC1[255:192] * SRC2[255:192]
DEST[MAXVL-1:256] := 0

VMULPD (VEX.128 Encoded Version)

DEST[63:0] := SRC1[63:0] * SRC2[63:0]
DEST[127:64] := SRC1[127:64] * SRC2[127:64]
DEST[MAXVL-1:128] := 0
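A similar sketch of the four-lane VEX.256 form via the _mm256_mul_pd intrinsic listed below (values are arbitrary; requires AVX, compile with -mavx):

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    __m256d a = _mm256_set_pd(4.0, 3.0, 2.0, 1.0);  /* lanes 0..3 = 1, 2, 3, 4   */
    __m256d b = _mm256_set_pd(0.5, 2.0, 3.0, 8.0);  /* lanes 0..3 = 8, 3, 2, 0.5 */
    __m256d p = _mm256_mul_pd(a, b);                /* lanes 0..3 = 8, 6, 6, 2   */

    double out[4];
    _mm256_storeu_pd(out, p);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```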
MULPD (128-bit Legacy SSE Version)
DEST[63:0] := DEST[63:0] * SRC[63:0]
DEST[127:64] := DEST[127:64] * SRC[127:64]
DEST[MAXVL-1:128] (Unmodified)
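As a plain-C restatement of this destructive two-operand form (a reference model, not the instruction itself), note that the destination doubles as the first source, so its original values are overwritten:

```c
/* Scalar reference model of the legacy SSE form above. */
void mulpd_ref(double dest[2], const double src[2]) {
    dest[0] = dest[0] * src[0];  /* DEST[63:0]   := DEST[63:0]   * SRC[63:0]   */
    dest[1] = dest[1] * src[1];  /* DEST[127:64] := DEST[127:64] * SRC[127:64] */
}
```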
Intel C/C++ Compiler Intrinsic Equivalent

VMULPD __m512d _mm512_mul_pd(__m512d a, __m512d b);
VMULPD __m512d _mm512_mask_mul_pd(__m512d s, __mmask8 k, __m512d a, __m512d b);
VMULPD __m512d _mm512_maskz_mul_pd(__mmask8 k, __m512d a, __m512d b);
VMULPD __m512d _mm512_mul_round_pd(__m512d a, __m512d b, int);
VMULPD __m512d _mm512_mask_mul_round_pd(__m512d s, __mmask8 k, __m512d a, __m512d b, int);
VMULPD __m512d _mm512_maskz_mul_round_pd(__mmask8 k, __m512d a, __m512d b, int);
VMULPD __m256d _mm256_mul_pd (__m256d a, __m256d b);
MULPD __m128d _mm_mul_pd (__m128d a, __m128d b);
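One plausible use of the _round_ variants above, which correspond to the EVEX.512 {er} form where the rounding mode is encoded in the instruction rather than taken from MXCSR.RC (the helper name here is hypothetical; requires AVX512F, compile with -mavx512f):

```c
#include <immintrin.h>

/* Hypothetical helper: multiply with round-toward-zero, independent of
 * the current MXCSR.RC setting. _MM_FROUND_NO_EXC must accompany an
 * explicit rounding-mode override. */
__m512d mul_toward_zero(__m512d a, __m512d b) {
    return _mm512_mul_round_pd(a, b, _MM_FROUND_TO_ZERO | _MM_FROUND_NO_EXC);
}
```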
SIMD Floating-Point Exceptions

Overflow, Underflow, Invalid, Precision, Denormal.
Other Exceptions

Non-EVEX-encoded instruction, see Table 2-19, “Type 2 Class Exception Conditions.”
EVEX-encoded instruction, see Table 2-46, “Type E2 Class Exception Conditions.”