UL Procyon AI Image Generation Benchmark
Benchmark GPU AI image generation performance
The Procyon AI Image Generation Benchmark provides a consistent, accurate, and understandable workload for measuring the inference performance of powerful on-device AI accelerators such as high-end discrete GPUs. This benchmark was developed in partnership with multiple key industry members to ensure it produces fair and comparable results across all supported hardware.
The benchmark includes two tests for measuring the performance of both mid-range and high-end discrete graphics cards. We also hope to add more AI image generation tests in the future to support other performance categories. The Stable Diffusion XL (FP16) test is our most demanding AI inference workload, and only the latest high-end GPUs meet the minimum requirements to run it. For moderately powerful discrete GPUs, we recommend the Stable Diffusion 1.5 (FP16) test.
The UL Procyon AI Image Generation Benchmark can be configured to use a selection of different inference engines, and by default uses the recommended optimal inference engine for the system’s hardware.
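The engine-selection behavior described above can be sketched in a few lines. This is an illustrative assumption only, not UL Procyon's actual selection logic: the function name and the vendor-to-engine mapping are inferred from the engines the benchmark supports (TensorRT, OpenVINO, and ONNX Runtime with DirectML).

```python
# Illustrative sketch only: how a benchmark might pick a default
# inference engine from the GPU vendor. NOT UL Procyon's actual logic;
# the mapping below is an assumption based on the supported engines.

def recommended_engine(gpu_vendor: str) -> str:
    """Map a GPU vendor to a plausible default inference engine."""
    vendor = gpu_vendor.strip().lower()
    if vendor == "nvidia":
        return "TensorRT"   # NVIDIA's optimized inference runtime
    if vendor == "intel":
        return "OpenVINO"   # Intel's inference toolkit
    # Vendor-neutral fallback path on Windows
    return "ONNX Runtime with DirectML"

print(recommended_engine("NVIDIA"))  # TensorRT
```

In practice the benchmark picks the optimal engine automatically, and the selection can be overridden in its settings.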
Features
- Heavy test built around an image generation workload, using state-of-the-art neural networks.
- Designed to measure inference performance of powerful AI accelerators, such as discrete GPUs.
- Benchmark with NVIDIA® TensorRT™, Intel® OpenVINO™, and ONNX with DirectML.
- Verify inference engine implementation and compatibility.
- Simple to set up and use via the UL Procyon application or via command-line.
- Test with multiple versions of the Stable Diffusion AI model.
- Compare up to 4 results side-by-side in the app.
Inference Engine Performance
With the UL Procyon AI Image Generation Benchmark, you can measure the performance of dedicated AI processing hardware and verify inference engine implementation quality with tests based on a heavy AI image generation workload.
Designed for Professionals
We created our UL Procyon AI Inference Benchmarks for engineering teams who need independent, standardized tools for assessing the general AI performance of inference engine implementations and dedicated hardware.
Fast and easy to use
The benchmark is easy to install and run—no complicated configuration is required. Run the benchmark using the UL Procyon application or via command-line. View benchmark scores and charts or export detailed result files for further analysis.
Developed with industry expertise
UL Procyon benchmarks are designed for industry, enterprise, and press use, with tests and features created specifically for professional users. The UL Procyon AI Image Generation Benchmark was designed and developed with industry partners through the UL Benchmark Development Program (BDP). The BDP is an initiative from UL Solutions that aims to create relevant and impartial benchmarks by working in close cooperation with program members.
Benchmark details
Stable Diffusion, released in 2022, made AI text-to-image generation on consumer hardware accessible to everyday users. Given its ease of access, wide usage, and creative aspect, text-to-image generation quickly became one of the most memorable AI use cases for the public.
The AI Image Generation Benchmark uses a set of standardized text prompts for a reliable and consistent AI image generation workload. Results provide an overall score for easy comparison, as well as detailed sub-scores and the generated images for closer inspection of performance and quality.
By default, the benchmark generates 16 images in batches, with the batch size depending on the Stable Diffusion version: the Stable Diffusion 1.5 test generates 512x512 images with a batch size of 4, while the heavier Stable Diffusion XL test generates 1024x1024 images with a batch size of 1.
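The default workload sizes above can be checked with a little arithmetic. The image counts and batch sizes come straight from the text; the rest is computed:

```python
# Worked example of the default workload sizes stated above.
# The numbers come from the page; everything else is arithmetic.

TOTAL_IMAGES = 16

tests = {
    # test name: (image side in pixels, batch size)
    "Stable Diffusion 1.5": (512, 4),
    "Stable Diffusion XL": (1024, 1),
}

for name, (side, batch_size) in tests.items():
    batches = TOTAL_IMAGES // batch_size
    pixels_per_batch = side * side * batch_size
    print(f"{name}: {batches} batches, "
          f"{pixels_per_batch:,} pixels per batch")
```

Note that both tests generate the same number of pixels per batch (512 x 512 x 4 = 1024 x 1024 x 1); the XL test is heavier because its model is much larger, not because each batch covers more pixels.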
Results and insights
Benchmark scores
Compare AI inference performance with two different versions of the Stable Diffusion model.
Detailed scores
Inspect generated images, and get detailed scores for each image generation batch.
Hardware monitoring
Get detailed metrics on how CPU and GPU temperatures, clock speeds, and usage change during the benchmark run.
Site license
- Annual site license for the UL Procyon AI Image Generation Benchmark.
- Unlimited number of users.
- Unlimited number of devices.
- Priority support via email and telephone.
Free trial and press licenses are also available on request.
Benchmark Development Program
The Benchmark Development Program™ is an initiative from UL Solutions for building partnerships with technology companies.
OEMs, ODMs, component manufacturers and their suppliers are invited to join us in developing new AI processing benchmarks. Please contact us for details.
Minimum system requirements
| Component | Minimum requirement |
|---|---|
| OS | Windows 10, 64-bit, or Windows 11 |
| Processor | 2 GHz dual-core CPU |
| Memory | 16 GB RAM |
| Storage | 20 GB (75 GB recommended) |
Stable Diffusion XL requirements
| Inference engine | Minimum VRAM |
|---|---|
| TensorRT | 10 GB |
| OpenVINO | 8 GB |
| ONNX Runtime | 16 GB |
Stable Diffusion 1.5 requirements
| GPU type | Minimum memory |
|---|---|
| Discrete GPU | 8 GB VRAM |
| Integrated GPU | 32 GB system RAM |
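The Stable Diffusion XL minimums in the table above can be encoded as a quick eligibility check. The VRAM values are taken from this page; the helper function itself is just an illustration, not part of the benchmark:

```python
# Minimal sketch: encode the SDXL VRAM minimums from the table above
# and check a discrete GPU against them. Values come from this page;
# the function is illustrative only.

SDXL_MIN_VRAM_GB = {
    "TensorRT": 10,
    "OpenVINO": 8,
    "ONNX Runtime": 16,
}

def can_run_sdxl(engine: str, vram_gb: int) -> bool:
    """True if a discrete GPU has enough VRAM for the SDXL test."""
    return vram_gb >= SDXL_MIN_VRAM_GB[engine]

print(can_run_sdxl("TensorRT", 12))      # True: 12 GB >= 10 GB
print(can_run_sdxl("ONNX Runtime", 12))  # False: needs 16 GB
```

Note that the minimum differs by inference engine, so the same 12 GB card may qualify under TensorRT but not under ONNX Runtime.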
Support
Latest version: 1.0.95 (June 6, 2024)
Languages
- English
- German
- Japanese
- Portuguese (Brazil)
- Simplified Chinese
- Spanish