
Technology

Three athletes nominated to compete for Canada in Para canoe at Paris 2024 Paralympic Games


– Hennessy, Scarff, and St-Pierre set for second Paralympic appearances

Paris 2024 Paralympic Games take place August 28 to September 8

OTTAWA, ON, June 24, 2024 /CNW/ – Three athletes – Brianna Hennessy, Erica Scarff, and Mathieu St-Pierre – have been nominated to represent Canada in the sport of Para canoe at the Paris 2024 Paralympic Games, the Canadian Paralympic Committee and Canoe Kayak Canada announced Monday.  

Paris 2024 Canadian Paralympic Team – PARA CANOE

Brianna Hennessy – Ottawa, ON
Erica Scarff – Mississauga, ON
Mathieu St-Pierre – Shawinigan, QC

Paris will be the second Paralympic Games appearance for all three athletes. Hennessy and St-Pierre competed at Tokyo 2020, while Scarff, who was 20 at the time, was part of the first-ever Para canoe roster to compete at the Games when the sport made its Paralympic debut in 2016.

“I am so excited for my second Paralympic Games,” said Hennessy. “Our team has been working extremely hard to become a medal potential for these Games. I hope that we can make our country proud and bring home some shiny hardware! I can’t wait to wear the Canadian flag with the utmost pride!”

Both Hennessy and Scarff reached the podium at the 2023 Para Canoe World Championships, earning Paralympic quota spots for Canada. Hennessy took silver in the women’s VL2 200m and bronze in the KL1 200m, while Scarff finished second in the women’s VL3 200m.

At the 2024 worlds last month, Hennessy secured another silver medal in the VL2 event while Scarff was fourth in the VL3.

“We are in our final push towards Paris and there is lots of preparation still ahead,” said Scarff. “I am looking forward to showcasing our hard work and enjoying our Paralympic moment!”

A final Paralympic quota spot for the nation was earned by St-Pierre at the 2024 worlds following a seventh-place finish in the men’s VL2 200m.

“For an athlete, participating in the Paralympic Games is the pinnacle of their career,” said St-Pierre. “It is truly an honour to be able to compete for your country and to give everything in order to bring back a medal. It is also an opportunity to show all those who have made sacrifices for us that with their support we can achieve the ultimate goal!”

Para canoe is contested in two different boats – K classes race in kayaks while V classes race in va’a, an outrigger canoe with a support float. In Paris, Hennessy will compete in both the women’s VL2 and KL1 200m events, while Scarff will feature in the women’s VL3 200m race and St-Pierre in the men’s VL2 200m.

Para canoe events will take place September 6-8 at Vaires-sur-Marne Nautical Stadium in Paris. Heats will be held on day one, with semifinals and finals running September 7 and 8.

This will be the third Games at which Para canoe is on the Paralympic program, following the sport's debut in 2016. Canada is still looking for its first medal in the sport.

“A huge congratulations to Brianna, Erica, and Mathieu on their nomination to compete at this summer’s Paralympic Games,” said Josh Vander Vies, co-chef de mission, Paris 2024 Canadian Paralympic Team. “All three are experienced racers, and it has been so exciting to follow their international success over the past few years leading into the Games. I know they will be ready for Paris.”

“We are so thrilled to welcome three outstanding athletes in Brianna, Erica, and Mathieu to the Canadian Paralympic Team,” said Karolina Wisniewska, co-chef de mission, Paris 2024 Canadian Paralympic Team. “Para canoe is still a fairly new sport at the Paralympic Games, and I cannot wait to see it in person. I’ll be there to cheer them on!”

The Paris 2024 Paralympic Games will take place August 28 to September 8 in Paris, France. Canada is expecting to send a team of approximately 140 athletes.

All nominations are subject to approval by the Canadian Paralympic Committee before athletes are officially named to the Canadian Paralympic Team. The current list of nominated athletes can be found HERE. The approved final roster will be announced closer to the start of the Games.

About the Canadian Paralympic Committee: Paralympic.ca 

About Canoe Kayak Canada: Canoekayak.ca

SOURCE Canadian Paralympic Committee (Sponsorships)


Technology

Blue Owl, Chirisa Technology Parks and PowerHouse Data Centers Announce Next Phase of $5 Billion Joint Venture Development Partnership


CHESTERFIELD, Va., May 27, 2025 /PRNewswire/ — Blue Owl Capital managed funds (“Blue Owl”), Chirisa Technology Parks (“CTP”), and PowerHouse Data Centers (“PowerHouse”) today announced the closing of a $750 million transaction in their landmark joint venture development partnership.  The partnership was launched in August 2024, with capacity to deploy up to $5 billion of capital for turnkey AI/HPC data center developments supporting CoreWeave and other hyperscale and enterprise data center customers.

Construction under the program at CTP’s 350-acre campus in Chesterfield, VA started in 2024 for an initial 120MW of new critical facilities, with delivery scheduled in 2025 and 2026. The new facilities are cornerstone developments in CoreWeave’s rapidly scaling infrastructure footprint. CoreWeave is one of the fastest-growing cloud infrastructure providers for AI workloads.

The CTP campus features cutting-edge design standards, purpose-built to support dense GPU clusters and other advanced computing technologies required for large-scale artificial intelligence customers.  The campus integrates CTP’s proprietary ‘direct-on-chip’ liquid cooling design, which is almost twice as energy efficient as traditional air-cooled systems. This innovative cooling solution reduces energy consumption and underpins  environmentally responsible hyperscale data center operations.  CTP and PowerHouse are committed to delivering high-performance infrastructure that aligns with sustainability goals.   
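As a rough, back-of-the-envelope illustration of what “almost twice as energy efficient” can mean for cooling overhead, the sketch below compares annual cooling energy for an air-cooled facility against a direct-on-chip liquid-cooled one. The cooling-overhead ratios and electricity price are placeholder assumptions for the example, not figures published by CTP, PowerHouse, or Blue Owl.

```python
# Illustrative comparison of cooling energy for air-cooled vs. direct-on-chip
# liquid cooling. All numbers below are assumptions for the sake of example,
# not published figures from CTP, PowerHouse, or Blue Owl.

IT_LOAD_MW = 120                 # critical IT load, as in the initial Chesterfield phase
HOURS_PER_YEAR = 8760

# Assumed cooling overhead as a fraction of IT load:
AIR_COOLED_OVERHEAD = 0.40       # e.g. roughly PUE 1.4 from cooling alone
LIQUID_COOLED_OVERHEAD = 0.20    # about half, per the "almost twice as efficient" claim

ELECTRICITY_PRICE_PER_MWH = 70.0  # assumed $/MWh

def annual_cooling(it_load_mw, overhead_fraction):
    """Return annual cooling energy (MWh) and cost ($) for a given overhead fraction."""
    cooling_mwh = it_load_mw * overhead_fraction * HOURS_PER_YEAR
    return cooling_mwh, cooling_mwh * ELECTRICITY_PRICE_PER_MWH

air_mwh, air_cost = annual_cooling(IT_LOAD_MW, AIR_COOLED_OVERHEAD)
liq_mwh, liq_cost = annual_cooling(IT_LOAD_MW, LIQUID_COOLED_OVERHEAD)

print(f"Air-cooled:    {air_mwh:,.0f} MWh/yr (~${air_cost:,.0f})")
print(f"Liquid-cooled: {liq_mwh:,.0f} MWh/yr (~${liq_cost:,.0f})")
print(f"Estimated savings: {air_mwh - liq_mwh:,.0f} MWh/yr (~${air_cost - liq_cost:,.0f})")
```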

“This next stage of the partnership between Blue Owl, CTP and PowerHouse represents another groundbreaking transaction focused on rapidly delivering large-scale capacity for the AI ecosystem,” said Lee Hayes, President & CEO of CTP.

Doug Fleit, CEO and Co-founder of PowerHouse, continued, “This closing underscores our partnership’s continuity and commitment to building scalable, sustainable, and high-performance digital campuses that support the deployment of AI infrastructure.”

Located in Chesterfield, VA, the campus facility offers robust access to power, fiber, and a favorable regulatory environment. The project emphasizes sustainability and innovation, integrating advanced cooling systems, high-density compute design, and long-term power procurement strategies aligned with renewable energy goals.

“This is another pivotal milestone in our $5 billion strategic partnership with CTP and PowerHouse,” said Marc Zahr, Global Head of Real Assets at Blue Owl. “With the closing of this second $750 million tranche, we’re delivering on our vision to create foundational infrastructure for the next generation of AI-native cloud companies like CoreWeave.”

The venture combines PowerHouse and CTP’s deep expertise in development, construction and operations, and Blue Owl’s financial strength. Together, the consortium is well-positioned to support hyperscale deployments with unmatched speed, efficiency, and scale.

About Blue Owl
Blue Owl (NYSE: OWL) is a leading asset manager that is redefining alternatives®. With $273 billion in assets under management as of March 31, 2025, we invest across three multi-strategy platforms: Credit, GP Strategic Capital, and Real Assets. Anchored by a strong permanent capital base, we provide businesses with private capital solutions to drive long-term growth and offer institutional investors, individual investors, and insurance companies differentiated alternative investment opportunities that aim to deliver strong performance, risk-adjusted returns, and capital preservation. Together with over 1,200 experienced professionals globally, Blue Owl brings the vision and discipline to create the exceptional. To learn more, visit www.blueowl.com

About Chirisa Technology Parks
Chirisa Technology Parks is focused on the rapid delivery of high-performance, leading-edge facilities to support hyperscale, HPC and AI customers across North America and Europe. With over 25 years of experience in Digital Infrastructure, CTP and its predecessors have developed, owned, and operated over 40 data center assets focused on large-scale enterprise and hyperscale deployments. CTP currently offers over 500,000 SF of purpose-built data center capacity, with a pipeline in excess of 1.6 GW under development in the USA. CTP’s bespoke designs and rapid delivery process are focused on high-efficiency, leading-edge deployments. CTP has broad capability to partner and/or operate critical facilities, offering a strong track record in build-to-suit hyperscale powered shells, HPC and AI-focused high-density deployments, turnkey data center solutions for Cloud customers, and bespoke edge deployments in major metropolitan areas. CTP is dedicated to delivering high technology campuses with a positive impact on the communities and environment in which it operates.

About PowerHouse Data Centers
PowerHouse Data Centers, fully owned and operated by American Real Estate Partners (AREP), is a pioneering developer and owner of next-generation data centers, providing sophisticated real estate solutions for hyperscalers that meet their market, data, utility, and space demands. Founded in 2021 with a primary focus on Northern Virginia, the world’s largest data center market, PowerHouse has  strategically expanded into key markets across the United States. Today, PowerHouse is an established leader in world-class data center development with 86 data centers underway or in planning, representing more than 24 million square feet and 6.1 GW in six major markets. PowerHouse owns its land sites, offering flexible next-generation data center models, with unparalleled speed-to-market. As disruptors setting new industry standards, PowerHouse leverages proven leadership, technical expertise, and strategic partnerships to drive innovation. Drawing from valuable real estate and industry relationships, PowerHouse adeptly identifies and transforms land sites, delivering state-of-the-art BTS, powered shell, and full turnkey deployments at scale. PowerHouse’s full suite of development services integrates asset strategy, fast-track approvals, infrastructure, on-site power procurement, and sustainable building practices into every project. Visit our newsroom for more information, and follow us on LinkedIn, YouTube, and X.

View original content to download multimedia:https://www.prnewswire.com/news-releases/blue-owl-chirisa-technology-parks-and-powerhouse-data-centers-announce-next-phase-of-5-billion-joint-venture-development-partnership-302466950.html

SOURCE Chirisa Piscataway Inc.


Technology

Unleashing AI Potential: The Power of Your Own Local Supercomputer


RIVERSIDE, Calif., May 28, 2025 /PRNewswire/ — In the rapidly evolving landscape of artificial intelligence (AI) and deep learning, access to robust computing power is paramount. While cloud-based GPU solutions offer undeniable flexibility, a growing number of AI professionals, researchers, and startups are discovering the profound benefits of investing in their own local GPU servers. This shift isn’t just about preference; it’s about unlocking a powerful, private, and predictable environment that can truly accelerate the pace of innovation.

Owning a local GPU server for deep learning and AI model training presents a compelling set of advantages that directly address many of the challenges faced when relying solely on external resources.

Long-Term Cost-Effectiveness: A Smart Investment

At first glance, the upfront cost of a dedicated GPU server might seem substantial, especially when compared to the pay-as-you-go model of cloud services. However, for sustained and intensive AI workloads, this initial investment quickly transforms into significant long-term savings. Unlike cloud GPUs, where every minute of usage, including idle time or unexpected interruptions, incurs charges, owning your hardware means your operational costs are dramatically reduced over time. Consider the example of Autonomous Inc.’s Brainy workstation: users can save thousands of dollars within just a few months compared to continuous cloud rentals, making it a financially astute decision for ongoing projects.
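The break-even claim is easy to sanity-check with simple arithmetic. The sketch below compares cumulative spend on hypothetical cloud GPU rentals against a one-time workstation purchase plus electricity; the hourly rate, utilization, hardware price, and power cost are placeholder assumptions for illustration, not quoted Autonomous Inc. or cloud-provider pricing.

```python
# Break-even sketch: cumulative cloud-GPU rental cost vs. one-time purchase of a
# local workstation. All prices and utilization figures are placeholder
# assumptions for illustration, not quoted Autonomous Inc. or cloud pricing.

CLOUD_RATE_PER_GPU_HOUR = 2.00      # assumed $/GPU-hour for a high-end GPU
GPUS = 2                            # e.g. a dual-GPU workstation
HOURS_PER_MONTH = 730
UTILIZATION = 0.60                  # fraction of the month the GPUs are busy

WORKSTATION_PRICE = 15_000.0        # assumed one-time hardware cost
LOCAL_POWER_COST_PER_MONTH = 150.0  # assumed electricity for the local machine

def months_to_break_even():
    """Count the months until cumulative cloud spend exceeds the local total."""
    cloud_monthly = CLOUD_RATE_PER_GPU_HOUR * GPUS * HOURS_PER_MONTH * UTILIZATION
    month, cloud_total, local_total = 0, 0.0, WORKSTATION_PRICE
    while local_total > cloud_total:
        month += 1
        cloud_total += cloud_monthly
        local_total += LOCAL_POWER_COST_PER_MONTH
    return month, cloud_monthly

month, cloud_monthly = months_to_break_even()
print(f"Assumed cloud spend: ${cloud_monthly:,.0f}/month")
print(f"Local workstation pays for itself after ~{month} months under these assumptions")
```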

Enhanced Data Privacy and Security: Keeping Your Innovations Safe and Confidential

In an era where data breaches, intellectual property theft, and stringent regulatory compliance (like GDPR or HIPAA) are paramount concerns, the security and privacy advantages of a local GPU server are absolutely critical. This is perhaps one of the most compelling reasons for organizations and individuals dealing with sensitive information or proprietary algorithms to choose an on-premise solution.

Unrivaled Local Control: Your sensitive data, proprietary AI models, and confidential research remain entirely within your physical control. They reside securely within your own infrastructure, behind your own firewalls and security protocols. This dramatically reduces the inherent risks of data breaches, unauthorized access, or compliance issues that can arise when data is stored and processed on third-party cloud servers, where you have less direct oversight.

Minimized Exposure to External Threats: By keeping your data and computations local, you significantly reduce the need for constant data movement between your environment and external cloud providers. Fewer data transfers inherently mean fewer points of vulnerability and a smaller attack surface, strengthening your overall security posture against external threats. This direct control ensures your most valuable assets are always under your watchful eye.

Unparalleled Performance and Responsiveness: Unleashing True AI Power

One of the most immediate and impactful benefits of a local GPU server is the sheer performance and responsiveness it offers. When your computing power is on-premise, you experience:

No Queuing: The frustration of waiting in line for available cloud resources becomes a thing of the past. You have immediate, dedicated access to your computing power precisely when you need it.

Zero Internet Lag: All computations occur locally, eliminating any latency or slowdowns that can plague internet-dependent cloud connections. This is particularly critical for iterative prototyping, fine-tuning, and real-time inference where every millisecond directly impacts development speed.

Consistent Power: Your AI models run without the threat of interruption from network fluctuations or contention with other users on shared cloud infrastructure. This translates to pure, uninterrupted AI processing power, allowing your training runs to complete efficiently and reliably.

Maximum Flexibility and Customization: Tailoring Your AI Environment

A local server grants you an unparalleled degree of control over your computing environment:

Hardware Control: You have the freedom to select and configure the exact hardware components—from the number and type of GPUs to RAM, storage, and CPU—that perfectly align with your specific deep learning tasks and budget. This allows for highly specialized setups optimized for your unique needs.

Software Environment: You can meticulously set up and customize your entire software stack, including the operating system, drivers, AI frameworks (like TensorFlow or PyTorch), and libraries. This freedom from cloud provider limitations or pre-configured images enables deep optimization for unique and cutting-edge workflows.
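As a minimal example of the kind of control described above, the snippet below uses PyTorch (one of the frameworks named in the list) to confirm that locally installed GPUs are visible and to run a small test operation on each one. It assumes a CUDA-enabled PyTorch build is already installed on the machine; it is a sketch of a local sanity check, not vendor tooling.

```python
# Minimal local-environment check with PyTorch: confirm the locally installed
# GPUs are visible to the framework and run a small test op on each one.
# Assumes a CUDA-enabled PyTorch build is installed on this machine.

import torch

if not torch.cuda.is_available():
    raise SystemExit("No CUDA-capable GPU visible; check drivers and the PyTorch build.")

for idx in range(torch.cuda.device_count()):
    device = torch.device(f"cuda:{idx}")
    name = torch.cuda.get_device_name(idx)
    # Tiny matrix multiply as a smoke test that the device actually computes.
    x = torch.randn(1024, 1024, device=device)
    y = x @ x
    torch.cuda.synchronize(device)
    print(f"GPU {idx}: {name} - smoke test OK ({y.shape[0]}x{y.shape[1]} matmul)")
```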

Reliability and Predictable Operations: Peace of Mind for Critical Projects

For critical AI workloads, predictability is key, and a local server delivers just that:

No Spot Instance Shutdowns: Cloud “spot instances,” while often cheaper, come with the risk of unexpected shutdowns by the provider. A local server guarantees continuous operation for your crucial training runs, preventing lost progress and wasted time.

Full Control Over Maintenance: You dictate when and how to perform system maintenance or updates, ensuring that your vital AI workloads are never interrupted by unforeseen actions from a third-party provider.

Hands-On Learning and Experimentation: Deepening Your Expertise

For those looking to truly master the intricacies of AI development, a local server offers an invaluable educational experience:

Deeper Understanding: Owning and managing your hardware provides a hands-on opportunity to learn about system administration, hardware optimization, and the fundamental workings of AI workflows.

Unrestricted Experimentation: You can freely experiment with different hardware configurations, driver versions, and software stacks without incurring additional costs or worrying about impacting a shared environment. This fosters a deeper understanding and encourages innovative problem-solving.

“We’re seeing innovative companies recognize the need and engineer solutions specifically to address the cloud’s limitations for many businesses,” says Mr. Dhiraj Patra, a Software Architect and certified AI ML Engineer for Cloud applications. “The ability to have dedicated, powerful GPU workstations on-site, like the Brainy workstation with its NVIDIA RTX 4090s, provides that potent combination of performance, cost-effectiveness, and data security that is often the sweet spot for SMBs looking to seriously leverage AI and GenAI without breaking the bank or compromising on data governance.”

Experience Brainy Firsthand: The Test Model Program

To give developers, researchers, and AI builders a chance to experience the power of Brainy before committing, Autonomous Inc. has just announced that sample units of Brainy, the supercomputer equipped with dual NVIDIA RTX 4090 GPUs, are now open for testing, offering a fantastic opportunity to see firsthand how your models perform on this supercomputer.

How the Test Model Works:

Brainy functions as a high-performance desktop-class system, designed for serious AI workloads like hosting, training, and fine-tuning models. It can be accessed locally or remotely, depending on your setup. Think of it as your own dedicated AI workstation: powerful enough for enterprise-grade inference and training tasks, yet flexible enough for individual developers and small teams to use without the complexities of cloud infrastructure.

Simply click the “Try Now” button and fill out a form on Autonomous’ website, and the test setup will be ready within a day. This hardware trial program allows participants to book a 22-hour slot to run their inference tasks on these powerful GPUs. Whether you’re building AI agents, running multimodal models, or experimenting with cutting-edge architectures, this program lets you validate performance on your own terms—with no guesswork. It’s a simple promise: use it like it’s yours—then decide.
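For testers who want a quick, repeatable number out of their trial slot, a simple throughput probe along the lines of the sketch below (PyTorch, looping over whatever GPUs are visible) gives a baseline to compare against cloud runs. The model, batch size, and step count are placeholders, not part of Autonomous Inc.’s program or tooling.

```python
# Simple throughput probe a tester might run during a trial slot: time a batch
# of forward passes on each available GPU. The model and batch size are
# placeholders for whatever workload the participant actually cares about.

import time
import torch
import torch.nn as nn

model_template = nn.Sequential(
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),
)

BATCH, STEPS = 64, 50

for idx in range(torch.cuda.device_count()):
    device = torch.device(f"cuda:{idx}")
    model = model_template.to(device).eval()
    x = torch.randn(BATCH, 4096, device=device)
    with torch.no_grad():
        for _ in range(5):                 # warm-up iterations
            model(x)
        torch.cuda.synchronize(device)
        start = time.perf_counter()
        for _ in range(STEPS):             # timed iterations
            model(x)
        torch.cuda.synchronize(device)
    elapsed = time.perf_counter() - start
    print(f"GPU {idx}: {BATCH * STEPS / elapsed:,.0f} samples/sec on this probe")
```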

In conclusion, a local GPU server like Autonomous Inc.’s Brainy is more than just powerful hardware; it’s a strategic investment in autonomy, efficiency, and security. By providing a private, predictable, and highly customizable environment, it empowers AI professionals to iterate faster, safeguard sensitive data, and ultimately accelerate their journey in the exciting world of deep learning and AI innovation.

Availability

Brainy is available for order, making enterprise-grade AI performance accessible to startups and innovators. For detailed specifications, configurations, and pricing, please visit https://www.autonomous.ai/robots/brainy.

About Autonomous Inc.

Autonomous Inc. designs and engineers the future of work, empowering individuals who refuse to settle and relentlessly pursue innovation. By continually exploring and integrating advanced technologies, the company aims to create the ultimate smart office, including 3D-printed ergonomic chairs, configurable smart desks, and solar-powered work pods, as well as enabling businesses to create the future they envision with a smart workforce using robots and AI.

View original content to download multimedia:https://www.prnewswire.com/news-releases/unleashing-ai-potential-the-power-of-your-own-local-supercomputer-302466954.html

SOURCE Autonomous Inc.


Technology

Solarsuns investment Guild Launches Fast-Track Program for Beginners Led by Maverick Preston


Solarsuns investment Guild, under the strategic guidance of founder Maverick Preston, has introduced a fast-track learning pathway to help new investors build foundational knowledge quickly and systematically.

LOS ANGELES, May 28, 2025 /PRNewswire-PRWeb/ — Solarsuns investment Guild has officially launched its “Beginner Fast-Track Program,” a structured onboarding path tailored for first-time investors. Spearheaded by founder Maverick Preston, the program is designed to address the growing demand for accessible, time-efficient investment education without compromising cognitive depth or decision quality.

The Fast-Track Program condenses critical foundational topics—including market basics, behavioral finance principles, and risk-awareness frameworks—into an accelerated five-day module set. Learners are guided through a sequenced journey that includes short-form lessons, real-world scenarios, reflection checkpoints, and strategy primers.

“Many new investors feel overwhelmed by the volume of information and the pressure to perform quickly,” said a curriculum director at Solarsuns investment Guild. “This program, shaped by Maverick Preston’s educational vision, ensures that speed never comes at the cost of clarity or structure.”

What distinguishes the Fast-Track Program is its blend of pace and rigor. While it shortens the time-to-competency for new users, it remains grounded in Solarsuns investment Guild’s cognitive-first learning model. Each module focuses on shaping how learners think about investment decisions, rather than simply telling them what to do.

The program also includes a dedicated “First 100 Days” support schedule, offering new users progress tracking tools, milestone reviews, and access to curated content playlists based on common early-stage investor challenges.

To reinforce learning retention, each participant receives customized prompts after completing key modules, encouraging them to reflect on cognitive shifts and behavioral biases. This approach is consistent with Solarsuns investment Guild’s broader mission of building independent thinkers equipped with long-term frameworks.

The Fast-Track Program is available in both self-paced and guided modes, allowing learners to choose between solo progress or structured cohort-based learning with weekly check-ins. In the first month of release, early participants will also gain access to exclusive onboarding mentorship circles.

This launch represents another step in Solarsuns investment Guild’s ongoing efforts to eliminate barriers to entry in investment education. By combining thoughtful pacing with cognitive discipline, the platform continues to set a benchmark for scalable, intelligent financial learning.

For full details about the Beginner Fast-Track Program or to enroll, visit the Solarsuns investment Guild website.

Disclaimer:

The information provided in this press release is not a solicitation for investment, nor is it intended as investment advice, financial advice, or trading advice. It is strongly recommended you practice due diligence, including consultation with a professional financial advisor, before investing in or trading cryptocurrency and securities.

Media Contact
Madison Carter, Solarsuns, 1 404-227-8768, service@solarsuns.com, https://solarsuns.com/

View original content to download multimedia:https://www.prweb.com/releases/solarsuns-investment-guild-launches-fast-track-program-for-beginners-led-by-maverick-preston-302465143.html

SOURCE Solarsuns

