
Inhouse Launches First AI Assistant for SMBs, Individuals

The platform makes legal services dramatically more accessible, affordable and efficient with custom AI supervised by 1K+ attorneys, including a partnership with BakerHostetler.

SANTA MONICA, Calif., Dec. 11, 2024 /PRNewswire-PRWeb/ — Inhouse has officially launched the first lawyer-backed legal AI platform. Tailored to meet the everyday legal needs of individuals and small businesses (SMBs), the platform combines custom AI with a network of over 1,000 lawyers across all 50 states, including the premier law firm BakerHostetler.

During its summer beta testing, Inhouse delivered over $1 million of legal work to 600+ individuals and businesses, from startups to public companies in healthcare, tech, media, cannabis, hospitality, real estate and construction. The company is partnering with Chambers of Commerce across the country, starting in New York.

“In our latest construction project, Inhouse reduced our reliance on lawyers by over 80 percent,” said Dave Latizzori, a commercial and residential developer. “Traditionally, we would engage multiple law firms at different stages of a project, with each touchpoint creating costly bottlenecks. Inhouse’s AI took over the heavy lifting—updating construction agreements, negotiating with subcontractors, providing real-time advice on permitting and drafting lease agreements. The AI applied local Connecticut law and industry benchmarks. We saved over $75,000 in legal fees.”

Inhouse’s AI generates a first draft of the legal work, then lets users find and share the output with a lawyer who has directly relevant experience to verify or finalize it.

“We partnered with Inhouse to better serve early-stage startups that often find Big Law services beyond their reach,” said Will Chuchawat, Head of Mergers and Acquisitions at BakerHostetler. “Inhouse’s AI efficiently handles their routine legal tasks, allowing our lawyers to focus on high-impact strategic work.”

The lawyer-in-the-loop model also makes the platform markedly more accurate than general-purpose tools like ChatGPT, because Inhouse uses anonymized lawyer feedback to improve its models.

“The issue with general-purpose LLMs is that they’re trained on documents alone,” explained Aarshay Jain, CTO of Inhouse. “We realized that improving quality required integrating expert feedback rather than just adding more documents. Our system securely incorporates lawyer input in a way that fully preserves client confidentiality, and no data is ever shared with third parties.”

Inhouse was cofounded by Ryan Wenger, a second-time founder and corporate attorney, and Jain, an AI engineer formerly of Spotify. Ken Friedman, former Deputy General Counsel at LegalZoom, and Scott MacDonell, former Vice President of Marketing at LegalZoom, serve on the advisory board.

“I saw a variety of AI tools being built for lawyers to improve their profit margins, but nothing for legal customers to reduce their costs,” remarked Wenger. “By inserting lawyers into the platform we found a way to safely offer this game-changing technology to the general public. And it benefits lawyers too. They get a constant free stream of relevant clients and get to focus on the kind of work they really care about rather than the boilerplate law.”

Annual subscription plans for Inhouse top out at $189/month for unlimited legal assistance, with a free option available for users who need only occasional support.

About Inhouse

Inhouse, an AI-powered legal platform, was founded in Los Angeles in 2023. The company raised an undisclosed round from attorneys, angel investors and Switch VC. For more information on how Inhouse is transforming legal services for businesses, visit Inhouse.so.

About the Founders

Ryan Wenger, CEO of Inhouse, started his career as a corporate attorney before founding WhereTo, an online booking tool that grew to $80M in revenue before its sale to a public company. Aarshay Jain, CTO of Inhouse, was a senior AI engineer at Spotify, and holds a master’s degree in Data Science from Columbia.

Media Contact

Ryan Wenger, Inhouse, 1 561-445-0840, brody@high10media.com

View original content to download multimedia:https://www.prweb.com/releases/inhouse-launches-first-ai-assistant-for-smbs-individuals-302329202.html

SOURCE Inhouse



Narrative Unveils LLM Fine-Tuning Platform and Rosetta Stone 2.0, Pioneering a New Era of Data Normalization and Custom Model Training


NEW YORK, Jan. 7, 2025 /PRNewswire/ — Narrative, the industry’s leading data collaboration and commerce platform, today announced a groundbreaking suite of enhancements aimed at democratizing AI development and data utilization. Chief among these is the introduction of a new Large Language Model (LLM) fine-tuning capability, enabling companies of all sizes to customize their own AI models directly on the Narrative platform with unprecedented ease. Additionally, Narrative is thrilled to introduce Rosetta Stone™ 2.0, the next generation of the company’s acclaimed data normalization solution, engineered using the very fine-tuning tools now available to customers.


A Collaborative and Accessible Approach to LLM Fine-Tuning
At the core of Narrative’s approach is a “collaboration-first” philosophy. Users can now access a broad range of datasets—from proprietary and partner sources to publicly available and marketplace content—to train and refine their LLMs. This approach fosters an ecosystem where content creators and publishers can monetize their work directly, while businesses benefit from a richer, more diverse data pool to power increasingly sophisticated AI models. By removing technical barriers and simplifying the model-building process, Narrative empowers everyone from non-technical operators to seasoned data scientists to craft bespoke language models with a simple point-and-click interface.

Rosetta Stone 2.0: Next-Generation Data Normalization
A centerpiece of today’s announcement is Rosetta Stone 2.0, an evolution of Narrative’s pioneering data normalization capability. Leveraging Narrative’s new LLM fine-tuning platform, the updated Rosetta Stone model delivers remarkable performance gains and expanded functionality. It not only standardizes data automatically across disparate sources, ensuring seamless compatibility and readiness for training, but it also can serve as a foundational base model for customers looking to extend its core normalization capabilities into their specific domain. From ensuring coherent data formats to tackling complex, domain-specific semantic challenges, Rosetta Stone 2.0 is a flexible, next-level tool designed to accelerate data-driven innovation.
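At its simplest, data normalization of the kind Rosetta Stone automates amounts to mapping each source’s raw field names onto one canonical schema so records from disparate origins become directly comparable. A minimal illustrative sketch in Python (the field names, source labels, and mapping tables below are hypothetical examples, not Narrative’s actual API):

```python
# Toy illustration of schema normalization: renaming heterogeneous
# source fields onto a single canonical schema. All names here are
# hypothetical, for illustration only.

# Per-source mapping from raw field names to canonical ones.
FIELD_MAPS = {
    "source_a": {"e_mail": "email", "nation": "country", "age_years": "age"},
    "source_b": {"emailAddress": "email", "country_code": "country", "age": "age"},
}

def normalize(record: dict, source: str) -> dict:
    """Rename a record's fields to the canonical schema, dropping unmapped ones."""
    mapping = FIELD_MAPS[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

# Records from two differently shaped sources come out identically keyed.
a = normalize({"e_mail": "x@example.com", "nation": "US", "age_years": 42}, "source_a")
b = normalize({"emailAddress": "y@example.com", "country_code": "DE", "age": 30}, "source_b")
```

A production system would also reconcile value formats and semantics (country codes, units, free-text categories), which is where an LLM-based normalizer goes beyond simple rename tables.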

Key Features and Benefits:

Easy, No-Code Model Fine-Tuning:
Users can skip the complex coding, configuration files, and intricate infrastructure setups. Narrative’s platform translates raw datasets into meaningful training material through an intuitive, point-and-click interface.

Rich Data Ecosystem & Monetization Opportunities:
Through Narrative’s marketplace, publishers, content creators, and data owners can directly profit by offering their datasets for model training. Simultaneously, developers can tap into a vast reservoir of high-quality information to train models that align perfectly with their use cases.

Rosetta Stone 2.0 Engineered with Fine-Tuning:
Built using the same LLM customization features now offered to users, Rosetta Stone 2.0 exemplifies the power and potential of the Narrative platform. Its advanced normalization techniques handle complex and heterogeneous data sets, and it can be adapted into a specialized normalization model for industry- or business-specific contexts.

Bring fine-tuning to your data
Narrative fine-tuning is available anywhere Narrative is available, including in Narrative’s cloud and in your organization’s Snowflake, Databricks, AWS, Azure, or GCP account.

Customizing Rosetta Stone for Your Data

Narrative now gives you the option to tailor Rosetta Stone’s powerful data normalization capabilities so it fits your organization’s unique data and terminology—no major system overhauls required. This means you get more accurate and consistent results by aligning Rosetta Stone with your own industry language and internal structures.

When you’re ready to deploy Rosetta Stone, you can choose from different model sizes to strike the right balance of speed, detail, and cost. Simply pick the option that best fits your team’s priorities and infrastructure.

“The launch of our LLM fine-tuning platform and Rosetta Stone 2.0 marks a pivotal milestone in our journey to democratize AI development. With these offerings, anyone can create, refine, and extend powerful language models, and content creators can finally realize tangible value for their contributions. This is what the future of data and AI collaboration looks like—accessible, flexible, and mutually beneficial for all stakeholders.” –Nick Jordan, Founder, Narrative

For more information on Narrative’s LLM fine-tuning platform and Rosetta Stone 2.0, or to schedule a live demo, visit narrative.io.

View original content to download multimedia:https://www.prnewswire.com/news-releases/narrative-unveils-llm-fine-tuning-platform-and-rosetta-stone-2-0–pioneering-a-new-era-of-data-normalization-and-custom-model-training-302344342.html

SOURCE Narrative I/O, Inc.



Fibocom Launches the Fibocom AI Stack to Facilitate On-device AI Deployment with a Fully Manageable Service at CES 2025


LAS VEGAS, Jan. 7, 2025 /PRNewswire/ — Fibocom (stock code: 300638), a leading global provider of AIoT solutions and wireless communication modules, announces its brand-new Fibocom AI Stack, an integrated suite of hardware, AI tools, AI engines, and ready-to-use AI models that offers industry customers a fast-to-deploy way to bring on-device AI to their products. Focused on device-side AI adoption, Fibocom AI Stack is designed to address the diverse demands of intelligent transformation across industries by providing a complete solution for deploying AI models on smart devices.

Key Capabilities

Cross-Platform and Cross-System Flexibility:
Compatible with mainstream AI frameworks such as TensorFlow, PyTorch, ONNX and MXNet, and supporting multiple tiers of computing chips and modules, Fibocom AI Stack provides an optimal strategy for device-side AI deployment: easy-to-deploy code plus an accessible AI toolchain for data annotation, model training and fine-tuning, enabling high-accuracy, high-speed inference.

Powerful On-device AI Engine:
Fibocom AI Stack’s high-performance inference engine ensures efficient operation, enhancing performance while minimizing power consumption, which is crucial for real-world applications where resources are constrained. It also supports multi-language SDK interfaces, third-party apps and hardware accelerators.

Simplifying On-device AI Development:
Fibocom AI Stack offers an extensive library of AI models (e.g., for audio, computer vision, LLMs and multimodal LLMs) along with value-added model selection and fine-tuning services to help customers develop industry-oriented models. Companies of any size can benefit from this fully manageable service, accelerating on-device AI deployment globally.

The versatility and compatibility of Fibocom AI Stack significantly ease the complexity of on-device AI deployment, where many terminal devices are constrained by limited computational power and energy efficiency, making it a powerful tool for intelligent transformation in smart retail, autonomous vehicles, smart wearables and robotics. Notably, Fibocom AI Stack works across multiple Fibocom AI module and smart module series, supporting computing power from entry to premium level.

“In the wave of AI-first, we are proud to see Fibocom leverage its AI innovations and in-depth industry expertise to empower on-device AI deployment with our game-changing Fibocom AI Stack, enabling businesses to scale smart devices across industries with high-efficiency, cost-effective approaches,” said Willson Liu, President of Fibocom AI Research Institute. “Looking forward, we expect to collaborate with more industry customers to build their smart devices with Fibocom-empowered on-device AI solutions and enable a smart future powered by AI productivity.”

To explore Fibocom AI Stack further, book a meeting with our sales team or visit us at IMC’s booth #10577 in the North Hall at CES 2025 from January 7-10.

About Fibocom

Fibocom is a leading global provider of AIoT solutions and wireless communication modules, and the first wireless communication module provider listed on China’s A-share stock market (stock code: 300638). Fibocom offers a one-stop solution for industry customers by integrating wireless communication modules and IoT solutions. With over two decades of engagement in M2M and IoT communication technology and extensive expertise, we bring reliable, convenient, secure and intelligent connectivity to every industry, enriching smart life with a seamless wireless experience. Fibocom’s product portfolio ranges from cellular modules (5G/4G/3G/2G/LPWA) to automotive-grade modules, AI modules, Android smart modules, GNSS modules and antenna services. Together, we aim to empower digital transformation across industries such as ACPC (Always Connected PC), mobile broadband, smart retail, C-V2X, robotics, smart energy, IIoT, smart cities, smart agriculture, smart home and telemedicine.

Find the latest news at www.fibocom.com, and follow us on LinkedIn, X, Facebook and YouTube.

Media Contact: pr@fibocom.com

View original content to download multimedia:https://www.prnewswire.com/news-releases/fibocom-launches-the-fibocom-ai-stack-to-facilitate-on-device-ai-deployment-with-a-fully-manageable-service-at-ces-2025-302344198.html

SOURCE Fibocom Wireless Inc.



Manifest Achieves FedRAMP® High Authorization Through Palantir Technologies


WESTPORT, Conn., Jan. 7, 2025 /PRNewswire/ — Manifest, the industry-leading software bill of materials (SBOM) and artificial intelligence bill of materials (AIBOM) management platform, today announced that it has achieved FedRAMP® High authorization through Palantir Technologies’ FedStart program. Manifest can now deliver its SBOM and AIBOM management platform to government customers in a FedRAMP®-authorized environment.

FedRAMP® High Authorization solidifies Manifest’s status as a trusted data platform to deliver SBOM and AIBOM management

FedRAMP® High authorization provides assurance that Manifest can handle the most sensitive data across the public sector, including law enforcement data, healthcare data and other mission-critical data.

“The ability to manage SBOM and AIBOM data in Manifest brings security and visibility into mission critical applications and models,” said Marc Frankel, CEO, Manifest. “With FedRAMP® High authorization, Manifest will continue to work closely with the Federal Government and its partners to bring SBOM and AIBOM management into their missions. We are honored to support our government customers and to help them secure software and AI supply chains.”

Manifest partnered with Palantir to achieve FedRAMP® High authorization. “We established Palantir FedStart with the aim of empowering companies to deliver solutions to the federal government quickly,” said Ali Monfre, Palantir’s FedStart Program Lead. “It is extremely exciting to see Manifest achieve this milestone through our program, an affirmation of our mutual ongoing commitment to ensuring secure and expedited access to cutting-edge technology for the government users who need it.”

Manifest launched in December 2021 in the wake of the catastrophic Log4Shell vulnerability, which left the world’s most important industries – financial services, healthcare, defense, manufacturing – without insight into the open-source and third-party software components in the software they had procured. Manifest set out to deliver quantified software supply chain risk insight to critical infrastructure sectors and governments.

Manifest has a proven history of success with various federal agencies, including at the Department of Defense (DOD) and the Department of Homeland Security (DHS). With the successful achievement of FedRAMP® High authorization, Manifest is well-positioned to continue redefining the future of software and AI security in the public sector.

For more information on Manifest’s FedRAMP® High authorization, click here. Manifest will host a webinar on February 4, 2025, on delivering software supply chain security to government. Click here to register.

For more information on Manifest and its solutions, please visit https://www.manifestcyber.com.

Media Contact:
Mike McDonel
mike@manifestcyber.com
(443) 537-9123

View original content to download multimedia:https://www.prnewswire.com/news-releases/manifest-achieves-fedramp-high-authorization-through-palantir-technologies-302343468.html

SOURCE Manifest Cyber, Inc.
