
This NYC Startup Raised $1.7M to Fuel R&D for IoT Applications

By AlleyWatch

When you are in the research and development phase, you need to figure out what works and, more importantly, what doesn't. Reality AI allows you to detect and track specific events while building connected devices for a variety of applications. The technology, originally developed for military use, is an engineer's best friend when it comes to building IoT applications.

AlleyWatch spoke with cofounder and CEO Stuart Feffer about the company and its $1.7M seed round of funding.

Who were your investors and how much did you raise?

Our investors were primarily angels and family offices. We also had participation from TechNexus Venture Collaborative, a Chicago-based firm that works with corporate innovators and startups. This was our seed investment round.

Tell us about your product or service.

Reality AI acts as an AI-based signal processing engineer. Our product is used by companies creating connected devices and equipment (industrial equipment, wearables, automotive components) that are instrumented with sensors. Reality AI gives the R&D engineers developing these products an application for creating software that detects specific events and conditions in vibration, sound, accelerometry, electrical signals, imagery, LiDAR, and remote sensing data.

What inspired you to start the company?

The core technology behind Reality AI was originally developed for the US military and intelligence community. With the spread of ubiquitous sensors and the Internet of Things, we saw an opportunity to make these very powerful tools available to commercial and industrial customers and to enable rapid development of sensing applications for the IoT.

How is it different?

Three things that are different about Reality AI:

1- We are not based on Deep Learning. We have our own approach to machine learning on sensor data that is grounded in the fundamental mathematics of signal processing. That means we are more accurate and require much less data than deep learning on problems where our approach is a good fit (a generic sketch of this style of approach follows this list).

2- Software-based detectors and classifiers built with our technology are suitable for real-time processing at the “edge”. Many of our customers can run their trained classifiers and detectors in firmware on inexpensive microcontrollers – no special hardware required.

3- We are focused on enabling other people’s products. Our customers are generally the R&D groups working on new products, and we deliver a solution that they can incorporate into their own hardware and software designs at a reasonable cost.
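
To make the contrast with deep learning concrete, here is a minimal, generic sketch of the style of approach described in point 1: signal-processing features (FFT band energies) computed over short windows of a waveform, feeding a small classifier. Everything here is an illustrative assumption – the synthetic data, sample rate, window size, feature definition, and choice of classifier are invented for the example and are not Reality AI's actual method or API.

```python
# Illustrative only: NOT Reality AI's actual method or API.
# Generic "signal-processing features + small classifier" pattern for
# detecting an event in a waveform-like sensor signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

RATE = 4000       # assumed sample rate (Hz)
WINDOW = 1024     # samples per analysis window
N_BANDS = 16      # coarse spectral bands used as features

def band_energies(window: np.ndarray) -> np.ndarray:
    """Summarize one window as log energies in N_BANDS frequency bands."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window)))) ** 2
    bands = np.array_split(spectrum, N_BANDS)
    return np.log1p(np.array([b.sum() for b in bands]))

def synthetic_example(event: bool) -> np.ndarray:
    """Synthetic vibration: background noise, plus a 300 Hz tone when the event occurs."""
    t = np.arange(WINDOW) / RATE
    signal = np.random.randn(WINDOW) * 0.5
    if event:
        signal += np.sin(2 * np.pi * 300 * t)
    return band_energies(signal)

# Tiny synthetic training set; a real project would use labeled recordings
# captured from the instrumented device.
X = np.array([synthetic_example(i % 2 == 0) for i in range(200)])
y = np.array([i % 2 == 0 for i in range(200)], dtype=int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

A detector in this general style boils down to an FFT plus a few dozen multiply-accumulates per window, which is the sense in which such classifiers can run in firmware on inexpensive microcontrollers, as described in point 2.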

What market are you targeting and how big is it?

Our market is companies making devices instrumented with sensors – potentially as large as the Internet of Things itself. But for now we are primarily targeting industrial, wearable/consumer product, and automotive uses.

What’s your business model?

Our tools are available on a SaaS subscription model for use during R&D. When a company ships a product or service using our technology, there is a per-device fee (for use embedded in firmware) or usage charges (if using our cloud API).

Why is AI well suited for use with sensors and signals?

Where we really have an advantage is with sensor data collected at higher sample rates – double-digit Hz on up to kHz and MHz sample rates. There are lots of other tools that work very well with slower-sampled sensors (like a temperature or pressure reading taken once or twice per second). But for sensor inputs that resemble a waveform – like vibration, sound, etc. – most machine learning approaches have a lot of trouble. We're able to work very well with those kinds of physical-world, high sample-rate signals without resorting to the overhead of Deep Learning.
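
As a rough illustration of that point, the sketch below builds a synthetic one-second, 10 kHz accelerometer trace and injects a weak 1.2 kHz tone into it. Simple time-domain statistics barely change between the two traces, while the spectrum exposes the tone immediately. The signal model and numbers are invented for illustration and have nothing to do with Reality AI's data or code.

```python
# Illustrative only: why waveform-like, high sample-rate signals reward
# frequency-domain treatment. Summary statistics look almost identical for
# the two traces; the dominant spectral peak tells them apart.
import numpy as np

RATE = 10_000                    # assumed 10 kHz accelerometer
t = np.arange(RATE) / RATE       # one second of samples

healthy = np.random.randn(RATE) * 0.5
faulty = healthy + 0.2 * np.sin(2 * np.pi * 1200 * t)   # weak 1.2 kHz tone

for name, x in [("healthy", healthy), ("faulty", faulty)]:
    spectrum = np.abs(np.fft.rfft(x)) / len(x)
    peak_hz = np.argmax(spectrum[1:]) + 1    # 1 Hz resolution, so bin index == Hz
    print(f"{name}: mean={x.mean():+.3f}  std={x.std():.3f}  "
          f"dominant frequency ~ {peak_hz} Hz")
```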

What was the funding process like?

Any founder who says their funding process was quick and easy is probably lying about it. It took us longer than we expected, but that probably says more about our mistaken expectations than anything else. It was a grind at first. But once we hit critical mass we filled the rest of the round very quickly and wound up oversubscribed.

What are the biggest challenges that you faced while raising capital?

We have a very technical product that is based on some very sophisticated math and science. Many investors just didn't have the ability to diligence it. And in our early days, we didn't yet have customers to provide external validation.

What factors about your business led your investors to write the check?

I think the ones who wrote the check early saw the vision and were willing to make a leap of faith on the technology and the team. We're an experienced bunch – this is not the first startup for either cofounder, and each of us has a good exit under our belt – and I think that may have helped as well.

What are the milestones you plan to achieve in the next six months?

The main thing for us right now is customers – continuing to add new ones and making sure the ones we have are successful. We’re also going to be adding a bunch of new features to the product over the next six months, mainly around supporting anomaly detection in sound and vibrations.

What advice can you offer companies in New York that do not have a fresh injection of capital in the bank?

Customers are way more valuable than investors. Focus on them, and the investors will follow.

Where do you see the company going now over the near term?

Better product, more customers, really zeroing in on our key use cases, and building this into a business that can go the distance.

What’s your favorite rooftop bar in NYC to unwind?

The bar at the Harlem Yacht Club on City Island. Porchtop, not rooftop.
