Professional Profile



SaltStack, Python, PowerShell, Elasticsearch, AWS Cloud, data center migrations, large-scale environments, Windows Server Core, Linux (Ubuntu), IIS, Active Directory

DevOps, Cloud Computing, Open-Source Software, Platform Engineering & IT Operations

Enterprise Data Center Technologies
Hyper-V, iSCSI storage systems and F5 LTM


Automation & Configuration Management
* Automated production code and tool deployments with SaltStack, Python and PowerShell
* Developed custom Python modules to extend automation framework for monitoring, alerting and validation

Cloud and Data Center
* Key contributor to several complex data center migrations, build-outs, and a migration to the AWS cloud
* Traveled worldwide for operations center infrastructure build-outs


* Contributed to open-source projects, such as the SaltStack automation/configuration management framework

* Automated several production Elasticsearch cluster upgrades (major and minor)

* Led security vulnerability management and remediation, which directly contributed to the success of annual PCI certifications

IT Ops Support & Maintenance
* Provided top-tier support for AWS Cloud environments, active/active data centers, and production maintenance with 100% uptime

Data Mining Project


Technology Highlights
  • Python 3: Program developed for data mining and data processing
  • Elasticsearch: Distributed database
  • Kibana 5: Used to visualize data until a UI is developed

Data mining: programmatically correlate massive data sets (millions of records) from several external API sources, reducing manual intervention and improving productivity by up to 90%.

About the Project
Small modules have been developed to maintain the integrity of this project. A sequence of modules, classes, and functions is executed to control the order in which data is processed. The entire project is written from scratch in Python 3.

All data is written to Elasticsearch's distributed database for offline processing. The use of Elasticsearch indices, aliases, document types, and mappings makes large data sets easily searchable.
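As an illustration only, an Elasticsearch 5.x-era index layout with a document type, a dated index, and a stable alias might be sketched like this (the index, alias, type, and field names are hypothetical, not taken from the project):

```python
# Hypothetical naming scheme: one dated index per collection run,
# fronted by a stable alias so queries never hard-code index names.
INDEX = "collected-2017.06"
ALIAS = "collected-current"

# Elasticsearch 5.x-style mapping: fields live under a document type.
MAPPING = {
    "mappings": {
        "record": {  # document type (hypothetical name)
            "properties": {
                "host_name":   {"type": "keyword"},
                "port":        {"type": "integer"},
                "ingested_at": {"type": "date"},
            }
        }
    }
}
```

Querying through the alias lets new dated indices be swapped in without touching any client code.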

Kibana is used to visualize data and allow others to interact with the system.

1) Data Collection
Pull massive data sets from several API endpoints and write them to an Elasticsearch cluster. The raw data is lightly transformed as it flows through the system, making it easier to work with.
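The light transformation step might look like this minimal Python 3 sketch (the field names and the `normalize_record` helper are illustrative assumptions, not code from the project):

```python
from datetime import datetime, timezone

def normalize_record(raw):
    """Lightly transform a raw API record before indexing:
    normalize field names, strip stray whitespace from string
    values, and stamp the ingest time."""
    doc = {}
    for key, value in raw.items():
        clean_key = key.strip().lower().replace(" ", "_")
        doc[clean_key] = value.strip() if isinstance(value, str) else value
    doc["ingested_at"] = datetime.now(timezone.utc).isoformat()
    return doc

record = normalize_record({"Host Name": "  web01 ", "Port": 443})
print(record["host_name"])  # web01
```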

2) Data Reduction
Unique records are identified across complex data sets to minimize resource consumption. This data can be referenced at a later time without the need for reprocessing.
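One common way to implement this kind of reduction is to fingerprint each record on its identifying fields and drop duplicates; a sketch under that assumption (the key fields are hypothetical):

```python
import hashlib
import json

def reduce_unique(records, key_fields):
    """Keep only records that are unique on the given fields,
    so downstream steps never reprocess duplicates."""
    seen = set()
    unique = []
    for rec in records:
        # Stable fingerprint of just the identifying fields.
        fingerprint = hashlib.sha256(
            json.dumps({f: rec.get(f) for f in key_fields},
                       sort_keys=True).encode()
        ).hexdigest()
        if fingerprint not in seen:
            seen.add(fingerprint)
            unique.append(rec)
    return unique
```

Because only the fingerprints are kept in memory, very large data sets can be reduced without holding every full record at once.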

3) Data Correlation and Analysis
A set of rules is defined programmatically, outlining how each fragment of data should be found, translated, correlated, standardized, and then processed. This step is critical to maintaining data integrity.
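A rule set like this is often expressed as a table of field-to-transform pairs; a minimal sketch, assuming hypothetical fields and translate functions:

```python
# Hypothetical rule table: (field name, translate/standardize function).
RULES = [
    ("ip", str.strip),
    ("country", str.upper),
]

def apply_rules(fragment, rules=RULES):
    """Translate and standardize each known field; unknown
    fields pass through untouched to preserve integrity."""
    out = dict(fragment)
    for field, translate in rules:
        if field in out and out[field] is not None:
            out[field] = translate(out[field])
    return out
```

Keeping the rules in data rather than scattered through code makes it easy to audit exactly how every fragment was standardized.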

Software-Defined Storage & XenServer


*Screenshots are not of the actual environment; it was purged before it could be documented.

Technology Highlights
  • ScaleIO: Software-defined storage
  • XenServer: 4-node high-availability virtualization cluster
  • QNAP: iSCSI cluster shared volumes

Build a private cloud infrastructure on commodity hardware

About the project
Using a Dell PowerEdge C6220 appliance, a physical 4-node virtualization cluster was built on software-defined storage, creating a hyperconverged infrastructure. The use case that defined this project was the need to support individual user VMs and virtualized clusters serving RDP over HTTPS.

The XenServer high-availability cluster uses iSCSI CSVs (cluster shared volumes), providing automatic failover during cluster maintenance or service interruptions. Software-defined storage volumes sit beneath XenServer, storing virtual machine and other types of data. This enables simple use of snapshots to back up, restore, and remount data from specific points in time.

The longer-term plan is to implement OpenStack and programmatically provision resources using its APIs.

Automated VM & Asset Provisioning


Technology Highlights
  • Hyper-V & VM Provisioning
  • Active Directory
  • Microsoft Remote Desktop Services (RDS)
  • XML metadata
  • Managed VM Reboots
  • PowerShell/CredSSP

Dynamically provisions and configures infrastructure based on parameters defined by an end user. Supports dynamic provisioning of the following:

    - Abstract deployment configuration into server roles
    - Active Directory domain with one to many nodes
    - RDS, clustered or stand-alone
    - VM hardware profile values
    - VM deployment from ISO or template
    - Dynamic or static IP assignment
    - Windows license activation

About the project
XML Metadata
Environment assets to be provisioned are defined by the user in a metadata file. It is then ingested as a PowerShell hash table and validated prior to execution to ensure deployment integrity.
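The ingest-and-validate step is sketched here in Python for brevity (the project itself uses PowerShell); the XML element names and required fields are hypothetical:

```python
import xml.etree.ElementTree as ET

# Hypothetical required fields; a real metadata schema would be larger.
REQUIRED = {"vm_name", "role", "ip_mode"}

def load_meta(xml_text):
    """Ingest the metadata file into a plain dict and validate it
    before any provisioning step runs."""
    root = ET.fromstring(xml_text)
    meta = {child.tag: child.text for child in root}
    missing = REQUIRED - meta.keys()
    if missing:
        raise ValueError(f"metadata missing fields: {sorted(missing)}")
    return meta

meta = load_meta(
    "<vm><vm_name>app01</vm_name><role>rds</role><ip_mode>static</ip_mode></vm>"
)
```

Failing fast on a malformed metadata file is what keeps a half-provisioned deployment from ever starting.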

VM Provisioning
VMs can be provisioned from an ISO or a VM template referenced in the metadata file. An unattend file is dynamically generated to give each VM a unique identity at the point of creation. Clients are automatically joined to their respective AD domain, their computer objects are placed in the designated AD organizational unit, and the DNS client list is set.

RDS Cluster
Supports clustered or stand-alone Connection Broker deployment scenarios.

Active Directory
Provisions one or many AD/DNS servers; built-in logic determines how to handle each setup.

PowerShell with CredSSP is used to manipulate AD resources and handle multi-hop authentication.

Orchestration and Managed Reboots
Each attribute defined in the metadata file drives which assets will be provisioned. The program dynamically identifies dependencies and prerequisites to ensure a successful deployment every time.
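Dependency-driven ordering like this is typically a topological sort; a Python sketch (the project uses PowerShell, and the asset names and dependency map below are hypothetical):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependency map: each asset lists its prerequisites.
DEPENDENCIES = {
    "ad_domain": [],
    "dns": ["ad_domain"],
    "rds_broker": ["ad_domain", "dns"],
    "rds_host": ["rds_broker"],
}

def provisioning_order(requested):
    """Expand the requested assets to include all prerequisites,
    then return a provisioning order that satisfies every
    dependency."""
    needed = set()
    stack = list(requested)
    while stack:
        asset = stack.pop()
        if asset not in needed:
            needed.add(asset)
            stack.extend(DEPENDENCIES.get(asset, []))
    ts = TopologicalSorter({a: DEPENDENCIES.get(a, []) for a in needed})
    return list(ts.static_order())
```

Requesting only `rds_host` automatically pulls in the broker, DNS, and the AD domain, in a safe order.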

From a single initial remote PowerShell session, several reboots are managed throughout the process by a function that monitors WinRM online/offline connectivity, ensuring each provisioning step picks up where it left off before the system reboot.
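The reboot-monitoring idea generalizes to any connectivity probe: wait for the node to go offline (reboot begins) and come back online before resuming. A minimal Python sketch, assuming `is_online` is whatever WinRM reachability check the environment provides:

```python
import time

def wait_for_transition(is_online, timeout=600, interval=5):
    """Wait for a node to go offline and come back online.
    `is_online` is any callable returning True/False, e.g. a
    WinRM connectivity probe. Returns True once the node has
    rebooted and is reachable again, False on timeout."""
    deadline = time.monotonic() + timeout
    seen_offline = False
    while time.monotonic() < deadline:
        if not is_online():
            seen_offline = True       # reboot has begun
        elif seen_offline:
            return True               # back online after the reboot
        time.sleep(interval)
    return False
```

Requiring an offline phase before declaring success prevents the orchestrator from resuming against a node that has not actually rebooted yet.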

Data Mining Proof of Concept


Technology Highlights
  • PowerShell
  • External API integration
  • Flat file database
  • PS Object export
  • Custom Indexing Function

First generation of the data mining project outlined above. Stores and processes large data sets using flat files, no backend database, and minimal compute resources.

About the project
PowerShell is the sole technology used to obtain large data sets from external APIs, which are later processed and correlated with other data sets.

1) Storing Data in Flat Files
Data is pulled from external sources via APIs and HTML scraping and stored in small pieces in flat files, to be referenced later without holding everything in memory.

2) Exporting PowerShell Objects
PowerShell objects are exported and imported at different points in the data processing to alleviate memory bottlenecks.

3) Indexing
An indexing module was created to efficiently search for fields across hundreds of thousands of flat files, making searches effective and non-redundant.

About Me

I didn't get to where I am by following an expected path. I've followed my instincts while others sat back and waited for life to happen. I'm ambitious, resourceful, and a catalyst for change with an entrepreneurial spirit. I thrive on change that disrupts and rewrites the rules. I continue to provide critical solutions for some of the world's most innovative and demanding companies, delivering results that exceed expectations.


New York City


Portuguese (Brazilian)

Regularly find new things to learn and ways to challenge myself


Run the annual Tunnel to Towers 5K run in memory of 9/11

Train my dog to do odd tricks, such as touching a treat without eating it, balancing a glass of wine on her head, and sitting like a lady

Try different cigars and single malt Scotch whiskies

Make & Build

Latte art

Wooden furniture

Travel For a New Perspective

Brazil - Amazon Rainforest, Morro de São Paulo, Salvador, Maraba

Portugal - Health retreat

France - Monaco, Nice, Marseille

Spain - Barcelona

Ireland - Dublin

Favorite Books

The Gifts of Imperfection
by Brene Brown

Rich Dad Poor Dad
by Robert Kiyosaki

by Malcolm Gladwell