#or should we even conceptualize dx at all??
honeyandpumpkins · 8 days ago
Text
gotta say, a lot of you complaining about psychiatry/psychology are complaining about a field that is 20 years dead, and you seem to have no idea where the field is now
4 notes · View notes
piratesexmachine420 · 3 months ago
Text
#incredibly fucked up #however #maybe I *would* consider brainfuck low level #conceptually it's about as close as anyone can get to programming on a turing machine #and for a hardware implementation it seems relatively easy to make a one-to-one conversion #<- largely inexperienced on all fronts of what she's saying so yknow take all that with a big grain of salt
Okay first before I respond, I'd like to share a much better-written essay on basically the same topic:
Reading that (↑) is what got me on this path of thinking in the first place. Also, I'd like to say explicitly that I'm not trying to be snobby or dismissive wrt what you said, in case I didn't control my tone properly. Just trying to have a conversation, and I have a lot of thoughts! :)
Anyways, back to what you said: it's subjective, right? "High-level" and "low-level" are not rigidly defined terms--nor for that matter is "abstract".
We both, I'm sure, agree that BF is minimal--the formal specification is short, the list of instructions is short, the architectural requirements are short and easily mapped onto most machine languages. This is, of course, the point. Brainfuck was designed to be as easy to compile as possible--on the machines of then and today.
But I'd still argue that it's "abstract" -- in the sense that while it's easy to translate between BF and assembly/machine code, there's nothing in BF that enforces how this should be done.
That, in my mind, is the difference between a "low-level" and a "high-level" language: high level languages leave the implementation stack (that is, the entire chain from compiler/assembler to linker to OS to firmware to CPU to individual logic gates) room to interpret the action(s) you request; low level languages leave little to no room for such interpretation.
For example, take the '>' (increment data pointer) command. How might we translate this to a machine instruction? If our compilation target is an 8086-based computer, it would make sense to use register DX as our data pointer, and to increment it with INC DX (0x42). We don't have to, though--and this is the important part--we could just as easily use a different register for the data pointer, or even store it in main memory; we could increment DX with ADD DX, 1 (0x83C201); we could increment DX by any other nonzero integer (and leave empty padding between cells, or simply use a different addressing mode for memory access); we could use an 8087 FPU instead of the onboard ALU to increment the data pointer; we could use a table in memory to fetch the new data pointer value (C equivalent: data_ptr = inc_data_ptr[data_ptr]); we could decrement the data pointer instead of incrementing, and vice versa for '<'; we could do any number of things!
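To make that concrete, here's a toy sketch in Python (made-up strategy names, not anyone's real compiler) showing two equally valid lowerings of the pointer commands:

```python
# Two hypothetical, equally valid lowerings of BF's pointer commands to 8086
# mnemonics -- nothing in the BF spec prefers one over the other.
LOWERINGS = {
    "inc_dec": {">": "inc dx",    "<": "dec dx"},
    "add_sub": {">": "add dx, 1", "<": "sub dx, 1"},
}

def lower_pointer_ops(source, strategy):
    """Translate just the '>' and '<' commands of a BF program."""
    table = LOWERINGS[strategy]
    return [table[op] for op in source if op in table]

print(lower_pointer_ops("+>><", "inc_dec"))  # ['inc dx', 'inc dx', 'dec dx']
print(lower_pointer_ops("+>><", "add_sub"))  # ['add dx, 1', 'add dx, 1', 'sub dx, 1']
```

Both outputs move the data pointer forward twice and back once; as far as BF is concerned, they're interchangeable.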
Things are made further complicated by... other architectures. BF is portable, after all! Maybe our target architecture doesn't have registers. Now it's even less obvious where we should put the data pointer! Or maybe we're targeting a transport-triggered architecture. Which execution unit(s) should we use to implement our data pointer? Do we have a hardware data pointer? Do we have registers? These questions don't have clear-cut answers!
Other questions we'll have to answer range from "how big is a cell, anyway?" (is it a byte? a halfword? a word? a doubleword? a float?) to "what kind of datatype are we even doing math on here?" (is it a binary 2s-complement integer? 1s-complement? tagged int? float? BCD? is our machine even binary-based? [maybe we're in the Soviet Union in the late 70s and trying to implement BF on a Setun computer with a balanced ternary number system!]) to "how should we handle overflows?" (ignore? crash? something else?) to "what does our runtime look like?" (e.g. how do we return control to the OS? do we have to relinquish it willingly? do we have to initialize memory ourselves? do we have to initialize registers ourselves?) and beyond.
In other words, while there may often be an idiomatic translation, there is never a "correct" translation. BF only specifies that there is a data pointer, and that '>' will make it point to the "next" cell.
Compare that, then, to a language I would consider "actually low-level": z80 assembly. If I give a z80 assembler the instruction INC HL, it will only ever emit the machine instruction "0x23", and the z80 processor, upon fetching and decoding this instruction, will only ever take the value in the HL register, increment it in the ALU, and put it back in the HL register.
Furthermore, the HL register is real--you can put a z80 CPU under a microscope and look at it, unlike x64 registers (which are merely abstractions for the actual details of the processor's register file--any given physical register could at any time be 'the DX register', for example) or the BF data pointer, which as stated above could be any number of things.
If I write a program in z80 assembly, I will know exactly how many clock cycles each step will take. If I write a program in BF, or x64, I will have no idea. It depends. It depends on so many things.
This, in my mind, is what makes BF (and x64 assembly) "high-level"--there's no control over what the execution stack is actually doing, only suggestions for the general shape. They're crude suggestions, don't get me wrong, but they're suggestions.
That's what drives me up the wall about x64. It used to be low-level, way back in the 80s, but over years and years of changes and refinements it just isn't anymore. Your PC's CPU isn't really an x64 processor--it's just pretending to be. It doesn't execute x64 machine code any more than it 'executes' JVM bytecode.
That's weird, right? I think it's weird. That's my real point.
Anyways, one final note on BF being easy to implement in hardware. This is true, but I don't think it's relevant. Any high level language could be run entirely in hardware given sufficient time, money, and mental illness* and likewise any low level language could be executed via an interpreter (or god forbid transpiler or decompiler).
What matters here is intent, though that's of course subjective. IMO, BF is intended to be portable, z80 assembly isn't. x64 assembly I would argue is also "portable", in a sense, between the many, many microarchitectures implementing it, all of which are very different internally. (Look up the VIA C3 and the Intel Itanium x86 emulation mode for a real trip)
In the end, none of this really matters. Like I said at the start, there's no objective list of "high-level" or "low-level" languages. Everything is nuanced, and mostly only worth discussing in relative terms. (e.g. "C is lower level than Java, Assembly is lower level than C, etc." instead of "Assembly is low level, Java is high level".)
At the end of the day, I'm just making an observation I thought was worth sharing. Hope I gave you something to think about! :)
*Look. Anyone willing to build a physical processor capable of natively running python is, like, definitionally abnormal. I say that with love. Don't build a python processor though.
x86_64 assembly is a kind of high-level language
186 notes · View notes
martinamwi61-blog · 5 years ago
Text
Rock Music? The Attempt Of An Explanation!
Where MAMM is already well known for its fun, electric guitar-driven rock programming reminiscent of its popular summer camp series, its annual Women Rock!

Sadly, and unlike many of the other entries on the list, time hasn't been kind to Yamaha's popular FM flagship. While it spawned plenty of spin-offs within the DX series (most visibly the popular TX81Z and Derrick May's beloved DX-100), interest in FM synthesis has dropped off as more and more fans have gravitated towards analogue sounds. Listen carefully and you may hear a revival in the making (Night Slugs are definitely fans), but FM synthesizers still don't command the kind of after-market prices that you see attached to their analogue brethren.

One thing I have to disagree with from the video: great music might take repeated listenings to appreciate. When I listen to a new album, there are some tracks that instantly stick out, that I instantly like, and others that simply sort of blend in. But as I continue to listen to an album, it is not simply that I'm becoming more familiar with the songs, it is that I'm learning them more deeply. Oftentimes, my long-standing favourite songs end up coming from those that didn't stand out to me the first time. It was only after repeated listenings that I grew to appreciate their depth.

The main conceptual strategies of musicmap for achieving a satisfactory equilibrium include grouping closely related genres together ("sibling genres"), colour-coding much larger genre groups ("super-genres"), and introducing a deeper layer of less influential subgenres. This reduces the total number of intermediate or primary genres to 234, deliberately far below the possible total number of genres (roughly 600 or more, some sources claim over a thousand) to allow easy orientation and a good overview.

And yes, it's liked by a huge number of people. That may not include you, but many others do. So yes, kpop is popular. It does not need a flippin ranking of numbers; popularity is not based off that. You don't have to understand kpop or like it, but at least show some respect to the artists, even if not the fans. We do not need opinions that are rude. If you want to say something, please look into the definition of popularity rather than rankings. They do not need to be better than Drake or Ed Sheeran to be popular. Also, this article is somewhat putting kpop down. Don't do that. We don't put down other artists.

BMI was primed to take full advantage of pop music's growth in the post-war years. For example, "Tennessee Waltz," composed by Redd Stewart and Pee Wee King in 1947, became one of the biggest pop hits of 1950 when Patti Page recorded the song. Tony Bennett enjoyed one of his first hits in 1951 with a cover of Hank Williams' "Cold, Cold Heart." A parallel phenomenon was the mainstream success achieved by white pop performers with covers of rhythm & blues songs. Little Richard co-wrote and originally recorded "Tutti Frutti," but Pat Boone had an even bigger pop hit with his version of the song. Among other key composers of the period were Chuck Berry ("Roll Over Beethoven," "Rock & Roll Music"), Otis Blackwell ("All Shook Up," "Fever," "Don't Be Cruel"), and Felice & Boudleaux Bryant (who wrote "Bye Bye Love" and "All I Have To Do Is Dream" for the Everly Brothers).

What are the greatest pop music bands? That's up to you to decide on this list of pop music artists. However, when determining who should be considered the best pop music artists, you should not be voting based on how widespread the bands are; this should only consider the quality of their songs. This list answers the questions "who are the best pop music bands of all time?" and "who is the greatest pop musician ever?" If you notice a pop group or pop artist missing, feel free to add them to this poll. It's up to you to decide who's the best pop musician of all time, so choose wisely.

Now here is the complement to the ongoing suggestion: find something to do that's different from the competition in your mix. It may already be nestled in the tracks given to you: an element played slightly more ahead of the beat than the others, or a synth sound not quite consistent with the current aesthetic. Or you might need to fashion such a distinction yourself, based on your own inspiration and inclination. But the importance of this can't be overstated: while pop music can sound fairly homogeneous, it's usually the tunes with one standout element that cut through.

Yeah, but as far as Ableton is concerned, I decided to move myself completely out of it. I simply do not have time to do everything I want to do. And making art is more important for me than making commercial tools. That is the main reason. The other reason is that to run such a big ship as Ableton, you have to make lots of decisions which are business decisions. I found it harder and harder to locate myself in such an environment because I have different ideas, you know. And I can have different ideas, because I don't need to cater to DJs, songwriters or whatever. I cater to the people who like to listen to my music, and that is it.

The more immediate implication of Jehan's work has not been the human-to-robot gift of art, but reducing a track to a small set of data points that say something about the song in general, such as "valence" (roughly the happy-to-sad spectrum) and "energy" (arousing to soporific). Shortly after finishing his dissertation, Jehan co-founded a company called EchoNest, and the data became a pillar of Spotify's recommendation systems, determining music similarity and precisely suggesting songs that sound alike.

Rytis Mažulis' (b. 1961) work is marked by a particular stylistic purity, integrity and symmetry of musical texture based on contrapuntal techniques (largely canons). The structural isomerism and homogeneity of his music is determined by the composer's attempts to find the mathematical and physical relations between time, space and sound. In line with this overall minimalist idea, the composer chooses instrumentations consisting of identical instruments or voices, and the range of expression in his work spreads from ethereal vocal compositions to monstrous hyper-canons for computer piano. Like Kabelis, Mažulis is also interested in the subtlest micro-interval divisions of pitch and the simultaneous pulsation of mathematically calculated different tempos. Somewhat more accessible to unprepared listeners, the music of Mažulis seamlessly absorbs them into its long cyclical motion, nearly unchanging in time and space.
1 note · View note
Text
The typing process
Here’s a very bare-bones overview of how I typed each of my family members, which should give an idea of how I go about this when typing actual humans. For characters I think I rely on functions more, especially for characters in omniscient narration, where we see how they think.
ISFP sister
-almost certainly a feeler based on level of emotion and how she made arguments
-fairly sure she's a sensor: athletically and artistically gifted, tends to notice things quickly and be aware of aesthetics.
-fit a lot of stereotypes for both ISFJ and ISFP (both have a kind of ‘sweet, likes flowers and tea’ kind of vibe, just more farmer’s daughter vs. hippie vibe in the stereotypes for the two).
-fairly outgoing but probably on the introvert side of things.
-not super organized and kind of messy and cluttered but I hesitate to use that for P/J
-always kind of knew what she wanted to do, but also picked up new interests as they came up and pursued them: this plus how she responded to sensory stuff is what helped me figure out she seemed to use Se/Ni. This is also what cemented her as an ISFP and not ESFP, because her Ni is definitely better than her Te.
-double checked by confirming her feeling was Fi: very much about individuality and doing her thing, didn’t always do a good job of anticipating others’ needs.
ISTP sister
-definitely a thinker, very analytical, extremely unemotional
-I did pretty much immediately jump to Ti-dom for her because she was so clearly not a Te user and because she was obviously an introvert and she fit the Ti-dom/Fe-inf model so well.
-Could not tell Ne vs Se very well initially because she plays a lot of videogames and doesn’t like outside much
-however, she shares with my ISFP sister that kind of Se/Ni weak long-term vision that picks up side things as they come up and actually pursues them
-upon talking to her about her classes specifically to try and figure out Ne/Se, confirmed she strongly preferred the hands-on to the theoretical
-also somewhat naturally athletic (not all Se users are great at sports because there’s also a lot of rules involved, but they are good at reacting to cues in their environment).
ENFP brother
-initially mistyped as INFP
-kind of typed based on the holistic description - emotional, involved in activist causes, tends to read into things way more than they actually are, constantly speculating but often not actually doing
-textbook idealist (both in life and the NF subtype)
-space case, though this was later complicated by an ADHD dx
-once the ADHD and other stuff was figured out, and because he's right at prime tert-development age, it became clear he was capable of Te way more than Si
-also he flat-out said “I think I might be extroverted” to me at some point which I don’t think an INFP would
ISTP dad
-probably easiest to type of my family
-very obviously a Ti user: highly analytical for its own sake, tends to leave all planning to my mother
-does that thing where he's like "why isn't everyone ready to go" and then when you say "we are" he's gone off and started something, which seems unique to Se (vs. Si or Ni, who get ready and then stay ready, and Ne, which doesn't notice that no one else is ready to go or even that we are going places)
-also very obviously a sensor: has owned a muscle car as a grown-ass adult man, is one of those people who runs without music because of some kind of mind-body in tune with the fabric of reality nonsense, builds things constantly
-Ti seemed stronger than Se and Ni seemed stronger than Fe hence ISTP.
ISFJ mom
-hardest to type though some of that was me being like “but ISFJ mom is such a stereotype”
-definitely got IxxJ vibes though which is weird because usually J/P isn’t reliable as I’ve said
-introvert, bookworm, organized
-never considered INTJ and only briefly considered ISTJ; it became more and more obvious that she was an Fe user in how she expected people to behave, and that being efficient was more a learned skill than a natural way of thinking (sidebar: now that I'm better at typing I can tell a TJ from an FJ who went to business school, but it can be hard)
-is very clumsy which made me think Ni-dom initially but I think it’s just how she is, it’s not that she’s out of touch with what’s going on in her environment, she just has zero coordination.
-I do still wonder about Ni sometimes to be honest, because she does do a lot of conceptual thinking, but it’s more based in reality and then expands into the possibilities in an Si with fairly well developed low Ne way which makes sense because my parents are old enough to have decent inf function development.
17 notes · View notes
jtenney4-blog · 8 years ago
Text
Process Models for Data Mining
Part I: Data Mining Processes
What are the major data mining processes?
Data mining is the step in the knowledge discovery process in which data patterns are extracted using intelligent methods. Data is mined to assess frequent patterns, correlations, and associations. Frequent item-set and market basket analysis is a common data mining method that allows companies to assess which items their customers likely will purchase together. This knowledge not only helps with forecasting but also provides marketing insight surrounding consumer behavior, promotional strategies, and item placement. One of the main portions of data mining is presenting the data in intelligent, visual, and possibly interactive formats that allow for easy recognition and absorption of key information.
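For illustration, here is a minimal Python sketch of frequent item-pair counting over some invented baskets (the data and threshold are purely hypothetical, not taken from the course material):

```python
from collections import Counter
from itertools import combinations

# Hypothetical transaction data: each inner list is one shopping basket.
baskets = [
    ["bread", "milk", "eggs"],
    ["bread", "milk"],
    ["milk", "eggs", "cereal"],
    ["bread", "milk", "cereal"],
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(set(basket)), 2):
        pair_counts[pair] += 1

# Support = fraction of baskets containing the pair.
for pair, count in pair_counts.most_common(3):
    print(pair, count / len(baskets))
```

Pairs whose support exceeds a chosen threshold become candidates for association rules such as "customers who buy bread also buy milk."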
Before data can be mined it must be cleaned up. Data in the real world is often dirty and inconsistent; life is too messy to fit neatly into a spreadsheet all at once. Multiple sources of data must often be integrated to have all of the information needed. However, once all of these data sources are pulled together, you do not really need all of it, just portions from each source. The data that is needed for the respective task is then selected. This sample of data is then transformed and consolidated so it is ready to be mined. The data is now ready for discovery. The business goals and objectives, as well as the data, its sources, and its purposes, must be fully understood so the mining phases are clearly defined and measurable. Mining models that classify and predict using algorithms are then built to meet the business needs through the data set. These models are then evaluated, tested, and deployed. New data is always coming into the picture and new patterns are always emerging, so this is a very cyclical process.
Why do you think the early phases (understanding of the business and understanding of the data) take the longest in data mining projects?
As Mike Gualtieri (2013) of Forrester states, coming into an organization and trying to understand the hundreds, thousands, and even millions of tables, objects, sources, and servers could take months. It is imperative that one has a firm grasp on all of these items, how they flow in and out, and how they interact. Bringing in a data scientist to run algorithms for predictive and prescriptive statistics should be done by pairing them with an internal individual who has a firm grasp of the business and the data. The person who understands all of the data at an enterprise level, and its interactions down to the end user, generally does not have the skill set to run all of the advanced analysis, and if they do, they do not have the time. Both tasks are so time consuming and ever changing that there should be an SME at both ends.
If the business is not understood then the data has no meaning; without an understanding of the data there is no means to address business concerns, goals, objectives, successes or failures. To properly assess the full scope of an organization, models must be built, evaluated, and deployed at a level that fully accounts for both the business and the data. It would be horrendous to build a plethora of models, then test and deploy them, only to realize two data sources were left out that completely change the whole picture and skew all the other data. Further, it would be devastating to build models that answer questions that seem apparent but are not really what the key decision makers need out of the data. At that point, it doesn't matter how great your models are if the questions that need answering aren't being answered, or if patterns are being ignored because they do not seem relevant to unknown business goals.
List and briefly define the phases in the CRISP-DM process.
The first step in the CRISP data mining process is grasping a firm understanding of the business, its goals, and its performance measures. Prior strategies, current resources, and previous failed objectives must all be assessed to gain a holistic understanding of the organization.
It is also important to understand the current state of data management, as well as where key players would ideally like their data to be. The existing data and its relationships must be collected and assessed. This can further business understanding, which in turn can surface new data and business relationships in a cyclic manner.
Although business understanding may be the most critical portion of the process, data preparation tends to be the most time consuming. Data has to be cleaned, multiple sources of data must be integrated, the data size should be reduced to only cover the scope of the project, finally the data must be transformed for intelligible access.
Models are strategically devised to achieve the project objectives. These models are algorithms created to classify, predict, and perform various other analytic functions. The models are then reviewed, the best model is determined, and a strategy for testing is put into place. Deployment ensues, and a continuous cycle of new data is always being incorporated (Hunter, 2009).
What are the main data preprocessing steps? Briefly describe each step and provide relevant examples.
Data has to first be cleaned up. Missing values are filled in. Binning, regression, and outlier analysis are then completed to clean up any 'noise' (random error or variance) in the data. Integration of data begins with matching objects and schemas, eliminating redundancies, finding correlations, and resolving any data value conflicts. Generally, entire data sets are too cumbersome to analyze, so a sample of data is taken from a reduced data set. This reduction consists of compression, identifying the principal components of the data, determining appropriate attribute subsets, and clustering. Data is then transformed: new attributes are created, and data is normalized, aggregated, labeled, and formed into a conceptual hierarchy.
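A small pandas sketch of these preprocessing steps might look like the following (the table, column names, and thresholds are invented for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical customer table with typical real-world problems.
df = pd.DataFrame({
    "age":    [23, np.nan, 45, 45, 130],           # a missing value and an outlier
    "income": [40000, 52000, 61000, 61000, 58000],
    "region": ["north", "North", "south", "south", "SOUTH"],
})

# Cleaning: fill missing values, cap an obvious outlier, resolve value conflicts.
df["age"] = df["age"].fillna(df["age"].median()).clip(upper=100)
df["region"] = df["region"].str.lower()
df = df.drop_duplicates()                           # redundancy elimination

# Reduction/transformation: bin one attribute, normalize another.
df["age_band"] = pd.cut(df["age"], bins=[0, 30, 60, 100],
                        labels=["young", "middle", "senior"])
df["income_norm"] = (df["income"] - df["income"].min()) / (
    df["income"].max() - df["income"].min())

print(df)
```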
How does CRISP-DM differ from SEMMA?
SEMMA and CRISP-DM are superficially parallel: Sample corresponds to Selection, Explore to Pre-Processing, Modify to Transformation, Model to Data Mining, and Assess to Interpretation/Evaluation. In my understanding, SEMMA (Sample, Explore, Modify, Model, Assess) is the process SAS implemented to carry out knowledge discovery. CRISP-DM has the same intent; it has just been refined and is now the industry standard. This most likely lies in the fact that CRISP-DM begins with business and data understanding before considering data preparation, whereas SEMMA begins at the sample, so quite a bit of work must be standardized before a reliable sample of the data can even be obtained (Azevedo & Santos, 2008).
Part II: Data Mining Methods & Techniques
Identify at least three of the main data mining methods.
The Apriori algorithm is used to mine frequent itemsets for Boolean association rules. Hashing, transaction reduction, data partitioning, and sampling are used to reduce the data for use in this method.
The Frequent Pattern Growth Approach also mines frequent itemsets but it constructs an extremely compressed transaction database and focuses on fragment growth.
Vertical Data Format (Eclat) transforms the usual horizontal transaction format into a vertical one, mapping each item to the set of transaction IDs that contain it. The Apriori principle is then applied, finding frequent itemsets by intersecting these transaction-ID sets.
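A toy Python sketch of that vertical representation (invented transactions, not from the textbook) could look like this:

```python
# Horizontal format: transaction id -> items bought together.
transactions = {
    1: {"bread", "milk"},
    2: {"bread", "eggs"},
    3: {"milk", "eggs", "bread"},
    4: {"milk", "cereal"},
}

# Vertical format: item -> set of transaction ids containing it.
vertical = {}
for tid, items in transactions.items():
    for item in items:
        vertical.setdefault(item, set()).add(tid)

# The support of {bread, milk} is the size of the intersection of their tid-sets.
support = len(vertical["bread"] & vertical["milk"]) / len(transactions)
print(support)  # 0.5
```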
Give examples of situations in which classification would be an appropriate data mining technique.
Credit Approval
Fraud detection
Performance prediction
Medical Dx
Target Marketing
Manufacturing
Criminal profiling
The Federal Bureau of Prisons has scoring guidelines based upon standardized testing that classifies individuals to determine the probability of them becoming a re-offender.
Give examples of situations in which regression would be an appropriate data mining technique.
Regression is used to help fill in missing values and reduce data sets during data cleansing (Berson, 2012). Regression can also be used to answer numerous predictive questions, as well as for monitoring and forecasting (Major, 2013); see the sketch after the examples below.
How much will a given customer spend during a sale
What is happening now
What might happen
How much will our sales be this summer?
How much inventory should we store?
What will sales finish at this year?
What stock level will be left at the end of the month?
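Here is a minimal sketch (with invented monthly figures) of how a simple least-squares regression can both forecast a future value and impute a missing one:

```python
import numpy as np

# Hypothetical monthly sales history for January through September.
months = np.arange(1, 10)
sales = np.array([12, 13, 15, 14, 17, 18, 20, 21, 22], dtype=float)

# Fit a linear trend and extrapolate to December ("what will sales finish at this year?").
slope, intercept = np.polyfit(months, sales, 1)
print(f"Projected December sales: {slope * 12 + intercept:.1f}")

# The same fit can impute a value lost during data cleansing, e.g. month 6.
print(f"Imputed month-6 sales: {slope * 6 + intercept:.1f}")
```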
List and briefly define at least two classification techniques.
Decision Tree Classifiers are flow-chart-like structures in which internal nodes test attributes, branches represent the outcomes of those tests, and leaf nodes hold class labels. This method is very popular as it requires no domain knowledge or parameter setting, it can accommodate multi-dimensional data, it is very intuitive, and it carries high accuracy rates.
Bayesian Classifiers use statistics to predict class membership probabilities. Naïve Bayesian classifiers are high in accuracy and speed and comparable to decision trees.
Generalized Linear Model (GLM) is another popular statistical technique that Oracle uses for binary classification and regression. This model supports coefficient statistics and confidence boundaries. Oracle further utilizes the Support Vector Machine (SVM) algorithm for binary and multiclass classification (Oracle, 2017).
Rule-based classifiers generally focus on conflict resolution for IF-THEN rules. This is often done through size ordering, rule-based ordering, default rules, class-based ordering, or a decision list.
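As a quick illustration (using scikit-learn's bundled iris data rather than any of the scenarios above), a decision tree and a naïve Bayesian classifier can be trained and compared in a few lines:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Hold out 30% of the data to evaluate each classifier.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for model in (DecisionTreeClassifier(max_depth=3), GaussianNB()):
    model.fit(X_train, y_train)
    print(type(model).__name__, model.score(X_test, y_test))
```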
References
Azevedo, A., & Santos, M. F. (2008). KDD, SEMMA and CRISP-DM: A Parallel Overview. IADIS European Conference Data Mining 2008. ISBN: 978-972-8924-63-8.
Berson, A., Smith, S., & Thearling, K. (2012). An Overview of Data Mining Techniques. Retrieved from http://www.thearling.com/text/dmtechniques/dmtechniques.htm
Gualtieri, M. (2013). What is a Data Scientist? YouTube. Retrieved from https://www.youtube.com/watch?v=iQBat7e0MQs&feature=youtu.be
Han, J., Kamber, M., & Pei, J. (2011). Data Mining: Concepts and Techniques (3rd ed.). Morgan Kaufmann, Waltham, MA. ISBN: 978-0-12-381479-1.
Hunter, J. (2009). Data Mining Process using CRISP – Session 2. YouTube. Retrieved from https://www.youtube.com/watch?v=dJcmOe3_P0E&list=PLB804A810436AFB03&feature=share&index=1
Major, R. (2013). Advanced Analytics and Business Intelligence. YouTube. Retrieved from https://www.youtube.com/watch?v=oNNk9-tmsZY&feature=youtu.be
Oracle. (2017). Classification, Data Mining Concepts. Retrieved from http://docs.oracle.com/cd/B28359_01/datamine.111/b28129/classify.htm#DMCON004
0 notes