#3498db
Explore tagged Tumblr posts
Text
Musk goes all in for America: a fifty-million-dollar prize fund, $47 a head!
News summary: On October 1, 2024, Elon Musk and the American political action committee (PAC) he founded launched a campaign called "Sign the Petition, Defend the Amendments." The campaign encourages voters to sign a petition supporting the First and Second Amendments to the U.S. Constitution. It runs until October 21, with the goal of collecting one million signatures across six swing states. Participants receive a $47 reward for each voter they successfully refer to sign the petition. Background: The backdrop to this campaign is the growing polarization of American politics and declining voter participation. Musk, a Silicon Valley heavyweight, has chosen monetary incentives as a way to get voters to pay attention to constitutional rights and take part in political activity. This is not only a show of support for free speech and gun rights, but also an unprecedented voter-mobilization effort. In the past several elections, voter intent in swing states has often decided the outcome. Through this program, Musk hopes to motivate voters to express their positions and engage in the political process. Analysis of main impact: Musk's incentive scheme has drawn broad attention across society. The $47 reward is not just a number; it carries deep political meaning. First, it…
#000000 #0000ff #2c3e50 #3498db #$47 reward #900000 #US politics #e8f0fe #voter participation #monetary incentives #free speech #Musk #fff3f0 #PAC #civic responsibility #public opinion #constitutional amendments #gun rights #swing states #political mobilization #political activism #democratic participation
0 notes
Text
O-Metric: Simplifying ARC Complexity with Grid Analysis
The Abstraction and Reasoning Corpus (ARC) challenges AI to exhibit abstract reasoning and adaptability, testing pattern recognition, transformations, and generalization. The o-metric estimates task complexity from grid size, offering a simple signal for choosing a solving strategy.
1. Introduction: The ARC Challenge The Abstraction and Reasoning Corpus (ARC), developed by François Chollet, represents a paradigm shift in artificial intelligence testing. Unlike traditional benchmarks that focus on specific tasks or domains, ARC challenges AI systems to demonstrate genuine abstract reasoning and adaptability—core components of human-like intelligence. ARC presents a series…
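The post is truncated before the o-metric's formula appears, so the sketch below is purely illustrative: it assumes the simplest possible grid-size-based complexity score (total cell count), which is an assumption of this example, not the author's actual definition.

```python
def o_metric(grid):
    """Hypothetical grid-size complexity score for an ARC task grid.

    The real o-metric's formula is not given in the truncated post; here we
    simply count cells, so a larger grid scores as more complex.
    """
    rows = len(grid)
    cols = len(grid[0]) if rows else 0
    return rows * cols

# A 3x4 ARC-style grid of color indices
grid = [[0, 1, 2, 0], [1, 1, 0, 2], [2, 0, 1, 1]]
print(o_metric(grid))  # 12
```

Any monotone function of grid dimensions would serve the same role of ranking tasks by apparent size before choosing a strategy.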
#2980b9#333#3498db#analysis#arc#challenge#ddd#e0f7fa#explanation#f0f0f0#f9f9f9#ffeeba#fff3cd#intelligence#message#note#o#o-metric#ometric3#ometric4#reveal
0 notes
Note
Do you have different favorite colors?
We all agree on a True Favorite (purple), but after that our color preferences differ to various degrees. The four of us generally agree overall on what colors we like more than others, but we can disagree on the specifics.
What you might find interesting is that we also each have a specific color associated with us: Mikaela is red, Lily is blue, Violet is purple, and Ciri is green. These definitely influence our color preferences too (or the other way around? Who knows). For example we tend to prefer darker blues, but Lily (whose color is #3498db) likes lighter shades of blue significantly more than the rest of us~
Thanks for the question!
#plurality#dissociative plurality#asks#Qyriad#hopefully that answer makes sense?#if it doesn't or you're still curious feel free to send more asks
2 notes
Text
Transforming Appearance and Drawing Attention with Bootstrap Classes and Styles
Introduction: Bootstrap is a web design and development framework created at Twitter. By providing ready-made classes and styles, it helps web designers and developers quickly and easily give websites and web applications an attractive, effective appearance. In this post, we look at reshaping elements and drawing attention using Bootstrap classes and styles.
Using color classes:
One of the main ways to draw attention is to use appealing, harmonious colors. Bootstrap lets you change the color of elements by applying the appropriate classes.
<button class="btn btn-primary">Primary button</button> <button class="btn btn-danger">Danger button</button>
Using icon classes:
Icons play an important role in attracting attention. Bootstrap pairs well with the Font Awesome icon library.
<i class="fas fa-star"></i> <i class="fas fa-heart"></i> <i class="fas fa-thumbs-up"></i>
Using typography classes:
Changes in font family and size can have a noticeable effect on readability and appearance.
<h1 class="display-1">Large heading</h1> <p class="lead">Introductory text in the lead style</p>
Using layout classes:
Layout classes help change the position or size of elements.
<div class="container"> <div class="row"> <div class="col-md-6">First half</div> <div class="col-md-6">Second half</div> </div> </div>
Using animation classes:
Animations add appeal and life to your website. Note that the animated and bounce classes shown here come from the Animate.css library, which is commonly used alongside Bootstrap rather than being part of Bootstrap itself.
<img src="path/to/image.jpg" class="animated bounce">
Using custom inline styles:
If you need further customization, you can use inline styles inside the tags themselves.
<button style="background-color: #3498db; color: #fff;">Customized button</button>
Using text utility classes:
Bootstrap's text- classes let you change how text is rendered.
<p class="text-muted">De-emphasized (muted) text</p> <p class="text-uppercase">Uppercased text</p>
Using badge classes:
Badge classes help you turn elements into small status labels.
<span class="badge badge-success">Success</span> <span class="badge badge-warning">Warning</span>
Conclusion:
With Bootstrap's classes and styles, you can quickly and elegantly reshape elements and draw attention on your website or web application. These tools spare you from building everything from scratch and let you achieve the greatest effect on your site's appearance with minimal time and effort.
0 notes
Text
Changing SVG Colors with CSS: Vibrant Graphics
Introduction
Welcome to our exploration of the fascinating world of changing SVG colors with CSS. Scalable Vector Graphics (SVG) have become a cornerstone in modern web design, providing a versatile platform for creating vibrant and scalable graphics. In this blog post, we will delve into the techniques and methods that CSS offers to manipulate and enhance the colors of SVG elements, adding a new dimension to your web design toolbox. Whether you're a seasoned developer or just starting with web design, this guide will help you unlock the potential of CSS for creating visually stunning and dynamic SVG graphics.
Understanding SVG Colors
Scalable Vector Graphics (SVG) revolutionized web graphics by providing a format that is both scalable and resolution-independent. Central to the visual appeal of SVGs is their ability to showcase a wide range of colors. Let's delve into the intricacies of SVG colors and how CSS can be harnessed to manipulate them effectively.
1. Hexadecimal Color Codes: SVG supports hexadecimal color codes to define colors. These codes consist of six characters drawn from the digits 0-9 and letters A-F, providing a vast spectrum of color possibilities. For example, #FF5733 represents a vibrant shade of orange.
2. Named Colors: In addition to hexadecimal codes, SVG allows the use of named colors for simplicity and ease of use. Common names like blue, red, and green can be applied directly to SVG elements.
3. RGBA Color Model: SVG supports the RGBA color model, which stands for Red, Green, Blue, and Alpha. The alpha channel determines the transparency of the color, allowing for semi-transparent or fully opaque colors. For example, rgba(255, 0, 0, 0.5) represents a semi-transparent red.
4. Applying Gradients: One powerful way to enhance SVG graphics is by using gradients. Gradients allow for smooth color transitions within an element. CSS provides a straightforward syntax for defining linear or radial gradients, enabling visually appealing color blends.
5. Color Opacity: CSS enables the manipulation of color opacity through the opacity property. Applied to SVG elements, it controls their transparency and offers flexibility in achieving the desired visual effects.
Understanding the nuances of SVG colors lays the foundation for creating eye-catching and dynamic graphics. By leveraging these color options and employing CSS techniques, you can breathe life into your SVG elements and elevate the overall visual experience of your web content.
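To make the relationship between hex codes and the RGBA model concrete, here is a small helper (a hypothetical utility written for this post, not part of any SVG or CSS API) that converts a six-digit hex code like #FF5733 into the equivalent rgba() string:

```python
def hex_to_rgba(hex_code, alpha=1.0):
    """Convert a 6-digit hex color like '#FF5733' to an rgba() CSS string."""
    h = hex_code.lstrip('#')
    # each pair of hex digits is one 0-255 channel: red, green, blue
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return f"rgba({r}, {g}, {b}, {alpha})"

print(hex_to_rgba('#FF5733', 0.5))  # rgba(255, 87, 51, 0.5)
```

The same three channels underlie both notations; rgba() simply adds the alpha component on top.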
Inline Styles for SVG
See the Pen SVG change Color by Toshitaka Niki (@toshitaka) on CodePen.
Inline styles play a crucial role in defining the presentation of SVG elements directly within the markup. This method provides a quick and efficient way to apply styles without the need for external CSS files. Let's explore how inline styles can be employed to manipulate the colors of SVG graphics.
1. Style Attribute: SVG elements support the style attribute, where inline styles are declared. The style attribute can include various CSS properties, including color-related ones such as fill and stroke.
2. Fill Property: The fill property defines the interior color of SVG elements, such as shapes and paths. By specifying a color value within the style attribute, you can instantly change the fill color of an element. For example, style="fill: #3498db;" sets the fill color to a shade of blue.
3. Stroke Property: For elements with an outline or border, the stroke property controls the color of the outline. Similar to fill, it can be applied directly within the style attribute. For instance, style="stroke: #e74c3c;" sets the outline color to a vibrant red.
4. Inline Styles with Gradients: Inline styles can also apply gradients directly to SVG elements. By combining the fill property with gradient definitions, you can achieve complex and visually appealing color transitions within the SVG graphic.
5. Pros and Cons: While inline styles offer simplicity and quick application, it's essential to consider their impact on maintainability, especially in larger projects. External CSS files may provide a more organized and scalable approach, separating style from structure.
In conclusion, leveraging inline styles for SVG elements allows for immediate and targeted color changes directly within the markup. Whether you're fine-tuning individual elements or prototyping a design, understanding how to apply inline styles effectively is a valuable skill in creating vibrant and dynamic SVG graphics.
CSS Classes and SVG
Utilizing CSS classes is a powerful and organized way to apply styles consistently across SVG elements. This approach promotes maintainability, reusability, and a cleaner separation of concerns. Let's explore how CSS classes can be effectively employed to style SVG graphics.
1. Class Attribute: SVG elements support the class attribute, allowing you to assign one or more classes to an element. By defining styles in CSS for these classes, you can ensure a uniform look and feel for multiple SVG elements.
2. Centralized Styling: Creating a CSS class for SVG elements centralizes styling information. Changes made to the class definition automatically reflect across all elements with that class, streamlining maintenance.
3. Reusability: CSS classes promote the reuse of styles. Once a class is defined, it can be applied to multiple SVG elements throughout your document or across various pages, ensuring a consistent design language.
4. Specificity and Inheritance: CSS classes allow you to control the specificity of styles, determining which styles take precedence. Inheritance principles also apply, enabling child elements to inherit styles from their parents, providing a hierarchical and organized structure.
5. Class Naming Conventions: Adopting meaningful and consistent naming conventions for CSS classes enhances code readability and maintenance. Consider names that reflect the purpose or visual characteristics of the styles, making the code easier for yourself and other developers to understand.
6. Applying Multiple Classes: SVG elements can have multiple classes, enabling the combination of different styles. This flexibility allows for intricate and varied designs while maintaining a modular and scalable structure.
7. Pros and Cons: While CSS classes offer many advantages, it's crucial to strike a balance; overuse of classes can lead to unnecessary complexity. Evaluate the scope and scale of your project to determine the most efficient way to manage styles.
In summary, CSS classes provide a systematic and efficient approach to styling SVG graphics. By incorporating classes into your SVG design workflow, you can achieve a harmonized visual identity, improve maintainability, and streamline the styling process across your web projects.
Gradients in SVG
Gradients are a powerful tool in the world of Scalable Vector Graphics (SVG), enabling the creation of smooth color transitions and adding depth to your visuals. CSS provides a straightforward way to implement gradients in SVG, enhancing the overall aesthetic appeal of your graphics.
1. Linear Gradients: Linear gradients create a gradual transition of color along a straight line. In SVG, you can specify the starting and ending points of the gradient, as well as the colors and stops along the way. This technique is particularly useful for horizontal, vertical, or diagonal color blends within SVG elements.
2. Radial Gradients: Radial gradients radiate outward from a central point, allowing for circular or elliptical color transitions. By defining the gradient's center, focal point, and radius, you can achieve visually interesting effects. Radial gradients are excellent for creating highlights and shadows in SVG graphics.
3. Gradient Stops: Gradients consist of color stops, indicating where the color transition occurs. Each stop specifies a color and a position along the gradient line. This level of control allows precise manipulation of how colors blend within the SVG element.
4. Adding Transparency: Gradients in SVG can include transparent colors, adding an extra layer of complexity to your graphics. By adjusting the alpha channel (opacity) of gradient stops, you can create subtle fades or entirely transparent sections, offering versatility in design.
5. Multiple Color Stops: SVG gradients support multiple color stops, allowing you to create intricate, multi-colored transitions. This feature is particularly beneficial when designing backgrounds, patterns, or complex graphics with varying shades and hues.
6. Applying Gradients to Elements: Gradients can be applied to various SVG elements, such as shapes, paths, and text. By utilizing CSS properties like fill and stroke, you can seamlessly integrate gradients into your SVG graphics, enhancing their visual impact.

Color  Position
Blue   0%
Red    100%

Implementing gradients in SVG not only adds visual appeal but also provides a powerful tool for expressing creativity in web design. Experiment with different gradient types, color combinations, and transparency levels to discover the full potential of gradients in your SVG graphics.
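As a numeric sketch of what gradient stops do, the helper below computes the color a linear gradient would show partway between two stops, e.g. a blue 0% stop and a red 100% stop. It uses simple per-channel interpolation of the hex values; real browsers blend in sRGB space and may differ slightly, so treat this as illustrative only.

```python
def lerp_color(c1, c2, t):
    """Linearly interpolate between two RGB hex colors at position t in [0, 1],
    mimicking how a linear gradient blends between two color stops."""
    a = [int(c1.lstrip('#')[i:i + 2], 16) for i in (0, 2, 4)]
    b = [int(c2.lstrip('#')[i:i + 2], 16) for i in (0, 2, 4)]
    # blend each of the three channels independently
    mixed = [round(x + (y - x) * t) for x, y in zip(a, b)]
    return '#' + ''.join(f'{v:02x}' for v in mixed)

# halfway between a blue 0% stop and a red 100% stop
print(lerp_color('#0000ff', '#ff0000', 0.5))  # #800080
```

Sampling t at many points between 0 and 1 reproduces the smooth blue-to-red ramp the gradient renders.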
Filter Effects for SVG
Filter effects in SVG, powered by CSS, open up a world of possibilities for enhancing and manipulating the appearance of SVG graphics. These effects allow you to apply transformations, blurs, and color adjustments, adding depth and creativity to your visual content.
1. The filter Property: The filter property is the gateway to applying filter effects to SVG elements. By assigning a filter value to this property, you can specify the type and parameters of the desired effect. For example, filter: url(#blur) applies a blur effect defined in an SVG filter element.
2. Common Filter Effects: SVG supports a variety of filter effects, including blur, grayscale, sepia, brightness, and contrast. These effects can be combined and adjusted to achieve unique and visually striking results.
3. Combining Filters: Filters can be combined by chaining multiple filter functions together. This allows for complex visual effects, such as a combination of blur, saturation, and brightness adjustments to achieve a specific look and feel.
4. SVG Filter Elements: Filters are defined using SVG filter elements within the defs section of your SVG document. These filter elements encapsulate the parameters and settings for specific filter effects. Referencing them through the filter property brings them to life in your SVG graphics.
5. Dynamic Animation: Filter effects can be animated using CSS animations, adding dynamic visual changes to your SVG elements. This opens up possibilities for interactive and engaging user experiences with responsive filter transitions.

Filter Type  Parameters
Blur         Standard Deviation: 5
Sepia        Intensity: 70%

6. Performance Considerations: While filter effects bring creativity to SVG graphics, it's essential to consider their impact on performance, especially on lower-powered devices. Optimize and test your filters to ensure a smooth user experience.
Filter effects in SVG provide a versatile toolkit for designers and developers to elevate the visual appeal of graphics. Experimenting with different filter combinations and animations allows you to unleash your creativity and craft visually stunning SVG elements.
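To show how a filter defined inside defs is wired to an element via filter: url(#...), here is a small Python generator for a complete SVG document. The element names (filter, feGaussianBlur, defs) are standard SVG, but the id "blur", the sizes, and the fill color are made up for this sketch.

```python
def svg_with_blur(radius=5):
    """Build a minimal SVG string in which a circle references a Gaussian
    blur filter declared in <defs>. Illustrative values only."""
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" width="120" height="120">'
        '<defs><filter id="blur">'
        f'<feGaussianBlur stdDeviation="{radius}"/>'  # the blur's strength
        '</filter></defs>'
        # the filter attribute points back at the filter element by id
        '<circle cx="60" cy="60" r="40" fill="#3498db" filter="url(#blur)"/>'
        '</svg>'
    )

print(svg_with_blur())
```

Saving the output to a .svg file and opening it in a browser renders a blurred blue circle; changing the radius argument changes the blur's standard deviation.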
Responsive Color Changes
Creating a visually appealing and responsive user interface involves adapting to various screen sizes and devices. Responsive color changes in SVG graphics play a vital role in optimizing the user experience. Let's explore how CSS techniques can be applied to achieve responsive color adjustments in SVG elements.
1. Media Queries: Media queries are a cornerstone of responsive design, allowing you to apply specific styles based on the characteristics of the user's device. By combining media queries with CSS for SVG elements, you can define different color schemes tailored to various screen sizes, resolutions, or orientations.
2. Viewport Units: Utilizing viewport units, such as vw (viewport width) and vh (viewport height), enables relative sizing based on the dimensions of the user's viewport. Applying these units to SVG color properties ensures that color changes respond proportionally to the screen size.
3. Fluid Color Transitions: Implementing fluid color transitions in SVG graphics enhances the responsiveness of your design. By utilizing CSS transitions or animations, you can smoothly change colors based on user interactions, viewport adjustments, or device orientation changes.
4. Color Contrast for Accessibility: Consideration for accessibility is crucial in responsive design. Ensure that color changes maintain sufficient contrast, making content readable for users with varying visual abilities. CSS techniques, such as adjusting luminance or saturation, can help achieve accessible color contrasts.
5. Device-Specific Color Profiles: Tailor color profiles for specific devices or platforms to create a consistent and visually pleasing experience. This may involve adjusting colors to match the characteristics of different screens, such as those on mobile devices, tablets, or desktop monitors.

Viewport Width  Color
50vw            Blue
100vw           Red

6. Testing Across Devices: To ensure the effectiveness of responsive color changes, testing across various devices and screen sizes is crucial. Emulators, simulators, or real device testing can help identify and address any color-related issues in different contexts.
Responsive color changes in SVG graphics contribute to a seamless and visually pleasing user experience. By implementing these techniques, you can create designs that not only adapt to diverse devices but also enhance the overall aesthetic appeal of your web content.
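A minimal sketch of the media-query approach described above, assuming a hypothetical .logo-shape class applied to an SVG path; the class name, breakpoint, and colors are illustrative choices, not values from the post:

```css
/* Hypothetical class on an SVG path; breakpoint and colors are examples. */
.logo-shape {
  fill: #3498db;
  transition: fill 0.3s ease; /* smooth the color change when it triggers */
}

@media (max-width: 600px) {
  /* switch to a darker, higher-contrast fill on narrow screens */
  .logo-shape {
    fill: #2c3e50;
  }
}
```

The same pattern extends to stroke, opacity, or any other SVG presentation property you want to vary by viewport.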
Animation and Color Transformation
Adding animation to SVG graphics introduces a dynamic dimension to web design, captivating users and conveying information in engaging ways. Color transformation, as a subset of SVG animation, allows for seamless transitions between different colors, creating visually stunning effects. Let's explore the techniques and possibilities of animating color transformations in SVG.
1. CSS Animations: CSS animations provide a straightforward way to bring SVG graphics to life. By defining keyframes and specifying color changes at different points in the animation, you can create smooth and eye-catching color transitions. This approach is particularly effective for highlighting specific elements or guiding user attention.
2. Color Transition Libraries: Leveraging JavaScript libraries, such as Anime.js or the GreenSock Animation Platform (GSAP), enhances the complexity and control of color transformations. These libraries offer a wide range of options, including easing functions, delays, and callbacks, enabling precise and intricate color animations in SVG graphics.
3. Hue Rotation: The CSS hue-rotate filter allows for the dynamic rotation of colors within an SVG element. By animating the hue rotation property, you can create mesmerizing color transformations. This technique is particularly effective for visually appealing loading spinners or transitioning backgrounds.
4. Saturation and Lightness: Animating the saturation and lightness properties through CSS or JavaScript enables transformations between vibrant and muted colors, as well as adjusting the overall brightness of SVG graphics. This can be used for transitioning between day and night modes or creating atmospheric effects.
5. Color Looping: Implementing color looping animations involves seamlessly cycling through a set of colors. This technique is often used for decorative elements, branding animations, or simply to add a playful touch to SVG graphics. CSS animations or JavaScript can be employed to achieve this effect.

Degree  Color
0       Blue
180     Red

6. Accessibility Considerations: When incorporating color transformations, it's essential to consider accessibility. Ensure that the color changes maintain sufficient contrast and are easily distinguishable for all users, including those with visual impairments.
Animation and color transformation in SVG graphics offer a creative outlet for designers and developers. By exploring these techniques, you can craft visually dynamic and interactive web content that leaves a lasting impression on your audience.
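To give a feel for what hue rotation does numerically, the helper below rotates a hex color's hue by converting through the HLS color space. Note this only approximates the CSS hue-rotate() filter, which uses a color matrix and can produce slightly different results; the function itself is a hypothetical utility for this post.

```python
import colorsys

def rotate_hue(hex_code, degrees):
    """Rotate a hex color's hue by the given angle, via the HLS color space.
    Approximates, but does not exactly match, CSS hue-rotate()."""
    h = hex_code.lstrip('#')
    r, g, b = (int(h[i:i + 2], 16) / 255 for i in (0, 2, 4))
    hue, light, sat = colorsys.rgb_to_hls(r, g, b)
    hue = (hue + degrees / 360) % 1.0       # wrap around the color wheel
    r, g, b = colorsys.hls_to_rgb(hue, light, sat)
    return '#' + ''.join(f'{round(v * 255):02x}' for v in (r, g, b))

print(rotate_hue('#0000ff', 180))  # #ffff00 (blue rotated 180° gives yellow)
```

Animating the angle from 0 to 360 sweeps the element through the full color wheel, which is the spinner effect described above.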
FAQ
Explore common questions and answers related to changing SVG colors with CSS. If you have queries about the techniques, compatibility, or best practices, you might find the information you need below.
Q: Can I apply CSS styles directly to SVG elements?
A: Yes, inline styles using the style attribute can be applied directly to SVG elements. This allows for quick and specific color changes within the SVG markup.
Q: Are there advantages to using CSS classes for styling SVG graphics?
A: Absolutely. CSS classes provide a systematic and organized way to apply styles consistently across multiple SVG elements. This promotes reusability, maintainability, and a cleaner separation of concerns.
0 notes
Text
Create a Landing Page from Scratch with HTML and CSS
In this tutorial, we will learn to build a simple landing page from scratch for a laundromat using HTML and CSS. The page is designed for a self-service laundromat called "Lavandería ABC". We will explore how to structure the page, incorporate images from Iconfinder, and apply styles to achieve an attractive, functional design.

Basic HTML Structure
Lavandería ABC

General styles
body { font-family: Arial, sans-serif; margin: 0; padding: 0; background-color: #f8f8f8; /* light gray background */ }

Header
header { background-color: #3498db; /* blue */ color: #fff; padding: 20px 0; text-align: center; }

Logo and title
h1 { margin: 0; display: flex; align-items: center; justify-content: center; }
h1 img { margin-right: 10px; max-width: 80px; }

Main content
main { padding: 20px; }

Highlight
p.highlight { background-color: #fff; padding: 15px; border-radius: 10px; margin-bottom: 15px; text-align: center; box-shadow: 0 0 10px rgba(0, 0, 0, 0.1); /* soft shadow */ }

Service list
ul { list-style-type: none; padding: 0; display: flex; justify-content: space-between; margin-bottom: 20px; }
li { width: 25%; margin-bottom: 5px; text-align: center; }

Price table
table { width: 80%; margin: 0 auto; border-collapse: collapse; margin-bottom: 20px; }
th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
th { background-color: #f2f2f2; }

Images
img { max-width: 100%; height: auto; }

Action buttons
a.button { display: block; width: 80%; margin: 10px auto; text-decoration: none; padding: 10px; background-color: #3498db; color: #fff; text-align: center; border-radius: 5px; }
a.button:hover { background-color: #2980b9; }

Headings and separators
h2 { color: #333; text-align: center; }
hr { margin: 20px 0; border: 0; border-top: 1px solid #ddd; }

Testimonials section
section.testimonials { background-color: #fff; padding: 20px; border-radius: 10px; box-shadow: 0 0 10px rgba(0, 0, 0, 0.1); margin-bottom: 20px; }

Footer
footer { background-color: #333; color: #fff; padding: 20px; text-align: center; }

This code combines HTML and CSS to structure and style a laundromat web page. The styles are embedded directly in the HTML file using the style tag.
Lavandería ABC
Lavandería ABC is a self-service laundromat offering the following services:
- Washing
- Drying
- Ironing
- Carpet cleaning

Our prices are as follows:

Service          Price
Washing          $10 per load
Drying           $5 per load
Ironing          $10 per garment
Carpet cleaning  $20 per square meter

We are located at the following address:
Calle 123, número 456
Our opening hours are:
Monday to Friday, 8:00 to 20:00
Saturdays, 9:00 to 14:00
Contact  Prices

Testimonials
Here you can add testimonials from customers satisfied with your service.
© 2023 Lavandería ABC. All rights reserved.
0 notes
Text
Running a Random Forest
Task
The second assignment deals with Random Forests. Random forests are predictive models that allow a data-driven exploration of many explanatory variables in predicting a response or target variable. Random forests provide importance scores for each explanatory variable and also let you evaluate how correct classification improves as the number of trees grown increases.
Run a Random Forest.
You will need to perform a random forest analysis to evaluate the importance of a series of explanatory variables in predicting a binary, categorical response variable.
Data
The dataset is related to red variants of the Portuguese "Vinho Verde" wine. Due to privacy and logistic issues, only physicochemical (inputs) and sensory (the output) variables are available (e.g. there is no data about grape types, wine brand, wine selling price, etc.).
The classes are ordered and not balanced (e.g. there are many more normal wines than excellent or poor ones). Outlier detection algorithms could be used to detect the few excellent or poor wines. Also, we are not sure that all input variables are relevant, so it could be interesting to test feature selection methods.
The dataset can be found at the UCI Machine Learning Repository.
Attribute Information (For more information, read [Cortez et al., 2009]): Input variables (based on physicochemical tests):
1 - fixed acidity
2 - volatile acidity
3 - citric acid
4 - residual sugar
5 - chlorides
6 - free sulfur dioxide
7 - total sulfur dioxide
8 - density
9 - pH
10 - sulphates
11 - alcohol
Output variable (based on sensory data):
12 - quality (score between 0 and 10)
Results
Random forest and ExtraTrees classifier were deployed to evaluate the importance of a series of explanatory variables in predicting a categorical response variable - red wine quality (score between 0 and 10). The following explanatory variables were included: fixed acidity, volatile acidity, citric acid, residual sugar, chlorides, free sulfur dioxide, total sulfur dioxide, density, pH, sulphates and alcohol.
The explanatory variables with the highest importance scores (as evaluated by both classifiers) are alcohol, volatile acidity, and sulphates. The accuracy of the Random forest and ExtraTrees classifiers is about 67%, which is quite good for classes that are highly unbalanced and hard to distinguish from one another. Growing multiple trees rather than a single tree adds substantially to the overall score of the model. For the Random forest the number of estimators is 20, while for the ExtraTrees classifier it is 12, because the second classifier's accuracy plateaus much sooner.
Code:
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.manifold import MDS
from sklearn.metrics.pairwise import pairwise_distances
from sklearn.metrics import accuracy_score
import seaborn as sns
%matplotlib inline
rnd_state = 4536
data = pd.read_csv('Data/wine_red.csv', sep=';')  # forward slash avoids backslash-escape issues
data.info()
Output:
data.head()
Output:
data.describe()
Output:
Plots
For visualization purposes, the number of dimensions was reduced to two by applying MDS method with cosine distance. The plot illustrates that our classes are not clearly divided into parts.
model = MDS(random_state=rnd_state, n_components=2, dissimilarity='precomputed')
%time representation = model.fit_transform(pairwise_distances(data.iloc[:, :11], metric='cosine'))
Wall time: 38.7 s
colors = ["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71"]
plt.figure(figsize=(12, 4))
plt.subplot(121)
# map each of the 6 quality classes to one of the 6 colors; passing the raw
# 6-element colors list as c would fail, since there are 1599 points
point_colors = data.quality.map(dict(zip(sorted(data.quality.unique()), colors)))
plt.scatter(representation[:, 0], representation[:, 1], c=point_colors)
plt.subplot(122)
sns.countplot(x='quality', data=data, palette=sns.color_palette(colors));
Output:
predictors = data.iloc[:, :11]
target = data.quality
(predictors_train, predictors_test, target_train, target_test) = train_test_split(predictors, target, test_size = .3, random_state = rnd_state)
RandomForest classifier:
list_estimators = list(range(1, 50, 5))
rf_scoring = []
for n_estimators in list_estimators:
    classifier = RandomForestClassifier(random_state=rnd_state, n_jobs=-1, class_weight='balanced', n_estimators=n_estimators)
    score = cross_val_score(classifier, predictors_train, target_train, cv=5, n_jobs=-1, scoring='accuracy')
    rf_scoring.append(score.mean())
plt.plot(list_estimators, rf_scoring)
plt.title('Accuracy VS trees number');
Output:
classifier = RandomForestClassifier(random_state=rnd_state, n_jobs=-1, class_weight='balanced', n_estimators=20)
classifier.fit(predictors_train, target_train)
Output:
RandomForestClassifier(bootstrap=True, class_weight='balanced', criterion='gini', max_depth=None, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=20, n_jobs=-1, oob_score=False, random_state=4536, verbose=0, warm_start=False)
prediction = classifier.predict(predictors_test)
print('Confusion matrix:\n', pd.crosstab(target_test, prediction, colnames=['Predicted'], rownames=['Actual'], margins=True))
print('\nAccuracy: ', accuracy_score(target_test, prediction))
feature_importance = pd.Series(classifier.feature_importances_, index=data.columns.values[:11]).sort_values(ascending=False)
feature_importance
Output:
et_scoring = []
for n_estimators in list_estimators:
    classifier = ExtraTreesClassifier(random_state=rnd_state, n_jobs=-1, class_weight='balanced', n_estimators=n_estimators)
    score = cross_val_score(classifier, predictors_train, target_train, cv=5, n_jobs=-1, scoring='accuracy')
    et_scoring.append(score.mean())
plt.plot(list_estimators, et_scoring)
plt.title('Accuracy VS trees number');
Output:
classifier = ExtraTreesClassifier(random_state=rnd_state, n_jobs=-1, class_weight='balanced', n_estimators=12)
classifier.fit(predictors_train, target_train)
ExtraTreesClassifier(bootstrap=False, class_weight='balanced', criterion='gini', max_depth=None, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=12, n_jobs=-1, oob_score=False, random_state=4536, verbose=0, warm_start=False)
prediction = classifier.predict(predictors_test)
print('Confusion matrix:\n', pd.crosstab(target_test, prediction, colnames=['Predicted'], rownames=['Actual'], margins=True))
print('\nAccuracy: ', accuracy_score(target_test, prediction))
Output:
feature_importance = pd.Series(classifier.feature_importances_, index=data.columns.values[:11]).sort_values(ascending=False)
feature_importance
Output:
Thanks For Reading!
0 notes
Text
Fj
.loader {
  border: 16px solid #f3f3f3;
  border-radius: 50%;
  border-top: 16px solid #3498db;
  width: 120px;
  height: 120px;
  -webkit-animation: spin 2s linear infinite; /* Safari */
  animation: spin 2s linear infinite;
}

/* Safari */
@-webkit-keyframes spin {
  0% { -webkit-transform: rotate(0deg); }
  100% { -webkit-transform: rotate(360deg); }
}

@keyframes spin {
  0% { transform: rotate(0deg); }
  100% {…
1 note
Text
Running a Random Forest
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.manifold import MDS
from sklearn.metrics.pairwise import pairwise_distances
from sklearn.metrics import accuracy_score
import seaborn as sns
%matplotlib inline
rnd_state = 4536
data = pd.read_csv('Data\winequality-red.csv', sep=';')
data.info()
RangeIndex: 1599 entries, 0 to 1598
Data columns (total 12 columns):
fixed acidity           1599 non-null float64
volatile acidity        1599 non-null float64
citric acid             1599 non-null float64
residual sugar          1599 non-null float64
chlorides               1599 non-null float64
free sulfur dioxide     1599 non-null float64
total sulfur dioxide    1599 non-null float64
density                 1599 non-null float64
pH                      1599 non-null float64
sulphates               1599 non-null float64
alcohol                 1599 non-null float64
quality                 1599 non-null int64
dtypes: float64(11), int64(1)
memory usage: 150.0 KB

data.head()
   fixed acidity  volatile acidity  citric acid  residual sugar  chlorides  free sulfur dioxide  total sulfur dioxide  density  pH    sulphates  alcohol  quality
0   7.4           0.70              0.00         1.9             0.076      11.0                 34.0                  0.9978   3.51  0.56       9.4      5
1   7.8           0.88              0.00         2.6             0.098      25.0                 67.0                  0.9968   3.20  0.68       9.8      5
2   7.8           0.76              0.04         2.3             0.092      15.0                 54.0                  0.9970   3.26  0.65       9.8      5
3  11.2           0.28              0.56         1.9             0.075      17.0                 60.0                  0.9980   3.16  0.58       9.8      6
4   7.4           0.70              0.00         1.9             0.076      11.0                 34.0                  0.9978   3.51  0.56       9.4      5

data.describe() (columns in the same order as above; count is 1599.000000 for every column)
mean   8.319637  0.527821  0.270976   2.538806  0.087467  15.874922   46.467792  0.996747  3.311113  0.658149  10.422983  5.636023
std    1.741096  0.179060  0.194801   1.409928  0.047065  10.460157   32.895324  0.001887  0.154386  0.169507   1.065668  0.807569
min    4.600000  0.120000  0.000000   0.900000  0.012000   1.000000    6.000000  0.990070  2.740000  0.330000   8.400000  3.000000
25%    7.100000  0.390000  0.090000   1.900000  0.070000   7.000000   22.000000  0.995600  3.210000  0.550000   9.500000  5.000000
50%    7.900000  0.520000  0.260000   2.200000  0.079000  14.000000   38.000000  0.996750  3.310000  0.620000  10.200000  6.000000
75%    9.200000  0.640000  0.420000   2.600000  0.090000  21.000000   62.000000  0.997835  3.400000  0.730000  11.100000  6.000000
max   15.900000  1.580000  1.000000  15.500000  0.611000  72.000000  289.000000  1.003690  4.010000  2.000000  14.900000  8.000000

For visualization purposes, the data was reduced to two dimensions by applying MDS with cosine distance. The plot illustrates that the quality classes are not clearly separated.
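The quality classes are also heavily imbalanced, which is why the classifiers later pass class_weight='balanced'. A quick value_counts check makes the skew explicit; this sketch uses a hypothetical miniature series in place of the real quality column.

```python
import pandas as pd

# Hypothetical stand-in for data.quality: only the class counts matter here.
quality = pd.Series([5, 5, 5, 6, 6, 7, 5, 6, 4, 5])

# Class distribution, ordered by quality level; the real data is similarly
# dominated by the middle classes (5 and 6).
counts = quality.value_counts().sort_index()
print(counts)
```

On the real data this would show quality 5 and 6 accounting for the large majority of the 1599 wines, matching the countplot.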
model = MDS(random_state=rnd_state, n_components=2, dissimilarity='precomputed')
%time representation = model.fit_transform(pairwise_distances(data.iloc[:, :11], metric='cosine'))
Wall time: 38.7 s
colors = ["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71"]
plt.figure(figsize=(12, 4))
plt.subplot(121)
# color each point by its quality class (passing the six-color list directly
# would fail, since there are 1599 points but only 6 colors)
plt.scatter(representation[:, 0], representation[:, 1],
            c=data.quality.map(dict(zip(sorted(data.quality.unique()), colors))))
plt.subplot(122)
sns.countplot(x='quality', data=data, palette=sns.color_palette(colors));
predictors = data.iloc[:, :11]
target = data.quality
(predictors_train, predictors_test, target_train, target_test) = train_test_split(predictors, target, test_size=.3, random_state=rnd_state)
RandomForest classifier
list_estimators = list(range(1, 50, 5))
rf_scoring = []
for n_estimators in list_estimators:
    classifier = RandomForestClassifier(random_state=rnd_state, n_jobs=-1, class_weight='balanced', n_estimators=n_estimators)
    score = cross_val_score(classifier, predictors_train, target_train, cv=5, n_jobs=-1, scoring='accuracy')
    rf_scoring.append(score.mean())
plt.plot(list_estimators, rf_scoring)
plt.title('Accuracy VS trees number');
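Rather than reading the best tree count off the curve by eye, it can be picked programmatically. A toy sketch with made-up scores (the actual rf_scoring values are not listed in the post):

```python
# n_estimators values tried in the sweep: 1, 6, 11, ..., 46
list_estimators = list(range(1, 50, 5))

# Hypothetical mean CV accuracies, one per setting (stand-ins for rf_scoring).
rf_scoring = [0.55, 0.60, 0.63, 0.64, 0.645, 0.65, 0.648, 0.649, 0.647, 0.646]

# Index of the highest mean accuracy, then the corresponding tree count.
best_idx = max(range(len(rf_scoring)), key=rf_scoring.__getitem__)
best_n = list_estimators[best_idx]
print(best_n)  # -> 26 for these toy scores
```

With real scores the curve typically flattens out, so a slightly smaller value near the plateau (like the 20 chosen below) is a reasonable trade-off against training time.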
classifier = RandomForestClassifier(random_state=rnd_state, n_jobs=-1, class_weight='balanced', n_estimators=20)
classifier.fit(predictors_train, target_train)
Output:
RandomForestClassifier(bootstrap=True, class_weight='balanced', criterion='gini',
            max_depth=None, max_features='auto', max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, n_estimators=20, n_jobs=-1,
            oob_score=False, random_state=4536, verbose=0, warm_start=False)
prediction = classifier.predict(predictors_test)
print('Confusion matrix:\n', pd.crosstab(target_test, prediction, colnames=['Predicted'], rownames=['Actual'], margins=True))
print('\nAccuracy: ', accuracy_score(target_test, prediction))
Output:
Confusion matrix:
Predicted    3    4    5    6    7  All
Actual
3            0    0    3    0    0    3
4            0    1    9    6    0   16
5            2    1  166   41    3  213
6            0    0   46  131   14  191
7            0    0    5   25   23   53
8            0    0    0    3    1    4
All          2    2  229  206   41  480

Accuracy:  0.66875
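The reported accuracy can be recovered directly from the confusion matrix: sum the diagonal (correct predictions) and divide by the total. A small check using the numbers above:

```python
import numpy as np

# Confusion matrix from the run above (rows: actual 3..8, cols: predicted 3..7).
cm = np.array([
    [0, 0,   3,   0,  0],
    [0, 1,   9,   6,  0],
    [2, 1, 166,  41,  3],
    [0, 0,  46, 131, 14],
    [0, 0,   5,  25, 23],
    [0, 0,   0,   3,  1],
])

# The matrix is not square here (no wine was predicted as quality 8),
# so the diagonal only runs over the five predicted labels.
correct = sum(cm[i, i] for i in range(cm.shape[1]))
total = cm.sum()
print(correct / total)  # 321 / 480 = 0.66875
```

Note that no actual class-3 or class-8 wine is classified correctly; overall accuracy hides how poorly the rare classes fare.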
feature_importance = pd.Series(classifier.feature_importances_, index=data.columns.values[:11]).sort_values(ascending=False)
feature_importance
Output:
volatile acidity        0.133023
alcohol                 0.130114
sulphates               0.129498
citric acid             0.106427
total sulfur dioxide    0.094647
chlorides               0.086298
density                 0.079843
pH                      0.066566
residual sugar          0.061344
fixed acidity           0.058251
free sulfur dioxide     0.053990
dtype: float64
et_scoring = []
for n_estimators in list_estimators:
    classifier = ExtraTreesClassifier(random_state=rnd_state, n_jobs=-1, class_weight='balanced', n_estimators=n_estimators)
    score = cross_val_score(classifier, predictors_train, target_train, cv=5, n_jobs=-1, scoring='accuracy')
    et_scoring.append(score.mean())
plt.plot(list_estimators, et_scoring)
plt.title('Accuracy VS trees number');
classifier = ExtraTreesClassifier(random_state=rnd_state, n_jobs=-1, class_weight='balanced', n_estimators=12)
classifier.fit(predictors_train, target_train)
Output:
ExtraTreesClassifier(bootstrap=False, class_weight='balanced', criterion='gini',
            max_depth=None, max_features='auto', max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, n_estimators=12, n_jobs=-1,
            oob_score=False, random_state=4536, verbose=0, warm_start=False)
prediction = classifier.predict(predictors_test)
print('Confusion matrix:\n', pd.crosstab(target_test, prediction, colnames=['Predicted'], rownames=['Actual'], margins=True))
print('\nAccuracy: ', accuracy_score(target_test, prediction))
Output:
Confusion matrix:
Predicted    3    4    5    6    7   8  All
Actual
3            0    1    2    0    0   0    3
4            0    0    9    7    0   0   16
5            2    2  168   39    2   0  213
6            0    0   49  130   11   1  191
7            0    0    2   27   24   0   53
8            0    0    0    3    1   0    4
All          2    3  230  206   38   1  480

Accuracy:  0.6708333333333333
feature_importance = pd.Series(classifier.feature_importances_, index=data.columns.values[:11]).sort_values(ascending=False)
feature_importance
Output:
alcohol                 0.157267
volatile acidity        0.132768
sulphates               0.100874
citric acid             0.095077
density                 0.082334
chlorides               0.079283
total sulfur dioxide    0.076803
pH                      0.074638
fixed acidity           0.069826
residual sugar          0.066551
free sulfur dioxide     0.064579
dtype: float64
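Both models rank the same three features at the top, though they order them differently. Putting the reported numbers side by side makes this easy to see (values copied from the two outputs above, top three only):

```python
import pandas as pd

# Top-3 importances reported by each model above.
rf = pd.Series({'volatile acidity': 0.133023, 'alcohol': 0.130114, 'sulphates': 0.129498})
et = pd.Series({'alcohol': 0.157267, 'volatile acidity': 0.132768, 'sulphates': 0.100874})

# Align the two rankings on feature name for a direct comparison.
comparison = pd.DataFrame({'random_forest': rf, 'extra_trees': et})
print(comparison)
```

Alcohol, volatile acidity, and sulphates dominate in both runs, which suggests the signal is real rather than an artifact of one ensemble method.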
1 note
Text
Medical Authorization for College Students: A Mother's Firsthand Experience and Important Lessons
Recently, a mother shared her college-age child's experience seeking medical care at school, drawing wide attention to and discussion of medical-authorization issues for college students. Her experience not only reveals potential problems in current campus medical systems but also offers valuable lessons and advice for other parents. Background: The mother, who goes by Meimei, has a child under 18 who is currently in college. The child has had some health issues since childhood and often develops symptoms in certain seasons. Recently the child had been feeling unwell at school, and the symptoms did not improve even after taking the usual over-the-counter medications. Allergy symptoms. Key issues: Unable to get a timely appointment with the family doctor, Meimei's child eventually decided to visit the campus health center. In the process, Meimei ran into several key problems: 1. Authorization requirement: because the child is under 18, the campus health center required parental authorization before treatment. 2. Visit restrictions: even after authorization was granted, Meimei was initially denied entry to the exam room and had to wait in the lobby. 3.…
#000#2c3e50#3498db#4682b4#900000#what parents should know#emergency medical care#privacy rights#faf7f2#health issues#health authorization#medical system#medical authorization#children's health#students#campus clinic#parent experience#medical visit experience#minors#campus health#legal protections
0 notes
Text
Running a Random Forest
import pandas as pd import numpy as np import matplotlib.pylab as plt from sklearn.model_selection import train_test_split, cross_val_score from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier from sklearn.manifold import MDS from sklearn.metrics.pairwise import pairwise_distances from sklearn.metrics import accuracy_score import seaborn as sns %matplotlib inline
rnd_state = 4536 data = pd.read_csv('Data\winequality-red.csv', sep=';') data.info()
RangeIndex: 1599 entries, 0 to 1598 Data columns (total 12 columns): fixed acidity 1599 non-null float64 volatile acidity 1599 non-null float64 citric acid 1599 non-null float64 residual sugar 1599 non-null float64 chlorides 1599 non-null float64 free sulfur dioxide 1599 non-null float64 total sulfur dioxide 1599 non-null float64 density 1599 non-null float64 pH 1599 non-null float64 sulphates 1599 non-null float64 alcohol 1599 non-null float64 quality 1599 non-null int64 dtypes: float64(11), int64(1) memory usage: 150.0 KB data.head() fixed acidity volatile acidity citric acid residual sugar chlorides free sulfur dioxide total sulfur dioxide density pH sulphates alcohol quality 0 7.4 0.70 0.00 1.9 0.076 11.0 34.0 0.9978 3.51 0.56 9.4 5 1 7.8 0.88 0.00 2.6 0.098 25.0 67.0 0.9968 3.20 0.68 9.8 5 2 7.8 0.76 0.04 2.3 0.092 15.0 54.0 0.9970 3.26 0.65 9.8 5 3 11.2 0.28 0.56 1.9 0.075 17.0 60.0 0.9980 3.16 0.58 9.8 6 4 7.4 0.70 0.00 1.9 0.076 11.0 34.0 0.9978 3.51 0.56 9.4 5 data.describe() fixed acidity volatile acidity citric acid residual sugar chlorides free sulfur dioxide total sulfur dioxide density pH sulphates alcohol quality count 1599.000000 1599.000000 1599.000000 1599.000000 1599.000000 1599.000000 1599.000000 1599.000000 1599.000000 1599.000000 1599.000000 1599.000000 mean 8.319637 0.527821 0.270976 2.538806 0.087467 15.874922 46.467792 0.996747 3.311113 0.658149 10.422983 5.636023 std 1.741096 0.179060 0.194801 1.409928 0.047065 10.460157 32.895324 0.001887 0.154386 0.169507 1.065668 0.807569 min 4.600000 0.120000 0.000000 0.900000 0.012000 1.000000 6.000000 0.990070 2.740000 0.330000 8.400000 3.000000 25% 7.100000 0.390000 0.090000 1.900000 0.070000 7.000000 22.000000 0.995600 3.210000 0.550000 9.500000 5.000000 50% 7.900000 0.520000 0.260000 2.200000 0.079000 14.000000 38.000000 0.996750 3.310000 0.620000 10.200000 6.000000 75% 9.200000 0.640000 0.420000 2.600000 0.090000 21.000000 62.000000 0.997835 3.400000 0.730000 11.100000 6.000000 max 15.900000 
1.580000 1.000000 15.500000 0.611000 72.000000 289.000000 1.003690 4.010000 2.000000 14.900000 8.000000 For visualization purposes, the number of dimensions was reduced to two by applying MDS method with cosine distance. The plot illustrates that our classes are not clearly divided into parts.
model = MDS(random_state=rnd_state, n_components=2, dissimilarity='precomputed') %time representation = model.fit_transform(pairwise_distances(data.iloc[:, :11], metric='cosine')) Wall time: 38.7 s colors = ["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71"] plt.figure(figsize=(12, 4))
plt.subplot(121) plt.scatter(representation[:, 0], representation[:, 1], c=colors)
plt.subplot(122) sns.countplot(x='quality', data=data, palette=sns.color_palette(colors));
predictors = data.iloc[:, :11] target = data.quality
In [8]:(predictors_train, predictors_test, target_train, target_test) = train_test_split(predictors, target, test_size = .3, random_state = rnd_state)
RandomForest classifier
In [9]:list_estimators = list(range(1, 50, 5)) rf_scoring = [] for n_estimators in list_estimators: classifier = RandomForestClassifier(random_state = rnd_state, n_jobs =-1, class_weight='balanced', n_estimators=n_estimators) score = cross_val_score(classifier, predictors_train, target_train, cv=5, n_jobs=-1, scoring = 'accuracy') rf_scoring.append(score.mean())
In [10]:plt.plot(list_estimators, rf_scoring) plt.title('Accuracy VS trees number');
predictors = data.iloc[:, :11] target = data.quality
In [8]:(predictors_train, predictors_test, target_train, target_test) = train_test_split(predictors, target, test_size = .3, random_state = rnd_state)
RandomForest classifier
In [9]:list_estimators = list(range(1, 50, 5)) rf_scoring = [] for n_estimators in list_estimators: classifier = RandomForestClassifier(random_state = rnd_state, n_jobs =-1, class_weight='balanced', n_estimators=n_estimators) score = cross_val_score(classifier, predictors_train, target_train, cv=5, n_jobs=-1, scoring = 'accuracy') rf_scoring.append(score.mean())
In [10]:plt.plot(list_estimators, rf_scoring) plt.title('Accuracy VS trees number');
classifier = RandomForestClassifier(random_state = rnd_state, n_jobs =-1, class_weight='balanced', n_estimators=20) classifier.fit(predictors_train, target_train)
Out[11]:RandomForestClassifier(bootstrap=True, class_weight='balanced', criterion='gini', max_depth=None, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=20, n_jobs=-1, oob_score=False, random_state=4536, verbose=0, warm_start=False)
In [12]:prediction = classifier.predict(predictors_test)
In [13]:print('Confusion matrix:\n', pd.crosstab(target_test, prediction, colnames=['Predicted'], rownames=['Actual'], margins=True)) print('\nAccuracy: ', accuracy_score(target_test, prediction)) Confusion matrix: Predicted 3 4 5 6 7 All Actual 3 0 0 3 0 0 3 4 0 1 9 6 0 16 5 2 1 166 41 3 213 6 0 0 46 131 14 191 7 0 0 5 25 23 53 8 0 0 0 3 1 4 All 2 2 229 206 41 480 Accuracy: 0.66875
Running a Random Forest
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.manifold import MDS
from sklearn.metrics.pairwise import pairwise_distances
from sklearn.metrics import accuracy_score
import seaborn as sns
%matplotlib inline
rnd_state = 4536
data = pd.read_csv('Data/winequality-red.csv', sep=';')
data.info()
RangeIndex: 1599 entries, 0 to 1598
Data columns (total 12 columns):
fixed acidity           1599 non-null float64
volatile acidity        1599 non-null float64
citric acid             1599 non-null float64
residual sugar          1599 non-null float64
chlorides               1599 non-null float64
free sulfur dioxide     1599 non-null float64
total sulfur dioxide    1599 non-null float64
density                 1599 non-null float64
pH                      1599 non-null float64
sulphates               1599 non-null float64
alcohol                 1599 non-null float64
quality                 1599 non-null int64
dtypes: float64(11), int64(1)
memory usage: 150.0 KB

data.head()

   fixed acidity  volatile acidity  citric acid  residual sugar  chlorides  free sulfur dioxide  total sulfur dioxide  density  pH    sulphates  alcohol  quality
0  7.4            0.70              0.00         1.9             0.076      11.0                 34.0                  0.9978   3.51  0.56       9.4      5
1  7.8            0.88              0.00         2.6             0.098      25.0                 67.0                  0.9968   3.20  0.68       9.8      5
2  7.8            0.76              0.04         2.3             0.092      15.0                 54.0                  0.9970   3.26  0.65       9.8      5
3  11.2           0.28              0.56         1.9             0.075      17.0                 60.0                  0.9980   3.16  0.58       9.8      6
4  7.4            0.70              0.00         1.9             0.076      11.0                 34.0                  0.9978   3.51  0.56       9.4      5

data.describe()

(columns in the same order as above)
count  1599.000000  1599.000000  1599.000000  1599.000000  1599.000000  1599.000000  1599.000000  1599.000000  1599.000000  1599.000000  1599.000000  1599.000000
mean   8.319637     0.527821     0.270976     2.538806     0.087467     15.874922    46.467792    0.996747     3.311113     0.658149     10.422983    5.636023
std    1.741096     0.179060     0.194801     1.409928     0.047065     10.460157    32.895324    0.001887     0.154386     0.169507     1.065668     0.807569
min    4.600000     0.120000     0.000000     0.900000     0.012000     1.000000     6.000000     0.990070     2.740000     0.330000     8.400000     3.000000
25%    7.100000     0.390000     0.090000     1.900000     0.070000     7.000000     22.000000    0.995600     3.210000     0.550000     9.500000     5.000000
50%    7.900000     0.520000     0.260000     2.200000     0.079000     14.000000    38.000000    0.996750     3.310000     0.620000     10.200000    6.000000
75%    9.200000     0.640000     0.420000     2.600000     0.090000     21.000000    62.000000    0.997835     3.400000     0.730000     11.100000    6.000000
max    15.900000    1.580000     1.000000     15.500000    0.611000     72.000000    289.000000   1.003690     4.010000     2.000000     14.900000    8.000000

For visualization purposes, the dimensionality was reduced to two by applying MDS with cosine distance. The plot illustrates that the classes are not clearly separated.
model = MDS(random_state=rnd_state, n_components=2, dissimilarity='precomputed')
%time representation = model.fit_transform(pairwise_distances(data.iloc[:, :11], metric='cosine'))
Wall time: 38.7 s

colors = ["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71"]
# map each quality grade to its own colour, so every sample gets a value
# (passing the 6-entry list directly to scatter would not match the 1599 points)
point_colors = data.quality.map(dict(zip(sorted(data.quality.unique()), colors)))

plt.figure(figsize=(12, 4))
plt.subplot(121)
plt.scatter(representation[:, 0], representation[:, 1], c=point_colors)
plt.subplot(122)
sns.countplot(x='quality', data=data, palette=sns.color_palette(colors));
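The MDS embedding above is built from a precomputed cosine-distance matrix. As a sanity check, the cosine distance that `pairwise_distances` computes can be reproduced by hand (a minimal pure-Python sketch, independent of sklearn):

```python
from math import sqrt

def cosine_distance(u, v):
    # cosine distance = 1 - cosine similarity of the two vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

# vectors pointing the same way are at distance ~0, orthogonal ones at 1
d_same = cosine_distance([1.0, 2.0], [2.0, 4.0])
d_orth = cosine_distance([1.0, 0.0], [0.0, 1.0])
```

Note that cosine distance ignores vector magnitude, which is why it groups wines with proportionally similar physicochemical profiles.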
predictors = data.iloc[:, :11] target = data.quality
In [8]:(predictors_train, predictors_test, target_train, target_test) = train_test_split(predictors, target, test_size = .3, random_state = rnd_state)
RandomForest classifier
In [9]:
list_estimators = list(range(1, 50, 5))
rf_scoring = []
for n_estimators in list_estimators:
    classifier = RandomForestClassifier(random_state=rnd_state, n_jobs=-1, class_weight='balanced', n_estimators=n_estimators)
    score = cross_val_score(classifier, predictors_train, target_train, cv=5, n_jobs=-1, scoring='accuracy')
    rf_scoring.append(score.mean())
In [10]:plt.plot(list_estimators, rf_scoring) plt.title('Accuracy VS trees number');
classifier = RandomForestClassifier(random_state=rnd_state, n_jobs=-1, class_weight='balanced', n_estimators=20)
classifier.fit(predictors_train, target_train)
Out[11]:RandomForestClassifier(bootstrap=True, class_weight='balanced', criterion='gini', max_depth=None, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=20, n_jobs=-1, oob_score=False, random_state=4536, verbose=0, warm_start=False)
In [12]:prediction = classifier.predict(predictors_test)
In [13]:
print('Confusion matrix:\n', pd.crosstab(target_test, prediction, colnames=['Predicted'], rownames=['Actual'], margins=True))
print('\nAccuracy: ', accuracy_score(target_test, prediction))

Confusion matrix:
Predicted    3    4    5    6    7  All
Actual
3            0    0    3    0    0    3
4            0    1    9    6    0   16
5            2    1  166   41    3  213
6            0    0   46  131   14  191
7            0    0    5   25   23   53
8            0    0    0    3    1    4
All          2    2  229  206   41  480

Accuracy: 0.66875
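The overall accuracy hides how unevenly the model performs across classes. The same numbers can be recomputed by hand from the confusion matrix (values copied from the output above; pure-Python sketch):

```python
# Rows = actual class, columns = predicted counts for classes 3..7
# (class 8 was never predicted by this model), taken from the matrix above.
conf = {
    3: [0, 0, 3, 0, 0],
    4: [0, 1, 9, 6, 0],
    5: [2, 1, 166, 41, 3],
    6: [0, 0, 46, 131, 14],
    7: [0, 0, 5, 25, 23],
    8: [0, 0, 0, 3, 1],
}
predicted_classes = [3, 4, 5, 6, 7]

def recall(cls):
    """Fraction of samples of `cls` that were predicted correctly."""
    row = conf[cls]
    hit = row[predicted_classes.index(cls)] if cls in predicted_classes else 0
    return hit / sum(row)

# overall accuracy = diagonal hits / total samples
overall_acc = sum(conf[c][predicted_classes.index(c)] for c in predicted_classes) / sum(sum(r) for r in conf.values())
```

Recall for the rare classes (3, 4, 8) is near zero, which is exactly the class-imbalance problem that `class_weight='balanced'` tries to mitigate.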
In [14]:
feature_importance = pd.Series(classifier.feature_importances_, index=data.columns.values[:11]).sort_values(ascending=False)
feature_importance

volatile acidity        0.133023
alcohol                 0.130114
sulphates               0.129498
citric acid             0.106427
total sulfur dioxide    0.094647
chlorides               0.086298
density                 0.079843
pH                      0.066566
residual sugar          0.061344
fixed acidity           0.058251
free sulfur dioxide     0.053990
dtype: float64
In [15]:
et_scoring = []
for n_estimators in list_estimators:
    classifier = ExtraTreesClassifier(random_state=rnd_state, n_jobs=-1, class_weight='balanced', n_estimators=n_estimators)
    score = cross_val_score(classifier, predictors_train, target_train, cv=5, n_jobs=-1, scoring='accuracy')
    et_scoring.append(score.mean())
In [16]:plt.plot(list_estimators, et_scoring) plt.title('Accuracy VS trees number');
classifier = ExtraTreesClassifier(random_state=rnd_state, n_jobs=-1, class_weight='balanced', n_estimators=12)
classifier.fit(predictors_train, target_train)
Out[17]:ExtraTreesClassifier(bootstrap=False, class_weight='balanced', criterion='gini', max_depth=None, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=12, n_jobs=-1, oob_score=False, random_state=4536, verbose=0, warm_start=False)
In [18]:prediction = classifier.predict(predictors_test)
In [19]:
print('Confusion matrix:\n', pd.crosstab(target_test, prediction, colnames=['Predicted'], rownames=['Actual'], margins=True))
print('\nAccuracy: ', accuracy_score(target_test, prediction))

Confusion matrix:
Predicted    3    4    5    6    7    8  All
Actual
3            0    1    2    0    0    0    3
4            0    0    9    7    0    0   16
5            2    2  168   39    2    0  213
6            0    0   49  130   11    1  191
7            0    0    2   27   24    0   53
8            0    0    0    3    1    0    4
All          2    3  230  206   38    1  480

Accuracy: 0.6708333333333333
In [20]:
feature_importance = pd.Series(classifier.feature_importances_, index=data.columns.values[:11]).sort_values(ascending=False)
feature_importance

Out[20]:
alcohol                 0.157267
volatile acidity        0.132768
sulphates               0.100874
citric acid             0.095077
density                 0.082334
chlorides               0.079283
total sulfur dioxide    0.076803
pH                      0.074638
fixed acidity           0.069826
residual sugar          0.066551
free sulfur dioxide     0.064579
dtype: float64
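Impurity-based importances from tree ensembles are normalized to sum to 1, so each value can be read as a share of the total split quality. A quick check against the Out[20] values above:

```python
# Feature importances copied from the ExtraTrees output above
importances = {
    "alcohol": 0.157267, "volatile acidity": 0.132768, "sulphates": 0.100874,
    "citric acid": 0.095077, "density": 0.082334, "chlorides": 0.079283,
    "total sulfur dioxide": 0.076803, "pH": 0.074638, "fixed acidity": 0.069826,
    "residual sugar": 0.066551, "free sulfur dioxide": 0.064579,
}

total = sum(importances.values())  # should be ~1.0 by construction
top3 = sorted(importances, key=importances.get, reverse=True)[:3]
```

Both ensembles agree that alcohol, volatile acidity, and sulphates carry the most signal, which matches intuition about perceived wine quality.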
Animate Like a Pro with CSS Magic
Introduction
Welcome to the world of CSS Magic, where animation transforms from a simple web feature to a captivating art form. In this blog post, we'll embark on a journey to explore the nuances of CSS animation and discover how CSS Magic can take your animation skills to the next level. CSS animation is a powerful tool for enhancing user experiences, bringing websites to life, and adding that extra touch of creativity. As we delve into the intricacies of animation, we'll uncover the fundamentals and then introduce the magic – CSS Magic – that will enable you to animate like a pro. Whether you're a beginner looking to grasp the basics or an experienced developer seeking advanced techniques, this post has something for everyone. Let's dive in and unleash the potential of CSS Magic in the world of web animation.
The Basics of CSS Animation
See the Pen #2 - Project Deadline - SVG animation with CSS3 by Jonathan Trancozo (@jtrancozo) on CodePen.

CSS animation serves as a fundamental element in web design, allowing developers to breathe life into static pages. Before we explore the enchanting world of CSS Magic, let's first solidify our understanding of the basics of CSS animation.

Keyframe Animations: Keyframes are the building blocks of CSS animations. They define the starting and ending points of an animation sequence, and everything in between. By specifying keyframes, you can create smooth and dynamic transitions.

Transition Properties: CSS provides transition properties that determine how an element should transition between different states: transition-property, transition-duration, transition-timing-function, and transition-delay. Utilizing these properties allows for more control over the animation process.

Common Pitfalls for Beginners: As you embark on your animation journey, it's crucial to be aware of common pitfalls that beginners often encounter:
- Overcomplicating animations
- Ignoring browser compatibility
- Forgetting to optimize for performance

Addressing these pitfalls early on will pave the way for a smoother animation experience. Now, let's briefly touch on how these basics translate into practical implementation. Consider the following example:

.box {
  width: 100px;
  height: 100px;
  background-color: #3498db;
  transition: width 2s, height 2s, transform 2s;
}

.box:hover {
  width: 200px;
  height: 200px;
  transform: rotate(360deg);
}

In this example, hovering over the box element triggers a smooth transition, showcasing the basics of CSS animation. As we continue our journey, we'll build upon these fundamentals and introduce the magic that CSS Magic brings to the table.
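The transition example above animates between two fixed states; the keyframe animations described earlier let you script intermediate steps as well. A minimal sketch (the `pulse` name and `.pulse-badge` class are hypothetical, not from the original post):

```css
/* Keyframes script the start, middle, and end of the sequence */
@keyframes pulse {
  0%   { transform: scale(1);    opacity: 1; }
  50%  { transform: scale(1.15); opacity: 0.7; }
  100% { transform: scale(1);    opacity: 1; }
}

/* Apply the animation: name, duration, easing, iteration count */
.pulse-badge {
  animation: pulse 1.5s ease-in-out infinite;
}
```

Unlike the hover transition, this animation runs continuously without any user interaction.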
Unleashing the Power of CSS Magic
Now that we have a solid grasp of the basics, it's time to elevate our animation skills by delving into the enchanting realm of CSS Magic. CSS Magic goes beyond traditional animation techniques, offering a set of advanced tools and features that can truly make your animations stand out.

Exploring Advanced Animation Techniques: CSS Magic introduces a variety of advanced animation techniques that go beyond the standard transitions and keyframes:
- CSS Variables: Leverage the power of variables to make your animations more dynamic and customizable.
- Custom Timing Functions: Define your own timing functions for precise control over the acceleration and deceleration of animations.
- Bezier Curves: Fine-tune the easing of animations with Bezier curves, providing a smooth and tailored motion.

Using CSS Magic to Simplify Complex Animations: Complex animations often involve intricate code structures. CSS Magic simplifies this process by offering a more intuitive and concise syntax. Developers can achieve impressive animations with fewer lines of code, enhancing both readability and maintainability.

Showcasing Real-World Examples: To illustrate the potential of CSS Magic, let's take a look at a real-world example:

.magic-box {
  width: 100px;
  height: 100px;
  background-color: #e74c3c;
  animation: magic 2s infinite;
}

@keyframes magic {
  0% { transform: rotate(0deg); }
  100% { transform: rotate(360deg); }
}

In this example, the "magic-box" element undergoes a continuous rotation, creating a visually appealing effect with just a few lines of code. CSS Magic empowers developers to achieve such results effortlessly. As we continue our exploration, we'll further unlock the potential of CSS Magic and discover how it can be harnessed to create stunning and imaginative animations.
Optimizing Performance
While creating visually captivating animations is exciting, it's equally important to ensure optimal performance to guarantee a seamless user experience. In this section, we'll delve into best practices for optimizing CSS animations and explore how CSS Magic can contribute to a performant web animation environment.

Best Practices for Efficient Animations: To maintain smooth performance, adhere to the following best practices:
- Minimize Animations: Avoid excessive use of animations on a single page. Focus on enhancing key elements rather than overwhelming the user with constant motion.
- Use Hardware Acceleration: Leverage hardware acceleration by applying animations to GPU-accelerated properties like transform and opacity.
- Optimize Keyframes: Streamline keyframe animations by minimizing the number of keyframes and optimizing the properties being animated.

Minimizing Browser Rendering Bottlenecks: Performance bottlenecks can arise during the rendering process. CSS Magic provides solutions to mitigate these bottlenecks:
- Compositing: Utilize compositing to reduce the impact of animations on browser rendering. This involves creating a separate layer for animated elements.
- Hardware Acceleration: Once again, leverage GPU acceleration to offload animation rendering to the user's device hardware.

Utilizing CSS Magic for Performance Optimization: CSS Magic incorporates features designed to enhance performance:
- Efficient Syntax: CSS Magic employs a concise and efficient syntax, resulting in smaller file sizes and faster loading times.
- Smart Defaults: The framework includes intelligent defaults that automatically optimize animations for performance without sacrificing visual appeal.

By following these best practices and leveraging the capabilities of CSS Magic, you can strike a balance between creating captivating animations and ensuring a high level of performance on diverse web platforms. As we move forward, let's explore how to apply these optimization techniques to our animations and achieve the best of both worlds.
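To make the GPU-acceleration advice above concrete, here is a sketch contrasting a layout-triggering animation with a compositor-friendly one (class names are hypothetical):

```css
/* Animating `left` forces the browser to recompute layout every frame */
.slide-layout {
  position: relative;
  transition: left 0.3s ease-out;
}
.slide-layout:hover { left: 40px; }

/* Animating `transform` can run entirely on the GPU compositor */
.slide-composited {
  transition: transform 0.3s ease-out;
  will-change: transform;  /* hint the browser to promote a separate layer */
}
.slide-composited:hover { transform: translateX(40px); }
```

Use `will-change` sparingly: promoting too many elements to their own layers consumes memory and can itself become a bottleneck.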
Responsive Animations
In the dynamic landscape of web design, responsive animations play a crucial role in delivering a seamless experience across various devices and screen sizes. In this section, we'll explore the principles of creating animations that gracefully adapt to different contexts and unveil how CSS Magic can enhance the responsiveness of your animated elements.

Creating Animations for Different Screen Sizes: Responsive animations should consider the diversity of screen sizes, from large desktop monitors to small mobile devices. To achieve this, incorporate the following strategies:
- Media Queries: Implement media queries in your CSS to apply specific styles and animations based on the characteristics of the user's device.
- Relative Units: Use relative units like percentages and ems for animation properties to ensure elements scale proportionally on different screen sizes.
- Flexible Grids: Employ flexible grid systems to structure your layout, enabling smooth transitions between different screen sizes.

Media Queries and Their Role in Responsive Animation: Media queries are a cornerstone of responsive design, and they play a vital role in tailoring animations for different devices. Consider the following example:

@media screen and (max-width: 768px) {
  .responsive-element {
    width: 100%;
    animation: slide-in 1s ease-in-out;
  }
}

In this example, the animation changes when the screen width is 768 pixels or less, ensuring a smooth and responsive user experience on smaller screens.

Applying CSS Magic for Enhanced Responsiveness: CSS Magic introduces features that simplify the creation of responsive animations:
- Responsive Animation Classes: CSS Magic provides pre-built classes specifically designed for responsive animations, reducing the need for extensive custom coding.
- Viewport Units: Take advantage of viewport units like vw and vh within CSS Magic to create animations that adapt to the user's viewport size.
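The media query above applies a slide-in animation on small screens but doesn't define it. One hypothetical definition using the viewport units just mentioned:

```css
/* Slide in from just past the left edge; -100vw scales with viewport width,
   so the travel distance adapts to any screen size automatically */
@keyframes slide-in {
  from { transform: translateX(-100vw); }
  to   { transform: translateX(0); }
}
```

Because the offset is expressed in vw rather than pixels, the same keyframes work unchanged on a phone and on a desktop monitor.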
By combining traditional responsive design principles with the magic of CSS Magic, you can craft animations that seamlessly adapt to the diverse array of devices in the modern digital landscape. As we conclude our exploration, let's reflect on the key takeaways and the exciting possibilities that responsive animations bring to web development.
FAQ
Explore answers to frequently asked questions about CSS animation and CSS Magic to enhance your understanding and troubleshoot common challenges.

Q1: What are the key benefits of using CSS animation?
A1: CSS animation brings websites to life, improving user engagement and providing a visually appealing experience. It is a lightweight and efficient way to add dynamic elements without relying on external libraries or plugins.

Q2: How can I troubleshoot animation issues?
A2: When facing animation issues, consider the following steps:
- Check browser compatibility for the CSS features used.
- Inspect the browser console for error messages.
- Review and validate the animation code for syntax errors.

Q3: Does CSS Magic work well with other animation libraries?
A3: Yes, CSS Magic is designed to complement existing animation libraries. It can be seamlessly integrated into projects using libraries like Anime.js or GreenSock Animation Platform (GSAP), enhancing their capabilities.

Q4: How can I make my animations more performant?
A4: To optimize animation performance, follow these tips:
- Minimize the number of animations on a page.
- Utilize GPU-accelerated properties like transform and opacity.
- Apply hardware acceleration to offload rendering tasks to the GPU.

Q5: Is CSS Magic suitable for responsive design?
A5: Absolutely. CSS Magic includes features such as responsive animation classes and viewport units, making it well-suited for creating animations that adapt seamlessly to various screen sizes and devices.

For more detailed information and troubleshooting tips, refer to the documentation or community forums associated with CSS Magic. If you have specific questions not addressed here, feel free to reach out to the vibrant community of developers exploring the magic of CSS animation.
Conclusion
Congratulations on embarking on a journey through the captivating world of CSS animation, enriched by the magical touch of CSS Magic. As we conclude this exploration, let's reflect on the key takeaways and the boundless possibilities that await you in the realm of web animation.

Throughout this blog post, we've covered the essentials of CSS animation, from the foundational keyframes and transition properties to the advanced techniques that CSS Magic brings to the table. We've delved into optimizing performance, ensuring that your animations not only dazzle the audience but also load swiftly and run seamlessly across different devices.

The power of CSS Magic lies in its ability to simplify complex animations, providing developers with an intuitive and efficient framework. Real-world examples have showcased the elegance of concise code, highlighting how CSS Magic can transform ideas into visually stunning realities with ease.

Responsive animations have taken center stage, emphasizing the importance of adapting to the diverse landscape of devices and screen sizes. By combining traditional responsive design principles with the features offered by CSS Magic, you have the tools to create animations that delight users across the digital spectrum.

As you continue your journey into the world of web development, remember that CSS Magic is not just a framework; it's an enchanting companion that amplifies your creativity and simplifies the intricacies of animation. Whether you're a seasoned developer or just starting, the magic of CSS animation is now at your fingertips.

May your animations be smooth, your code be elegant, and your web experiences be truly magical. Keep exploring, keep creating, and let the magic of CSS animation inspire your digital adventures!
assignment-2
Data
The dataset is related to red variants of the Portuguese "Vinho Verde" wine. Due to privacy and logistic issues, only physicochemical (inputs) and sensory (the output) variables are available (e.g. there is no data about grape types, wine brand, wine selling price, etc.).
The classes are ordered and not balanced (e.g. there are many more normal wines than excellent or poor ones). Outlier detection algorithms could be used to detect the few excellent or poor wines. Also, we are not sure whether all input variables are relevant, so it could be interesting to test feature selection methods.
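The imbalance described above can be quantified before any modeling. A small sketch using illustrative per-class counts (the counts below match the quality distribution commonly reported for this dataset, but treat them as an assumption rather than output from this notebook):

```python
from collections import Counter

# illustrative quality labels: mostly "normal" wines (grades 5 and 6),
# very few excellent (8) or poor (3) ones
labels = [5] * 681 + [6] * 638 + [7] * 199 + [4] * 53 + [8] * 18 + [3] * 10

counts = Counter(labels)
# ratio between the most and least populated class
imbalance_ratio = max(counts.values()) / min(counts.values())
```

A majority-to-minority ratio of roughly 68:1 is why `class_weight='balanced'` is passed to the classifiers later in the analysis.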
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.manifold import MDS
from sklearn.metrics.pairwise import pairwise_distances
from sklearn.metrics import accuracy_score
import seaborn as sns
%matplotlib inline

rnd_state = 4536