Search results for: time series data mining
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 38106

34326 Comparative Analysis of Change in Vegetation in Four Districts of Punjab through Satellite Imagery, Land Use Statistics and Machine Learning

Authors: Mirza Waseem Abbas, Syed Danish Raza

Abstract:

For many countries, agriculture is still the major force driving the economy and a critically important socioeconomic sector, despite exceptional industrial development across the globe. In countries like Pakistan, this sector is considered the backbone of the economy, and most economic decision making revolves around agricultural outputs and data. Timely and accurate facts and figures about this vital sector hold immense significance and have serious implications for the long-term development of the economy. Therefore, any significant improvement in the statistics and other forms of data regarding the agriculture sector is considered important by all policymakers, especially for decision making aimed at the betterment of crops and the agriculture sector in general. Provincial and federal agricultural departments collect data for all cash and non-cash crops, and for the sector in general, every year. Traditional data collection for such a large sector, being time-consuming, labor-intensive, and prone to human error, is gradually being replaced by remote sensing techniques. For this study, remotely sensed data were used for change detection (machine learning, supervised and unsupervised classification) to assess the increase or decrease in the area under agriculture over the last fifteen years due to urbanization. Detailed Landsat images of the selected agricultural districts were acquired for the year 2000 and compared with images of the same areas acquired for the year 2016. The observed differences, validated through detailed analysis of the areas, show that there was a considerable decrease in vegetation during the last fifteen years in four major agricultural districts of the Punjab province due to urbanization (housing societies).
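The study's change detection rests on comparing classified imagery from two dates. As a minimal sketch of the idea (NDVI differencing is a simplified stand-in here; the paper itself used supervised and unsupervised classification, and the band values below are toy numbers, not Landsat data):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red bands."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)

def vegetation_loss_mask(nir_t0, red_t0, nir_t1, red_t1, drop_threshold=0.2):
    """Flag pixels whose NDVI dropped by more than drop_threshold
    between the two acquisition dates (e.g. 2000 vs. 2016)."""
    change = ndvi(nir_t1, red_t1) - ndvi(nir_t0, red_t0)
    return change < -drop_threshold

# Toy 2x2 scene: one pixel converts from vegetation to built-up land.
nir_2000 = np.array([[0.6, 0.6], [0.6, 0.6]])
red_2000 = np.array([[0.1, 0.1], [0.1, 0.1]])
nir_2016 = np.array([[0.6, 0.2], [0.6, 0.6]])
red_2016 = np.array([[0.1, 0.3], [0.1, 0.1]])

loss = vegetation_loss_mask(nir_2000, red_2000, nir_2016, red_2016)
print(loss)        # only the converted pixel is flagged
print(loss.sum())  # 1
```

In practice the flagged pixel count, multiplied by pixel area, would give the area lost to urbanization.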

Keywords: change detection, area estimation, machine learning, urbanization, remote sensing

Procedia PDF Downloads 239
34325 Sourcing and Compiling a Maltese Traffic Dataset MalTra

Authors: Gabriele Borg, Alexei De Bono, Charlie Abela

Abstract:

There is a constant rise in the availability of high volumes of data gathered from multiple sources, resulting in an abundance of unprocessed information that can be used to monitor patterns and trends in user behaviour. Year after year, Malta is also experiencing ongoing population growth and an increase in mobility demand. This research takes advantage of data which is continuously being sourced and converts it into useful information related to the traffic problem on the Maltese roads. The scope of this paper is to provide a methodology for creating a custom dataset (MalTra - Malta Traffic), compiled from multiple participants at various locations across the island, in order to identify the most commonly taken routes and expose the main areas of activity. Such use of big data underpins various technologies referred to as Intelligent Transportation Systems (ITSs), and we conclude that there is significant potential in utilising such data sources on a nationwide scale.
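Identifying the most common routes from participant traces can be sketched very simply once each trip is reduced to an ordered sequence of zones (the zone names and the reduction step below are hypothetical, not taken from the MalTra dataset):

```python
from collections import Counter

# Hypothetical GPS traces: each trip reduced to an ordered tuple of zones.
trips = [
    ("Mosta", "Birkirkara", "Valletta"),
    ("Mosta", "Birkirkara", "Valletta"),
    ("Sliema", "Gzira", "Valletta"),
    ("Mosta", "Birkirkara", "Valletta"),
]

route_counts = Counter(trips)
most_common_route, n = route_counts.most_common(1)[0]
print(most_common_route, n)  # ('Mosta', 'Birkirkara', 'Valletta') 3
```

A real pipeline would first map raw GPS fixes onto the road network (map matching) before counting; the counting step itself stays this simple.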

Keywords: Big Data, vehicular traffic, traffic management, mobile data patterns

Procedia PDF Downloads 95
34324 Comparative Study of Accuracy of Land Cover/Land Use Mapping Using Medium Resolution Satellite Imagery: A Case Study

Authors: M. C. Paliwal, A. K. Jain, S. K. Katiyar

Abstract:

Accuracy assessment is very important for the classification of satellite imagery. In order to determine the accuracy of a classified image, the assumed-true data are usually derived from ground truth data collected using the Global Positioning System. The data derived from the satellite imagery and the ground truth data are then compared to find the accuracy of the classification, and error matrices are prepared. Overall and individual accuracies are calculated using different methods. The study illustrates advanced classification and accuracy assessment of land use/land cover mapping using satellite imagery. IRS-1C LISS-IV data were used for the classification. The satellite image was classified into fourteen classes, including water bodies, agricultural fields, forest land, urban settlement, barren land, and unclassified area. Classification of the satellite imagery and calculation of accuracy were done using ERDAS Imagine software to find the best method. The study is based on data collected for the Bhopal city boundaries of Madhya Pradesh State, India.
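The error matrix and the overall and individual (producer's/user's) accuracies described above can be computed as follows (the three-class labels and sample counts are illustrative only):

```python
import numpy as np

def error_matrix(reference, classified, n_classes):
    """Confusion (error) matrix: rows = reference (ground truth),
    columns = classified labels."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for r, c in zip(reference, classified):
        m[r, c] += 1
    return m

def accuracies(m):
    """Overall accuracy plus per-class producer's and user's accuracies."""
    overall = np.trace(m) / m.sum()
    producers = np.diag(m) / m.sum(axis=1)  # 1 - omission error per class
    users = np.diag(m) / m.sum(axis=0)      # 1 - commission error per class
    return overall, producers, users

# Toy example with 3 classes: 0 = water, 1 = agriculture, 2 = urban.
ref = [0, 0, 1, 1, 1, 2, 2, 2, 2, 2]
cls = [0, 0, 1, 1, 2, 2, 2, 2, 2, 1]
m = error_matrix(ref, cls, 3)
overall, prod, user = accuracies(m)
print(m)
print(overall)  # 0.8
```

Tools such as ERDAS Imagine report the same quantities, typically alongside the kappa coefficient.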

Keywords: resolution, accuracy assessment, land use mapping, satellite imagery, ground truth data, error matrices

Procedia PDF Downloads 491
34323 Leveraging the Power of Dual Spatial-Temporal Data Scheme for Traffic Prediction

Authors: Yang Zhou, Heli Sun, Jianbin Huang, Jizhong Zhao, Shaojie Qiao

Abstract:

Traffic prediction is a fundamental problem in urban environments, facilitating the smart management of various businesses, such as taxi dispatching, bike relocation, and stampede alerts. Most earlier methods rely on identifying intrinsic spatial-temporal correlations to forecast. However, the complex nature of this problem calls for a more sophisticated solution that can simultaneously capture the mutual influence of both adjacent and far-flung areas, with time-dimension information incorporated seamlessly. To tackle this difficulty, we propose a new multi-phase architecture, DSTDS (Dual Spatial-Temporal Data Scheme for traffic prediction), that aims to reveal the underlying relationships determining future traffic trends. First, a graph-based neural network with an attention mechanism is devised to obtain the static features of the road network. Then, a multi-granularity recurrent neural network is built in conjunction with the knowledge from a grid-based model. Subsequently, the preceding output is fed into a spatial-temporal super-resolution module. With this three-phase structure, we carry out extensive experiments on several real-world datasets to demonstrate the effectiveness of our approach, which surpasses several state-of-the-art methods.
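The first two phases can be caricatured in a few lines of NumPy. This is emphatically a sketch, not the paper's model: the attention here is a masked softmax over feature similarity standing in for the graph attention network, and the "recurrent" step is a gated exponential smoother standing in for the multi-granularity RNN; all shapes and values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all shapes/values are illustrative, not from the paper).
n_nodes, n_feat, n_steps = 4, 3, 5
adj = np.array([[0, 1, 1, 0], [1, 0, 0, 1],
                [1, 0, 0, 1], [0, 1, 1, 0]], float)  # road graph
x_static = rng.normal(size=(n_nodes, n_feat))  # static road features
traffic = rng.random((n_steps, n_nodes))       # observed flow per step

def graph_attention(x, adj):
    """Phase 1 (sketch): attention-weighted neighbourhood average,
    standing in for the paper's graph-based attention network."""
    scores = x @ x.T                             # pairwise similarity
    scores = np.where(adj > 0, scores, -np.inf)  # mask non-edges
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)         # row-wise softmax
    return w @ x

def recurrent_forecast(series, static_repr, alpha=0.7):
    """Phase 2 (sketch): per-node gated smoothing of the traffic series,
    with the gate modulated by the static representation."""
    gate = 1 / (1 + np.exp(-static_repr.mean(axis=1)))  # in (0, 1)
    state = series[0]
    for t in range(1, len(series)):
        state = alpha * gate * state + (1 - alpha * gate) * series[t]
    return state  # next-step prediction per node

h = graph_attention(x_static, adj)
pred = recurrent_forecast(traffic, h)
print(pred.shape)  # (4,)
```

The real architecture learns both phases end to end and adds a super-resolution module on top; the sketch only shows how static graph structure and the temporal stream feed one prediction per node.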

Keywords: traffic prediction, spatial-temporal, recurrent neural network, dual data scheme

Procedia PDF Downloads 101
34322 Visible-Light-Driven OVs-BiOCl Nanoplates with Enhanced Photocatalytic Activity toward NO Oxidation

Authors: Jiazhen Liao, Xiaolan Zeng

Abstract:

A series of BiOCl nanoplates with different oxygen vacancy (OV) concentrations were successfully synthesized via a facile solvothermal method. The OV concentration of BiOCl can be tuned by the water/ethylene glycol ratio. These nanoplates containing oxygen vacancies served as an efficient visible-light-driven photocatalyst for NO oxidation. Compared with pure BiOCl, the enhanced photocatalytic performance was mainly attributed to the introduction of OVs, which greatly enhanced light absorption, promoted electron transfer, and activated oxygen molecules. The present work could provide insights into the role of OVs in photocatalysts for reference. Combined with characterization analyses, such as XRD (X-ray diffraction), XPS (X-ray photoelectron spectroscopy), TEM (transmission electron microscopy), PL (photoluminescence spectroscopy), and DFT (density functional theory) calculations, the effect of vacancies on the photoelectrochemical properties of BiOCl photocatalysts is shown. Furthermore, the possible reaction mechanisms of photocatalytic NO oxidation were also revealed. According to the results of in situ DRIFTS (diffuse reflectance infrared Fourier transform spectroscopy), various intermediates were produced during different time intervals of NO photodegradation. The possible pathways are summarized as follows. First, visible light irradiation induces electron-hole pairs on the surface of OV-BOC (BiOCl with oxygen vacancies). Second, photogenerated electrons form superoxide radicals with adsorbed oxygen. Then, the NO molecules adsorbed on the surface of OV-BOC are attacked by superoxide radicals and form nitrate instead of NO₂ (a harmful by-product). Oxygen vacancies greatly improve the photocatalytic oxidation activity of NO and effectively inhibit the production of harmful by-products during NO oxidation.

Keywords: OVs-BiOCl nanoplate, oxygen vacancies, NO oxidation, photocatalysis

Procedia PDF Downloads 122
34321 Autosomal Dominant Polycystic Kidney Patients May Be Predisposed to Various Cardiomyopathies

Authors: Fouad Chebib, Marie Hogan, Ziad El-Zoghby, Maria Irazabal, Sarah Senum, Christina Heyer, Charles Madsen, Emilie Cornec-Le Gall, Atta Behfar, Barbara Ehrlich, Peter Harris, Vicente Torres

Abstract:

Background: Mutations in PKD1 and PKD2, the genes encoding the proteins polycystin-1 (PC1) and polycystin-2 (PC2), cause autosomal dominant polycystic kidney disease (ADPKD). ADPKD is a systemic disease associated with several extrarenal manifestations. Animal models have suggested an important role for the polycystins in cardiovascular function. The aim of the current study is to evaluate the association of various cardiomyopathies in a large cohort of patients with ADPKD. Methods: Clinical data were retrieved from medical records for all patients with ADPKD and cardiomyopathies (n=159). Genetic analysis was performed on available DNA by direct sequencing. Results: Among the 58 patients included in this case series, 39 patients had idiopathic dilated cardiomyopathy (IDCM), 17 had hypertrophic obstructive cardiomyopathy (HOCM), and 2 had left ventricular noncompaction (LVNC). The mean age at cardiomyopathy diagnosis was 53.3, 59.9, and 53.5 years in IDCM, HOCM, and LVNC patients, respectively. The median left ventricular ejection fraction at initial diagnosis of IDCM was 25%. Average basal septal thickness was 19.9 mm in patients with HOCM. Genetic data were available in 19, 8, and 2 cases of IDCM, HOCM, and LVNC, respectively. PKD1 mutations were detected in 47.4%, 62.5%, and 100% of IDCM, HOCM, and LVNC cases. PKD2 mutations were detected only in IDCM cases and were overrepresented (36.8%) relative to the expected frequency in ADPKD (~15%). The prevalence of IDCM, HOCM, and LVNC in our ADPKD clinical cohort was 1:17, 1:39, and 1:333, respectively. Compared to the general population, IDCM and HOCM were approximately 10-fold more prevalent in patients with ADPKD. Conclusions: In summary, we suggest that PKD1 or PKD2 mutations may predispose to idiopathic dilated or hypertrophic cardiomyopathy. There is a trend for patients with PKD2 mutations to develop the former and for patients with PKD1 mutations to develop the latter. Predisposition to various cardiomyopathies may be another extrarenal manifestation of ADPKD.

Keywords: autosomal dominant polycystic kidney (ADPKD), polycystic kidney disease, cardiovascular, cardiomyopathy, idiopathic dilated cardiomyopathy, hypertrophic cardiomyopathy, left ventricular noncompaction

Procedia PDF Downloads 296
34320 Seismological Studies in Some Areas in Egypt

Authors: Gamal Seliem, Hassan Seliem

Abstract:

The Aswan area is one of the most important areas in Egypt and, because it encompasses the vital engineering structure of the High Dam, it has been selected for the present study. The study of crustal deformation and gravity changes associated with earthquake activity in the High Dam area is of great importance for the safety of the High Dam and its economic resources. This paper deals with using micro-gravity, precise leveling, and GPS data for geophysical and geodetic studies. A detailed gravity survey network was established in the area to study the subsurface structures. To study the recent vertical movements, a profile of 10 km length joining the High Dam and the Aswan old dam was established along the road connecting the two dams. This profile consists of 35 GPS/leveling stations extending along the two sides of the road and on the High Dam body. Precise leveling was carried out together with GPS and repeated micro-gravity surveys at the same time. A GPS network consisting of nine stations was established for studying the recent crustal movements. Many campaigns from December 2001 to December 2014 were performed to collect the gravity, leveling, and GPS data. The main aim of this work is to study the structural features and the behavior of the area, as depicted by repeated micro-gravity, precise leveling, and GPS measurements. The present work focuses on the analysis of the gravity, leveling, and GPS data. The gravity results of the present study reveal minor structural features and anomalies trending W-E and N-S. The geodetic results indicated low rates of vertical and horizontal displacement and low strain values, which may be related to the stability of the area.

Keywords: repeated micro-gravity changes, precise leveling, GPS data, Aswan High Dam

Procedia PDF Downloads 433
34319 Time-Domain Nuclear Magnetic Resonance as a Potential Analytical Tool to Assess Thermisation in Ewe's Milk

Authors: Alessandra Pardu, Elena Curti, Marco Caredda, Alessio Dedola, Margherita Addis, Massimo Pes, Antonio Pirisi, Tonina Roggio, Sergio Uzzau, Roberto Anedda

Abstract:

Some of the artisanal cheese products of European countries certified as PDO (Protected Designation of Origin) are made from raw milk. To recognise potential frauds (e.g. pasteurisation or thermisation of milk intended for raw milk cheese production), the alkaline phosphatase (ALP) assay is currently applied only for pasteurisation, although it is known to have notable limitations for the validation of the ALP enzymatic state in non-bovine milk. It is known that frauds considerably impact customers and certifying institutions, sometimes resulting in damage to the product image and potential economic losses for cheesemaking producers. Robust, validated, and univocal analytical methods are therefore needed to allow food control and security organisms to recognise a potential fraud. In an attempt to develop a new reliable method to overcome this issue, time-domain nuclear magnetic resonance (TD-NMR) spectroscopy has been applied in the described work. Daily fresh milk was analysed raw (680.00 µL in each 10-mm NMR glass tube) at least in triplicate. Thermally treated samples were also produced by putting each NMR tube of fresh raw milk in water pre-heated at temperatures from 68°C up to 72°C for up to 3 min, with continuous agitation, and quench-cooling it to 25°C in a water and ice solution. Raw and thermally treated samples were analysed in terms of ¹H T₂ transverse relaxation times with a CPMG sequence (recycle delay: 6 s, interpulse spacing: 0.05 ms, 8000 data points), and quasi-continuous distributions of T₂ relaxation times were obtained by CONTIN analysis. In line with previous data collected by high-field NMR techniques, a decrease in the spin-spin relaxation constant T₂ of the predominant ¹H population was detected in heat-treated milk as compared to raw milk. The decrease of the T₂ parameter is consistent with changes in chemical exchange and diffusive phenomena, likely associated with changes in milk protein (i.e. whey protein and casein) arrangement promoted by heat treatment. Furthermore, the experimental data suggest that the molecular alterations are strictly dependent on the specific heat treatment conditions (temperature/time). Such molecular variations in milk, which are likely transferred to cheese during cheesemaking, highlight the possibility of extending the TD-NMR technique directly to cheese in order to develop a method for assessing fraud related to the use of a milk thermal treatment in PDO raw milk cheese. The results suggest that TD-NMR assays might pave a new way to the detailed characterisation of heat treatments of milk.
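For a single dominant ¹H population, the T₂ the study compares between raw and heated milk can be extracted from a CPMG echo decay by a simple log-linear fit (the echo times and the 150 ms value below are synthetic, chosen only to show the fit; real milk data are multi-exponential, which is why the authors use CONTIN):

```python
import numpy as np

def estimate_t2(times_ms, signal):
    """Estimate T2 from a monoexponential CPMG decay S(t) = S0*exp(-t/T2)
    by a linear fit of log(S) versus t."""
    slope, intercept = np.polyfit(times_ms, np.log(signal), 1)
    return -1.0 / slope

# Synthetic decay with T2 = 150 ms, sampled at hypothetical echo times.
t = np.linspace(0.05, 400, 100)  # echo times in ms
s = np.exp(-t / 150.0)
print(round(estimate_t2(t, s), 1))  # 150.0
```

A heat-treated sample would show a shorter fitted T₂ than the raw sample measured the same way, which is the discriminating signal proposed in the abstract.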

Keywords: cheese fraud, milk, pasteurisation, TD-NMR

Procedia PDF Downloads 227
34318 Effect of Genuine Missing Data Imputation on Prediction of Urinary Incontinence

Authors: Suzan Arslanturk, Mohammad-Reza Siadat, Theophilus Ogunyemi, Ananias Diokno

Abstract:

Missing data is a common challenge in statistical analyses of most clinical survey datasets. A variety of methods have been developed to enable the analysis of survey data with missing values, of which imputation is the most commonly used. However, in order to minimize the bias introduced by imputation, one must choose the right imputation technique and apply it to the correct type of missing data. In this paper, we have identified different types of missing values: missing data due to skip pattern (SPMD), undetermined missing data (UMD), and genuine missing data (GMD), and applied rough set imputation to only the GMD portion of the missing data. We have used rough set imputation to evaluate the effect of such imputation on prediction by generating several simulation datasets based on an existing epidemiological dataset (MESA). To measure how well each dataset lends itself to the prediction model (logistic regression), we have used p-values from the Wald test. To evaluate the accuracy of the prediction, we have considered the width of the 95% confidence interval for the probability of incontinence. Both imputed and non-imputed simulation datasets were fit to the prediction model, and both turned out to be significant (p-value < 0.05). However, the Wald score shows a better fit for the imputed compared to the non-imputed datasets (28.7 vs. 23.4). The average confidence interval width decreased by 10.4% when the imputed dataset was used, meaning higher precision. The results show that using the rough set method for missing data imputation on GMD improves the predictive capability of the logistic regression. Further studies are required to generalize this conclusion to other clinical survey datasets.
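The Wald statistic used to compare the fitted models comes directly from a logistic regression's coefficients and their standard errors. A minimal self-contained sketch on synthetic data (the MESA data are not reproduced here; the coefficients 0.5 and 1.5 are invented):

```python
import numpy as np

def logistic_fit(X, y, n_iter=25):
    """Logistic regression by Newton-Raphson; returns coefficients and
    their standard errors from the inverse Fisher information."""
    X = np.column_stack([np.ones(len(X)), X])  # add intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        H = X.T @ (X * W[:, None])             # Fisher information
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    se = np.sqrt(np.diag(np.linalg.inv(H)))
    return beta, se

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = (rng.random(200) < 1 / (1 + np.exp(-(0.5 + 1.5 * x)))).astype(float)

beta, se = logistic_fit(x, y)
wald = (beta / se) ** 2   # Wald chi-square per coefficient
print(wald[1] > 3.84)     # significant at p < 0.05 (1 df)
```

A larger Wald score for the imputed dataset, as reported (28.7 vs. 23.4), means the predictor's coefficient is estimated more sharply relative to its standard error.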

Keywords: rough set, imputation, clinical survey data simulation, genuine missing data, predictive index

Procedia PDF Downloads 151
34317 Ecotourism Development as an Alternative Livelihood for Guassa Community, Ethiopia

Authors: Abraham Kidane

Abstract:

The study aims at assessing the prospects and challenges of community-based ecotourism development in and around the Guassa Community Conservation Area (GCCA) for the establishment of alternative sources of livelihood for local people and the conservation of natural resources. The Guassa area and its surroundings are endowed with natural, cultural, and religious tourism resources. The study is descriptive in its design and uses both qualitative and quantitative research methods. Interviews and questionnaires were used as instruments for data gathering. The interviews were undertaken with government officials, NGO officials, and experts, as well as with three local community representatives. The three kebeles of Guassa were chosen using purposive sampling because they are immediate neighbors of the GCCA, and 150 questionnaires were administered proportionally to the number of households in each kebele. The perspectives of the MoCT, EWCA, and some tour operation agencies were gathered through questionnaires; five questionnaires were administered to each of them, and all the returns were used in the analysis. Frequency, percentage, mean, one-way ANOVA, and independent t-tests were used to analyze the quantitative data. The findings revealed that food insecurity is commonplace in the study area. The local people's reliance on the conservation area's resources has been increasing, and the area size is also dwindling over time. On the other hand, the local people's level of awareness about Community-Based Ecotourism (CBET) is low. In addition, local capacity in relation to conservation and CBET development is also low, and the training offered by the government and NGOs is inadequate. In general, tourism is not yet considered an alternative source of income and a means of conserving natural resources. Apart from the low level of awareness about CBET and low capacity, poor infrastructure and poor tourism facilities were also identified as challenges for CBET development in the study area.

Keywords: ecotourism, CBET, alternative livelihood, conservation

Procedia PDF Downloads 85
34316 Implementation of Building Information Modelling to Monitor, Assess, and Control the Indoor Environmental Quality of Higher Education Buildings

Authors: Mukhtar Maigari

Abstract:

The landscape of Higher Education (HE) institutions, especially following the COVID-19 pandemic, necessitates advanced approaches to managing Indoor Environmental Quality (IEQ), which is crucial for the comfort, health, and productivity of students and staff. This study investigates the application of Building Information Modelling (BIM) as a multifaceted tool for monitoring, assessing, and controlling IEQ in HE buildings, aiming to bridge the gap between traditional management practices and the innovative capabilities of BIM. Central to the study is a comprehensive literature review, which lays the foundation by examining current knowledge and technological advancements in both IEQ and BIM. This review sets the stage for a deeper investigation into the practical application of BIM in IEQ management. The methodology consists of Post-Occupancy Evaluation (POE), which encompasses physical monitoring, questionnaire surveys, and interviews under the umbrella of case studies. The physical data collection focuses on vital IEQ parameters such as temperature, humidity, and CO₂ levels, conducted using equipment such as data loggers to ensure accurate data. Complementing this, questionnaire surveys gather perceptions and satisfaction levels from students, providing valuable insights into the subjective aspects of IEQ. The interview component, targeting facilities management teams, offers an in-depth perspective on IEQ management challenges and strategies. The research then develops a conceptual BIM-based framework, informed by the findings from the case studies and empirical data. This framework is designed to demonstrate the critical functions necessary for effective IEQ monitoring, assessment, control, and automation with real-time data handling capabilities. The framework leads to the development and testing of a BIM-based prototype tool. This prototype leverages software such as Autodesk Revit with its visual programming tool, Dynamo, and an Arduino-based sensor network, thereby allowing a real-time flow of IEQ data for monitoring, control, and even automation. By harnessing the capabilities of BIM technology, the study presents a forward-thinking approach that aligns with current sustainability and wellness goals, particularly vital in the post-COVID-19 era. The integration of BIM in IEQ management promises not only to enhance the health, comfort, and energy efficiency of educational environments but also to transform them into more conducive spaces for teaching and learning. Furthermore, this research could influence the future of HE buildings by prompting universities and government bodies to re-evaluate and improve teaching and learning environments. It demonstrates how the synergy between IEQ and BIM can empower stakeholders to monitor IEQ conditions more effectively and make informed decisions in real time. Moreover, the developed framework has broader applications: it can serve as a tool for other sustainability assessments, such as energy analysis in HE buildings, leveraging measured data synchronized with the BIM model. In conclusion, this study bridges the gap between theoretical research and real-world application by demonstrating how advanced technologies like BIM can be effectively integrated to enhance environmental quality in educational institutions.
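The real-time checks such a prototype performs on incoming sensor readings reduce, at their core, to comparing each IEQ parameter against an acceptable range. A minimal sketch of that logic (the threshold values are hypothetical placeholders, not taken from the study or any specific standard):

```python
# Hypothetical IEQ thresholds; real limits depend on the guidance adopted
# and are not taken from the study itself.
THRESHOLDS = {
    "temperature_c": (20.0, 26.0),   # acceptable range
    "humidity_pct": (40.0, 60.0),
    "co2_ppm": (0.0, 1000.0),
}

def assess_reading(reading):
    """Return the parameters that fall outside their acceptable range,
    mimicking the checks a BIM prototype would run on sensor data
    before pushing alerts into the model."""
    out_of_range = []
    for key, (lo, hi) in THRESHOLDS.items():
        value = reading.get(key)
        if value is not None and not (lo <= value <= hi):
            out_of_range.append(key)
    return out_of_range

reading = {"temperature_c": 27.5, "humidity_pct": 45.0, "co2_ppm": 1350.0}
print(assess_reading(reading))  # ['temperature_c', 'co2_ppm']
```

In the described architecture this evaluation would sit between the Arduino sensor feed and the Revit/Dynamo visualisation, flagging spaces that need intervention.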

Keywords: BIM, POE, IEQ, HE-buildings

Procedia PDF Downloads 39
34315 True Detective as a Southern Gothic: A Study of Its Music-Lyrics

Authors: Divya Sharma

Abstract:

Nic Pizzolatto’s True Detective offers profound mythological and philosophical ramblings for audiences with literary sensibilities. An American Southern Gothic set in the bayou landscape of the Gulf Coast of Louisiana, where two detectives, Rustin Cohle and Martin Hart, begin investigating the isolated murder of Dora Lange only to discover an entrenched network of perversion and corruption, the series offers an existential outlook. The proposed research paper shall attempt to investigate the pervasive themes of the gothic and existentialism in the music of the first season of the series.

Keywords: gothic, music, existentialism, mythology, philosophy

Procedia PDF Downloads 490
34314 A User Interface for Easiest Way Image Encryption with Chaos

Authors: D. López-Mancilla, J. M. Roblero-Villa

Abstract:

Since 1990, research on chaotic dynamics has received considerable attention, particularly in light of potential applications of this phenomenon in secure communications. Data encryption using chaotic systems was reported in the 1990s as a new approach to signal encoding that differs from conventional methods using numerical algorithms as the encryption key. Algorithms for image encryption have received a lot of attention because of the need for secure image transmission in real time over the internet and wireless networks. Known algorithms for image encryption, like the Data Encryption Standard (DES), have the drawback of a low level of efficiency when the image is large. Encryption based on chaos offers a new and efficient way to obtain fast and highly secure image encryption. In this work, a user interface for image encryption and a novel and simple way to encrypt images using chaos are presented. The main idea is to reshape any image into an n-dimensional vector and combine it with a vector extracted from a chaotic system, in such a way that the image vector can be hidden within the chaotic vector. Once this is done, an array with the original dimensions of the image is formed again. A statistical analysis of the security of the encrypted images is made, and an optimization stage is used to improve the encryption security while, at the same time, allowing the image to be accurately recovered. The user interface uses the algorithms designed for the encryption of images, allowing the user to read an image from the hard drive or another external device. The interface encrypts the image, allowing three modes of encryption, given by three different chaotic systems that the user can choose. Once the image is encrypted, it is possible to view the security analysis and save the image to the hard disk. The main results of this study show that this simple method of encryption, using the optimization stage, achieves an encryption security competitive with the complicated encryption methods used in other works. In addition, the user interface allows encrypting an image with chaos and submitting it through any public communication channel, including the internet.
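The core idea of reshaping the image into a vector and hiding it within a chaotic vector can be sketched as follows. The abstract does not name the three chaotic systems, so the logistic map here is a stand-in, and the XOR combination is one common choice rather than the authors' exact scheme:

```python
import numpy as np

def logistic_keystream(n, x0=0.4567, r=3.99):
    """Byte keystream from the chaotic logistic map x -> r*x*(1-x)."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = int(x * 256) % 256
    return out

def encrypt(image, x0=0.4567):
    """Flatten the image to a vector, XOR it with the chaotic vector,
    and reshape back to the original dimensions. Decryption is the
    same operation with the same key (x0, r)."""
    flat = image.reshape(-1)
    stream = logistic_keystream(flat.size, x0)
    return (flat ^ stream).reshape(image.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)  # toy 8x8 "image"
cipher = encrypt(img)
restored = encrypt(cipher)            # XOR is its own inverse
print(np.array_equal(restored, img))  # True
```

The initial condition x0 and parameter r act as the secret key; the statistical security analysis in the paper would examine properties such as the histogram and pixel correlations of `cipher`.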

Keywords: image encryption, chaos, secure communications, user interface

Procedia PDF Downloads 471
34313 Anaerobic Digestion Batch Study of Taxonomic Variations in Microbial Communities during Adaptation of Consortium to Different Lignocellulosic Substrates Using Targeted Sequencing

Authors: Priyanka Dargode, Suhas Gore, Manju Sharma, Arvind Lali

Abstract:

Anaerobic digestion has been widely used for the production of methane from different biowastes. However, the complexity of the microbial communities involved in the process is poorly understood. The performance of the biogas production process, in terms of productivity, is closely coupled to its microbial community structure and the syntrophic interactions amongst the community members. The present study aims at understanding the taxonomic variations occurring in a starter inoculum when acclimatised to different lignocellulosic biomass (LBM) feedstocks over the time of digestion. The work underlines the use of high-throughput Next Generation Sequencing (NGS) for validating the changes in taxonomic patterns of microbial communities. Biomethane Potential (BMP) batches were set up with different pretreated and non-pretreated LBM residues using the same microbial consortium, and samples were withdrawn for studying the changes in the microbial community in terms of its structure and predominance with respect to changes in the metabolic profile of the process. DNA from samples withdrawn at different time intervals, chosen with reference to performance changes in the digestion process, was extracted, followed by 16S rRNA amplicon sequencing analysis on the Illumina platform. Biomethane potential and substrate consumption were monitored using gas chromatography (GC) and the reduction in COD (chemical oxygen demand), respectively. Taxonomic analysis of the QIIME server data revealed that the microbial community structure changes with different substrates as well as at different time intervals. It was observed that the biomethane potential of each substrate was relatively similar, but the time required for substrate utilization and its conversion to biomethane differed between substrates. This could be attributed to the nature of the substrate and, consequently, to the differences in the dominance of microbial communities with regard to different substrates and different phases of the anaerobic digestion process. Knowledge of the microbial communities involved would allow a rational, substrate-specific consortium design, which will help to reduce the consortium adaptation period and enhance substrate utilisation, resulting in improved efficacy of the biogas process.

Keywords: amplicon sequencing, biomethane potential, community predominance, taxonomic analysis

Procedia PDF Downloads 510
34312 Cardiometabolic Risk Factors Responses to Supplemental High Intensity Exercise in Middle School Children

Authors: R. M. Chandler, A. J. Stringer

Abstract:

In adults, short bursts of high-intensity exercise (intensities between 80-95% of maximum heart rate) increase cardiovascular and metabolic function without the time investment of traditional aerobic training. Similar improvements in various health indices are also becoming increasingly evident in children in countries other than the United States. In the United States, physical education programs have become shorter in length and fewer in frequency. Against this background, it is imperative that health and physical educators deliver well-organized and focused fitness programs that can be tolerated across many different somatotypes. Perhaps the least effective lag time in a US physical education (PE) class is the first 10 minutes, during which children warm up. Replacing a traditional PE warm-up with a 10-minute high-intensity exercise protocol is a time-efficient method to impact health, leaving as much time as possible for other PE material such as skill development and motor behavior development. This supplemental 10-minute high-intensity exercise increases cardiovascular function and induces favorable body composition changes in as little as six weeks, with further enhancement throughout a semester of activity. The supplemental high-intensity exercise did not detract from the PE lesson outcomes.

Keywords: cardiovascular fitness, high intensity interval training, high intensity exercise, pediatric

Procedia PDF Downloads 127
34311 Oxidation and Reduction Kinetics of Ni-Based Oxygen Carrier for Chemical Looping Combustion

Authors: J. H. Park, R. H. Hwang, K. B. Yi

Abstract:

Carbon Capture and Storage (CCS) is one of the important technologies for reducing CO₂ emissions from large stationary sources such as power plants. Among the carbon capture technologies for power plants, chemical looping combustion (CLC) has attracted much attention due to its higher thermal efficiency and lower cost of electricity. A CLC process consists of a fuel reactor and an air reactor, which are interconnected fluidized bed reactors. In the fuel reactor, an oxygen carrier (OC) is reduced by fuel gas such as CH₄, H₂, or CO. The OC is then sent to the air reactor and oxidized by air or O₂ gas. The oxidation and reduction reactions of the OC occur repeatedly between the two reactors. In the CLC system, a high concentration of CO₂ can easily be obtained by steam condensation from the fuel reactor alone. It is very important to understand the oxidation and reduction characteristics of the oxygen carrier in the CLC system in order to determine the solids circulation rate between the air and fuel reactors and the amount of solid bed material. In this study, we have conducted experiments and interpreted the oxidation and reduction reaction characteristics by observing the weight change of a Ni-based oxygen carrier using TGA with varying gas concentrations and temperatures. Characterization of the oxygen carrier was carried out with BET and SEM. The reaction rate increased with increasing temperature and increasing inlet gas concentration. We also compared the experimental results with a basic reaction kinetic model (the JMA model). The JMA model is one of the nucleation and nuclei growth models, and it can explain the delay time in the early part of the reaction. As a result, the model data and experimental data agree over the examined conversion and time ranges with an overall variance (R²) greater than 98%. We also calculated the activation energy, pre-exponential factor, and reaction order through the Arrhenius plot and compared them with previous Ni-based oxygen carriers.
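The JMA (Johnson-Mehl-Avrami) model expresses conversion as X(t) = 1 - exp(-(kt)^n), which captures the early-time delay via the exponent n. A minimal sketch of fitting it by the standard double-log linearisation (the k and n values below are synthetic, not the paper's TGA results):

```python
import numpy as np

def avrami_conversion(t, k, n):
    """JMA (Avrami) model: X(t) = 1 - exp(-(k*t)**n)."""
    return 1.0 - np.exp(-(k * t) ** n)

def fit_avrami(t, X):
    """Linearised fit: ln(-ln(1 - X)) = n*ln(k) + n*ln(t)."""
    y = np.log(-np.log(1.0 - X))
    slope, intercept = np.polyfit(np.log(t), y, 1)
    n = slope
    k = np.exp(intercept / n)
    return k, n

# Synthetic conversion curve with k = 0.05 1/s, n = 2 (sigmoidal shape,
# including the early-time delay the JMA model is chosen to capture).
t = np.linspace(1, 100, 50)
X = avrami_conversion(t, 0.05, 2.0)
k_fit, n_fit = fit_avrami(t, X)
print(round(k_fit, 3), round(n_fit, 2))  # 0.05 2.0
```

Repeating the fit at several temperatures yields k(T), from which an Arrhenius plot of ln k versus 1/T gives the activation energy and pre-exponential factor reported in the study.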

Keywords: chemical looping combustion, kinetic, nickel-based, oxygen carrier, spray drying method

Procedia PDF Downloads 194
34310 Filmic and Verbal Metaphors

Authors: Manana Rusieshvili, Rusudan Dolidze

Abstract:

This paper aims at 1) investigating the ways in which a traditional, monomodal written verbal metaphor can be transposed into a monomodal non-verbal (visual) or multimodal (aural-visual) filmic metaphor; and 2) exploring similarities and differences in the processes of encoding and decoding monomodal and multimodal metaphors. The empirical data on which the research is based comprise three sources: the novel ‘The Hoods’ by Harry Grey, the script of the film ‘Once Upon a Time in America’ (English version by David Mills), and the resultant film by Sergio Leone. In order to achieve the above-mentioned goals, the research focuses on the following issues: 1) identification of verbal and non-verbal monomodal and multimodal metaphors in the above-mentioned sources; 2) investigation of the ways and modes in which the specific written monomodal metaphors appearing in the novel and the script are enacted in the film and become visual, aural, or visual-aural filmic metaphors; and 3) study of the factors which contribute to the encoding and decoding of the filmic metaphor. The collection and analysis of the data were carried out in two stages: firstly, the relevant data, i.e. the monomodal metaphors from the novel, the script, and the film, were identified and collected. In the second, final stage, the metaphors taken from all three sources were analysed and compared, and two types of phenomena were selected for discussion: (1) the monomodal written metaphors found in the novel and/or in the script which become monomodal visual/aural metaphors in the film; (2) the monomodal written metaphors found in the novel and/or in the script which become multimodal filmic (visual-aural) metaphors in the film.

Keywords: encoding, decoding, filmic metaphor, multimodality

Procedia PDF Downloads 511
34309 A Review on Comparative Analysis of Path Planning and Collision Avoidance Algorithms

Authors: Divya Agarwal, Pushpendra S. Bharti

Abstract:

Autonomous mobile robots (AMRs) are expected to serve as smart tools for operations in every automation industry. Path planning and obstacle avoidance are the backbone of AMRs, as robots have to reach their goal location while avoiding obstacles and traversing an optimized path defined according to criteria such as distance, time, or energy. Path planning can be classified into global and local path planning, where environmental information is known and unknown/partially known, respectively. A number of sensors are used for data collection. Algorithms such as artificial potential field (APF), rapidly exploring random trees (RRT), bidirectional RRT, the fuzzy approach, Pure Pursuit, the A* algorithm, vector field histogram (VFH), and modified local path planning algorithms have been used over the last three decades for path planning and obstacle avoidance in AMRs. This paper reviews some of the path planning and obstacle avoidance algorithms used in the field of AMRs. The review includes a comparative analysis of simulations and mathematical computations of path planning and obstacle avoidance algorithms using MATLAB 2018a. From the review, it could be concluded that different algorithms may complete the same task (i.e., with a different set of instructions) in more or less time, space, or effort.
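To make one of the listed planners concrete, here is a minimal sketch of grid-based A* search with a Manhattan-distance heuristic; the grid encoding (0 = free, 1 = obstacle) and 4-connected motion are illustrative assumptions, not the paper's MATLAB setup:

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 2D grid (0 = free, 1 = obstacle); returns the path as a list
    of (row, col) cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan distance: admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, None)]  # (f = g + h, g, cell, parent)
    came_from, g_best = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:       # already expanded with a better cost
            continue
        came_from[cell] = parent
        if cell == goal:            # reconstruct path by walking parents back
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get((nr, nc), float("inf")):
                    g_best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cell))
    return None
```

Swapping the heuristic or neighbor set is exactly where such algorithms trade path quality against computation time, as the comparative analysis notes.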

Keywords: path planning, obstacle avoidance, autonomous mobile robots, algorithms

Procedia PDF Downloads 217
34308 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements

Authors: Henok Hailemariam, Frank Wuttke

Abstract:

Collapsible soils are weak soils that appear stable in their natural state, normally a dry condition, but deform rapidly under saturation (wetting), thus generating large and unexpected settlements which often yield disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for soil subsidence prediction, this model does not require moisture content as an input parameter, thus providing the opportunity to obtain an accurate estimation of the relative subsidence of collapsible soils using dielectric measurement only. The prediction model is developed based on an existing relative subsidence prediction model (which is dependent on soil moisture condition) and an advanced theoretical frequency- and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original relative subsidence prediction model). For large-scale sub-surface soil exploration, spatial sub-surface soil dielectric data over wide areas and great depths of weak (collapsible) soil deposits can be obtained using non-destructive high-frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small-scale in-situ measurements, techniques such as an open-ended coaxial line with widely applicable time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small- or large-scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement.
Some of the resulting benefits are the preservation of the undisturbed nature of the soil as well as a reduction in the investigation costs and analysis time in the identification of weak (problematic) soils. The accuracy of prediction of the presented model is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions and a good match between the model prediction and experimental results is obtained.

Keywords: collapsible soil, dielectric permittivity, moisture content, relative subsidence

Procedia PDF Downloads 340
34307 Regression Model Evaluation on Depth Camera Data for Gaze Estimation

Authors: James Purnama, Riri Fitri Sari

Abstract:

We investigate the machine learning algorithm selection problem in the context of depth-image-based eye gaze estimation, with respect to its essential difficulty in reducing the number of required training samples and the training time. Statistical measures of prediction accuracy are increasingly used to assess and evaluate prediction or estimation in gaze estimation. This article evaluates Root Mean Squared Error (RMSE) and R-squared statistical analysis to assess machine learning methods on depth camera data for gaze estimation. Four machine learning methods were evaluated: Random Forest Regression, Regression Tree, Support Vector Machine (SVM), and Linear Regression. The experimental results show that Random Forest Regression has the lowest RMSE and the highest R-squared, which means it is the best among the evaluated methods.
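The two evaluation statistics used here can be sketched in a few lines; this is a generic illustration of RMSE and R-squared, not the authors' Orange/Python pipeline:

```python
import math

def rmse(y_true, y_pred):
    """Root Mean Squared Error: typical magnitude of the prediction error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r_squared(y_true, y_pred):
    """Coefficient of determination: share of variance explained by the model."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

Ranking candidate regressors then amounts to computing both statistics on a held-out test set: the preferred model has the lowest RMSE and the highest R-squared.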

Keywords: gaze estimation, gaze tracking, eye tracking, kinect, regression model, orange python

Procedia PDF Downloads 524
34306 Increased Reaction and Movement Times When Text Messaging during Simulated Driving

Authors: Adriana M. Duquette, Derek P. Bornath

Abstract:

Reaction Time (RT) and Movement Time (MT) are important components of everyday life that affect the way in which we move about our environment. These measures become even more crucial when an event can be caused (or avoided) in a fraction of a second, such as the RT and MT required while driving. The purpose of this study was to develop a simpler method of testing RT and MT during simulated driving, with or without text messaging, in a university-aged population (n = 170). In the control condition, a randomly delayed red light stimulus flashed on a computer interface after the participant began pressing the ‘gas’ pedal on a foot switch mat. Simple RT was defined as the time between the presentation of the light stimulus and the initiation of lifting the foot from the switch mat ‘gas’ pedal, while MT was defined as the time from the initiation of lifting the foot to the initiation of depressing the switch mat ‘brake’ pedal. In the texting condition, upon pressing the ‘gas’ pedal, a ‘text message’ appeared on the computer interface in a dialog box, which the participant typed on their cell phone while waiting for the light stimulus to turn red. In both conditions, the sequence was repeated 10 times, and an average RT (seconds) and average MT (seconds) were recorded. Condition significantly (p < .001) impacted overall RTs, as the texting condition (0.47 s) took longer than the no-texting (control) condition (0.34 s). Longer MTs were also recorded in the texting condition (0.28 s) than in the control condition (0.23 s), p = .001. The overall increase in Response Time (RT + MT) of 189 ms during the texting condition would equate to an additional 4.2 meters travelled (before reacting to the stimulus and beginning to brake) if the participant had been driving an automobile at 80 km per hour.
In conclusion, increasing task complexity due to the dual-task demand of text messaging during simulated driving caused significant increases in RT (41%), MT (23%) and Response Time (34%), thus further strengthening the mounting evidence against text messaging while driving.
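The 4.2 m figure follows from unit conversion alone; a minimal sketch of the arithmetic (not code from the study):

```python
def extra_distance_m(delta_response_s, speed_kmh):
    """Extra distance travelled before braking begins, given an increase in
    response time (seconds) at a constant speed (km/h)."""
    speed_ms = speed_kmh / 3.6  # convert km/h to m/s
    return delta_response_s * speed_ms
```

At 80 km/h (about 22.2 m/s), a 189 ms increase in response time corresponds to roughly 4.2 m of additional travel.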

Keywords: simulated driving, text messaging, reaction time, movement time

Procedia PDF Downloads 511
34305 Analysis of Radial Pulse Using Nadi-Parikshan Yantra

Authors: Ashok E. Kalange

Abstract:

Diagnosis according to Ayurveda seeks the root cause of a disease. Out of the eight kinds of examination, Nadi-Pariksha (pulse examination) is important. Nadi-Pariksha is performed at the root of the thumb by examining the radial artery using three fingers. Ancient Ayurveda identifies health status by observing the wrist pulses in terms of 'Vata', 'Pitta' and 'Kapha', collectively called the tridosha, as the basic elements of the human body, and in their combinations. Diagnosis by traditional pulse analysis, Nadi-Pariksha, requires long experience in pulse examination and a high level of skill, and the interpretation tends to be subjective, depending on the expertise of the practitioner. The present work is part of efforts to make Nadi-Parikshan objective. A Nadi Parikshan Yantra (three-point pulse examination system) was developed in our laboratory using three pressure sensors (one each for the Vata, Pitta, and Kapha points on the radial artery). Radial pulse data were collected from a large number of subjects and analyzed on the basis of the relative amplitudes of the three point pulses, as well as in the frequency and time domains. The same subjects were examined by an Ayurvedic physician (Nadi Vaidya), and the dominant Dosha (Vata, Pitta, or Kapha) was identified. The results are discussed in detail in the paper.
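As an illustrative sketch of the frequency-domain side of such pulse analysis (not the authors' implementation), a naive DFT can locate the dominant frequency of a sampled pulse waveform; the sampling rate is an assumed parameter:

```python
import cmath
import math

def dominant_frequency(samples, fs):
    """Return the dominant frequency (Hz) of a real-valued waveform sampled
    at fs Hz, using a naive discrete Fourier transform."""
    n = len(samples)
    mags = []
    for k in range(1, n // 2):  # skip the DC component (k = 0)
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        mags.append((abs(s), k))
    k = max(mags)[1]            # bin with the largest magnitude
    return k * fs / n
```

For real recordings an FFT library and windowing would be used instead; the naive O(n²) loop is only for clarity.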

Keywords: Nadi Parikshan Yantra, Tridosha, Nadi Pariksha, human pulse data analysis

Procedia PDF Downloads 180
34304 Analysing the Permanent Deformation of Cohesive Subsoil Subject to Long Term Cyclic Train Loading

Authors: Natalie M. Wride, Xueyu Geng

Abstract:

Subgrade soils of railway infrastructure are subjected to a significant number of load applications over their design life. The use of slab track on existing and proposed rail links requires a reduced maintenance and repair regime for the embankment subgrade, because access to the subgrade soils for remediation of cyclic deformation is restricted. It is, therefore, important to study the deformation behaviour of soft cohesive subsoils induced by long-term cyclic loading. In this study, a series of oedometer tests and cyclic triaxial tests (10,000 cycles) have been undertaken to investigate the undrained deformation behaviour of soft kaolin. X-ray Computed Tomography (CT) scanning of the samples has been performed to determine the change in porosity and soil structure density from the sample microstructure as a result of the laboratory testing regime undertaken. Combined with the examination of excess pore pressures and strains obtained from the cyclic triaxial tests, the results are compared with an existing analytical solution for long-term settlement under repeated low-amplitude loading. Modifications to the analytical solution are presented based on the laboratory analysis, showing good agreement with further test data.

Keywords: creep, cyclic loading, deformation, long term settlement, train loading

Procedia PDF Downloads 283
34303 An Entrepreneurial Culture Led by Creativity and Innovation: Challenges and Competencies for Sri Lanka as a Middle Income Country

Authors: Tissa Ravinda Perera

Abstract:

An open economic policy was introduced in Sri Lanka in 1977, before many other countries in Asia, to align her economy with world economic trends; this affected indigenous businesses, since they had to compete with foreign products, processes, technology, innovations, and businesses. The year 2010 was a milestone in Sri Lankan history for achieving developmental goals, when Fox Business rated Sri Lanka as the best performing global economy. However, Sri Lanka missed her chance of achieving development amid the political and social chaos following the regime change in 2015. This paper argues that to support the development of the country, Sri Lanka must develop an entrepreneurial culture. In this endeavor, creativity and innovation will play a pivotal role in achieving the desired level of development. This study used secondary data from various local and international sources to understand and explore the existing state of the Sri Lankan economy, the state of entrepreneurial culture and innovation, and the challenges and competencies involved in developing an entrepreneurial culture in Sri Lanka. The data collected from secondary sources are presented in tables in a meaningful manner; based on these tables, several findings emerged and conclusions were drawn to support the argument of this paper. The paper reveals that the development of an entrepreneurial culture has to be associated with creativity and innovation to gain a competitive advantage over the development strategies of other countries. It shows that an entrepreneurial culture will help minorities, women, and underprivileged societies to empower themselves, and will help to confront and manage the youth unrest which has created anarchy in the country from time to time.
Throughout, the paper highlights the past, present, and future scenarios of the Sri Lankan economy, along with the modifications to be made through the development of an entrepreneurial culture, in light of innovation and creativity, to achieve the desired level of development.

Keywords: economy, industry, creativity, innovation, entrepreneurship, entrepreneurial culture

Procedia PDF Downloads 166
34302 Time Fetching Water and Maternal Childcare Practices: Comparative Study of Women with Children Living in Ethiopia and Malawi

Authors: Davod Ahmadigheidari, Isabel Alvarez, Kate Sinclair, Marnie Davidson, Patrick Cortbaoui, Hugo Melgar-Quiñonez

Abstract:

The burden of collecting water tends to fall disproportionately on women and girls in low-income countries. Specifically, women in Sub-Saharan Africa spend between one and eight hours per day fetching water for domestic use. While there has been research on the global time burden of collecting water, it has mainly focused on water quality parameters, leaving the relationship between water fetching and health outcomes understudied; in particular, there is little available evidence regarding the relationship between water fetching and maternal childcare practices. The main objective of this study was to help fill this gap in the literature. Data from two surveys conducted by CARE Canada in Ethiopia and Malawi in 2016-2017 were used. Descriptive statistics indicate that women were predominantly responsible for collecting water in both Ethiopia (87%) and Malawi (99%), with the majority spending more than 30 minutes per day on water collection. With regard to childcare practices, in both countries breastfeeding was relatively high (77% and 82%, respectively) and treatment for malnutrition was low (15% and 8%, respectively). However, the same consistency was not found for weighing: in Ethiopia only 16% took their children for weighing, in contrast to 94% in Malawi. These three practices were summed to create one variable for the regression analyses. Unadjusted logistic regression findings showed that time fetching water was significantly associated with childcare practices only in Ethiopia; once adjusted for covariates, this relationship was no longer significant. Adjusted logistic regressions also showed that the factors that did influence childcare practices differed slightly between the two countries.
In Ethiopia, a lack of access to a community water supply (OR = 0.668; P = 0.010), poor attitudes towards gender equality (OR = 0.608; P = 0.001), and no access to land (OR = 0.603; P < 0.001) significantly decreased a woman's odds of using positive childcare practices. Notably, being a young woman aged 15-24 (OR = 2.308; P = 0.017) or 25-29 (OR = 2.065; P = 0.028) increased the probability of using positive childcare practices. In Malawi, by contrast, higher maternal age and low decision-making power significantly decreased a woman's odds of using positive childcare practices. In conclusion, this study found that although the amount of time women spend fetching water appears to matter for childcare practices, it is not significantly related to them once covariates are controlled for. Importantly, women's age contributes to childcare practices in both Ethiopia and Malawi.
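To illustrate how the reported odds ratios can be read (OR < 1 meaning decreased odds of positive childcare practices, OR > 1 increased odds), here is a generic sketch, not the study's statistical code:

```python
import math

def odds_ratio_from_coef(beta):
    """Odds ratio implied by a logistic-regression coefficient beta."""
    return math.exp(beta)

def odds_ratio_2x2(exposed_pos, exposed_neg, unexposed_pos, unexposed_neg):
    """Unadjusted odds ratio from a 2x2 contingency table of
    exposure (rows) vs. outcome (columns)."""
    return (exposed_pos / exposed_neg) / (unexposed_pos / unexposed_neg)
```

An adjusted OR such as 0.668 corresponds to a negative coefficient in the fitted model, i.e. the exposure lowers the odds of the outcome once the other covariates are held fixed.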

Keywords: time fetching water, community water supply, women’s child care practices, Ethiopia, Malawi

Procedia PDF Downloads 185
34301 The Event of Extreme Precipitation Occurred in the Metropolitan Mesoregion of the Capital of Para

Authors: Natasha Correa Vitória Bandeira, Lais Cordeiro Soares, Claudineia Brazil, Luciane Teresa Salvi

Abstract:

The intense rain event that occurred between February 16 and 18, 2018, in the city of Barcarena in Pará, in the North region of Brazil, demonstrates the importance of analyzing this type of event. The metropolitan mesoregion of Belém was severely affected by rains far above the averages normally expected for that time of year; this phenomenon affected, in addition to the capital, the municipalities of Barcarena, Murucupi, and Muruçambá, resulting in a great flood in the rivers of the region, whose basins received precipitation of great intensity. This caused concern for the local population, because companies that accumulate ore tailings are located in this region, and in this specific case the dam of one of these companies leached ore into the water bodies of the Murucupi River Basin. This article aims to characterize the phenomenon through a spatial analysis of the distribution of rainfall, using data from atmospheric soundings, satellite images, radar images, and the GPCP (Global Precipitation Climatology Project), in addition to rainfall stations located in the study region. The results demonstrate a dissociation between the data measured at the meteorological stations and the other forms of analysis of this extreme event. Monitoring based solely on data from pluviometric stations is not sufficient for monitoring and/or diagnosing extreme weather events, and investment by the competent bodies in a larger network of pluviometric stations, sufficient to meet the demand in a given region, is important.

Keywords: extreme precipitation, great flood, GPCP, ore dam

Procedia PDF Downloads 90
34300 Multi-Objective Evolutionary Computation Based Feature Selection Applied to Behaviour Assessment of Children

Authors: F. Jiménez, R. Jódar, M. Martín, G. Sánchez, G. Sciavicco

Abstract:

Attribute or feature selection is one of the basic strategies for improving the performance of data classification tasks and, at the same time, reducing the complexity of classifiers; it is particularly fundamental when the number of attributes is relatively high. Its application to unsupervised classification is restricted to a limited number of experiments in the literature. Evolutionary computation has already proven itself to be a very effective choice for consistently reducing the number of attributes towards a better classification rate and a simpler semantic interpretation of the inferred classifiers. We present a feature selection wrapper model composed of a multi-objective evolutionary algorithm, the clustering method Expectation-Maximization (EM), and the classifier C4.5, for the unsupervised classification of data extracted from a psychological test named BASC-II (Behavior Assessment System for Children, 2nd ed.), with two objectives: maximizing the likelihood of the clustering model and maximizing the accuracy of the obtained classifier. We present a methodology that integrates feature selection for unsupervised classification, model evaluation, decision making (to choose the most satisfactory model according to an a posteriori process in a multi-objective context), and testing. We compare the performance of the classifiers obtained by the multi-objective evolutionary algorithms ENORA and NSGA-II, and the best solution is then validated by the psychologists who collected the data.
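The a posteriori decision-making step operates on the Pareto front over the two objectives; a minimal sketch of Pareto filtering (assuming both objectives are maximized; this is not the authors' ENORA/NSGA-II code):

```python
def pareto_front(solutions):
    """Return the non-dominated solutions from a list of objective tuples,
    where every objective is to be maximized. A solution is dominated if
    another solution is at least as good in all objectives and differs."""
    front = []
    for s in solutions:
        dominated = any(
            all(o >= v for o, v in zip(other, s)) and other != s
            for other in solutions
        )
        if not dominated:
            front.append(s)
    return front
```

The decision maker then picks one solution from this front, e.g. the trade-off between clustering likelihood and classifier accuracy that best fits the application.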

Keywords: evolutionary computation, feature selection, classification, clustering

Procedia PDF Downloads 353
34299 Using Corpora in Semantic Studies of English Adjectives

Authors: Oxana Lukoshus

Abstract:

The methods of corpus linguistics, a well-established field of research, are being increasingly applied in cognitive linguistics. Corpus data are especially useful for different quantitative studies of grammatical and other aspects of language. The main objective of this paper is to demonstrate how present-day corpora can be applied in semantic studies in general and in semantic studies of adjectives in particular. Polysemantic adjectives have been the subject of numerous studies, but most of them have been carried out on dictionaries. Undoubtedly, dictionaries are viewed as one of the basic data sources, but only at the initial steps of research: the author usually starts with an analysis of the lexicographic data, after which s/he comes up with a hypothesis. In the research conducted, three polysemantic synonyms, true, loyal, and faithful, were analyzed in terms of the differences and similarities in their semantic structure. A corpus-based approach to the study of these adjectives involves the following. After the analysis of the dictionary data, reference was made to two corpora to study the distributional patterns of the words under study: the British National Corpus (BNC) and the Corpus of Contemporary American English (COCA). These corpora are continually updated and contain thousands of examples of the words under research, which makes them a useful and convenient data source. For the purpose of this study there were no special requirements regarding the genre, mode, or time of the texts included in the corpora. Out of the range of possibilities offered by corpus-analysis software (e.g. word lists, statistics of word frequencies, etc.), the most useful tool for the semantic analysis was extracting a list of co-occurrences for the given search words. Searching by lemmas (e.g. true, true to) and grouping the results by lemmas proved to be the most efficient corpus features for the adjectives under study.
Following the search process, the corpora provided a list of co-occurrences, which were then analyzed and classified. Not every co-occurrence was relevant to the analysis. For example, phrases like ‘An enormous sense of responsibility to protect the minds and hearts of the faithful from incursions by the state was perceived to be the basic duty of the church leaders’ or ‘True,’ said Phoebe, ‘but I'd probably get to be a Union Official immediately’ were left out, as in the first example the faithful is a substantivized adjective and in the second example true is used alone, with no other parts of speech. The subsequent analysis of the corpus data provided the grounds for the distribution groups of the adjectives under study, which were then investigated with the help of a semantic experiment. To sum up, the corpus-based approach has proved to be a powerful, reliable, and convenient tool for obtaining data for further semantic study.
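The co-occurrence extraction described above can be sketched for a toy tokenized text; the window size is an illustrative parameter, not a BNC/COCA setting:

```python
from collections import Counter

def collocates(tokens, node, window=2):
    """Count words co-occurring with `node` within +/- `window` tokens."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            # count every token in the window except the node occurrence itself
            counts.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return counts
```

Real corpus tools additionally lemmatize, POS-tag, and rank collocates by association scores (e.g. mutual information) rather than raw frequency.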

Keywords: corpora, corpus-based approach, polysemantic adjectives, semantic studies

Procedia PDF Downloads 303
34298 A Large Dataset Imputation Approach Applied to Country Conflict Prediction Data

Authors: Benjamin Leiby, Darryl Ahner

Abstract:

This study demonstrates an alternative stochastic imputation approach for large datasets when preferred commercial packages struggle to iterate due to numerical problems. A large country conflict dataset motivates the search to impute missing values at levels well over the common threshold of 20% missingness. The methodology capitalizes on correlation while using model residuals to quantify the uncertainty in estimating unknown values. Examination of the methodology provides insight toward choosing linear or nonlinear modeling terms. Static tolerances common in most packages are replaced with tailorable tolerances that exploit residuals to fit each data element. The evaluation of the methodology includes observing computation time, model fit, and the comparison of known values to replacement values created through imputation. Overall, the country conflict dataset illustrates promise with modeling first-order interactions while presenting a need for further refinement that mimics predictive mean matching.
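A minimal sketch of stochastic regression imputation, where a fitted regression prediction is perturbed by a residual draw; this is a simplified, hypothetical version of the general technique, not the authors' methodology:

```python
import random
import statistics

def stochastic_impute(x, y, seed=0):
    """Impute missing y values (None) via simple linear regression of y on x,
    adding a Gaussian draw scaled by the residual standard deviation so that
    imputed values reflect estimation uncertainty."""
    pairs = [(a, b) for a, b in zip(x, y) if b is not None]
    xs, ys = zip(*pairs)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    slope = sum((a - mx) * (b - my) for a, b in pairs) / sum((a - mx) ** 2 for a in xs)
    intercept = my - slope * mx
    residuals = [b - (intercept + slope * a) for a, b in pairs]
    sd = statistics.pstdev(residuals)
    rng = random.Random(seed)
    return [b if b is not None else intercept + slope * a + rng.gauss(0.0, sd)
            for a, b in zip(x, y)]
```

Unlike deterministic regression imputation, the added residual noise preserves variability in the completed dataset, which is the property the paper's tailorable-tolerance approach builds on.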

Keywords: correlation, country conflict, imputation, stochastic regression

Procedia PDF Downloads 107
34297 AIR SAFE: An Internet of Things System for Air Quality Management Leveraging Artificial Intelligence Algorithms

Authors: Mariangela Viviani, Daniele Germano, Simone Colace, Agostino Forestiero, Giuseppe Papuzzo, Sara Laurita

Abstract:

Nowadays, people spend most of their time in closed environments, in offices or at home. Therefore, secure and highly livable environmental conditions are needed to reduce the probability of airborne viruses spreading. Also, to lower the human impact on the planet, it is important to reduce energy consumption. Heating, Ventilation, and Air Conditioning (HVAC) systems account for the major part of energy consumption in buildings [1]. Devising systems to control and regulate airflow is, therefore, essential for energy efficiency. Moreover, an optimal setting for thermal comfort and air quality is essential for people’s well-being at home or in offices, and increases productivity. Thanks to the features of Artificial Intelligence (AI) tools and techniques, it is possible to design innovative systems with: (i) improved monitoring and prediction accuracy; (ii) enhanced decision-making and mitigation strategies; (iii) real-time air quality information; (iv) increased efficiency in data analysis and processing; (v) advanced early warning systems for air pollution events; (vi) automated and cost-effective monitoring networks; and (vii) a better understanding of air quality patterns and trends. We propose AIR SAFE, an IoT-based infrastructure designed to optimize air quality and thermal comfort in indoor environments leveraging AI tools. AIR SAFE employs a network of smart sensors collecting indoor and outdoor data to be analyzed in order to take any corrective measures that ensure the occupants’ wellness. The data are analyzed through AI algorithms able to predict future levels of temperature, relative humidity, and CO₂ concentration [2]. Based on these predictions, AIR SAFE takes actions, such as opening/closing the window or the air conditioner, to guarantee a high level of thermal comfort and air quality in the environment.
In this contribution, we present the results from the AI algorithm we implemented on the first set of data collected in a real environment. The results were compared with other models from the literature to validate our approach.
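As a toy sketch of the predict-then-act loop such a system performs (the naive extrapolation forecaster and the 1000 ppm threshold are illustrative assumptions, not AIR SAFE's actual models):

```python
def forecast_next(series):
    """Naive linear extrapolation of the next reading from the last two samples."""
    return series[-1] + (series[-1] - series[-2])

def ventilation_action(co2_history_ppm, threshold_ppm=1000.0):
    """Decide whether to ventilate based on the predicted CO2 level:
    act before the threshold is crossed, not after."""
    return "open_window" if forecast_next(co2_history_ppm) > threshold_ppm else "hold"
```

A production system would replace the forecaster with a trained model over temperature, humidity, and CO₂, but the control structure (predict, compare against a comfort bound, actuate) is the same.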

Keywords: air quality, internet of things, artificial intelligence, smart home

Procedia PDF Downloads 78