Search results for: nearest neighbour
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 284

74 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs

Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa

Abstract:

Hatchery performance is critical for the profitability of poultry breeder operations. Some extrinsic parameters of eggs and breeders increase or decrease hatchability. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chickens' eggs and to determine the most efficient classification model for a hatchability rate greater than 90%. Seven extrinsic parameters were considered: egg weight, moisture loss, breeders' age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the most influential variables on hatchability. First, the correlation between each parameter and hatchability was checked. Then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), k-Nearest Neighbors (kNN), Support Vector Machines (SVM) with a linear kernel, and Random Forest (RF) algorithms were applied to classify hatchability. This grouping was treated as a binary classification task. Hatchability was negatively correlated with egg weight, breeders' age, shell width, and shell length, and positively correlated with moisture loss, number of fertilised eggs, and shell thickness. The multiple linear regression model was more accurate than single-variable linear models, with the highest coefficient of determination (R² = 94%) and the minimum AIC and BIC values. According to the classification results, RF, CART, and kNN achieved the highest accuracy values (0.99, 0.975, and 0.972, respectively) for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying breeder outcomes as economically profitable or not in a commercial hatchery.
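
A minimal sketch (not the authors' code) of the classifier comparison described above, using scikit-learn; the CSV path, column names, and the 90% threshold encoding are assumptions for illustration.

```python
# Hypothetical comparison of the five classifiers named in the abstract.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("hatchability.csv")  # hypothetical file
features = ["egg_weight", "moisture_loss", "breeder_age", "fertilised_eggs",
            "shell_width", "shell_length", "shell_thickness"]
X = df[features]
y = (df["hatchability"] > 0.90).astype(int)  # binary target: hatchability above 90%

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "CART": DecisionTreeClassifier(random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM (linear)": SVC(kernel="linear"),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```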

Keywords: classification models, egg weight, fertilised eggs, multiple linear regression

Procedia PDF Downloads 56
73 A Machine Learning Approach for Detecting and Locating Hardware Trojans

Authors: Kaiwen Zheng, Wanting Zhou, Nan Tang, Lei Li, Yuanhang He

Abstract:

The integrated circuit industry has become a cornerstone of the information society, finding widespread application in areas such as industry, communication, medicine, and aerospace. However, with the increasing complexity of integrated circuits, Hardware Trojans (HTs) implanted by attackers have become a significant threat to their security. In this paper, we propose a hardware Trojan detection method for large-scale circuits. Because HTs introduce additional redundant circuitry that changes physical characteristics such as structure, area, and power consumption, the method is a machine-learning approach based on the physical characteristics of gate-level netlists. It transforms hardware Trojan detection into a machine-learning binary classification problem based on physical characteristics, greatly improving detection speed. To address the problem of imbalanced data, where the number of pure circuit samples is far less than that of HT circuit samples, we used the SMOTETomek algorithm to expand the dataset and further improve the performance of the classifier. We used three machine learning algorithms, K-Nearest Neighbors, Random Forest, and Support Vector Machine, to train and validate benchmark circuits from Trust-Hub, and all achieved good results. In case studies based on AES encryption circuits provided by Trust-Hub, the test results showed the effectiveness of the proposed method. To further validate the method's effectiveness for detecting variant HTs, we designed variant HTs using open-source HTs. The proposed method maintains robust detection accuracy with millisecond-level detection times for IC and FPGA design flows and has good detection performance for library variant HTs.
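
An illustrative sketch only, assuming a per-net feature table with an `is_trojan` label: SMOTETomek resampling of the imbalanced classes followed by the three classifiers named in the abstract. The file and column names are hypothetical.

```python
# Hedged sketch of imbalance handling with SMOTETomek plus KNN / RF / SVM.
import pandas as pd
from imblearn.combine import SMOTETomek
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import classification_report

df = pd.read_csv("netlist_features.csv")   # e.g. structure/area/power features per net
X, y = df.drop(columns=["is_trojan"]), df["is_trojan"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_res, y_res = SMOTETomek(random_state=0).fit_resample(X_tr, y_tr)  # rebalance classes

for clf in (KNeighborsClassifier(), RandomForestClassifier(random_state=0), SVC()):
    clf.fit(X_res, y_res)
    print(type(clf).__name__)
    print(classification_report(y_te, clf.predict(X_te)))
```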

Keywords: hardware trojans, physical properties, machine learning, hardware security

Procedia PDF Downloads 103
72 The European Pharmacy Market: The Density and its Influencing Factors

Authors: Selina Schwaabe

Abstract:

Community pharmacies deliver high-quality health care and are responsible for medication safety. During the pandemic, accessibility to the nearest pharmacy became more essential to get vaccinated against Covid-19 and to get medical aid. The government's goal is to ensure nationwide, reachable, and affordable medical health care services by pharmacies. Therefore, the density of community pharmacies matters. Overall, the density of community pharmacies is fluctuating, with slightly decreasing tendencies in some countries. So far, the literature has shown that changes in the system affect prices and density. However, a European overview of the development of the density of community pharmacies and its triggers is still missing. This research is essential to counteract decreasing density, which results in a lack of professional health care through pharmacies. The analysis focuses on liberal versus regulated market structures, mail-order prescription drug regulation, and third-party ownership consequences. In a panel analysis, the relative influence of the measures is examined across 27 European countries over the last 21 years. In addition, the paper examines seven selected countries in depth, selected for the substantial variance in their pharmacy systems: Germany, Austria, Portugal, Denmark, Sweden, Finland and Poland. Overall, the results show that regulated pharmacy markets have over 10.75 pharmacies per 100,000 inhabitants more than liberal markets. Further, allowing mail-order prescription drugs decreases density by 17.98 pharmacies per 100,000 inhabitants. Countries allowing third-party ownership have 7.67 pharmacies per 100,000 inhabitants more. The results are statistically significant at the 0.001 level. Based on this analysis, the paper recommends regulated pharmacy markets, with a ban on mail-order prescription drugs and permission for third-party ownership, to support nationwide medical health care through community pharmacies.
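
A hedged sketch of a country-year panel regression in the spirit of the study: pharmacy density regressed on the three policy measures with country and year fixed effects and country-clustered standard errors. The variable names and data file are assumptions, not the authors' specification.

```python
# Illustrative fixed-effects panel regression with statsmodels.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("pharmacy_panel.csv")   # hypothetical: 27 countries x 21 years
model = smf.ols(
    "density_per_100k ~ regulated_market + mail_order_rx + third_party_ownership"
    " + C(country) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["country"]})
print(model.summary())
```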

Keywords: community pharmacy, market conditions, pharmacy, pharmacy market, pharmacy lobby, prescription, e-prescription, ownership structures

Procedia PDF Downloads 90
71 Adsorption of Atmospheric Gases Using Atomic Clusters

Authors: Vidula Shevade, B. J. Nagare, Sajeev Chacko

Abstract:

First-principles simulation, meaning density functional theory (DFT) calculations with plane waves and pseudopotentials, has become a prized technique in condensed matter theory. Nanoparticles (NPs) have been known to possess good catalytic activities, especially for molecules such as CO, O₂, etc. Among the metal NPs, aluminium-based NPs are also widely known for their catalytic properties. Aluminium is a lightweight chemical element, abundant in the earth's crust, with excellent electrical and thermal conductivity. Aluminium NPs, when added to solid rocket fuel, help improve the combustion speed and considerably increase combustion heat and combustion stability. Adding aluminium NPs into normal Al/Al₂O₃ powder improves the sintering processes of the ceramics, with high heat transfer performance, increased density, and enhanced thermal conductivity of the sinter. We used the VASP and Gaussian 03 packages to compute the geometries, electronic structure, and bonding properties of Al₁₂Ni as well as its interaction with O₂ and CO molecules. Several MD simulations were carried out using VASP at various temperatures, from which hundreds of structures were optimized, leading to 24 unique structures. These structures were then further optimized with the Gaussian package. The lowest energy structure of Al₁₂Ni has been reported to be a singlet. However, through our extensive search, we found a triplet state to be lower in energy. In our structure, the Ni atom is found to be on the surface, which gives a non-zero magnetic moment. Incidentally, O₂ and CO molecules are also triplet in nature, due to which the Al₁₂Ni cluster is likely to facilitate the oxidation process of the CO molecule. Our results show that the most favourable site for the CO molecule is the Ni atom and that for the O₂ molecule is the Al atom nearest to the Ni atom. The Al₁₂Ni-O₂ and Al₁₂Ni-CO structures were visualized using VMD. Its triplet electronic configuration indicates that the Al₁₂Ni nanocluster is a potential candidate as a catalyst for the oxidation of CO molecules.
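
A rough illustrative sketch only, not the study's workflow: ASE's cheap EMT potential stands in for the VASP/Gaussian DFT calculations, and the choices of which Al atom is replaced by Ni and where the CO molecule is placed are arbitrary assumptions made for the example.

```python
# Hedged sketch: build an Al12Ni starting geometry, relax it, then place CO near Ni.
import numpy as np
from ase.cluster import Icosahedron
from ase.build import molecule
from ase.calculators.emt import EMT
from ase.optimize import BFGS

cluster = Icosahedron("Al", noshells=2)            # 13-atom icosahedral cluster
center = cluster.get_center_of_mass()
dists = np.linalg.norm(cluster.positions - center, axis=1)
ni_index = int(dists.argmax())
cluster[ni_index].symbol = "Ni"                    # put Ni on the surface (assumption)

cluster.calc = EMT()                               # stand-in for DFT
BFGS(cluster, logfile=None).run(fmax=0.05)         # relax the bare Al12Ni cluster

co = molecule("CO")
ni_pos = cluster.positions[ni_index]
co.translate(ni_pos + np.array([0.0, 0.0, 2.0]) - co.get_center_of_mass())  # ~2 Å above Ni
system = cluster + co
system.calc = EMT()
BFGS(system, logfile=None).run(fmax=0.05)
print("Total EMT energy of relaxed Al12Ni+CO (eV):", system.get_potential_energy())
```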

Keywords: catalyst, gaussian, nanoparticles, oxidation

Procedia PDF Downloads 67
70 Efficient Video Compression Technique Using Convolutional Neural Networks and Generative Adversarial Network

Authors: P. Karthick, K. Mahesh

Abstract:

Video has become an increasingly significant component of our everyday digital communication. With richer content and higher resolutions, its sheer volume poses serious obstacles to receiving, distributing, compressing, and displaying high-quality video content. In this paper, we propose an end-to-end deep video compression model that jointly optimizes all video compression components. The video compression method involves splitting the video into frames, comparing the images using convolutional neural networks (CNN) to remove duplicates, replacing duplicate images with a single image by recognizing and detecting minute changes using a generative adversarial network (GAN), and recording the sequence with long short-term memory (LSTM). Instead of the complete image, the small changes generated using GAN are substituted, which helps in frame-level compression. Pixel-wise comparison is performed using K-nearest neighbours (KNN) over the frame, clustered with K-means, and singular value decomposition (SVD) is applied to every frame in the video for all three color channels [Red, Green, Blue] to decrease the dimension of the utility matrix [R, G, B] by extracting its latent factors. Video frames are packed with parameters with the aid of a codec and converted to video format, and the results are compared with the original video. Repeated experiments on several videos with different sizes, durations, frames per second (FPS), and quality results demonstrate a significant resampling rate. On average, the result produced had approximately a 10% deviation in quality and a more than 50% reduction in size when compared with the original video.
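
A simplified sketch of just the per-frame, per-channel SVD step described above; the CNN/GAN/LSTM stages are not reproduced here, and the random frame merely stands in for decoded video frames.

```python
# Rank-k SVD approximation of each RGB channel of a frame.
import numpy as np

def compress_frame_svd(frame: np.ndarray, k: int = 50) -> np.ndarray:
    """Approximate an HxWx3 uint8 frame with a rank-k SVD of each channel."""
    out = np.empty_like(frame, dtype=np.float64)
    for c in range(3):                                   # R, G, B channels
        U, s, Vt = np.linalg.svd(frame[:, :, c].astype(float), full_matrices=False)
        out[:, :, c] = (U[:, :k] * s[:k]) @ Vt[:k, :]    # rank-k reconstruction
    return np.clip(out, 0, 255).astype(np.uint8)

# Example with a random "frame"; in practice frames would come from a video decoder.
frame = np.random.randint(0, 256, size=(240, 320, 3), dtype=np.uint8)
approx = compress_frame_svd(frame, k=30)
print("Reconstruction MAE:", np.abs(frame.astype(int) - approx.astype(int)).mean())
```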

Keywords: video compression, K-means clustering, convolutional neural network, generative adversarial network, singular value decomposition, pixel visualization, stochastic gradient descent, frame per second extraction, RGB channel extraction, self-detection and deciding system

Procedia PDF Downloads 160
69 Cytochrome B Diversity and Phylogeny of Egyptian Sheep Breeds

Authors: Othman E. Othman, Agnés Germot, Daniel Petit, Abderrahman Maftah

Abstract:

Threats to biodiversity are increasing due to the loss of genetic diversity within the species utilized in agriculture. Due to the progressive substitution of the less productive, locally adapted and native breeds by highly productive breeds, the number of threatened breeds has increased. In these conditions, it is more strategically important than ever to preserve as much farm animal diversity as possible, to ensure a prompt and proper response to the needs of future generations. Mitochondrial DNA (mtDNA) sequencing has been used to explain the origins of many modern domestic livestock species. Studies based on sequencing of sheep mitochondrial DNA showed that there are five maternal lineages in the world for domestic sheep breeds: A, B, C, D and E. Because of Egypt's eastern location in the Mediterranean basin and the presence of fat-tailed sheep breeds, a character quite common in Turkey and Syria where the genotypes seem quite primitive, phylogenetic studies of Egyptian sheep breeds are particularly attractive. We aimed in this work to clarify the genetic affinities, biodiversity and phylogeny of five Egyptian sheep breeds using cytochrome B sequencing. Blood samples were collected from 63 animals belonging to the five tested breeds: Barki, Rahmani, Ossimi, Saidi and Sohagi. Total DNA was extracted, and specific primers allowed conventional PCR amplification of the cytochrome B region of mtDNA (approximately 1272 bp). PCR-amplified products were purified and sequenced. The alignment of the sixty-three samples was done using BioEdit software. DnaSP 5.00 software was used to identify the sequence variation and polymorphic sites in the aligned sequences. The results showed the presence of 34 polymorphic sites, leading to the formation of 18 haplotypes. The haplotype diversity in the five tested breeds ranged from 0.676 in the Rahmani breed to 0.894 in the Sohagi breed. The genetic distances (D) and the average number of pairwise differences (Dxy) between breeds were estimated. The lowest distance was observed between Rahmani and Saidi (D: 1.674 and Dxy: 0.00150), while the highest distance was observed between Ossimi and Sohagi (D: 5.233 and Dxy: 0.00475). A neighbour-joining phylogenetic tree was constructed using Mega 5.0 software. The sequences of the 63 analyzed samples were aligned with reference sequences of different haplogroups. The phylogeny result showed the presence of three haplogroups (HapA, HapB and HapC) in the 63 examined samples. The other two haplogroups described in the literature (HapD and HapE) were not found. The results showed that 50 out of 63 tested animals cluster with haplogroup B (79.37%), whereas 7 tested animals cluster with haplogroup A (11.11%) and 6 animals cluster with haplogroup C (9.52%). In conclusion, the phylogenetic reconstructions showed that the majority of Egyptian sheep breeds belong to haplogroup B, which is the dominant haplogroup in Eastern Mediterranean countries like Syria and Turkey. Some individuals belong to haplogroups A and C, suggesting crosses with other breeds selected for growth and wool quality.
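
A hedged sketch of the distance-based neighbour-joining step using Biopython rather than Mega 5.0; the alignment file name is hypothetical and would correspond to the aligned cytochrome B sequences described above.

```python
# Neighbour-joining tree from an aligned FASTA file (illustrative only).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("cytb_aligned.fasta", "fasta")   # hypothetical: 63 aligned cytB sequences
dist_matrix = DistanceCalculator("identity").get_distance(alignment)
nj_tree = DistanceTreeConstructor().nj(dist_matrix)       # neighbour-joining tree
Phylo.draw_ascii(nj_tree)
```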

Keywords: cytochrome B, diversity, phylogeny, Egyptian sheep breeds

Procedia PDF Downloads 345
68 Assessing the Accessibility to Primary Percutaneous Coronary Intervention

Authors: Tzu-Jung Tseng, Pei-Hsuen Han, Tsung-Hsueh Lu

Abstract:

Background: Ensuring that patients with ST-elevation myocardial infarction (STEMI) have timely access to hospitals that can perform percutaneous coronary intervention (PCI) is an important concern of healthcare managers. One commonly used method to assess the coverage of population access to a PCI hospital is the GIS-estimated linear distance (crow-fly distance) between the district centroid and the nearest PCI hospital. If this distance is within a given threshold (such as 20 km), the entire population of that district is considered to have appropriate access to PCI. The premise of using the district centroid to estimate the coverage of the population resident in that district is that the people living in the district are evenly distributed. In reality, the population density is not evenly distributed within the administrative district, especially in rural districts. Fortunately, the Taiwan government recently released the basic statistical area (on average 450 people per area), which provides an opportunity to estimate the coverage of population access to PCI services more accurately. Objectives: We aimed in this study to compare the population covered by a given PCI hospital according to traditional administrative districts versus basic statistical areas. We further examined whether the differences between the two geographic units would be larger in rural areas than in urban areas. Method: We selected two hospitals in Tainan City for this analysis. Hospital A is in an urban area; hospital B is in a rural area. The population in each traditional administrative district and basic statistical area was obtained from the Taiwan National Geographic Information System, Ministry of Internal Affairs. Results: The estimated populations living within 20 km of hospitals A and B were 1,515,846 and 323,472 according to the traditional administrative district and 1,506,325 and 428,556 according to the basic statistical area. Conclusion: In urban areas, the estimated population with access to PCI services was similar between the two geographic units. However, in rural areas, the access population would be overestimated.
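
A sketch of the coverage comparison using geopandas; the file names, the population column, and the projected CRS (assumed here to be TWD97 / TM2, EPSG:3826) are illustrative assumptions, not the study's actual data sources.

```python
# Sum the population of areas whose centroid lies within 20 km of a hospital.
import geopandas as gpd

def population_within(areas_path: str, hospital_lonlat: tuple, radius_m: float = 20_000) -> float:
    areas = gpd.read_file(areas_path).to_crs(epsg=3826)          # polygons with a 'pop' column
    hospital = gpd.GeoSeries(
        gpd.points_from_xy([hospital_lonlat[0]], [hospital_lonlat[1]]), crs=4326
    ).to_crs(epsg=3826).iloc[0]
    covered = areas[areas.geometry.centroid.distance(hospital) <= radius_m]
    return covered["pop"].sum()

# Same call for administrative districts versus basic statistical areas (hypothetical files):
print(population_within("districts.shp", (120.21, 23.00)))
print(population_within("basic_statistical_areas.shp", (120.21, 23.00)))
```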

Keywords: accessibility, basic statistical area, modifiable areal unit problem (MAUP), percutaneous coronary intervention (PCI)

Procedia PDF Downloads 429
67 The Role of Institutions in Community Wildlife Conservation in Zimbabwe

Authors: Herbert Ntuli, Edwin Muchapondwa

Abstract:

This study used a sample of 336 households and community-level data from 30 communities around the Gonarezhou National Park in Zimbabwe to analyse the association between the ability to self-organize (cooperation) and institutions on the one hand, and the relationship between the success of biodiversity outcomes and cooperation on the other. Using both ordinary least squares and instrumental variables estimation with heteroskedasticity-based instruments, our results confirmed that sound institutions are indeed an important ingredient for cooperation in the respective communities and that cooperation positively and significantly affects biodiversity outcomes. Group size, community-level trust, the number of stakeholders and punishment were found to be important variables explaining cooperation. From a policy perspective, our results show that external enforcement of rules and regulations does not necessarily translate into sound ecological outcomes, but better outcomes are attainable when punishment is endogenized by local communities. This suggests that communities should be supported in such a way that robust institutions, tailor-made to suit local conditions, can emerge and in turn facilitate good environmental husbandry. Cooperation, training, benefits, distance from the nearest urban centre, distance from the fence, social capital, average age of household head, fence and information sharing were found to be very important variables explaining the success of biodiversity outcomes, ceteris paribus. Government programmes should target capacity building in terms of institutional capacity and skills development in order to have a positive impact on biodiversity. Hence, the role of stakeholders (e.g., NGOs) in capacity building and government effort should complement each other to ensure that the necessary resources are mobilized and all communities receive the necessary training and resources.

Keywords: institutions, self-organize, common pool resources, wildlife, conservation, Zimbabwe

Procedia PDF Downloads 249
66 Cross-Country Mitigation Policies and Cross Border Emission Taxes

Authors: Massimo Ferrari, Maria Sole Pagliari

Abstract:

Pollution is a classic example of an economic externality: agents who produce it do not face direct costs from emissions. Therefore, there are no direct economic incentives for reducing pollution. One way to address this market failure would be directly taxing emissions. However, because emissions are global, governments might find it optimal to wait and let foreign countries tax emissions so that they can enjoy the benefits of lower pollution without facing the direct costs. In this paper, we first document the empirical relation between pollution and economic output with static and dynamic regression methods. We show that there is a negative relation between aggregate output and the stock of pollution (measured as the stock of CO₂ emissions). This relationship is also highly non-linear, increasing at an exponential rate. In the second part of the paper, we develop and estimate a two-country, two-sector model for the US and the euro area. With this model, we aim to analyze how the public sector should respond to higher emissions and what direct costs these policies might have. In the model, there are two types of firms: brown firms (which produce with a polluting technology) and green firms. Brown firms also produce an externality, CO₂ emissions, which has detrimental effects on aggregate output. As brown firms do not face direct costs from polluting, they do not have incentives to reduce emissions. Notably, emissions in our model are global: the stock of CO₂ in the economy affects all countries, independently of where it is produced. This simplified economy captures the main trade-off between emissions and production, generating a classic market failure. According to our results, the current level of emissions reduces output by between 0.4 and 0.75%. Notably, these estimates lie at the upper bound of the distribution of those delivered by studies in the early 2000s. To address the market failure, governments should step in by introducing taxes on emissions. With the tax, brown firms pay a cost for polluting and hence face an incentive to move to green technologies. Governments, however, might also adopt a beggar-thy-neighbour strategy. Reducing emissions is costly, as it moves production away from the 'optimal' production mix of brown and green technology. Because emissions are global, a government could just wait for the other country to tackle climate change, reaping the benefits without facing any costs. We study how this strategic game unfolds and show three important results: first, cooperation is first-best optimal from a global perspective; second, countries face incentives to deviate from the cooperative equilibrium; third, tariffs on imported brown goods (the only retaliation policy in case of deviation from the cooperation equilibrium) are ineffective because the exchange rate would move to compensate. We finally study monetary policy when the costs of climate change rise and show that the monetary authority should react more strongly to deviations of inflation from its target.

Keywords: climate change, general equilibrium, optimal taxation, monetary policy

Procedia PDF Downloads 128
65 Phylogenetic Analysis of Klebsiella Species from Clinical Specimens from Nelson Mandela Academic Hospital in Mthatha, South Africa

Authors: Sandeep Vasaikar, Lary Obi

Abstract:

Rapid and discriminative genotyping methods are useful for determining the clonality of isolates in nosocomial or household outbreaks. Multilocus sequence typing (MLST) is a nucleotide sequence-based approach for characterising bacterial isolates. The genetic diversity and the clinical relevance of the drug-resistant Klebsiella isolates from Mthatha are largely unknown. For this reason, a prospective, experimental study of the molecular epidemiology of Klebsiella isolates from patients treated in Mthatha over a three-year period was conducted. Methodology: PCR amplification and sequencing of the drug-resistance-associated genes, and multilocus sequence typing (MLST) using 7 housekeeping genes (mdh, pgi, infB, fusA, phoE, gapA and rpoB) were conducted. A total of 32 isolates were analysed. Results: The percentages of multidrug-resistant (MDR), extensively drug-resistant (XDR) and pandrug-resistant (PDR) isolates were MDR 65.6% (21), with XDR and PDR at 0% each. In this study, K. pneumoniae accounted for 19/32 (59.4%). MLST identified 22 sequence types (STs), which were further separated by Maximum Parsimony into 10 clonal complexes and 12 singletons. The most dominant group was Klebsiella pneumoniae with 23/32 (71.8%) isolates, Klebsiella oxytoca was the second group with 2/32 (6.25%) isolates, and a single (3.1%) K. variicola formed a third group, while 6 isolates were of unknown sequence. Conclusions/significance: A phylogenetic analysis of the concatenated sequences of the 7 housekeeping genes showed that strains of K. pneumoniae form a distinct lineage within the genus Klebsiella, with K. oxytoca and K. variicola as its nearest phylogenetic neighbours. Analysis of the 7 genes identified one K. variicola isolate that had been mistakenly identified as K. pneumoniae by phenotypic methods. Two misidentifications of K. oxytoca were found when phenotypic methods were used. No significant differences were observed between the ESBL blaCTX-M, blaTEM and blaSHV groups in the distribution of sequence types (STs) or clonal complexes (CCs).

Keywords: phylogenetic analysis, phylogeny, klebsiella phylogenetic, klebsiella

Procedia PDF Downloads 331
64 The Reasons for Vegetarianism in Estonia and its Effects to Body Composition

Authors: Ülle Parm, Kata Pedamäe, Jaak Jürimäe, Evelin Lätt, Aivar Orav, Anna-Liisa Tamm

Abstract:

Vegetarianism has gained popularity across the world. It is chosen for multiple reasons, but among Estonians these have remained unknown. Previously, attention has been paid to the bone health and probable nutrient deficiencies of vegetarians, and lower body mass index (BMI) and blood cholesterol levels have been found in vegetarians, but the results are inconclusive. The goal was to explain the reasons for choosing a vegetarian diet in Estonia and the impact of vegetarianism on body composition: BMI, fat percentage (fat%), fat mass (FM), and fat-free mass (FFM). The study group comprised 68 vegetarians and 103 omnivores. Body composition was determined with DXA (Hologic) in 2013. Body mass (medical electronic scale, A&D Instruments, Abingdon, UK) and height (Martin metal anthropometer, to the nearest 0.1 cm) were measured and BMI calculated (kg/m²). General data (physical activity level included) were collected with questionnaires. The main reasons why vegetarianism was chosen were the healthiness of the vegetarian diet (59%) and the wish to fight for animal rights (72%). Food additives were consumed by less than half of the vegetarians, more often by men. Vegetarians had a lower BMI than omnivores, especially amongst men. Based on the BMI classification, vegetarians were less obese than omnivores. However, there were no differences in the FM, FFM and fat percentage figures of the two groups. Higher BMI might be the cause of the higher physical activity level among omnivores compared with vegetarians. For classifying people as underweight, normal weight, overweight and obese, both BMI and fat% criteria were used. By the BMI classification, in comparison with fat%, more people were classed as normal weight; by the fat% criteria, in comparison with BMI, more people were categorized as overweight. It can be concluded that the main reasons for choosing vegetarianism in Estonia are the healthiness of the vegetarian diet and the wish to fight for animal rights, and that a vegetarian diet has no effect on body fat percentage, FM and FFM.

Keywords: body composition, body fat percentage, body mass index, vegetarianism

Procedia PDF Downloads 385
63 A Comparative Assessment of Information Value, Fuzzy Expert System Models for Landslide Susceptibility Mapping of Dharamshala and Surrounding, Himachal Pradesh, India

Authors: Kumari Sweta, Ajanta Goswami, Abhilasha Dixit

Abstract:

Landsliding is a geomorphic process that plays an essential role in hill-slope and long-term landscape evolution. But its abrupt nature and the associated catastrophic forces can have undesirable socio-economic impacts, such as substantial economic losses, fatalities, and ecosystem, geomorphologic and infrastructure disturbances. The estimated fatality rate is approximately 1 person/100 sq. km, and the average economic loss is more than 550 crores/year in the Himalayan belt due to landslides. This study presents a comparative performance of a statistical bivariate method and a machine learning technique for landslide susceptibility mapping in and around Dharamshala, Himachal Pradesh. The final landslide susceptibility maps (LSMs) with better accuracy could be used for land-use planning to prevent future losses. Dharamshala, a part of the North-western Himalaya, is one of the fastest-growing tourism hubs, with a total population of 30,764 according to the 2011 census, and is among the hundred Indian cities to be developed as smart cities under the PM's Smart Cities Mission. A total of 209 landslide locations were identified using high-resolution Linear Imaging Self-Scanning (LISS IV) data. The thematic maps of parameters influencing landslide occurrence were generated using remote sensing and other ancillary data in the GIS environment. The landslide causative parameters used in the study are slope angle, slope aspect, elevation, curvature, topographic wetness index, relative relief, distance from lineaments, land use land cover, and geology. LSMs were prepared using the information value (Info Val) and Fuzzy Expert System (FES) models. Info Val is a statistical bivariate method, in which information values were calculated as the ratio of the landslide pixels per factor class (Si/Ni) to the total landslide pixels per parameter (S/N). Using these information values, all parameters were reclassified and then summed in GIS to obtain the landslide susceptibility index (LSI) map. The FES method is a machine learning technique based on a 'mean and neighbour' strategy for the construction of the fuzzifier (input) and defuzzifier (output) membership function (MF) structures, and the FR method is used for formulating if-then rules. Two types of membership structures were utilized for the membership functions: Bell-Gaussian (BG) and Trapezoidal-Triangular (TT). The LSI for BG and TT was obtained by applying the membership functions and if-then rules in MATLAB. The final LSMs were spatially and statistically validated. The validation results showed that in terms of accuracy, Info Val (83.4%) is better than BG (83.0%) and TT (82.6%), whereas, in terms of spatial distribution, BG is best. Hence, considering both statistical and spatial accuracy, BG is the most accurate one.
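
A minimal sketch (in Python rather than the GIS/MATLAB tools used in the study) of the Info Val ratio described above: the landslide density per factor class (Si/Ni) relative to the overall density (S/N). The arrays are hypothetical placeholders for the reclassified rasters.

```python
# Information value per class of one causative-factor raster.
import numpy as np

def information_value(factor_classes: np.ndarray, landslide_mask: np.ndarray) -> dict:
    """factor_classes: integer class codes per pixel; landslide_mask: 1 where a landslide pixel."""
    S, N = landslide_mask.sum(), landslide_mask.size          # total landslide pixels / total pixels
    vals = {}
    for cls in np.unique(factor_classes):
        in_class = factor_classes == cls
        Si, Ni = landslide_mask[in_class].sum(), in_class.sum()
        vals[int(cls)] = (Si / Ni) / (S / N)                  # some formulations take ln() of this ratio
    return vals

# The reclassified factor maps would then be summed pixel-wise in GIS to obtain the LSI map.
```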

Keywords: bivariate statistical techniques, BG and TT membership structure, fuzzy expert system, information value method, machine learning technique

Procedia PDF Downloads 101
62 Heart Rate Variability Analysis for Early Stage Prediction of Sudden Cardiac Death

Authors: Reeta Devi, Hitender Kumar Tyagi, Dinesh Kumar

Abstract:

In the present scenario, cardiovascular problems are a growing challenge for researchers and physiologists. As heart disease has no geographic, gender or socioeconomic boundaries, detecting cardiac irregularities at an early stage, followed by quick and correct treatment, is very important. The electrocardiogram is the finest tool for continuous monitoring of heart activity. Heart rate variability (HRV) is used to measure naturally occurring oscillations between consecutive cardiac cycles. Analysis of this variability is carried out using time domain, frequency domain and non-linear parameters. This paper presents HRV analysis of an online dataset for normal sinus rhythm (taken as the healthy subject) and sudden cardiac death (SCD subject) using all three methods, computing values for parameters such as the standard deviation of normal-to-normal intervals (SDNN), the square root of the mean of the squared differences between adjacent RR intervals (RMSSD), and the mean of R-to-R intervals (mean RR) in the time domain; very low-frequency (VLF), low-frequency (LF), high-frequency (HF) and the ratio of low to high frequency (LF/HF ratio) in the frequency domain; and the Poincare plot for non-linear analysis. To differentiate the HRV of healthy subjects from that of subjects who died of SCD, a k-nearest neighbor (k-NN) classifier has been used because of its high accuracy. Results show highly reduced values for all stated parameters for SCD subjects as compared to healthy ones. As the dataset used for SCD patients is a recording of their ECG signal one hour prior to death, it is verified with an accuracy of 95% that the proposed algorithm can identify the mortality risk of a patient one hour before death. The identification of a patient's mortality risk at such an early stage may prevent sudden death if timely and appropriate treatment is given by the doctor.
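
A sketch of the time-domain HRV features named above (mean RR, SDNN, RMSSD) computed from RR-interval series, followed by a k-NN classifier; the synthetic RR series are placeholders, not the study's dataset.

```python
# Time-domain HRV features plus a k-NN classifier (illustrative only).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def time_domain_features(rr_ms: np.ndarray) -> np.ndarray:
    mean_rr = rr_ms.mean()                              # mean RR interval (ms)
    sdnn = rr_ms.std(ddof=1)                            # SD of normal-to-normal intervals
    rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))       # root mean square of successive differences
    return np.array([mean_rr, sdnn, rmssd])

rng = np.random.default_rng(0)
healthy = [rng.normal(820, 50, 3600) for _ in range(20)]   # synthetic one-hour RR series (ms)
scd = [rng.normal(780, 20, 3600) for _ in range(20)]       # lower variability, as reported above

X = np.vstack([time_domain_features(rr) for rr in healthy + scd])
y = np.array([0] * 20 + [1] * 20)                          # 0 = normal sinus rhythm, 1 = SCD
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print("Training accuracy:", clf.score(X, y))
```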

Keywords: early stage prediction, heart rate variability, linear and non-linear analysis, sudden cardiac death

Procedia PDF Downloads 313
61 The Motivation System Development: Case-Study of the Trade Metal Company in Russian Federation

Authors: Elena V. Lysenko

Abstract:

Motivation, as a leading function of modern Human Resources Management, involves increasing the effectiveness of the organization in a broader context. During the formation of motivational systems, the top management of an organization should pay equal attention to both external motivation (the incentive system) and internal motivation (self-motivation). The balance of internal and external motivation harmonizes the relations between employers and employees and increases the level of job satisfaction among the organization's staff, which in turn leads the organization to success and ensures its profitability and competitiveness in the market environment. The article is devoted to the study of the personnel motivation system in a small metal trade company located in Yekaterinburg, Russian Federation. The study took place during November-December 2016, commissioned by the company director to analyze the motivational potential of work (the managerial aspect of motivation) and the motivation of personnel (the personnel aspect of motivation), with the purpose of constructing a system of employee motivation. The research tools included 6 specially selected motivation tests: "Motivation profile of your job", "Constructive motivational attitudes", tests of achievement motivation (1st variant: the test by A. Mehrabian based on the theory of D. C. McClelland; 2nd variant: a test of leading needs according to the theory of D. C. McClelland), and tests by T. Elers (1st variant: "Determination of the motivation towards success or to avoid failure"; 2nd variant: "Tendency to achieve results or to avoid failure"). The results of the study showed only one, but fundamental, problem of the whole organization: a high level of both motivational potential of work and self-motivation, especially in terms of achievement motivation, but a serious lack of productivity. According to the results, this problem derives from insufficient staff competence. The research suggests basic guidelines for building a new personnel motivation system for this company, which is planned to be developed in the near future.

Keywords: incentive system, motivation of achievements, motivation system, self-motivation

Procedia PDF Downloads 279
60 A Location-Based Search Approach According to Users’ Application Scenario

Authors: Shih-Ting Yang, Chih-Yun Lin, Ming-Yu Li, Jhong-Ting Syue, Wei-Ming Huang

Abstract:

The global positioning system (GPS) has become increasingly precise in recent years, and location-based services (LBS) have developed rapidly. Take the example of finding a parking lot (as in parking apps): the location-based service can offer immediate information about a nearby parking lot, including the number of remaining parking spaces. However, it cannot provide the expected search results according to users' requirement situations. For that reason, this paper develops a "Location-based Search Approach according to Users' Application Scenario", based on location-based search and demand determination, to help users obtain information consistent with their requirements. The "Location-based Search Approach based on Users' Application Scenario" of this paper consists of one mechanism and three kernel modules. First, in the Information Pre-processing Mechanism (IPM), this paper uses the cosine theorem to categorize the locations of users. Then, in the Information Category Evaluation Module (ICEM), the kNN (k-Nearest Neighbor) algorithm is employed to classify the browsing records of users. After that, in the Information Volume Level Determination Module (IVLDM), this paper compares the number of users clicking the information at different locations with the average number of users clicking the information at a specific location, so as to evaluate the urgency of demand; then, a two-dimensional space is used to estimate users' application situations. For the last step, in the Location-based Search Module (LBSM), this paper compares all search results and the average number of characters of the search results, categorizes the search results with the Manhattan distance, and selects the results according to users' application scenarios. Additionally, this paper develops a Web-based system according to the methodology to demonstrate the practical application of the approach. The application scenario-based estimate and the location-based search are used to evaluate the type and abundance of the information expected by the public at a specific location, so that information demanders can obtain information consistent with their application situations at that location.

Keywords: data mining, knowledge management, location-based service, user application scenario

Procedia PDF Downloads 84
59 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach

Authors: James Ladzekpo

Abstract:

Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We have implemented a recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and Multilayer Perceptron Neural Network, to evaluate their performance. Findings: The Gradient Boosting Classifier excelled, achieving a median recall of 92.17% and outstanding metrics such as area under the receiver operating characteristics curve (AUC) with a median of 68%, alongside median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction. We recommend future investigations to incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
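
A hedged sketch of the feature-selection step described above, recursive feature elimination with cross-validation (RFECV) using a linear SVM, followed by the gradient-boosting classifier; the synthetic data stands in for the epigenetic/genetic profiles, which are not reproduced here.

```python
# RFECV with a linear SVM, then gradient boosting on the selected features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=100, n_informative=15, random_state=0)

selector = RFECV(estimator=SVC(kernel="linear"), step=5, cv=5, scoring="recall")
X_sel = selector.fit_transform(X, y)
print("Features kept:", selector.n_features_)

gb = GradientBoostingClassifier(random_state=0)
print("Recall (5-fold):", cross_val_score(gb, X_sel, y, cv=5, scoring="recall").mean())
```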

Keywords: diabetes, machine learning, prediction, biomarkers

Procedia PDF Downloads 13
58 The Impact of Adopting Cross Breed Dairy Cows on Households’ Income and Food Security in the Case of Dejen Woreda, Amhara Region, Ethiopia

Authors: Misganaw Chere Siferih

Abstract:

This study assessed the impact of crossbreed dairy cows on household income and food security. The study area is found in Dejen Woreda, East Gojam Zone, Amhara region of Ethiopia. A random sampling technique was used to obtain a sample of 80 crossbreed dairy cow owners and 176 indigenous dairy cow owners. The study employed the food consumption score analytical framework to measure the food security status of households. No statistically significant mean difference was found between crossbreed owners and indigenous owners. Logistic regression was employed to investigate the determinants of crossbreed dairy cow adoption; the results indicate that gender, education, labor number, land size cultivated, dairy cooperative membership, net income and food security status of the household are statistically significant independent variables explaining the binary dependent variable, crossbreed dairy cow adoption. Propensity score matching (PSM) was employed to analyze the impact of crossbreed dairy cow ownership on farmers' income and food security. The average net income of crossbreed dairy cow owners was found to be significantly higher than that of indigenous dairy cow owners. Estimates of the average treatment effect on the treated (ATT) indicated that crossbreed dairy cows raise households' net income by 42%, 38.5%, 30.8% and 44.5% under the kernel, radius, nearest neighbour and stratification matching algorithms, respectively, as compared to indigenous dairy cow owners. However, the ATT estimates suggest that owning a crossbreed dairy cow does not affect food security significantly. Thus, crossbreed dairy cows enable farmers to increase income but not their food security in the study area. Finally, the study recommended establishing dairy cooperatives and advising farmers to become members, paying attention to promoting crossbreed dairy cows, and promoting nutrition-focused projects.
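
A simplified propensity-score-matching sketch in the spirit of the analysis above: a logistic model of adoption, one-to-one nearest-neighbour matching on the propensity score, and an ATT estimate as the mean outcome difference. The data file and column names (assumed numeric/coded) are hypothetical.

```python
# Illustrative one-to-one nearest-neighbour matching on the propensity score.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("dairy_households.csv")           # hypothetical survey data
covs = ["gender", "education", "labor_number", "land_size", "coop_member"]
df["pscore"] = LogisticRegression(max_iter=1000).fit(
    df[covs], df["crossbreed"]).predict_proba(df[covs])[:, 1]

treated, control = df[df["crossbreed"] == 1], df[df["crossbreed"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

att = treated["net_income"].mean() - matched_control["net_income"].mean()
print("ATT (net income):", att)
```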

Keywords: crossbreed dairy cow, net income, food security, propensity score matching

Procedia PDF Downloads 16
57 Analysis of Biomarkers Intractable Epileptogenic Brain Networks with Independent Component Analysis and Deep Learning Algorithms: A Comprehensive Framework for Scalable Seizure Prediction with Unimodal Neuroimaging Data in Pediatric Patients

Authors: Bliss Singhal

Abstract:

Epilepsy is a prevalent neurological disorder affecting approximately 50 million individuals worldwide and 1.2 million Americans. There exist millions of pediatric patients with intractable epilepsy, a condition in which seizures fail to come under control. The occurrence of seizures can result in physical injury, disorientation, unconsciousness, and additional symptoms that could impede children's ability to participate in everyday tasks. Predicting seizures can help parents and healthcare providers take precautions, prevent risky situations, and mentally prepare children to minimize anxiety and nervousness associated with the uncertainty of a seizure. This research proposes a comprehensive framework to predict seizures in pediatric patients by evaluating machine learning algorithms on unimodal neuroimaging data consisting of electroencephalogram signals. Bandpass filtering and independent component analysis proved to be effective in reducing noise and artifacts in the dataset. The performance of various machine learning algorithms is evaluated on important metrics such as accuracy, precision, specificity, sensitivity, F1 score and MCC. The results show that the deep learning algorithms are more successful in predicting seizures than logistic regression and k-nearest neighbors. The recurrent neural network (RNN) gave the highest precision and F1 score, long short-term memory (LSTM) outperformed RNN in accuracy, and the convolutional neural network (CNN) resulted in the highest specificity. This research has significant implications for healthcare providers in proactively managing seizure occurrence in pediatric patients, potentially transforming clinical practices, and improving pediatric care.
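
A sketch of the preprocessing described above: band-pass filtering of EEG channels followed by independent component analysis. The sampling rate, band edges, channel count, and synthetic signal are placeholder assumptions.

```python
# Band-pass filtering plus ICA on synthetic multichannel "EEG" data.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

fs = 256                                   # Hz (assumed EEG sampling rate)
b, a = butter(4, [0.5, 40], btype="bandpass", fs=fs)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((23, fs * 60))   # 23 channels x 60 s of synthetic data

filtered = filtfilt(b, a, eeg, axis=1)     # zero-phase band-pass filter per channel
ica = FastICA(n_components=10, random_state=0)
components = ica.fit_transform(filtered.T) # shape: (samples, components)
# Artifact-like components would be inspected/removed here before feature extraction
# and classification with the RNN/LSTM/CNN models compared in the abstract.
print(components.shape)
```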

Keywords: intractable epilepsy, seizure, deep learning, prediction, electroencephalogram channels

Procedia PDF Downloads 53
56 Risk of Heatstroke Occurring in Indoor Built Environment Determined with Nationwide Sports and Health Database and Meteorological Outdoor Data

Authors: Go Iwashita

Abstract:

The paper describes how the frequencies of heatstroke occurring in the indoor built environment are related to the outdoor thermal environment, using large statistical datasets. Nationwide accident data on heatstroke were obtained from the National Agency for the Advancement of Sports and Health (NAASH). The meteorological database of the Japanese Meteorological Agency supplied data on 1-hour average temperature, humidity, wind speed, solar radiation, and so forth. Each heatstroke data point from the NAASH database was linked to the meteorological data from the station nearest to where the heatstroke accident occurred. This analysis was performed for a 10-year period (2005–2014). During the 10-year period, 3,819 cases of heatstroke were reported in the NAASH database for the investigated secondary/high schools in the nine representative Japanese cities. Heatstroke most commonly occurred in the outdoor schoolyard at a wet-bulb globe temperature (WBGT) of 31°C and in the indoor gymnasium during athletic club activities at a WBGT > 31°C. The accident ratio (number of accidents during each club activity divided by the club's population) was highest in the gymnasium during female badminton club activities. Although badminton is played in a gymnasium, these WBGT results show that the risk level during badminton under hot and humid conditions is equal to that of baseball or rugby played in the schoolyard. Apart from sports, a high risk of heatstroke was observed in school buildings during cultural activities. Based on the above WBGT results, the risk level for the indoor environment under hot and humid conditions would be equal to that for the outdoor environment. Therefore, control measures against hot and humid indoor conditions, such as installing air conditioning, are needed not only in schools but also in residences.
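
For reference, the standard WBGT weightings used to grade heat stress can be written as a small helper; the example temperatures below are hypothetical and are not taken from the study's data.

```python
# Standard WBGT combinations of natural wet-bulb, globe and dry-bulb temperatures.
def wbgt_outdoor(t_wet: float, t_globe: float, t_dry: float) -> float:
    """WBGT (deg C) = 0.7*Tnwb + 0.2*Tg + 0.1*Ta for outdoor settings with solar load."""
    return 0.7 * t_wet + 0.2 * t_globe + 0.1 * t_dry

def wbgt_indoor(t_wet: float, t_globe: float) -> float:
    """WBGT (deg C) = 0.7*Tnwb + 0.3*Tg for indoor settings (no solar radiation term)."""
    return 0.7 * t_wet + 0.3 * t_globe

print(wbgt_outdoor(28.0, 45.0, 33.0))   # e.g. a hot schoolyard afternoon (hypothetical values)
print(wbgt_indoor(28.0, 35.0))          # e.g. a gymnasium (hypothetical values)
```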

Keywords: accidents in schools, club activity, gymnasium, heatstroke

Procedia PDF Downloads 192
55 Development the Sensor Lock Knee Joint and Evaluation of Its Effect on Walking and Energy Consumption in Subjects With Quadriceps Weakness

Authors: Mokhtar Arazpour

Abstract:

Objectives: Recently a new kind of stance control knee joint has been developed, called the 'sensor lock.' This study aimed to develop and evaluate the 'sensor lock', which could potentially improve walking parameters and gait symmetry in subjects with quadriceps weakness. Methods: Nine subjects with quadriceps weakness were enrolled in this study. A custom-made knee ankle foot orthosis (KAFO) with the same set of components was constructed for each participant. Testing began after orthotic gait training was completed with each of the KAFOs and subjects demonstrated that they could safely walk with crutches. Subjects rested 30 minutes between each trial. The 10-meter walking test is used to assess walking speed in meters/second (m/s). The total time taken to ambulate 6 meters (m) is recorded to the nearest hundredth of a second. 6 m is then divided by the total time (in seconds) taken to ambulate and recorded in m/s. The 6 Minutes Walking Test was used to assess walking endurance in this study. Participants walked around the perimeter of a set circuit for a total of six minutes. To evaluate the physiological cost index (PCI), the subjects were asked to walk using each type of KAFO along a pre-determined 40 m rectangular walkway at their comfortable self-selected speed. A stopwatch was used to calculate the speed of walking by measuring the time between the starting and stopping points and the distance walked. Results: The use of a KAFO fitted with the "sensor lock" knee joint resulted in improvements to walking speed, distance walked and physiological cost index when compared with the knee joint in lock mode. Conclusions: This study demonstrated that the use of a KAFO with the "sensor lock" knee joint could provide significant benefits for subjects with quadriceps weakness when compared to a KAFO with the knee joint in lock mode.

Keywords: stance control knee joint, knee ankle foot orthosis, quadriceps weakness, walking, energy consumption

Procedia PDF Downloads 90
54 Maturity Classification of Oil Palm Fresh Fruit Bunches Using Thermal Imaging Technique

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Reza Ehsani, Hawa Ze Jaffar, Ishak Aris

Abstract:

Ripeness estimation of oil palm fresh fruit is an important process that affects the profitability and salability of oil palm fruits. The maturity or ripeness of the oil palm fruits influences the quality of palm oil. The conventional procedure includes physical grading of Fresh Fruit Bunch (FFB) maturity by counting the number of loose fruits per bunch. This physical classification of oil palm FFB is costly and time-consuming, and the results may be subject to human error. Hence, many researchers try to develop methods for ascertaining the maturity of oil palm fruits, and thereby, indirectly, the oil content of individual palm fruits, without the need for exhaustive oil extraction and analysis. This research investigates the potential of infrared images (thermal images) as a predictor for classifying oil palm FFB ripeness. A total of 270 oil palm fresh fruit bunches of the most common cultivar, Nigrescens, were collected according to three maturity categories: underripe, ripe and overripe. Each sample was scanned by the thermal imaging cameras FLIR E60 and FLIR T440. The average temperature of each bunch was calculated using image processing in FLIR Tools and FLIR ThermaCAM Researcher Pro 2.10 software. The results show that temperature decreased from immature to overmature oil palm FFBs. An overall analysis-of-variance (ANOVA) test proved that this predictor gave significant differences between the underripe, ripe and overripe maturity categories. This shows that temperature can be a good indicator for classifying oil palm FFB. Classification analysis was performed using the temperature of the FFB as the predictor through Linear Discriminant Analysis (LDA), Mahalanobis Discriminant Analysis (MDA), Artificial Neural Network (ANN) and K-Nearest Neighbor (KNN) methods. The highest overall classification accuracy, 88.2%, was obtained using the Artificial Neural Network. This research shows that thermal imaging and the neural network method can be used for oil palm maturity classification.

Keywords: artificial neural network, maturity classification, oil palm FFB, thermal imaging

Procedia PDF Downloads 322
53 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering

Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott

Abstract:

Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample to better understand the tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. In-depth analysis of the state-of-the-art filtering methods for single cell data showed that, in some cases, they do not separate noisy and normal cells sufficiently. We introduced an algorithm that filters and clusters single cell data simultaneously without relying on certain genes or thresholds chosen by eye. It detects communities in a Shared Nearest Neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices with weak cluster membership. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics on the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a Peripheral Blood Mononuclear Cells dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the ovarian post-chemotherapy data with interesting marker genes that are potentially relevant for medical research.
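
A simplified sketch of the strategy described above, not the authors' implementation: build a shared-nearest-neighbour graph over cells, detect communities by modularity optimisation, and flag cells whose ties to their own community are weak. The expression matrix is synthetic and the 0.5 weak-membership cut-off is an arbitrary assumption.

```python
# SNN graph, modularity communities, and a crude weak-membership filter.
import numpy as np
import networkx as nx
from sklearn.neighbors import kneighbors_graph
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
cells = rng.standard_normal((300, 50))              # 300 cells x 50 reduced dimensions (synthetic)

knn = kneighbors_graph(cells, n_neighbors=15, include_self=False)
snn = knn @ knn.T                                    # shared-neighbour counts between cells
snn.setdiag(0)
snn.eliminate_zeros()
G = nx.from_scipy_sparse_array(snn)

communities = list(greedy_modularity_communities(G, weight="weight"))
for i, com in enumerate(communities):
    for cell in com:
        internal = sum(G[cell][nb].get("weight", 1) for nb in G[cell] if nb in com)
        total = sum(G[cell][nb].get("weight", 1) for nb in G[cell])
        if total and internal / total < 0.5:         # weak membership -> candidate noise cell
            print(f"cell {cell} in cluster {i} flagged as noisy")
```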

Keywords: cancer research, graph theory, machine learning, single cell analysis

Procedia PDF Downloads 76
52 Principal Component Analysis Combined Machine Learning Techniques on Pharmaceutical Samples by Laser Induced Breakdown Spectroscopy

Authors: Kemal Efe Eseller, Göktuğ Yazici

Abstract:

Laser-induced breakdown spectroscopy (LIBS) is a rapid optical atomic emission spectroscopy technique used for material identification and analysis, with the advantages of in-situ analysis, elimination of intensive sample preparation, and only micro-destructive effects on the material to be tested. LIBS delivers short laser pulses onto the material in order to create a plasma by exciting the material above a certain threshold. The plasma characteristics, which consist of wavelength value and intensity amplitude, depend on the material and the experiment's environment. In the present work, medicine samples' spectrum profiles were obtained via LIBS. The medicine datasets include two different concentrations for both paracetamol-based medicines, namely Aferin and Parafon. The spectrum data of the samples were preprocessed by filling outliers based on quartiles, smoothing spectra to eliminate noise and normalizing both the wavelength and intensity axes. Statistical information was obtained and principal component analysis (PCA) was applied to both the preprocessed and raw datasets. The machine learning models were set up based on two different train-test splits, which were 70% training with 30% test and 80% training with 20% test. Cross-validation was preferred to protect the models against overfitting, since the sample amount is small. The machine learning results of the preprocessed and raw datasets were compared for both splits. This is the first time that all of these supervised machine learning classification algorithms, consisting of Decision Trees, Discriminant Analysis, naïve Bayes, Support Vector Machines (SVM), k-NN (k-Nearest Neighbor), Ensemble Learning and Neural Network algorithms, were applied to LIBS data of paracetamol-based pharmaceutical samples and their different concentrations, on both the preprocessed and raw datasets, in order to observe the effect of preprocessing.

Keywords: machine learning, laser-induced breakdown spectroscopy, medicines, principal component analysis, preprocessing

Procedia PDF Downloads 63
51 Delineating Floodplain along the Nasia River in Northern Ghana Using HAND Contour

Authors: Benjamin K. Ghansah, Richard K. Appoh, Iliya Nababa, Eric K. Forkuo

Abstract:

The Nasia River is an important source of water for domestic and agricultural purposes to the inhabitants of its catchment. Major farming activities take place within the floodplain of the river and its network of tributaries. The actual inundation extent of the river system is, however, unknown. Reasons for this lack of information include financial constraints and inadequate human resources, as flood modelling is becoming increasingly complex by the day. Knowledge of the inundation extent will help in the assessment of the risk posed by the annual flooding of the river and in the planning of flood-recession agricultural activities. This study used a simple terrain-based algorithm, Height Above Nearest Drainage (HAND), to delineate the floodplain of the Nasia River and its tributaries. The HAND model is a drainage-normalized digital elevation model, which takes its height reference from the local drainage network rather than the mean sea level (AMSL). The underlying principle guiding the development of the HAND model is that hillslope flow paths behave differently when the reference gradient is to the local drainage network as compared to the seaward gradient. The new terrain model of the catchment was created using NASA's 30 m SRTM Digital Elevation Model (DEM) as the only data input. Contours (HAND contours) were then generated from the normalized DEM. Based on a field flood inundation survey, historical information on flooding of the area, as well as satellite images, a HAND contour of 2 m was found to correlate best with the flood inundation extent of the river and its tributaries. A percentage accuracy of 75% was obtained when the surface area enclosed by the 2 m contour was compared with the surface area of the floodplain computed from a satellite image captured during the peak flooding season in September 2016. It was estimated that the flooding of the Nasia River and its tributaries created a floodplain area of 1011 km².
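
A very simplified HAND sketch: here the "nearest drainage" is taken as the Euclidean-nearest channel cell rather than the cell reached along the flow path, which is what a full HAND implementation (and this study) uses. The DEM and channel mask are synthetic placeholders.

```python
# Crude HAND approximation: subtract the elevation of the nearest channel cell.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
dem = rng.uniform(100, 160, size=(200, 200))          # stand-in for the 30 m SRTM DEM
channels = np.zeros_like(dem, dtype=bool)
channels[100, :] = True                               # a single straight "river"

# Indices of the nearest channel cell for every pixel, then subtract its elevation.
_, (rows, cols) = ndimage.distance_transform_edt(~channels, return_indices=True)
hand = dem - dem[rows, cols]

floodplain = hand <= 2.0                              # the 2 m HAND contour from the study
print("Flood-prone fraction of the grid:", floodplain.mean())
```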

Keywords: digital elevation model, floodplain, HAND contour, inundation extent, Nasia River

Procedia PDF Downloads 419
50 The Link between Corporate Governance and EU Competition Law Enforcement: A Conditional Logistic Regression Analysis of the Role of Diversity, Independence and Corporate Social Responsibility

Authors: Jeroen De Ceuster

Abstract:

This study is the first empirical analysis of the link between corporate governance and European Union competition law. Although competition law enforcement is often studied through the lens of competition law itself, we offer an alternative perspective by looking at a number of corporate governance factors at the level of the board of directors. We find that undertakings where the Chief Executive Officer is also chairman of the board are twice as likely to violate European Union competition law. No significant relationship was found between European Union competition law infringements and gender diversity of the board, the size of the board, the percentage of directors appointed after the Chief Executive Officer, the percentage of independent directors, or the presence of a corporate social responsibility (CSR) committee. This contribution is based on a 1-1 matched peer study. Our sample includes all ultimate parent companies with a board that have been sanctioned by the European Commission for either anticompetitive agreements or abuse of dominance in the period from 2004 to 2018. Each of these companies was matched to a peer company that has its headquarters in the same country, belongs to the same industry group, is active in the European Economic Area, and is the nearest neighbor to the infringing company in terms of revenue. Our final sample includes 121 pairs. As is common with matched peer studies, we use conditional logistic regression (CLR) to analyze the differences within these pairs. The only statistically significant independent variable, after controlling for size and performance, is CEO/chair duality. The results indicate that companies whose Chief Executive Officer also functions as chairman of the board are twice as likely to infringe European Union competition law. This is in line with the monitoring theory of the board of directors, which states that its primary function is to monitor top management. Since competition law infringements are mostly organized by management and hidden from board directors, the results suggest that a Chief Executive Officer who is also chairman is more likely to be either complicit in the infringement or less critical towards his day-to-day colleagues, thus impeding proper detection of competition law infringements by the board.
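For illustration only, the sketch below fits a conditional logistic regression to 1-1 matched pairs with statsmodels; the column names (pair identifier, infringement indicator, CEO/chair duality dummy, controls) and the simulated values are hypothetical stand-ins, not the study's actual variables.

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(0)
n_pairs = 121  # number of matched pairs, as in the study

# Hypothetical matched-pair data: each pair holds one infringing firm (1) and its peer (0).
df = pd.DataFrame({
    "pair_id":      np.repeat(np.arange(n_pairs), 2),
    "infringement": np.tile([1, 0], n_pairs),
    "ceo_duality":  rng.integers(0, 2, size=2 * n_pairs),
    "board_size":   rng.integers(6, 16, size=2 * n_pairs),
    "log_revenue":  rng.normal(9.0, 1.0, size=2 * n_pairs),
})

# Conditional logit: conditioning on the pair removes everything the matched firms share
# (country, industry, size band), so only within-pair differences identify the effects.
model = ConditionalLogit(
    df["infringement"],
    df[["ceo_duality", "board_size", "log_revenue"]],
    groups=df["pair_id"],
)
result = model.fit()
print(result.summary())
print("Odds ratios:", np.exp(result.params).round(2))
```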

Keywords: corporate governance, competition law, board of directors, board independence, gender diversity, corporate social responsibility

Procedia PDF Downloads 97
49 A Dataset of Program Educational Objectives Mapped to ABET Outcomes: Data Cleansing, Exploratory Data Analysis and Modeling

Authors: Addin Osman, Anwar Ali Yahya, Mohammed Basit Kamal

Abstract:

Datasets or collections are becoming important assets in themselves, and they can now be accepted as a primary intellectual output of research. The quality and usage of a dataset depend mainly on the context under which it has been collected, processed, analyzed, validated, and interpreted. This paper presents a collection of program educational objectives mapped to student outcomes, gathered from self-study reports prepared by 32 engineering programs accredited by ABET. The manual mapping (classification) of these data is a notoriously tedious, time-consuming process; in addition, it requires experts in the area, who are mostly not available. The operational settings under which the collection was produced are described. The collection has been cleansed and preprocessed, some features have been selected, and preliminary exploratory data analysis has been performed so as to illustrate the properties and usefulness of the collection. Finally, the collection has been benchmarked using nine of the most widely used supervised multi-label classification techniques (Binary Relevance, Label Powerset, Classifier Chains, Pruned Sets, Random k-label sets, Ensemble of Classifier Chains, Ensemble of Pruned Sets, Multi-Label k-Nearest Neighbors, and Back-Propagation Multi-Label Learning). The techniques have been compared to each other using well-known measurements (Accuracy, Hamming Loss, Micro-F, and Macro-F). The Ensemble of Classifier Chains and Ensemble of Pruned Sets achieved encouraging performance compared to the other multi-label classification methods examined, while the Classifier Chains method showed the worst performance. To recap, the benchmark has achieved promising results by utilizing the preliminary exploratory data analysis performed on the collection, proposing new trends for research, and providing a baseline for future studies.
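As a loose illustration of this kind of multi-label benchmark, the sketch below trains a Binary Relevance baseline and a Classifier Chain on synthetic multi-label data and reports subset accuracy, Hamming loss, and micro/macro F-scores; the synthetic dataset, the logistic regression base learner, and the label counts are placeholders rather than the paper's collection or tool chain.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, hamming_loss
from sklearn.model_selection import train_test_split
from sklearn.multioutput import ClassifierChain, MultiOutputClassifier

# Synthetic stand-in: each "objective" maps to several of 7 "outcomes" at once.
X, Y = make_multilabel_classification(n_samples=500, n_features=40,
                                      n_classes=7, n_labels=2, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)

models = {
    "Binary Relevance": MultiOutputClassifier(LogisticRegression(max_iter=1000)),
    "Classifier Chain": ClassifierChain(LogisticRegression(max_iter=1000), random_state=0),
}
for name, model in models.items():
    Y_hat = model.fit(X_tr, Y_tr).predict(X_te)
    print(name,
          "| subset accuracy:", round(accuracy_score(Y_te, Y_hat), 3),
          "| Hamming loss:", round(hamming_loss(Y_te, Y_hat), 3),
          "| micro-F1:", round(f1_score(Y_te, Y_hat, average="micro"), 3),
          "| macro-F1:", round(f1_score(Y_te, Y_hat, average="macro"), 3))
```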

Keywords: ABET, accreditation, benchmark collection, machine learning, program educational objectives, student outcomes, supervised multi-class classification, text mining

Procedia PDF Downloads 138
48 Adaptive Process Monitoring for Time-Varying Situations Using Statistical Learning Algorithms

Authors: Seulki Lee, Seoung Bum Kim

Abstract:

Statistical process control (SPC) is a practical and effective method for quality control. The most important and widely used technique in SPC is the control chart. The main goal of a control chart is to detect any assignable changes that affect the quality output. Most conventional control charts, such as Hotelling's T² chart, are based on the assumption that the quality characteristics follow a multivariate normal distribution. However, in modern complicated manufacturing systems, appropriate control chart techniques that can efficiently handle nonnormal processes are required. To overcome the shortcomings of conventional control charts for nonnormal processes, several methods have been proposed that combine statistical learning algorithms and multivariate control charts. Statistical learning-based control charts, such as support vector data description (SVDD)-based charts and k-nearest neighbors-based charts, have proven their improved performance in nonnormal situations compared to that of the T² chart. Besides nonnormality, time-varying operations are also quite common in real manufacturing fields because of various factors such as product and set-point changes, seasonal variations, catalyst degradation, and sensor drifting. However, traditional control charts cannot accommodate future condition changes of the process because they are formulated based on the data recorded in the early stage of the process. In the present paper, we propose an SVDD-based control chart that is capable of adaptively monitoring time-varying and nonnormal processes. We reformulated the SVDD algorithm into a time-adaptive SVDD algorithm by adding a weighting factor that reflects time-varying situations. Moreover, we defined an updating region for the efficient model-updating structure of the control chart. The proposed control chart simultaneously allows efficient model updates and timely detection of out-of-control signals. The effectiveness and applicability of the proposed chart were demonstrated through experiments with simulated data and real data from the metal frame process in mobile device manufacturing.
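The sketch below mimics the time-adaptive idea with a one-class SVM (a close relative of SVDD with an RBF kernel), where exponentially decaying sample weights down-weight older observations before the boundary is fit; the simulated drifting data stream, the decay rate, and the flagging rule are illustrative assumptions, not the authors' formulation or updating-region design.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

def time_weights(n, decay=0.01):
    """Exponentially decaying weights: the most recent sample gets weight 1."""
    return np.exp(-decay * np.arange(n)[::-1])

# Simulated in-control data whose mean slowly drifts over time (a time-varying process).
t = np.arange(300)
X = rng.normal(0, 1, size=(300, 2)) + 0.01 * t[:, None]

# Fit a one-class boundary that emphasises recent observations.
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
model.fit(X, sample_weight=time_weights(len(X)))

# Monitor new points: negative decision values signal potential out-of-control observations.
new_points = np.array([[3.0, 3.1], [0.2, 0.1]]) + 0.01 * t[-1]
scores = model.decision_function(new_points)
print("decision scores:", np.round(scores, 3))
print("flagged out-of-control:", scores < 0)
```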

Keywords: multivariate control chart, nonparametric method, support vector data description, time-varying process

Procedia PDF Downloads 272
47 Contamination of the Groundwater by the Flow of the Discharge in Khouribga City (Morocco) and the Danger It Presents to the Health of the Surrounding Population

Authors: Najih Amina

Abstract:

Our study focuses on monitoring the spatial evolution of a number of physico-chemical parameters of well waters located at different distances from the discharge of the city of Khouribga (S0 is the upstream control station; S1, S2, and S3 are located 5.5, 7.5, and 11 km, respectively, from the solid waste discharge of the city). The absence of a source of drinking water in this region forces the population to rely on its groundwater wells. The results show that, from S1 onwards, most of the analyzed parameters exceed potable water standards. At this source, the conductivity (1290 µS cm⁻¹; standard 1000 µS cm⁻¹), total hardness TH (67.2 °F; standard 50 °F), Ca²⁺ (146 mg l⁻¹; standard 60 mg l⁻¹), Cl⁻ (369 mg l⁻¹; standard 150 mg l⁻¹), NaCl (609 mg l⁻¹), and methyl orange alkalinity "M. alk" (280 mg l⁻¹) greatly exceed the drinking water standards. Following these parameters downstream, some values decrease while others become more pronounced: the conductivity remains higher than 950 µS cm⁻¹; TH reaches 72 °F at S3; Ca²⁺ is in the range of 153 mg l⁻¹ at S3; Cl⁻ and NaCl reach 426 mg l⁻¹ and 702 mg l⁻¹, respectively, at S2; and M. alk increases, reaching 350 to 430 mg l⁻¹ at S3. At well S2, the nitrite level (1.05 mg l⁻¹) is well beyond the standard. At the control station S0, in contrast, the values are below or at the limit of drinking water standards: conductivity (452 µS cm⁻¹), TH (34 °F), Ca²⁺ (68 mg l⁻¹), Cl⁻ (157 mg l⁻¹), NaCl (258 mg l⁻¹), M. alk (220 mg l⁻¹). Thus, the diagnosis reveals high pollution caused by the leachates of the household waste discharge and by the effluents of the sewage waste water plant (SWWP). The water hardness could also be generated by erosion, leaching, and soil infiltration in the region (phosphate layers, intercalated layers of marl and limestone), phenomena likewise intensified by the acidity due to the surrounding pollution. The source S1 is the site nearest to the discharge and the most affected by pollution; in particular, it lies near a superficial water source S'1 polluted by the effluents coming from the sewage waste water plant of the city. In light of these data, we can deduce that the water from S1 does not conform to drinking water standards and could affect human health.

Keywords: physico-chemical parameters, groundwater wells, infiltration, leaching, pollution, leachate discharge effluent SWWP, human health

Procedia PDF Downloads 383
46 Trend of Overweight and Obesity, Based on Population Study among School Children in North West of Iran: Implications for When to Intervene

Authors: Sakineh Nouri Saeidlou, Fatemeh Rezaiegoyjeloo, Parvin Ayremlou, Fariba Babaie

Abstract:

Introduction: Childhood overweight and obesity is a major public health problem in both developed and developing countries. Overweight and obesity in children may have severe consequences later in adolescence and adulthood. The aim of the current study was to determine the trend in the prevalence of overweight and obesity in school-aged children from 2009 to 2011. Methods: The present study was a population-based study conducted over three consecutive years, from 2009 to 2011. The study population included all primary, secondary, and high school children in rural and urban regions of West Azarbijan province in the north-west of Iran. Body mass index (BMI), the ratio of weight to height squared, weight (kg) / [height (m)]², was calculated to the nearest decimal place. Overweight and obesity were classified using CDC recommendations for age and sex: a BMI between the 85th and 95th percentiles was classified as overweight, and a BMI above the 95th percentile was classified as obese. All statistical analyses were performed using Excel software. Descriptive statistics were used to characterize the sample in the different time periods. Prevalence was calculated as the ratio of the number of present cases to the population of a given subgroup at a given time. Results: Overall, 165740, 145146, and 146203 school children were assessed in 2009, 2010, and 2011, respectively. The prevalence of overweight in primary school children was 52.83, 86.93, and 116.36 per 1000 persons for girls and 57.07, 53.4, and 93.55 per 1000 persons for boys in 2009, 2010, and 2011, respectively. The prevalence of obesity in secondary school children was 22.26, 27.75, and 28.43 per 1000 persons for girls and 26.52, 25.72, and 35.85 per 1000 persons for boys in 2009, 2010, and 2011, respectively. The highest prevalence of overweight, recorded in 2011, was 77.58, 142.4, and 126.46 per 1000 persons among primary, secondary, and high school children, respectively. The lowest prevalence of obesity, recorded in 2009, was 12.52, 24.1, and 21.61 per 1000 persons among primary, secondary, and high school children, respectively. Conclusion: The rapid increase in both overweight and obesity deserves special attention. Research on the trend in the prevalence of overweight and obesity in children is poorly reported in Iran, so future studies need to follow up on the associations between overweight and obesity and health outcomes as children develop and reach adolescence and adulthood.
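A minimal sketch of the two calculations used above, BMI from weight and height and prevalence per 1000 persons, is given below; the example record, the case counts, and the percentile cut-offs are placeholder values, since the actual CDC age- and sex-specific percentile tables are not reproduced here.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return round(weight_kg / height_m ** 2, 1)   # to the nearest decimal place

def classify(bmi_value, p85, p95):
    """Classify against (placeholder) age- and sex-specific CDC percentile cut-offs."""
    if bmi_value > p95:
        return "obese"
    if bmi_value >= p85:
        return "overweight"
    return "normal or underweight"

def prevalence_per_1000(cases, population):
    return round(1000 * cases / population, 2)

# Hypothetical example: one child's BMI classification and one subgroup prevalence.
print(bmi(42.0, 1.38))                                    # -> 22.1
print(classify(bmi(42.0, 1.38), p85=19.4, p95=22.9))      # -> "overweight" (assumed cut-offs)
print(prevalence_per_1000(cases=124, population=3250))    # -> 38.15 (illustrative counts)
```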

Keywords: overweight, obesity, school children, prevalence trend, Iran

Procedia PDF Downloads 303
45 Rural Livelihood under a Changing Climate Pattern in the Zio District of Togo, West Africa

Authors: Martial Amou

Abstract:

This study was carried out to assess the situation of households' livelihoods under a changing climate pattern in the Zio district of Togo, West Africa. The study examined three important aspects: (i) assessment of the households' livelihood situation under a changing climate pattern, (ii) farmers' perception and understanding of local climate change, and (iii) determinants of the adaptation strategies undertaken in cropping patterns in response to climate change. To this end, secondary sources of data and survey data collected from 235 farmers in four villages in the study area were used. A conceptual framework adapted from DFID's Sustainable Livelihood Framework, a two-step binary logistic regression model, and descriptive statistics were used as methodological approaches. Based on the Sustainable Livelihood Approach (SLA), the various factors revolving around the livelihoods of the rural community were grouped into social, natural, physical, human, and financial capital. The study found that the households' livelihood situation, represented by the overall livelihood index in the study area (34%), is below the standard average household livelihood security index (50%). Natural capital was found to be the poorest asset (13%), which will severely affect the sustainability of livelihoods in the long run. The results from the descriptive statistics and the first-step regression (selection model) indicated that most farmers in the study area have a clear understanding of climate change, even though they have no idea of greenhouse gases as the main cause behind the issue. From the second-step regression (output model) results, education, farming experience, access to credit, access to extension services, cropland size, membership of a social group, and distance to the nearest input market were found to be the significant determinants of the adaptation measures undertaken in cropping patterns by farmers in the study area. Based on the results of this study, recommendations are made to farmers, policy makers, institutions, and development service providers in order to better target interventions that build, promote, or facilitate the adoption of adaptation measures with the potential to build resilience to climate change and thereby improve rural livelihoods.
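A rough sketch of such a two-step setup follows: a first logit models whether a farmer perceives climate change (selection model), and a second logit, estimated on perceivers only, models adoption of an adaptation measure in the cropping pattern (output model). All variable names and the simulated responses are hypothetical stand-ins for the survey variables, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 235  # survey size used in the study

# Hypothetical survey variables (stand-ins for the actual questionnaire items).
df = pd.DataFrame({
    "education_years":    rng.integers(0, 13, n),
    "farming_experience": rng.integers(1, 40, n),
    "credit_access":      rng.integers(0, 2, n),
    "extension_access":   rng.integers(0, 2, n),
    "cropland_ha":        rng.uniform(0.5, 6.0, n),
    "market_distance_km": rng.uniform(0.5, 25.0, n),
})
df["perceives_change"] = rng.integers(0, 2, n)   # step 1 outcome (simulated)
df["adapts_cropping"] = rng.integers(0, 2, n)    # step 2 outcome (simulated)

X = sm.add_constant(df[["education_years", "farming_experience", "credit_access",
                        "extension_access", "cropland_ha", "market_distance_km"]])

# Step 1 (selection model): who perceives local climate change.
step1 = sm.Logit(df["perceives_change"], X).fit(disp=0)

# Step 2 (output model): adaptation decision among those who perceive change.
perceivers = df["perceives_change"] == 1
step2 = sm.Logit(df.loc[perceivers, "adapts_cropping"], X.loc[perceivers]).fit(disp=0)

print(step1.summary().tables[1])
print(step2.summary().tables[1])
```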

Keywords: climate change, rural livelihood, cropping pattern, adaptation, Zio District

Procedia PDF Downloads 299