Search results for: exact quantization rule
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1476

216 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Predictive quality, the data-based prediction of product quality and states, offers great potential for reducing the quality control effort that would otherwise be necessary. However, the serial use of machine learning applications is often prevented by various problems. Real production data sets exhibit fluctuations, which appear as trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and to minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has a relatively low variance. Training prediction models, however, requires the highest possible generalisability, which this data situation makes more difficult. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a six-phase process model that describes the data science life cycle. As in any process, the cost of eliminating errors increases significantly with each advancing phase. For the quality prediction of hydraulic test steps of directional control valves, the question therefore arises in the initial phase whether regression or classification is more suitable. In this work, the initial CRISP-DM phase, business understanding, is critically examined for the Bosch Rexroth use case with regard to regression and classification.
The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Classification proves clearly superior to regression and achieves promising accuracies.
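The regression-versus-classification comparison above can be sketched on synthetic data. The toy example below is not the authors' Bosch Rexroth pipeline; the data, feature count, and inspection threshold are invented. It contrasts regressing a leakage value and thresholding it afterwards against classifying the pass/fail decision directly:

```python
import numpy as np

# Synthetic stand-in for hydraulic test data: 5 process features, a leakage
# target, and a pass/fail label derived from a hypothetical inspection limit.
rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
leakage = X @ rng.normal(size=d) + rng.normal(scale=0.3, size=n)
y = (leakage > 0.0).astype(int)        # 1 = fail inspection, 0 = pass

X_tr, X_te = X[:800], X[800:]
lk_tr, y_tr, y_te = leakage[:800], y[:800], y[800:]

# Option A: regress the leakage volume flow, then threshold the prediction.
w_reg, *_ = np.linalg.lstsq(X_tr, lk_tr, rcond=None)
acc_regression = np.mean((X_te @ w_reg > 0.0).astype(int) == y_te)

# Option B: classify the inspection decision directly (least squares on
# +/-1 labels, i.e. a simple linear classifier).
w_clf, *_ = np.linalg.lstsq(X_tr, 2.0 * y_tr - 1.0, rcond=None)
acc_classification = np.mean((X_te @ w_clf > 0.0).astype(int) == y_te)
```

On data this clean both options score well; the paper's point is that on real, low-variance production data the two framings can differ sharply in accuracy.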

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 118
215 Comics as an Intermediary for Media Literacy Education

Authors: Ryan C. Zlomek

Abstract:

The value of using comics in the literacy classroom has been explored since the 1930s. At that point in time researchers had begun to implement comics into daily lesson plans and, in some instances, had started the development process for comics-supported curriculum. In the mid-1950s, this type of research was cut short due to the work of psychiatrist Frederic Wertham whose research seemingly discovered a correlation between comic readership and juvenile delinquency. Since Wertham’s allegations the comics medium has had a hard time finding its way back to education. Now, over fifty years later, the definition of literacy is in mid-transition as the world has become more visually-oriented and students require the ability to interpret images as often as words. Through this transition, comics has found a place in the field of literacy education research as the shift focuses from traditional print to multimodal and media literacies. Comics are now believed to be an effective resource in bridging the gap between these different types of literacies. This paper seeks to better understand what students learn from the process of reading comics and how those skills line up with the core principles of media literacy education in the United States. In the first section, comics are defined to determine the exact medium that is being examined. The different conventions that the medium utilizes are also discussed. In the second section, the comics reading process is explored through a dissection of the ways a reader interacts with the page, panel, gutter, and different comic conventions found within a traditional graphic narrative. The concepts of intersubjective acts and visualization are attributed to the comics reading process as readers draw in real world knowledge to decode meaning. In the next section, the learning processes that comics encourage are explored parallel to the core principles of media literacy education. 
Each principle is explained and the extent to which comics can act as an intermediary for this type of education is theorized. In the final section, the author examines comics use in his computer science and technology classroom. He lays out different theories he utilizes from Scott McCloud’s text Understanding Comics and how he uses them to break down media literacy strategies with his students. The article concludes with examples of how comics has positively impacted classrooms around the United States. It is stated that integrating comics into the classroom will not solve all issues related to literacy education but, rather, that comics can be a powerful multimodal resource for educators looking for new mediums to explore with their students.

Keywords: comics, graphic novels, mass communication, media literacy, metacognition

Procedia PDF Downloads 268
214 Genome Sequencing, Assembly and Annotation of Gelidium Pristoides from Kenton-on-Sea, South Africa

Authors: Sandisiwe Mangali, Graeme Bradley

Abstract:

A genome is the complete set of an organism's hereditary information, encoded as deoxyribonucleic acid or, in many viruses, ribonucleic acid. The three types of genomes are the nuclear, mitochondrial and plastid genomes, and the sequences uncovered by genome sequencing serve as an archive of all genetic information, enabling researchers to understand the composition of a genome and the regulation of gene expression, and providing information on how the whole genome works. These sequences also enable researchers to explore population structure, genetic variation, and recent demographic events in threatened species. Genome sequencing refers to the process of determining the exact arrangement of the nucleotide bases of a genome, and the process through which all the afore-mentioned genomes are sequenced is referred to as whole or complete genome sequencing. Gelidium pristoides is a South African endemic Rhodophyta species that has been harvested in the Eastern Cape since the 1950s for its high economic value, which is one motivation for its sequencing. Its endemism further motivates its sequencing for conservation biology, as endemic species are more vulnerable to the anthropogenic activities that endanger a species. Sequencing, mapping and annotating the Gelidium pristoides genome is therefore the aim of this study. To accomplish this aim, genomic DNA was extracted and quantified using the NucleoSpin Plant Kit, Qubit 2.0 and Nanodrop. Thereafter, the Ion Plus Fragment Library kit was used to prepare a 600 bp library, which was sequenced on the Ion S5 platform over two runs. The resulting reads were quality-controlled and assembled with the SPAdes assembler under default parameters, and the assembly was quality-assessed with the QUAST software.
From this assembly, the plastid and mitochondrial genomes were extracted using Gelidiales organellar genomes as search queries and ordered against them using the Geneious software. The Qubit and Nanodrop instruments revealed A260/A280 and A260/A230 values of 1.81 and 1.52, respectively. A total of 30,792,074 reads were obtained and assembled into 94,140 contigs, giving a sequence length of 217.06 Mbp with an N50 of 3,072 bp and a GC content of 41.72%. Total lengths of 179,281 bp and 25,734 bp were obtained for the plastid and mitochondrial genomes, respectively. Genomic data allow a clear understanding of the genomic constitution of an organism and are valuable foundation information for studies of individual genes and for resolving evolutionary relationships among organisms, including Rhodophytes and other seaweeds.
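The assembly statistics reported above (N50, GC content) are standard metrics. A minimal sketch of how tools such as QUAST derive them from contig lengths and sequence (illustrative helper functions, not QUAST's actual code):

```python
def n50(contig_lengths):
    """Length L such that contigs of length >= L cover at least half the
    total assembly, computed by walking contigs from longest to shortest."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

def gc_content(seq):
    """Fraction of G/C bases in a nucleotide sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)
```

For example, `n50([10, 20, 30, 40])` is 30, because the 40 bp and 30 bp contigs together cover more than half of the 100 bp total.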

Keywords: Gelidium pristoides, genome, genome sequencing and assembly, Ion S5 sequencing platform

Procedia PDF Downloads 128
213 Unlocking Health Insights: Studying Data for Better Care

Authors: Valentina Marutyan

Abstract:

Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change how we understand and deliver healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest: it enables doctors to intervene early to prevent problems or improve outcomes for patients, and it assists in early disease detection and customized treatment planning. Doctors can tailor a patient's care by looking at their medical history, genetic profile, and current and previous therapies; in this way, treatments can be more effective and have fewer negative consequences. Beyond helping patients, data mining improves the efficiency of hospitals, for example by estimating the number of beds or doctors required for the number of patients expected. This project used models such as logistic regression, random forests, and neural networks for predicting diseases and analyzing medical images. Clustering algorithms such as k-means grouped patients, association rule mining identified connections between treatments and patient responses, and time series techniques supported resource management by predicting patient admissions. Together, these methods improved healthcare decision-making and personalized treatment.
Healthcare data mining must also contend with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and operate more efficiently. It comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
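The association rule mining mentioned above can be sketched as a naive support/confidence pass over transactions. The treatment/response records below are invented for illustration; this is not the project's data or its actual implementation:

```python
from collections import Counter
from itertools import combinations

def association_rules(transactions, min_support=0.4, min_confidence=0.6):
    """Naive single-antecedent rule mining: emit rules A -> B with their
    support (fraction of transactions containing both items) and
    confidence (estimated P(B | A))."""
    n = len(transactions)
    item_counts = Counter(i for t in transactions for i in set(t))
    pair_counts = Counter(p for t in transactions
                          for p in combinations(sorted(set(t)), 2))
    rules = []
    for (a, b), c in pair_counts.items():
        support = c / n
        if support < min_support:
            continue
        for ant, cons in ((a, b), (b, a)):
            confidence = c / item_counts[ant]
            if confidence >= min_confidence:
                rules.append((ant, cons, support, confidence))
    return rules

# Hypothetical records: each transaction lists a treatment and a response.
records = [{"statin", "ldl_down"}, {"statin", "ldl_down"},
           {"statin", "ldl_down"}, {"placebo"}]
rules = association_rules(records)
```

Here the rule ("statin" -> "ldl_down") comes out with support 0.75 and confidence 1.0; real mining would use an algorithm such as Apriori to avoid counting all pairs.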

Keywords: data mining, healthcare, big data, large amounts of data

Procedia PDF Downloads 38
212 Personalized Infectious Disease Risk Prediction System: A Knowledge Model

Authors: Retno A. Vinarti, Lucy M. Hederman

Abstract:

This research describes a knowledge model for a system that gives personalized alerts to users about infectious disease risks in the context of weather, location and time. The knowledge model is based on established epidemiological concepts augmented by information gleaned from infection-related data repositories. Existing disease risk prediction research has focused more on utilizing raw historical data to yield seasonal patterns of infectious disease risk emergence. This research incorporates both data and epidemiological concepts gathered from the Atlas of Human Infectious Disease (AHID) and the Centers for Disease Control (CDC) as the basis for reasoning about infectious disease risk. Following the CommonKADS methodology, the disease risk prediction task is modelled as an assignment synthesis task, proceeding from knowledge identification through specification and refinement to implementation. First, knowledge is gathered from AHID, primarily from the epidemiology and risk group chapters for each infectious disease. The result of this stage is five major elements (Person, Infectious Disease, Weather, Location and Time) and their properties. At the knowledge specification stage, the initial tree model of each element and detailed relationships are produced. This research also includes a validation step as part of knowledge refinement: on the basis that the best model is formed using the most common features, Frequency-based Selection (FBS) is applied. The portion of the infectious disease risk model relating to Person comes out strongest, with Location next and Weather weaker. Within the Person element, Age is the strongest attribute, Activity and Habits are moderate, and Blood type is weakest. Within Location, the General category (e.g. continent, region, country, and island) comes out much stronger than the Specific category (i.e. terrain feature). Within Weather, the Less Precise category (i.e. season) comes out stronger than the Precise category (i.e. an exact temperature or humidity interval).
However, given that some infectious diseases are significantly more serious than others, a frequency-based metric may not be appropriate. Future work will incorporate epidemiological measurements of disease seriousness (e.g. odds ratio, hazard ratio and fatality rate) into the validation metrics. This research is limited to modelling existing knowledge about epidemiology and chain-of-infection concepts. A further step, verification in the knowledge refinement stage, might cause minor changes to the shape of the tree.
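Frequency-based Selection as described above (keep the attributes mentioned most often across sources) can be sketched in a few lines. The disease-to-attribute mapping below is invented for illustration and is not the AHID data:

```python
from collections import Counter

def frequency_based_selection(feature_mentions, threshold=0.5):
    """Keep attributes mentioned for at least `threshold` of the diseases.

    `feature_mentions` maps each disease to the set of attributes its
    source chapter mentions (hypothetical structure)."""
    n = len(feature_mentions)
    counts = Counter(attr for attrs in feature_mentions.values()
                     for attr in attrs)
    return {attr for attr, c in counts.items() if c / n >= threshold}

# Hypothetical attribute mentions extracted from per-disease chapters.
mentions = {
    "malaria":   {"age", "location", "season"},
    "influenza": {"age", "season"},
    "cholera":   {"age", "location"},
}
strong = frequency_based_selection(mentions, threshold=0.8)
```

With a high threshold only "age" survives, mirroring the paper's finding that some attributes (Age, General location) come out much stronger than others.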

Keywords: epidemiology, knowledge modelling, infectious disease, prediction, risk

Procedia PDF Downloads 210
211 Development of Mechanisms of Value Creation and Risk Management Organization in the Conditions of Transformation of the Economy of Russia

Authors: Mikhail V. Khachaturyan, Inga A. Koryagina, Eugenia V. Klicheva

Abstract:

In modern conditions, the scientific treatment of problems in developing mechanisms of value creation and risk management acquires special relevance. The formation of economic knowledge has resulted in the constant analysis of consumer behaviour by all players in national and world markets. Developing effective mechanisms for demand analysis, which is crucial for defining the consumer characteristics of future production, and for managing the risks connected with developing that production, are the main objectives of control systems in modern conditions. The current period of economic development is characterised by a high level of business globalisation and rigid competition. At the same time, a considerable share of the cost of new products and services is of a non-material, intellectual nature. Small innovative firms are currently the most successful in Russia. Through their unique technologies and new approaches to process management, which form the basis of their intellectual capital, such firms can show flexibility and succeed in the market. As a rule, such enterprises require a highly flexible structure, one that excludes rigid schemes of subordination and demands essentially new incentives for involving personnel in innovative activity. Such structures, as well as this new approach to management, can be built on value-oriented management, which is directed at gradually changing the consciousness of personnel and forming groups of adherents engaged in solving common innovative tasks. Over time, these value changes can gradually extend not only to the innovative firm's staff but also to the structure of its corporate partners. The introduction of new technologies is a significant factor contributing to the development of new value imperatives and to accelerating change in the organisation's value systems.
This is because new technologies change the internal environment of the organisation in such a way that the old system of values becomes inefficient under the new conditions. Introducing new technologies often demands changes in the structure of employees' interaction and training in new principles of work. During the introduction of new technologies and the accompanying change in the value system, the structure of managing the organisation's values also changes, owing to the need to involve more staff in justifying and consolidating the new value system and in bringing their views into its motivational potential.

Keywords: value, risk, creation, problems, organization

Procedia PDF Downloads 258
210 Endotracheal Intubation Self-Confidence: Report of a Realistic Simulation Training

Authors: Cleto J. Sauer Jr., Rita C. Sauer, Chaider G. Andrade, Doris F. Rabelo

Abstract:

Introduction: Endotracheal intubation (ETI) is a procedure for the clinical management of patients with severe COVID-19 disease. Realistic simulation (RS) is an active learning methodology used for improving clinical skills. To improve the ETI skills of public health network physicians from the Recôncavo da Bahia region of Brazil during the COVID-19 outbreak, RS-based training was planned and carried out. The training scenario included the Nasco Lifeform realistic simulator, and three actions were simulated: the ETI procedure, sedative drug management, and bougie guide utilization. The training took place between May and June 2020 as an interinstitutional cooperation between the Health Department of Bahia State and the Federal University of Recôncavo da Bahia. Objective: To report the effects of RS-based training on participants' perceived self-confidence for the ETI procedure. Methods: This is a descriptive study with secondary data extracted from questionnaires applied throughout the RS training. Main workplace, time since last intubation, and knowledge of the bougie guide were reported in a pre-participation questionnaire. Additionally, participants completed pre- and post-training qualitative self-assessments (10-point Likert scale) of perceived self-confidence in performing each simulated action. Distributions of the qualitative data were analysed with the Wilcoxon signed-rank test, and increases in self-confidence were analysed in frequency contingency tables with Fisher's exact test. Results: 36 physicians participated in the training; 25 (69%) worked in primary care, 25 (69%) had performed ETI more than a year earlier, and only 4 (11%) had previous knowledge of bougie guide utilization. Self-confidence medians increased for all three simulated actions. Medians (range) before and after training for each action were as follows: ETI [5 (1-9) vs. 8 (6-10), p < 0.0001]; sedative drug management [5 (1-9) vs. 8 (4-10), p < 0.0001]; bougie guide utilization [2.5 (1-7) vs. 8 (4-10), p < 0.0001]. Among those who had performed ETI more than a year earlier (n = 25), an increase in self-confidence greater than 3 points was reported by 23 vs. 2 physicians for ETI (p = 0.0002) and by 21 vs. 4 for sedative drug management (p = 0.03). Conclusions: RS training contributed to increased self-confidence in performing ETI. Among participants who had last performed ETI over a year earlier, there was a significant association between RS training and an increase of more than 3 points in self-confidence, both for ETI and for sedative drug management. Training with the RS methodology is suitable for enhancing ETI confidence during the COVID-19 outbreak.
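The contingency-table analysis above relies on Fisher's exact test. A minimal one-sided version for a 2x2 table follows directly from the hypergeometric distribution. This is a sketch only: the study's software likely computed a two-sided p-value, and the counts in the example are the classic "lady tasting tea" table, not the study's data:

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    the probability, with row and column margins fixed, of a top-left
    count at least as large as the observed `a` (hypergeometric tail)."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    return sum(comb(col1, x) * comb(n - col1, row1 - x)
               for x in range(a, min(row1, col1) + 1)) / comb(n, row1)

# Classic example table [[3, 1], [1, 3]]; the exact tail is 17/70.
p = fisher_exact_one_sided(3, 1, 1, 3)
```

In practice one would call a statistics library for the two-sided test; the sketch just shows where the exact p-value comes from.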

Keywords: confidence, COVID-19, endotracheal intubation, realistic simulation

Procedia PDF Downloads 118
209 Unpredictable Territorial Interiority: Learning the Spatiality from the Early Space Learners

Authors: M. Mirza Y. Harahap

Abstract:

This paper explores the interiority of children's territorialisation in the context of domestic space by looking at their affective relations with their surroundings. Examining its spatiality, the research focuses on the interactions that develop between children and the things in their house, specifically those that leave traces indicating the arena of their territory. As early learners whose minds and bodies are still developing, children are hypothesised to be distinct in the way they territorialise space. Rules, common sense and other forms of acceptance common among adults may not be relevant to their way of territorialising space; unpredictability, inappropriateness and unimaginability hypothetically characterise their endeavour. The purpose may even appear insignificant, expressing their unrestricted development. This indicates what the interiority of children's territorialisation in a domestic space context actually is. It also implies a new way of seeing territory, since the act of territorialisation has a natural purpose: to claim space and regard it as one's own. To disclose these territorialisation characteristics, this paper presents a qualitative study comprising the following stages: 1) Collecting the various territorial traces left by children's activities within their respective houses; the data are then categorised by territorial strategy and tactic. This stage results in an overall map of the children's territorial interiority, expressing its foci, range and ways. 2) Examining the interactions that occur between children and the spatial elements of the house.
Focusing on affective relations, this stage reveals the immaterial aspect of children's territorialisation and thus discloses its unseen spatial dimension. 3) Synthesising the previous two stages. Correlating their results helps us understand children's unpredictable, inappropriate and unimaginable territorial interiority. It also helps us justify how children learn space through the act of territorialisation, its importance, and its position in the conception of interiority. The relation discussed between children and houses, covering both their physical and imaginary entities as part of the overall dwelling space, also gives a better understanding of the specific spatial elements that are significant and undeniably important for children's spatial learning. This last finding, in particular, helps determine which spatial elements need to exist in a house and thus informs design development. Overall, the study broadens our mindset regarding territory, dwelling, interiority and the overall conception of interior architecture, promising opportunities for further research within the interior architecture field.

Keywords: children, interiority, relation, territory

Procedia PDF Downloads 114
208 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: Gaelle Candel, David Naccache

Abstract:

t-SNE is an embedding method widely used in the data science community. It supports two main tasks: displaying results by coloring items according to class or feature value, and, for forensics, giving a first overview of a dataset's distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, in which all neighbors in a high-dimensional space cannot be represented correctly in a low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where a cluster's area is proportional to its size in number of items and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, so two initializations lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this is costly, as the complexity of t-SNE is quadratic, and would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built from a subset of the data. While highly scalable, such a model could map points to exactly the same position, making them indistinguishable, and would be unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding's shape and the other relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, each time with the newly obtained embedding.
The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. The method has the same per-embedding complexity as t-SNE, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing observation of the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, empowering the monitoring of high-dimensional datasets' dynamics.
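The claimed complexity reduction can be checked with back-of-envelope arithmetic: embedding k subsets of n/k points each costs k·(n/k)² = n²/k pairwise interactions instead of n². A quick numeric check (illustrative only, not the paper's implementation):

```python
def pairwise_cost(n):
    """Pairwise-interaction count behind t-SNE's O(n^2) per-iteration cost."""
    return n * n

def split_cost(n, k):
    """Cost of embedding k subsets of n/k points each: k * (n/k)^2 = n^2/k."""
    return k * (n // k) ** 2

n, k = 10_000, 10
full = pairwise_cost(n)                        # 100,000,000 interactions
split = split_cost(n, k)                       # 10,000,000, a k-fold saving
memory_ratio = (2 * (n // k) ** 2) / (n * n)   # doubled per-subset memory
```

For n = 10,000 and k = 10, the split scheme needs one tenth of the interactions and only 2% of the pairwise memory, which is what makes laptop-scale computation plausible.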

Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning

Procedia PDF Downloads 119
207 Date Palm Fruits from Oman Attenuates Cognitive and Behavioral Defects and Reduces Inflammation in a Transgenic Mice Model of Alzheimer's Disease

Authors: M. M. Essa, S. Subash, M. Akbar, S. Al-Adawi, A. Al-Asmi, G. J. Guillemein

Abstract:

Transgenic (Tg) mice carrying an amyloid precursor protein (APP) gene mutation develop extracellular amyloid beta (Aβ) deposition in the brain and severe memory and behavioral deficits with age. These mice serve as an important animal model for testing the efficacy of novel drug candidates for the treatment and management of symptoms of Alzheimer's disease (AD). Several reports have suggested that oxidative stress is the underlying cause of Aβ neurotoxicity in AD. Date palm fruits contain very high levels of antioxidants and several medicinal properties that may be useful for improving the quality of life of AD patients. In this study, we investigated the effect of dietary supplementation with Omani date palm fruits on memory, anxiety and learning skills, along with inflammation, in an AD mouse model carrying the double Swedish APP mutation (APPsw/Tg2576). The experimental groups of APP-transgenic mice were fed custom-mix diets (pellets) containing 2% or 4% date palm fruits from the age of 4 months. We assessed spatial memory and learning ability, psychomotor coordination, and anxiety-related behavior in Tg and wild-type mice at the ages of 4-5 months and 18-19 months using the Morris water maze, rotarod, elevated plus maze, and open field tests. Inflammatory parameters were also analyzed. At 18-19 months, APPsw/Tg2576 mice fed a standard chow diet without dates showed significant memory deficits, increased anxiety-related behavior, and severe impairment in spatial learning ability, position discrimination learning ability and motor coordination, along with increased inflammation, compared to wild-type mice on the same diet. In contrast, APPsw/Tg2576 mice fed a diet containing 2% or 4% dates showed significant improvements in memory, learning, locomotor function, and anxiety, with reduced inflammatory markers, compared to APPsw/Tg2576 mice fed the standard chow diet.
Our results suggest that dietary supplementation with dates may slow the progression of cognitive and behavioral impairments in AD. The exact mechanism is still unclear, and further extensive research is needed.

Keywords: Alzheimer's disease, date palm fruits, Oman, cognitive decline, memory loss, anxiety, inflammation

Procedia PDF Downloads 402
206 Arsenic Contamination in Drinking Water Is Associated with Dyslipidemia in Pregnancy

Authors: Begum Rokeya, Rahelee Zinnat, Fatema Jebunnesa, Israt Ara Hossain, A. Rahman

Abstract:

Background and Aims: Arsenic in drinking water is a global environmental health problem, and exposure may increase dyslipidemia and cerebrovascular disease mortality, most likely by causing atherosclerosis. However, the relationship between lipid metabolism, atherosclerosis formation, and arsenic exposure, and its impact in pregnancy, is still unclear. Recent epidemiological evidence indicates a close association between inorganic arsenic exposure via drinking water and dyslipidemia, yet the exact mechanism of this arsenic-mediated increase in atherosclerosis risk factors remains enigmatic. We explored the association between arsenic exposure and serum lipid profile in pregnant subjects. Methods: A total of 200 pregnant mothers from an arsenic-exposed area were screened for this cross-sectional study. The study group comprised 100 exposed subjects as cases and 100 non-exposed healthy pregnant women as controls. Clinical and anthropometric measurements were taken using standard techniques. Lipidemic status was assessed by the enzymatic endpoint method. Urinary arsenic was measured by inductively coupled plasma mass spectrometry and adjusted for specific gravity; subjects with a urinary arsenic level > 100 μg/L were categorized as arsenic-exposed and those with < 100 μg/L as non-exposed. Multivariate logistic regression and Student's t-test were used for statistical analysis. Results: Systolic and diastolic blood pressure were both significantly higher in the arsenic-exposed pregnant subjects than in the non-exposed group (p < 0.001). Arsenic-exposed subjects had twice the odds of developing a hypertensive pregnancy (odds ratio 2.2). In parallel, arsenic-exposed subjects showed significantly higher triglyceride, total cholesterol and low-density lipoprotein levels than non-exposed pregnant subjects.
Urinary arsenic level also correlated significantly with SBP, DBP, triglycerides, total cholesterol and serum LDL-cholesterol. Multivariate logistic regression showed that urinary arsenic had a positive association with DBP, SBP, triglycerides and LDL-c. Conclusion: Arsenic exposure may induce atherosclerosis-like dyslipidemia by modifying reverse cholesterol transport in cholesterol metabolism. To decrease the atherosclerosis-related mortality associated with arsenic, preventing exposure from environmental sources in early life is an important element.
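The odds ratio of 2.2 reported above is the standard cross-product ratio of a 2x2 exposure/outcome table. A minimal sketch with invented counts chosen to land near the reported value (these are not the study's actual data):

```python
def odds_ratio(exposed_cases, exposed_controls,
               unexposed_cases, unexposed_controls):
    """Unadjusted odds ratio: the odds of the outcome among the exposed
    divided by the odds among the unexposed (cross-product ratio)."""
    return (exposed_cases * unexposed_controls) / \
           (exposed_controls * unexposed_cases)

# Hypothetical counts: hypertensive vs. normotensive pregnancies,
# arsenic-exposed vs. non-exposed groups of 100 each.
OR = odds_ratio(52, 48, 33, 67)
```

The study's reported OR came from multivariate logistic regression, which additionally adjusts for confounders; the raw cross-product shown here is only the unadjusted starting point.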

Keywords: Arsenic Exposure, Dyslipidemia, Gestational Diabetes Mellitus, Serum lipid profile

Procedia PDF Downloads 98
205 Anti-Inflammatory Studies on Chungpye-Tang in Asthmatic Human Lung Tissue

Authors: J. H. Bang, H. J. Baek, K. I. Kim, B. J. Lee, H. J. Jung, H. J. Jang, S. K. Jung

Abstract:

Asthma is a chronic inflammatory lung disease characterized by airway hyper responsiveness (AHR), airway obstruction and airway wall remodeling responsible for significant morbidity and mortality worldwide. Genetic and environment factors may result in asthma, but there are no the exact causes of asthma. Chungpye-tang (CPT) has been prescribed as a representative aerosol agent for patients with dyspnea, cough and phlegm in the respiratory clinic at Kyung Hee Korean Medicine Hospital. This Korean herbal medicines have the effect of dispelling external pathogen and dampness pattern. CPT is composed of 4 species of herbal medicines. The 4 species of herbal medicines are Ephedrae herba, Pogostemonis(Agatachis) herba, Caryophylli flos and Zingiberis rhizoma crudus. CPT suppresses neutrophil infiltration and the production of pro-inflammatory cytokines in lipopolysaccharide (LPS)-induced acute lung injury (ALI) mouse model. Moreover, the anti-inflammatory effects of CPT on a mouse model of Chronic Obstructive Pulmonary Disease (COPD) was proved. Activation of the NF-κB has been proven that it plays an important role in inflammation via inducing transcription of pro-inflammatory genes. Over-expression of NF-κB has been believed be related to many inflammatory diseases such as arthritis, gastritis, asthma and COPD. So we firstly hypothesize whether CPT has an anti-inflammatory effect on asthmatic human airway epithelial tissue via inhibiting NF-κB pathway. In this study, CPT was extracted with distilled water for 3 hours at 100°C. After process of filtration and evaporation, it was freeze dried. And asthmatic human lung tissues were provided by MatTek Corp. We investigated the precise mechanism of the anti-inflammatory effect of CPT by western blotting analysis. 
We observed whether the decoction extracts could reduce NF-κB activation, COX-2 protein expression and NF-κB-mediated pro-inflammatory cytokines such as TNF-α, eotaxin, IL-4, IL-9 and IL-13 in asthmatic human lung tissue. There was a trend toward decreased NF-κB expression in asthmatic human airway epithelial tissue, while an inhibitory effect of CPT on COX-2 expression could not be determined. IL-9 and IL-13 secretion was significantly reduced in asthmatic human lung tissue treated with CPT. Overall, our results indicate that CPT exerts an anti-inflammatory effect by blocking the NF-κB signaling pathway, suggesting that CPT may be a potential remedial agent for allergic asthma.

Keywords: Chungpye-tang, allergic asthma, asthmatic human airway epithelial tissue, nuclear factor kappa B (NF-κB) pathway, COX-2

Procedia PDF Downloads 311
204 Occupational Safety and Health in the Wake of Drones

Authors: Hoda Rahmani, Gary Weckman

Abstract:

The body of research examining the integration of drones into various industries is expanding rapidly. Despite progress made in addressing the cybersecurity concerns for commercial drones, knowledge deficits remain in determining potential occupational hazards and risks of drone use to employees’ well-being and health in the workplace. This creates difficulty in identifying key approaches to risk mitigation strategies and thus reflects the need for raising awareness among employers, safety professionals, and policymakers about workplace drone-related accidents. The purpose of this study is to investigate the prevalence of and possible risk factors for drone-related mishaps by comparing the application of drones in the construction and manufacturing industries. The chief reason for considering these specific sectors is to ascertain whether there exists any significant difference between indoor and outdoor flights, since most construction sites fly drones outdoors, whereas manufacturing facilities typically fly them indoors. Therefore, the current research seeks to examine the causes and patterns of workplace drone-related mishaps and suggest possible ergonomic interventions through data collection. Potential ergonomic practices to mitigate hazards associated with flying drones could include providing operators with professional training, conducting a risk analysis, and promoting the use of personal protective equipment. For the purpose of data analysis, two data mining techniques, the random forest and association rule mining algorithms, will be applied to find meaningful associations and trends in the data, as well as influential features that affect the occurrence of drone-related accidents in the construction and manufacturing sectors. In addition, Spearman’s correlation and chi-square tests will be used to measure possible correlations between variables.
Indeed, by recognizing risks and hazards, occupational safety stakeholders will be able to pursue data-driven and evidence-based policy change with the aim of reducing drone mishaps, increasing productivity, creating a safer work environment, and extending human performance in safe and fulfilling ways. This research study was supported by the National Institute for Occupational Safety and Health through the Pilot Research Project Training Program of the University of Cincinnati Education and Research Center Grant #T42OH008432.
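As a rough illustration of the rank-based test mentioned in this abstract, Spearman's correlation can be computed with the standard library alone; the flight-hour and mishap counts below are hypothetical placeholders, not study data:

```python
from math import sqrt

def ranks(xs):
    # Assign average ranks (1-based), handling ties.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the ranks.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sqrt(sum((a - mx) ** 2 for a in rx))
    sy = sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Hypothetical example: flight hours per site vs. recorded mishaps.
hours = [120, 340, 90, 500, 260]
mishaps = [2, 5, 1, 9, 4]
print(spearman(hours, mishaps))  # perfectly monotonic -> rho = 1.0
```

In practice the study data would be fed to library implementations (e.g. in a statistics package), but the rank-and-correlate mechanics are exactly these.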

Keywords: commercial drones, ergonomic interventions, occupational safety, pattern recognition

Procedia PDF Downloads 181
203 A Case Report on Anesthetic Considerations in a Neonate with Isolated Oesophageal Atresia with Radiological Fallacy

Authors: T. Rakhi, Thrivikram Shenoy

Abstract:

Esophageal atresia is a disorder of maldevelopment of the esophagus, with or without a connection to the trachea. Radiological reviews are needed in consultation with the pediatric surgeon and neonatologist, and we report a rare case of esophageal atresia associated with an atrial septal defect-patent ductus arteriosus complex. A 2-day-old female baby born at term, weighing 3.010 kg, was admitted to the Neonatal Intensive Care Unit with respiratory distress and excessive oral secretions. On examination, a continuous murmur and cyanosis were noted. Esophageal atresia was suspected after a failed attempt to pass a nasogastric tube. A chest radiograph showed coiling of the nasogastric tube and an absent gas shadow in the abdomen. Echocardiography confirmed a Patent Ductus Arteriosus with an Atrial Septal Defect, not in failure, and the baby was diagnosed with esophageal atresia with a suspected fistula and posted for surgical repair. After preliminary management with oxygenation, suctioning in the prone position and antibiotics, investigations revealed Hb 17 g/dL, with serum biochemistry, coagulation profile and C-reactive protein normal. The baby was premedicated with 5 mcg of fentanyl and 100 mcg of midazolam, and a rapid awake laryngoscopy was done to rule out a difficult airway, followed by induction with O2/air, sevoflurane and atracurium 2 mg. Placement of a 3.5 mm endotracheal tube was uneventful at the first attempt, and after confirming bilateral air entry, the baby was positioned laterally for a right thoracotomy. A pulse oximeter, echocardiogram, non-invasive blood pressure, temperature and a precordial stethoscope in the left axilla were the essential monitors. During thoracotomy, both ends of the esophagus and the fistula could not be located after a thorough search, suggesting an on-table finding of type A esophageal atresia. The baby was repositioned for gastrostomy and cervical esophagostomy, ventilated overnight, and extubated uneventfully.
The absent gas shadow was initially overlooked, and the purpose of this presentation is to create awareness among neonatologists, pediatric surgeons and anesthesiologists regarding variation in the typing of tracheoesophageal fistula pre- and intraoperatively. Imaging modalities are warranted for a definitive diagnosis in the presence of a gasless stomach.

Keywords: anesthetic, atrial septal defects, esophageal atresia, patent ductus arteriosus, perioperative, chest x-ray

Procedia PDF Downloads 156
202 Effect of Sodium Arsenite Exposure on Pharmacodynamic of Meloxicam in Male Wistar Rats

Authors: Prashantkumar Waghe, N. Prakash, N. D. Prasada, L. V. Lokesh, M. Vijay Kumar, Vinay Tikare

Abstract:

Arsenic is a naturally occurring metalloid with potent toxic effects. It is ubiquitous in the environment, released from both natural and anthropogenic sources, and has the potential to cause various health hazards in exposed populations. Arsenic exposure through drinking water is considered one of the most serious global environmental threats, including in Southeast Asia. The aim of the present study was to evaluate the modulatory role of subacute exposure to sodium (meta)arsenite on the antinociceptive, anti-inflammatory and antipyretic responses mediated by meloxicam in rats. Rats were exposed to arsenic as sodium arsenite through drinking water for 28 days. A single dose of meloxicam (2 mg/kg b. wt.) was administered by oral gavage on the 29th day; the exact time of meloxicam administration depended on the type of test. Rats were divided randomly into 5 groups (n=6). Group I served as normal control and received arsenic-free drinking water, while rats in Group II were maintained similarly to Group I but received meloxicam on the 29th day. Groups III, IV and V were pre-exposed to arsenic through drinking water at 0.5, 5.0 and 50 ppm, respectively, for 28 days and were administered meloxicam the next day; pain and inflammation assessments were carried out using the formalin-induced nociception and carrageenan-induced inflammation models, respectively, following standard protocols. For assessment of antipyretic effects, one additional group (Group VI) was given LPS at 1.8 mg/kg b. wt. for induction of pyrexia (LPS control). The higher dose of arsenic inhibited the meloxicam-mediated antinociceptive, anti-inflammatory and antipyretic responses. Further, meloxicam inhibited the arsenic-induced levels of tumor necrosis factor-α, interleukin-1β, interleukin-6 and COX-2-mediated prostaglandin E2 in hind paw muscle. These results suggest a functional antagonism of meloxicam by arsenic.
This may relate to arsenic-mediated local release of tumor necrosis factor-α, interleukin-1β and interleukin-6, which drive COX-2-mediated prostaglandin E2 production. Based on the experimental study, it is concluded that sub-acute exposure to arsenic through drinking water aggravates pyrexia, inflammation and pain at environmentally relevant concentrations and decreases the therapeutic efficacy of meloxicam at higher levels of arsenite exposure. Thus, these observations have clinical relevance in situations where animals are exposed to arsenic in endemic geographical locations.

Keywords: arsenic, analgesic activity, meloxicam, Wistar rats

Procedia PDF Downloads 162
201 Evaluating the Impact of Judicial Review of 2003 “Radical Surgery” Purging Corrupt Officials from Kenyan Courts

Authors: Charles A. Khamala

Abstract:

In 2003, constrained by an absent “rule of law culture” and negative economic growth, the new Kenyan government chose to pursue incremental judicial reforms rather than comprehensive constitutional reforms. President Mwai Kibaki’s first administration’s judicial reform strategy was two-pronged. First, to implement unprecedented “radical surgery,” he appointed a new Chief Justice who instrumentally recommended that half the purportedly-corrupt judiciary be removed by Presidential tribunals of inquiry. Second, the replacement High Court judges initially instrumentally endorsed the “radical surgery’s” administrative decisions removing their corrupt predecessors. Meanwhile, retention of the welfare-reducing Constitution perpetuated declining public confidence in judicial institutions, culminating in the refusal of the dissatisfied opposition party to petition the disputed 2007 presidential election results, alleging biased and corrupt courts. Fatefully, widespread post-election violence ensued. Consequently, the international community prompted the second Kibaki administration to concede to a new Constitution. Suddenly, the High Court then adopted a non-instrumental interpretation to reject the 2003 “radical surgery.” This paper therefore critically analyzes whether the Kenyan courts’ inconsistent interpretations, pertaining to the constitutionality of the 2003 “radical surgery” removing corruption from Kenya’s courts, were predicated on political expediency or on human rights principles. If justice “must also be seen to be done,” then the pursuit of the Chief Justice’s, Judicial Service Commission’s and president’s political or economic interests must be limited by respect for the suspected judges’ and magistrates’ due process rights. The separation of powers doctrine demands that the dismissed judges should have a right of appeal, which entails impartial review by a special independent oversight mechanism.
Instead, ignoring fundamental rights, Kenya’s new Supreme Court’s interpretation of another round of vetting under the new 2010 Constitution, ousts the High Court’s judicial review jurisdiction altogether, since removal of judicial corruption is “a constitutional imperative, akin to a national duty upon every judicial officer to pave way for judicial realignment and reformulation.”

Keywords: administrative decisions, corruption, fair hearing, judicial review, (non) instrumental

Procedia PDF Downloads 450
201 Management of Urinary Tract Infections by Nurse Practitioners in a Canadian Pediatric Emergency Department: A Retrospective Cohort Study

Authors: T. Mcgraw, F. N. Morin, N. Desai

Abstract:

Background: Antimicrobial resistance is a critical issue in global health care and a significant contributor to increased patient morbidity and mortality. Suspected urinary tract infection (UTI) is a key area of inappropriate antibiotic prescription in pediatrics. Management patterns of infectious diseases have been shown to vary by provider type within a single setting. The aim of this study was to assess compliance with national UTI management guidelines by nurse practitioners in a pediatric emergency department (ED). Methods: This was a post-hoc analysis of a retrospective cohort study to review and evaluate visits to a tertiary care freestanding pediatric emergency department. Patients were included if they were 60 days to 36 months old and discharged with a diagnosis of UTI or ‘rule-out UTI’ between July 2015 and July 2020. The primary outcome measure was the proportion of visits seen by Nurse Practitioners (NPs) that were associated with national guideline compliance in the diagnosis and treatment of suspected UTI. We performed descriptive statistics and comparative analyses to determine differences in practice patterns between NPs and physicians. Results: A total of 636 charts were reviewed, of which 402 patients met inclusion criteria. 17 patients were treated by NPs; 385 were treated by either Pediatric Emergency Medicine (PEM) physicians or non-PEM physicians. Overall, the proportion of infants receiving guideline-compliant care was 25.9% (21.8-30.4%). Of those who were prescribed antibiotics, 79.6% (74.7-83.8%) received first-line guideline-recommended therapy and 58.9% (53.8-63.8%) received fully compliant therapy with respect to age, dose, duration, and frequency. In patients treated by NPs, 16/17 (94% (95% CI: 73.0-99.0)) required antibiotics, 15/16 (93% (95% CI: 71.7-98.9)) were treated with a first-line agent (cephalexin), and 8/16 (50% (95% CI: 28-72)) were guideline compliant on dose and duration.
5/8 (63% (95% CI: 30.6-86.3)) were noncompliant because the dose was too high. There was no difference in receiving guideline-compliant empiric antibiotic therapy between physicians and nurse practitioners (OR: 0.837, CI: 0.302-2.69). Conclusion: In this post-hoc analysis, guideline noncompliance by nurse practitioners was common in children tested and treated for UTIs in a pediatric emergency department. Care by a Nurse Practitioner was not associated with a greater rate of noncompliance than care by a Pediatric Emergency Medicine physician. Future appropriately powered studies may focus on confirming these results.
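The kind of unadjusted odds ratio with a Wald 95% confidence interval reported in this abstract can be reproduced from a 2×2 table with a few lines of standard-library Python; the cell counts below are hypothetical placeholders, not the study's actual data:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI for a 2x2 table:
             compliant  noncompliant
    NP           a           b
    physician    c           d
    """
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only.
or_, lo, hi = odds_ratio_ci(10, 5, 4, 8)
print(or_)  # 4.0, with its Wald interval in (lo, hi)
```

A CI that straddles 1 (as the study's 0.302-2.69 does) indicates no detectable difference between provider types at the 5% level.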

Keywords: antibiotic stewardship, infectious disease, nurse practitioner, urinary tract infection

Procedia PDF Downloads 81
199 The Development and Change of Settlement in Tainan County (1904-2015) Using Historical Geographic Information System

Authors: Wei Ting Han, Shiann-Far Kung

Abstract:

Historically, most arable land in Tainan County was dry-farmed, relying on rainfall as the main water source. After the Chia-nan Irrigation System (CIS) was completed in 1930, the Chia-nan Plain benefited from a more efficient allocation of limited water sources for irrigation, thanks to its irrigation systems, drainage systems, and land improvement projects. Long-standing problems of drought, flooding and salt damage were also alleviated by the CIS. The canal greatly expanded the paddy field area and agricultural output, and Tainan County became one of the important agricultural producing areas in Taiwan. With the development of water conservancy facilities, and influenced by national policies and other factors, many agricultural communities and settlements formed indirectly, which also promoted changes in settlement patterns and internal structures. With the development of historical geographic information systems (HGIS), Academia Sinica built a WebGIS theme around its century-old maps of Taiwan, the most complete historical map database in Taiwan. It can be used to overlay historical maps of different periods, present the timeline of settlement change, and grasp changes in the natural environment as well as in the social sciences and humanities, with settlement changes presented as visualized areas. This study explores the historical development and spatial characteristics of the settlements in various areas of Tainan County, using large-scale areas to examine settlement changes and spatial patterns of the entire county through their dynamic spatio-temporal evolution from Japanese rule to the present day. Settlements of different periods are digitized and subjected to overlay analysis using the Taiwan historical topographic maps of 1904, 1921, 1956 and 1989. Moreover, document analysis is used to analyze the temporal and spatial changes of the regional environment and settlement structure.
In addition, comparative analysis is used to classify the spatial characteristics of, and differences between, the settlements, and to explore the influence of external environments in different temporal and spatial contexts, such as government policies, major construction, and industrial development. This paper helps to understand the evolution of settlement space and the internal structural changes in Tainan County.
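The core of the overlay analysis described in this abstract can be sketched in miniature: represent each survey year's settled cells as a set of grid coordinates and intersect the sets to classify persistence, emergence and abandonment. The coordinates below are invented for illustration, not digitized map data:

```python
# Each map layer is modeled as the set of grid cells occupied by settlements.
def overlay(earlier, later):
    """Classify settlement change between two survey periods."""
    return {
        "persisted": earlier & later,   # settled in both periods
        "emerged": later - earlier,     # new settlements
        "abandoned": earlier - later,   # settlements that disappeared
    }

# Invented cells standing in for digitized 1904 and 1921 settlement polygons.
y1904 = {(2, 3), (2, 4), (5, 1)}
y1921 = {(2, 3), (2, 4), (6, 6), (6, 7)}

change = overlay(y1904, y1921)
print(len(change["persisted"]), len(change["emerged"]), len(change["abandoned"]))
# -> 2 2 1
```

A real GIS performs the same set operations on polygon geometries rather than grid cells, chaining them across the 1904, 1921, 1956 and 1989 map layers.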

Keywords: historical geographic information system, overlay analysis, settlement change, Tainan County

Procedia PDF Downloads 105
198 Downtime Estimation of Building Structures Using Fuzzy Logic

Authors: M. De Iuliis, O. Kammouh, G. P. Cimellaro, S. Tesfamariam

Abstract:

Community resilience has gained significant attention due to recent unexpected natural and man-made disasters. Resilience is the process of maintaining livable conditions in the event of interruptions in normally available services. Estimating the resilience of systems, ranging from individuals to communities, is a formidable task due to the complexity involved in the process. The most challenging parameter in resilience assessment is the 'downtime', the time needed for a system to recover its services following a disaster event. Estimating the exact downtime of a system requires many inputs and resources that are not always obtainable. The uncertainties in downtime estimation are usually handled using probabilistic methods, which necessitate acquiring large amounts of historical data. The estimation process also involves ignorance, imprecision, vagueness, and subjective judgment. In this paper, a fuzzy-based approach to estimate the downtime of building structures following earthquake events is proposed. Fuzzy logic can integrate descriptive (linguistic) knowledge and numerical data into the fuzzy system. This ability allows the use of walk-down surveys, which collect data in linguistic or numerical form. The use of fuzzy logic permits a fast and economical estimation of parameters that involve uncertainties. The first step of the method is to determine the building’s vulnerability. A rapid visual screening is designed to acquire information about the analyzed building (e.g., year of construction, structural system, site seismicity, etc.). Then, fuzzy logic is implemented in a hierarchical scheme to determine the building damageability, which is the main ingredient for estimating the downtime. Generally, the downtime can be divided into three main components: downtime due to the actual damage (DT1); downtime caused by rational and irrational delays (DT2); and downtime due to utilities disruption (DT3).
In this work, DT1 is computed by relating the building damageability results obtained from the visual screening to component repair times already available in the literature. DT2 and DT3 are estimated using the REDi™ Guidelines. The downtime of the building is finally obtained by combining the three components. The proposed method also allows identifying the downtime corresponding to each of the three recovery states: re-occupancy, functional recovery, and full recovery. Future work is aimed at extending the current methodology from the downtime to the resilience of buildings, providing a simple tool that can be used by the authorities for decision making.
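A minimal sketch of the kind of fuzzy machinery this approach involves: triangular membership functions fuzzify a damageability score, a weighted-average defuzzification maps it to DT1, and DT2 and DT3 are added on top. The membership breakpoints and repair times below are invented, not the paper's calibrated values:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def dt1_days(damageability):
    """Weighted-average defuzzification of DT1 from damage-state memberships.
    Damage states and their repair times (days) are illustrative only."""
    states = {
        "slight": (tri(damageability, -5, 0, 5), 10),
        "moderate": (tri(damageability, 0, 5, 10), 90),
        "severe": (tri(damageability, 5, 10, 15), 360),
    }
    num = sum(mu * days for mu, days in states.values())
    den = sum(mu for mu, _ in states.values())
    return num / den if den else 0.0

# Total downtime = damage repair (DT1) + delays (DT2) + utility disruption (DT3).
dt1 = dt1_days(6.0)          # damageability score from the fuzzy hierarchy
dt2, dt3 = 45.0, 14.0        # placeholder REDi-style delay estimates
print(dt1 + dt2 + dt3)
```

The attraction of the fuzzy route is visible even in this toy: a single linguistic-scale score drives the numeric estimate without requiring a fitted probability distribution.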

Keywords: resilience, restoration, downtime, community resilience, fuzzy logic, recovery, damage, built environment

Procedia PDF Downloads 137
197 Acceptability Process of a Congestion Charge

Authors: Amira Mabrouk

Abstract:

This paper deals with the acceptability of an urban toll in Tunisia. Price-based regulation, i.e. an urban toll, is the outcome of a political process hampered by three-fold objectives: effectiveness, equity and social acceptability. This produces both economic interest groups and functions with incongruent preferences. The plausibility of this speculation goes hand in hand with the fact that these economic interest groups are also taxpayers who undeniably perceive an urban toll as an additional charge. This wariness is coupled with inquiries about the conditions of usage and the redistribution of the collected tax revenue, and the idea of the Leviathan state completes the picture. In a nutshell, although research related to road congestion proliferates, no de facto legitimacy can be pleaded. Nonetheless, the theory of urban tolls engenders economists’ questioning of ways to reduce the negative external effects linked to congestion; only then does the urban toll appear to bear an answer to these issues. Undeniably, the urban toll raises inherent conflicts, due to the apparent no-payment principle for a public asset as well as to the social perception of the new measure as a mere additional charge. However, when the main concern is effectiveness in its broad sense and social well-being, the main factors that determine the acceptability of such a tariff measure, along with the type of incentives, should be the object of a thorough, in-depth analysis. Before adopting this economic measure, one has to recognize the factors that intervene in the acceptability of a congestion toll, a question that has brought about a copious number of articles and reports that mostly lacked solid theoretical content. It is noticeable that nowadays uncertainties float over the exact nature of the acceptability process. Accepting a congestion tariff could differ from one era to another, from one region to another, from one population to another, etc.
Notably, this article, within a convenient time frame, attempts to bring into focus the link between the social acceptability of the urban congestion toll and the value of time, through a survey method barely employed in Tunisia: the stated preference method. How can the urban toll, as a tax, be defined, justified and made acceptable? How can an equitable and effective congestion toll tariff be reached? How can the costs of this urban toll be covered? In what way can we make the redistribution of the urban toll revenue visible and economically equitable? How can the redistribution of the revenue of the urban toll compensate the disadvantaged while introducing such a tariff measure? This paper will offer answers to these research questions, following the line of contribution of Jules Dupuit in 1844.

Keywords: congestion charge, social perception, acceptability, stated preferences

Procedia PDF Downloads 260
196 Administrative Supervision of Local Authorities’ Activities in Selected European Countries

Authors: Alina Murtishcheva

Abstract:

The development of an effective system of administrative supervision is a prerequisite for the functioning of local self-government on the basis of the rule of law. Administrative supervision of local self-government is of particular importance in the EU countries due to the influence of integration processes. The central authorities act on the international level; however, subnational authorities also have to implement European legislation in order to strengthen integration. Therefore, the central authority, being the connecting link between supranational and subnational authorities, should bear responsibility, including financial responsibility, for possible mistakes of subnational authorities. Consequently, the state should have sufficient mechanisms of control over local and regional authorities in order to correct their mistakes. At the same time, the control mechanisms do not deny the autonomy of local self-government. The paper analyses models of administrative supervision of local self-government in Ukraine, Poland, Lithuania, Belgium, Great Britain, Italy, and France. The research methods used in this paper are theoretical methods of analysis of scientific literature, constitutions, legal acts, Congress of Local and Regional Authorities of the Council of Europe reports, and constitutional court decisions, as well as comparative and logical analysis. The legislative basis of administrative supervision was scrutinized, and the models of administrative supervision were classified, including a priori control and ex-post control or their combination. The advantages and disadvantages of these models of administrative supervision are analysed. Compliance with Article 8 of the European Charter of Local Self-Government is of great importance for countries achieving common goals and sharing common values. However, countries under study have problems and, in some cases, demonstrate non-compliance with provisions of Article 8. 
Instances of non-conformity, such as the endorsement of a mayor by the Flemish Government in Belgium, supervision with a view to expediency in Great Britain, and the tendency to overuse supervisory power in Poland, are analysed. On the basis of this research, the tendencies of administrative supervision of local authorities’ activities in the selected European countries are described, and several recommendations are formulated for Ukraine as a country that has been granted EU candidate status. Having emphasised its willingness to become a member of the European community, Ukraine should not only follow the best European practices but also avoid the mistakes of countries that have long-term experience in developing the institution of local self-government. This project has received funding from the Research Council of Lithuania (LMTLT), agreement № P-PD-22-194.

Keywords: administrative supervision, decentralisation, legality, local authorities, local self-government

Procedia PDF Downloads 35
195 Simons, Ehrlichs and the Case for Polycentricity – Why Growth-Enthusiasts and Growth-Sceptics Must Embrace Polycentricity

Authors: Justus Enninga

Abstract:

Enthusiasts and skeptics of economic growth have little in common in their preferences for institutional arrangements that solve ecological conflicts. This paper argues that agreement between the two opposing schools can be found in the Bloomington School’s concept of polycentricity. Growth-enthusiasts, who will be referred to as Simons after the economist Julian Simon, and growth-skeptics, named Ehrlichs after the ecologist Paul R. Ehrlich, both profit from a governance structure in which many officials and decision structures are assigned limited and relatively autonomous prerogatives to determine, enforce and alter legal relationships. The paper advances this argument in four steps. First, it clarifies what Simons and Ehrlichs mean when they talk about growth, and what the arguments for and against growth-enhancing or degrowth policies are for each side. Secondly, the paper advances the concept of polycentricity as first introduced by Michael Polanyi and later refined for the study of governance by the Bloomington School of institutional analysis around the Nobel Prize laureate Elinor Ostrom. The Bloomington School defines polycentricity as a non-hierarchical, institutional, and cultural framework that makes possible the coexistence of multiple centers of decision making with different objectives and values, and that sets the stage for an evolutionary competition between the complementary ideas and methods of those different decision centers. In the third and fourth parts, it is shown how the concept of polycentricity is of crucial importance for growth-enthusiasts and growth-skeptics alike. The shorter third part surveys the literature on growth-enhancing policies and argues that large parts of the literature already accept that polycentric forms of governance, such as markets, the rule of law and federalism, are an important part of economic growth.
Part four delves into the more nuanced question of how a stagnant steady-state economy or even an economy that de-grows will still find polycentric governance desirable. While the majority of degrowth proposals follow a top-down approach by requiring direct governmental control, a contrasting bottom-up approach is advanced. A decentralized, polycentric approach is desirable because it allows for the utilization of tacit information dispersed in society and an institutionalized discovery process for new solutions to the problem of ecological collective action – no matter whether you belong to the Simons or Ehrlichs in a green political economy.

Keywords: degrowth, green political theory, polycentricity, institutional robustness

Procedia PDF Downloads 153
194 The Emergence of Memory at the Nanoscale

Authors: Victor Lopez-Richard, Rafael Schio Wengenroth Silva, Fabian Hartmann

Abstract:

Memcomputing is a computational paradigm that combines information processing and storage on the same physical platform. Key elements of this topic are devices with inherent memory, such as memristors, memcapacitors, and meminductors. Despite the widespread emergence of memory effects in various solid systems, a clear understanding of the basic microscopic mechanisms that trigger them is still a puzzling task. We report basic ingredients of the theory of solid-state transport, intrinsic to a wide range of mechanisms, as sufficient conditions for a memristive response that points to the natural emergence of memory. This emergence should be discernible under an adequate set of driving inputs, as highlighted by our theoretical prediction, and general common trends can thus be listed that become the rule rather than the exception, with contrasting signatures according to symmetry constraints, either built-in or induced by external factors at the microscopic level. Explicit analytical figures of merit for the memory modulation of the conductance are presented, unveiling very concise and accessible correlations between general intrinsic microscopic parameters, such as relaxation times, activation energies, and efficiencies (encountered throughout various fields of physics), and external drives: voltage pulses, temperature, illumination, etc. These building blocks of memory can be extended to a vast universe of materials and devices, with combinations of parallel and independent transport channels, providing an efficient and unified physical explanation for a wide class of resistive memory devices that have emerged in recent years. Its simplicity and practicality have also allowed a direct correlation with reported experimental observations, with the potential of pointing out the optimal driving configurations.
The main methodological tools combine three quantum transport approaches, a Drude-like model, the Landauer-Büttiker formalism, and field-effect transistor emulators, with the microscopic characterization of nonequilibrium dynamics. Both qualitative and quantitative agreement with available experimental responses is provided to validate the main hypothesis. This analysis also sheds light on the basic universality of the complex natural impedances of systems out of equilibrium and might help pave the way for new trends in the area of memory formation as well as in its technological applications.
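The generic mechanism this abstract points to, a conductance relaxing toward a drive-dependent equilibrium value with a finite relaxation time, can be sketched with a toy first-order model; the relaxation time, conductance bounds and equilibrium law below are invented for illustration and are not the authors' actual model:

```python
from math import sin, pi

def simulate(tau=0.05, g_min=0.1, g_max=1.0, f=1.0, steps=4000, dt=0.001):
    """Toy memristive response: dg/dt = (g_eq(V) - g) / tau under a
    sinusoidal voltage drive. Returns (V, I) samples (Euler integration)."""
    g = g_min
    out = []
    for n in range(steps):
        v = sin(2 * pi * f * n * dt)
        # Equilibrium conductance grows with |V| (illustrative choice).
        g_eq = g_min + (g_max - g_min) * abs(v)
        g += dt * (g_eq - g) / tau   # finite relaxation time -> memory
        out.append((v, g * v))       # Ohmic read: I = g(t) * V(t)
    return out

samples = simulate()
# Memory signature: at the same voltage, the current differs between the
# rising (n=200) and falling (n=300) parts of the sweep, i.e. the I-V
# curve traces a pinched hysteresis loop.
i_up, i_down = samples[200][1], samples[300][1]
print(i_down > i_up)  # True: same V, different I
```

When the relaxation time tau is taken to zero, g tracks g_eq instantly and the loop collapses, which mirrors the abstract's point that finite relaxation times are a sufficient ingredient for memristive behavior.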

Keywords: memories, memdevices, memristors, nonequilibrium states

Procedia PDF Downloads 72
193 Progressive Damage Analysis of Mechanically Connected Composites

Authors: Şeyma Saliha Fidan, Ozgur Serin, Ata Mugan

Abstract:

When performing verification analyses for the static and dynamic loads to which composite structures used in aviation are exposed, it is necessary to obtain the bearing strength limit value for mechanically connected composite structures. For this purpose, various tests are carried out in accordance with aviation standards. Many companies around the world perform these tests in accordance with aviation standards, but the test costs are very high. In addition, because coupons must be produced, coupon materials are expensive, and test times are long, it is necessary to simulate these tests on the computer. For this purpose, various test coupons were produced using the reinforcement and alignment angles of the composite radomes integrated into the aircraft. Glass-fiber-reinforced and quartz prepregs are used in the production of the coupons. The tests performed according to the American Society for Testing and Materials (ASTM) D5961 Procedure C standard were simulated on the computer. The analysis model was created in three dimensions in order to model the bolt-hole contact surface realistically and to obtain an exact bearing strength value. The finite element analysis was carried out with the Analysis System (ANSYS) software. Since a physical fracture cannot occur in analyses carried out in a virtual environment, a hypothetical failure is realized by reducing the material properties. The material property reduction coefficient was set to 10%, which is stated in the literature to give the most realistic approach. There are various theories for this method, which is called progressive failure analysis. Because the Hashin theory did not match our experimental results, the Puck progressive damage method was used in all coupon analyses.
When the experimental and numerical results are compared, the initial damage points, the resulting force drop points, the maximum damage load values, and the bearing strength values agree closely. Furthermore, low error rates and similar damage patterns were obtained in both the test and simulation models. In addition, the effects of various parameters, such as pre-stress, use of a bushing, the ratio of the distance between the bolt hole center and the plate edge to the hole diameter (E/D), the ratio of plate width to hole diameter (W/D), and hot-wet environmental conditions, were investigated with respect to the bearing strength of the composite structure.
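The property-degradation idea behind progressive failure analysis can be sketched in a few lines. The load-sharing model, ply values, and simple maximum-stress criterion below are illustrative assumptions for a one-dimensional toy problem, not the authors' ANSYS/Puck model:

```python
# Sketch of progressive failure: when a ply violates a failure
# criterion, its stiffness is knocked down by a degradation factor
# (10% retained, per the reduction coefficient above) and the load
# step is re-solved until no new failures occur.

DEGRADATION = 0.10  # failed material retains 10% of its stiffness (assumed form)

def progressive_failure(plies, load_steps, strength):
    """plies: list of ply stiffnesses; returns the load at total collapse."""
    failed = [False] * len(plies)
    for load in load_steps:
        changed = True
        while changed:                      # re-check until no new failures
            changed = False
            total_stiffness = sum(plies)
            for i, k in enumerate(plies):
                if failed[i]:
                    continue
                stress = load * k / total_stiffness   # stress share by stiffness
                if stress > strength:                 # toy max-stress criterion
                    plies[i] = k * DEGRADATION        # degrade, keep 10%
                    failed[i] = True
                    changed = True
        if all(failed):
            return load          # structure has fully failed at this load
    return None                  # survived all load steps
```

Re-solving within each load step mimics how a failed ply sheds load onto its neighbours, which may then fail in turn.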

Keywords: puck, finite element, bolted joint, composite

Procedia PDF Downloads 74
192 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a ‘reasonable’ PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis packages offer statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, shear strength of geologic structure, and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated ‘approximately’ or with allowances for some variability, rather than ‘exactly’. When it comes to numerical modelling, some of these factors are treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user’s discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permits.
A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF calculated using different methods can yield markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
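The Monte-Carlo route to PF mentioned above can be sketched briefly: sample the uncertain strength parameters, compute FS for each realisation, and take PF as the fraction of realisations with FS below 1. The planar-slide FS formula, parameter distributions, and values here are illustrative assumptions, not the case-study model:

```python
# Monte-Carlo estimate of probability of failure (PF): PF = P(FS < 1).
# All geometry and strength values below are hypothetical.
import math
import random

def fs_planar(cohesion, friction_deg, slope_deg=40.0,
              unit_weight=26.0, height=30.0):
    """Toy FS for a dry planar slide on a plane dipping at slope_deg."""
    psi = math.radians(slope_deg)
    weight = 0.5 * unit_weight * height ** 2 / math.tan(psi)  # block weight
    normal = weight * math.cos(psi)       # normal force on sliding plane
    driving = weight * math.sin(psi)      # driving (shear) force
    plane_length = height / math.sin(psi)
    resisting = (cohesion * plane_length
                 + normal * math.tan(math.radians(friction_deg)))
    return resisting / driving

def monte_carlo_pf(n=20_000, seed=0):
    """Sample cohesion and friction angle; PF = fraction with FS < 1."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        c = max(0.0, rng.gauss(60.0, 20.0))   # cohesion, kPa (assumed)
        phi = rng.gauss(35.0, 4.0)            # friction angle, deg (assumed)
        if fs_planar(c, phi) < 1.0:
            failures += 1
    return failures / n
```

Latin-Hypercube sampling would replace the independent draws with stratified ones; the PF definition itself is unchanged.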

Keywords: probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability

Procedia PDF Downloads 192
191 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information, and it has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the processes, technology, and procedures used in digital investigation are not keeping pace with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime: algorithms based on AI have proven highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. The goal of digital forensics and digital investigation is to provide objective data and conduct an assessment that assists in developing a plausible theory that can be presented as evidence in court; researchers and other authorities have used such data as evidence in court to convict suspects. This research paper aims to develop a multiagent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly, keeping the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. MADIK is implemented using the Java Agent Development Framework (JADE) in Eclipse, with a Postgres repository and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute.
As a result of loading the agents, 5 percent of the time was lost; the File Path Agent prescribed deleting 1,510, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
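The CBR classification step the framework relies on can be sketched as a simple retrieve-and-reuse loop: represent past investigations as feature vectors, retrieve the most similar stored case, and reuse its label. The feature names and cases below are hypothetical, not taken from MADIK:

```python
# Minimal case-based reasoning (CBR) retrieval sketch: classify a new
# investigation by the label of its most similar past case.

def similarity(a, b):
    """Fraction of matching features between two cases (dicts)."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def retrieve(case_base, new_case):
    """Return (best_case, label) for the most similar stored case."""
    best = max(case_base, key=lambda c: similarity(c["features"], new_case))
    return best, best["label"]

# Hypothetical case base of past investigations.
case_base = [
    {"features": {"media": "disk", "artifact": "exe", "network": False},
     "label": "malware"},
    {"features": {"media": "memory", "artifact": "log", "network": True},
     "label": "intrusion"},
]
_, label = retrieve(case_base, {"media": "disk", "artifact": "exe",
                                "network": False})
```

In a full CBR cycle, the reused solution would then be revised by an expert and retained in the case base.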

Keywords: artificial intelligence, computer science, criminal investigation, digital forensics

Procedia PDF Downloads 185
190 A Low-Cost and Easy-To-Operate Remediation Technology of Heavy Metals Contaminated Agricultural Soil

Authors: Xiao-Hua Zhu, Xin Yuan, Yi-Ran Zhao

Abstract:

High-cadmium pollution in rice is a serious problem in many parts of China. Many kinds of remediation technologies have been tested and applied on farmland. Because farmland must remain productive, most technologies are inappropriate, as they destroy the tillage soil layer; the heavy labour and high cost of many technologies also restrict their application. The concept of a 'Root Micro-Geochemical Barrier' was proposed to reduce cadmium (Cd) bioavailability and the Cd concentration in rice. Remediation and mitigation techniques were demonstrated on contaminated farmland downstream of a mine. Following the pattern of rice growth, Cd is absorbed by the crop at every growth stage, and uptake efficiency is generally highest early in the tillering stage. A method of protecting the crop from heavy metal pollution therefore needs to act from the early growth stages. Several materials with remediation properties were considered. Such materials create a barrier that prevents Cd from being absorbed by the crop throughout the growing process, because they adsorb soil Cd and deprive it of its migration activity. The materials should also be introduced into the crop-growing system cheaply and as early as possible. Each rice plant has a small root zone, with roots extending roughly 15 cm deep and 15 cm wide; this small root radiation area makes it possible for a small amount of adsorbent to capture nearly all the Cd approaching the roots. By mixing the remediation materials with the seed-raising soil and adding them to the tillage soil when the seedlings are transplanted, the soil-Cd activity within the root zone can be controlled to reduce the amount of Cd absorbed by the crop. Of course, the mineral materials must have sufficient adsorptive capacity and introduce no additional pollution.
More than 3,000 square meters of farmland have been remediated. With the application of the root micro-geochemical barrier, the Cd concentration in rice and the remediation cost were reduced by 90% and 80%, respectively, with little extra labour for the farmers. Cd concentrations in rice from remediated farmland were controlled below 0.1 ppm, and remediating one acre of contaminated cropland costs less than $100. The concept is particularly advantageous for the remediation of Cd-contaminated paddy fields, especially fields with ongoing outside pollution sources.

Keywords: cadmium pollution, growth stage, cost, root micro-geochemistry barrier

Procedia PDF Downloads 61
189 Design and Implementation of Generative Models for Odor Classification Using Electronic Nose

Authors: Kumar Shashvat, Amol P. Bhondekar

Abstract:

Among the five senses, odor is the most evocative and the least understood. Odor testing has long seemed mysterious, and odor data has remained elusive to most practitioners. The problem of recognizing and classifying odors is important to solve: the ability to smell and predict whether an artifact is still usable or has become unfit for consumption, and the translation of this problem into a model, merits consideration. The common industrial standard for this classification is color-based; however, odor can be a better classifier than color and, if incorporated into a machine, would be highly useful. For cataloguing the odors of peas, trees, and cashews, various discriminative approaches have been used. Discriminative approaches offer good predictive performance and have been widely used in many applications, but they cannot make effective use of unlabeled information. In such scenarios, generative approaches are more applicable, as they can handle problems such as settings where the variability in the range of possible input vectors is enormous. Generative models are used in machine learning either to model data directly or as an intermediate step in forming a probability density function. The algorithms Linear Discriminant Analysis and the Naive Bayes classifier were used to classify the odor of cashews. Linear Discriminant Analysis is a method used in data classification, pattern recognition, and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events. The Naive Bayes algorithm is a classification approach based on Bayes' rule and a set of conditional independence assumptions. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem.
The main advantage of generative models is that they make stronger assumptions about the data, specifically about the distribution of the predictors given the response variables. The instrument used for artificial odor sensing and classification is the electronic nose, a device designed to imitate the human sense of smell by analysing individual chemicals or chemical mixtures. The experimental results were evaluated using the performance measures accuracy, precision, and recall, and show that the overall performance of Linear Discriminant Analysis was better than that of the Naive Bayes classifier on the cashew dataset.
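The Gaussian Naive Bayes idea used here can be sketched from scratch: fit a per-class mean and variance for each sensor feature, then classify by the highest log-posterior under the independence assumption. The tiny two-feature "sensor" dataset below is fabricated for illustration; the paper used e-nose responses:

```python
# Gaussian Naive Bayes from scratch: per-class Gaussians per feature,
# classification by maximum log-posterior.
import math

def fit(X, y):
    """Return {label: (prior, means, variances)} from feature vectors X."""
    model = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        n, d = len(rows), len(rows[0])
        means = [sum(r[j] for r in rows) / n for j in range(d)]
        variances = [sum((r[j] - means[j]) ** 2 for r in rows) / n + 1e-9
                     for j in range(d)]          # small floor avoids div-by-zero
        model[label] = (n / len(y), means, variances)
    return model

def predict(model, x):
    """Pick the class with the highest log-posterior under independence."""
    def log_post(prior, means, variances):
        lp = math.log(prior)
        for xj, m, v in zip(x, means, variances):
            lp += -0.5 * math.log(2 * math.pi * v) - (xj - m) ** 2 / (2 * v)
        return lp
    return max(model, key=lambda lab: log_post(*model[lab]))
```

LDA differs in that it fits a shared covariance across classes, yielding a linear decision boundary rather than per-class variances.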

Keywords: odor classification, generative models, naive bayes, linear discriminant analysis

Procedia PDF Downloads 357
188 Local Binary Patterns-Based Statistical Data Analysis for Accurate Soccer Match Prediction

Authors: Mohammad Ghahramani, Fahimeh Saei Manesh

Abstract:

Winning a soccer game depends on thorough and deep analysis of the ongoing match. At the same time, large gambling companies are in vital need of such analysis to reduce their losses to their customers. In this research work, we perform deep, real-time analysis on soccer matches around the world; our work is distinguished from others by its focus on particular seasons, teams, and partial analytics. Our contributions are presented in the platform called “Analyst Masters.” First, we introduce the various sources of information available for soccer analysis for teams around the world, which helped us record live statistical data and information from more than 50,000 soccer matches a year. Our second and main contribution is our proposed in-play performance evaluation. The third contribution is the development of new features from stable soccer matches. The statistics of soccer matches and their odds, both pre-match and in-play, are represented in image format as a function of time, including the halftime. Local Binary Patterns (LBP) are then employed to extract features from the image. Our analyses reveal interesting features and rules once a soccer match has reached sufficient stability. For example, our “8-minute rule” states that if 'Team A' scores a goal and can maintain the result for at least 8 minutes, then a stable match will end in their favor. We could also make accurate pre-match predictions of scoring less/more than 2.5 goals. We use Gradient Boosted Trees (GBT) to extract highly related features. Once the features are selected from this pool of data, decision trees decide whether the match is stable. A stable match is then passed to a post-processing stage that checks its properties, such as bettors’ and punters’ behavior and its statistical data, before issuing the prediction. The proposed method was trained using 140,000 soccer matches and tested on more than 100,000 samples, achieving 98% accuracy in selecting stable matches.
Our database of 240,000 matches shows that one can obtain over 20% betting profit per month using Analyst Masters. Such consistent profit outperforms human experts and shows the inefficiency of the betting market; top soccer tipsters achieve 50% accuracy and 8% monthly profit on average, and only on regional matches. Both our collected database of more than 240,000 soccer matches from 2012 and our algorithm would greatly benefit coaches and punters seeking accurate analysis.
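The LBP feature-extraction step described above can be sketched in plain Python: each interior pixel is encoded by comparing it with its 8 neighbours, and the resulting code histogram serves as a texture feature vector. Any tiny "match image" fed to it is illustrative; the paper builds its images from match statistics and odds over time:

```python
# Local Binary Patterns (LBP): 8-bit neighbourhood codes plus a
# normalised histogram usable as a classifier feature vector.

def lbp_codes(image):
    """Return the 8-bit LBP code for every interior pixel of a 2D list."""
    # clockwise neighbour offsets starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for i in range(1, len(image) - 1):
        for j in range(1, len(image[0]) - 1):
            center = image[i][j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if image[i + di][j + dj] >= center:
                    code |= 1 << bit        # set bit when neighbour >= center
            codes.append(code)
    return codes

def lbp_histogram(image, bins=256):
    """Normalised histogram of LBP codes, usable as a feature vector."""
    codes = lbp_codes(image)
    hist = [0.0] * bins
    for c in codes:
        hist[c] += 1.0 / len(codes)
    return hist
```

Production implementations typically use rotation-invariant or "uniform" LBP variants to shrink the histogram, but the encoding idea is the same.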

Keywords: soccer, analytics, machine learning, database

Procedia PDF Downloads 214
187 Integrating Data Mining with Case-Based Reasoning for Diagnosing Sorghum Anthracnose

Authors: Mariamawit T. Belete

Abstract:

Cereal production and marketing are the means of livelihood for millions of households in Ethiopia. However, cereal production is constrained by technical and socio-economic factors, and among the technical factors, cereal crop diseases are a major contributor to low yields. The aim of this research is to develop an integration of data mining and a knowledge-based system for sorghum anthracnose disease diagnosis that helps agriculture experts and development agents make timely decisions. The anthracnose diagnosis system gathers information from the Melkassa Agricultural Research Center and scores anthracnose on a severity scale. Empirical research was designed for data exploration, modeling, and confirmatory procedures for testing hypotheses and predictions to draw sound conclusions. WEKA (Waikato Environment for Knowledge Analysis) was employed for the modeling. Knowledge-based systems encompass a variety of approaches based on the knowledge representation method; case-based reasoning (CBR) is one of the most popular. CBR is a problem-solving strategy that uses previous cases to solve new problems. The system utilizes hidden knowledge extracted by employing clustering algorithms, specifically K-means clustering, on a sampled anthracnose dataset. Clustered cases with centroid values are mapped to jCOLIBRI, and the integrator application is created using NetBeans with JDK 8.0.2. The important stages of a case-based reasoning model are retrieval, the similarity-measuring stage; reuse, which allows the domain expert to adapt the retrieved solution to the current case; revision, to test the solution; and retention, to store the confirmed solution in the case base for future use. The system was evaluated for both performance and user acceptance. Seven test cases were used to test the prototype.
Experimental results show that the system achieves average precision and recall values of 70% and 83%, respectively. User acceptance testing was also performed with five domain experts, and an average acceptance of 83% was achieved. Although the results of this study are promising, further investigation of hybrid approaches, such as rule-based reasoning and a pictorial retrieval process, is recommended.
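The K-means step used to extract hidden case structure can be sketched as the usual two-phase loop: assign each point to its nearest centroid, then recompute each centroid as the mean of its members. The 2-D sample points are fabricated; the real features come from the sampled anthracnose dataset:

```python
# K-means clustering sketch (Lloyd's algorithm) for 2-D points.
import math

def kmeans(points, k, iters=50):
    """Return (centroids, labels); the first k points seed the centroids."""
    centroids = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by Euclidean distance
        labels = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        # update step: each centroid becomes the mean of its members
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    return centroids, labels
```

In the paper's pipeline, each resulting cluster (with its centroid) becomes a case mapped into jCOLIBRI for retrieval.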

Keywords: sorghum anthracnose, data mining, case based reasoning, integration

Procedia PDF Downloads 61