Search results for: unknown input observer
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3122


2492 The Use of Bimodal Subtitles on Netflix English Movies in Enhancing Vocabulary

Authors: John Lloyd Angolluan, Jennile Caday, Crystal Mae Estrella, Reike Alliyah Taladua, Zion Michael Ysulat

Abstract:

An adequate vocabulary is one of the prerequisites for communicating in English. Nowadays, people increasingly watch films through streaming services such as Netflix, for which global demand has taken off in recent years. This research aims to determine whether the use of bimodal subtitles on Netflix English movies can enhance vocabulary. The study is quantitative and utilizes a descriptive method to explore the use of bimodal subtitles on Netflix English movies in enhancing students' vocabulary. The respondents were selected second-year English majors of Rizal Technological University (Pasig and Boni campuses), chosen through purposive sampling. The researchers administered a survey questionnaire through Google Forms. The weighted mean was used to evaluate the students' responses to the statements of the problems of the study. The findings revealed that bimodal subtitles on Netflix English movies enhanced students' vocabulary acquisition by providing learners with access to large amounts of authentic and comprehensible language input, whether incidentally or intentionally, and that bimodal subtitles help students recognize vocabulary, which has a positive impact on their vocabulary building. The researchers therefore advocate watching English Netflix movies with bimodal subtitles during the language learning process, which may increase learners' motivation and their use of bimodal subtitles in learning new vocabulary. Bimodal subtitles should be incorporated into educational film activities to provide students with a vast amount of input to expand their vocabulary.
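As a minimal sketch of the weighted-mean computation the abstract describes, assuming each survey statement is rated on a 5-point Likert scale (the response counts below are invented for illustration, not taken from the study):

```python
# Hypothetical weighted-mean computation for one Likert-scale survey item:
# the weighted mean is the sum of (rating x response count) divided by the
# total number of responses.

def weighted_mean(counts):
    """counts[i] = number of respondents who chose rating i+1 (scale 1..5)."""
    total = sum(counts)
    return sum((i + 1) * c for i, c in enumerate(counts)) / total

# e.g. 2 respondents chose 1, 3 chose 2, 10 chose 3, 15 chose 4, 20 chose 5
print(weighted_mean([2, 3, 10, 15, 20]))  # 3.96
```

The resulting mean per statement is then typically mapped back to a verbal interpretation band (e.g. "agree", "strongly agree").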

Keywords: bimodal subtitles, Netflix, English movies, vocabulary, subtitle, language, media

Procedia PDF Downloads 68
2491 Effect of Hypoxia on AOX2 Expression in Chlamydomonas reinhardtii

Authors: Maria Ostroukhova, Zhanneta Zalutskaya, Elena Ermilova

Abstract:

The alternative oxidase (AOX) mediates cyanide-resistant respiration, which bypasses proton-pumping complexes III and IV of the cytochrome pathway to directly transfer electrons from reduced ubiquinone to molecular oxygen. In Chlamydomonas reinhardtii, AOX is a monomeric protein encoded by two genes of discrete subfamilies, AOX1 and AOX2. Although AOX has been proposed to play essential roles in stress tolerance, the role of the AOX2 subfamily is largely unknown. In C. reinhardtii, AOX2 was initially identified as a constitutively low-expressed gene. Like other photosynthetic organisms, C. reinhardtii cells frequently experience periods of hypoxia. To examine AOX2 transcriptional regulation and the role of AOX2 in hypoxia adaptation, real-time PCR analysis and the artificial microRNA method were employed. Two experimental approaches were used to induce anoxic conditions: dark-anaerobic and light-anaerobic conditions. C. reinhardtii cells exposed to oxygen deprivation showed increased AOX2 mRNA levels. By contrast, AOX1 was not an anoxia-responsive gene. In C. reinhardtii, a subset of genes is regulated by the transcription factor CRR1 under anaerobic conditions. Notably, the AOX2 promoter region contains a potential motif for CRR1 binding. Therefore, the role of CRR1 in the control of AOX2 transcription was tested. The CRR1-underexpressing strains, which were generated and characterized in this work, exhibited low levels of AOX2 transcripts under anoxic conditions. However, the transformants still slightly induced AOX2 gene expression in the darkness. This confirms our suggestion that darkness is a regulatory stimulus for AOX genes in C. reinhardtii; thus, other factors must contribute to AOX2 promoter activity under dark-anoxic conditions. Moreover, knock-down of CRR1 caused a complete loss of AOX2 expression under light-anoxic conditions. These results indicate that (1) CRR1 is required for AOX2 expression during hypoxia, and (2) the AOX2 gene is regulated by CRR1 together with yet-unknown regulatory factor(s). In addition, AOX2-underexpressing strains were generated, and the analysis of these amiRNA-AOX2 strains suggested a role of this alternative oxidase in the hypoxia adaptation of the alga. In conclusion, the results reported here show that the C. reinhardtii AOX2 gene is stress-inducible, that the CRR1 transcription factor is involved in the regulation of AOX2 gene expression in the absence of oxygen, and that AOX2, but not AOX1, functions under oxygen deprivation. This work was supported by the Russian Science Foundation (research grant № 16-14-10004).

Keywords: alternative oxidase 2, artificial microRNA approach, chlamydomonas reinhardtii, hypoxia

Procedia PDF Downloads 227
2490 Ultra-Reliable Low Latency V2X Communication for Expressways Using a Multiuser Scheduling Algorithm

Authors: Vaishali D. Khairnar

Abstract:

The main aim is to provide low-latency and highly reliable communication facilities for vehicles in the automobile industry; vehicle-to-everything (V2X) communication basically intends to increase expressway road safety and effectiveness. The Ultra-Reliable Low-Latency Communications (URLLC) approach and cellular networks are applied in combination with Mobile Broadband (MBB), particularly in expressway safety-based driving applications. Expressway vehicle drivers will communicate in V2X systems using sixth-generation (6G) communication systems, which support very high-speed mobility. As a result, ensuring reliable and consistent wireless communication links and improving their quality to increase channel gain becomes a challenge that needs to be addressed. To overcome this challenge, we propose a unique multi-user scheduling algorithm for ultra-massive multiple-input multiple-output (MIMO) systems using 6G. Offering quality-of-service (QoS) to distinct service groups with synchronized contemporaneous traffic, in wideband wireless network access under both high- and medium-traffic conditions on a highway like the Mumbai-Pune Expressway, is a critical problem. Opportunistic MAC (OMAC), a way of scheduling communication across a wireless link that varies in space and time, may overcome the above-mentioned challenge. Therefore, a multi-user scheduling algorithm is proposed for MIMO systems using a cross-layered MAC protocol to achieve URLLC and high reliability in V2X communication.

Keywords: ultra-reliable low latency communications, vehicle-to-everything communication, multiple-input multiple-output systems, multi-user scheduling algorithm

Procedia PDF Downloads 76
2489 Is Obesity Associated with CKD-(unknown) in Sri Lanka? A Protocol for a Cross-Sectional Survey

Authors: Thaminda Liyanage, Anuga Liyanage, Chamila Kurukulasuriya, Sidath Bandara

Abstract:

Background: The burden of chronic kidney disease (CKD) is growing rapidly around the world, particularly in Asia. Over the last two decades, Sri Lanka has experienced an epidemic of CKD, with an ever-growing number of patients pursuing medical care due to CKD and its complications, especially in the "Mahaweli" river basin in the north central region of the island nation. This is apparently a new form of CKD, not attributable to conventional risk factors such as diabetes mellitus, hypertension or infection, and widely termed "CKD-unknown" or "CKDu". In the past decade, a number of small-scale studies were conducted to determine the aetiology, prevalence and complications of CKDu in the North Central region. These hospital-based studies did not provide an accurate estimate of the problem, as merely 10% or less of people with CKD are aware of their diagnosis, even in developed countries with better access to medical care. Interestingly, similar observations have been made on the changing epidemiology of obesity in the region, but no formal study has been conducted to date to determine the magnitude of the obesity burden. Moreover, whether increasing obesity in the region is associated with the CKD epidemic is yet to be explored. Methods: We will conduct an area-wide cross-sectional survey among all adult residents of "Mahaweli" development project area 5, in the North Central Province of Sri Lanka. We will collect relevant medical history, anthropometric measurements, and blood and urine samples for haematological and biochemical analysis. We expect a participation rate of 75%-85% of all eligible participants. Participation in the study is voluntary; no incentives will be provided for participation. All analyses will be conducted in a central laboratory, and data will be stored securely. We will calculate the prevalence of obesity and chronic kidney disease, overall and by stage, using the total number of participants as the denominator, and report per 1000 population. The association of obesity and CKD will be assessed with regression models, adjusted for potential confounding factors and stratified by potential effect modifiers where appropriate. Results: This study will provide accurate information on the prevalence of obesity and CKD in the region. Furthermore, it will explore the association between obesity and CKD, although causation may not be confirmed. Conclusion: Obesity and CKD are increasingly recognized as major public health problems in Sri Lanka. Clearly, documenting the magnitude of the problem is the essential first step. Our study will provide this vital information, enabling the government to plan a coordinated response to tackle both obesity and CKD in the region.

Keywords: BMI, Chronic Kidney Disease, obesity, Sri Lanka

Procedia PDF Downloads 255
2488 The Application of a Neural Network in the Reworking of Accu-Chek to Wrist Bands to Monitor Blood Glucose in the Human Body

Authors: J. K Adedeji, O. H Olowomofe, C. O Alo, S.T Ijatuyi

Abstract:

The issue of high blood sugar, the effects of which might end up as diabetes mellitus, is now becoming a rampant cardiovascular disorder in our community. In recent times, a lack of awareness among most people has made this disease a silent killer. The situation calls for urgency, hence the need to design a device, such as a wrist watch, that serves as a monitoring tool to alert those living with high blood glucose to danger ahead of time, as well as to introduce a mechanism for checks and balances. The neural network architecture assumed an 8-15-10 configuration, with eight neurons at the input stage (including a bias), 15 neurons in the hidden layer at the processing stage, and 10 neurons at the output stage indicating likely symptom cases. The inputs are formed using the exclusive OR (XOR), with the expectation of getting an XOR output as the threshold value for diabetic symptom cases. The neural algorithm is coded in Java with 1000 epoch runs to bring the errors to the barest minimum. The internal circuitry of the device comprises compatible hardware matching the nature of each input neuron. Light-emitting diodes (LEDs) of red, green, and yellow are used as the network's outputs to show pattern recognition for severe cases, pre-hypertensive cases, and normal cases without traces of diabetes mellitus. The research concluded that the neural network is a more efficient Accu-Chek design tool for the proper monitoring of high glucose levels than conventional methods of blood testing.
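The 8-15-10 forward pass described above can be sketched as follows. This is an illustrative Python rendering, not the authors' Java implementation; the random weights and the example input vector are placeholders (a trained network would have learned weights):

```python
import math
import random

# Illustrative sketch of an 8-15-10 feedforward network: 8 input neurons
# (7 features plus a bias), 15 hidden neurons, 10 output neurons, with
# sigmoid activations throughout. Weights here are random placeholders.

random.seed(0)

def layer(n_in, n_out):
    """Random weight matrix for a fully connected layer (placeholder values)."""
    return [[random.uniform(-1.0, 1.0) for _ in range(n_in)] for _ in range(n_out)]

W_hidden = layer(8, 15)   # input -> hidden
W_output = layer(15, 10)  # hidden -> output

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs):
    """inputs: 7 binary features plus a bias term of 1.0 (8 values total)."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in W_hidden]
    return [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in W_output]

out = forward([1, 0, 1, 1, 0, 0, 1, 1.0])
print(len(out))  # 10 output activations, one per symptom case
```

In the device, the 10 output activations would be thresholded and mapped to the red, green, and yellow LEDs.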

Keywords: Accu-Chek, diabetes, neural network, pattern recognition

Procedia PDF Downloads 134
2487 Innovative Predictive Modeling and Characterization of Composite Material Properties Using Machine Learning and Genetic Algorithms

Authors: Hamdi Beji, Toufik Kanit, Tanguy Messager

Abstract:

This study aims to construct a predictive model capable of foreseeing the linear elastic and thermal characteristics of composite materials, drawing on a multitude of influencing parameters. These parameters encompass the shape of inclusions (circular, elliptical, square, triangular), their spatial coordinates within the matrix, orientation, volume fraction (ranging from 0.05 to 0.4), and variations in contrast (spanning from 10 to 200). A variety of machine learning techniques are deployed, including decision trees, random forests, support vector machines, k-nearest neighbors, and an artificial neural network (ANN). Moreover, this research goes beyond prediction by delving into an inverse analysis using genetic algorithms, with the intent of unveiling the intrinsic characteristics of composite materials by evaluating their thermomechanical responses. The foundation of this research lies in the establishment of a comprehensive database covering the array of input parameters mentioned earlier. This database, enriched with this diversity of input variables, serves as a bedrock for the creation of machine learning and genetic algorithm-based models, which are trained not only to predict but also to elucidate the mechanical and thermal behaviour of composite materials. Remarkably, the coupling of machine learning and genetic algorithms has proven highly effective, yielding predictions with remarkable accuracy, with scores ranging between 0.97 and 0.99. This achievement demonstrates the potential of this approach in the field of materials engineering.

Keywords: machine learning, composite materials, genetic algorithms, mechanical and thermal properties

Procedia PDF Downloads 46
2486 Evaluation of Efficiency of Naturally Available Disinfectants and Filter Media in Conventional Gravity Filters

Authors: Abhinav Mane, Kedar Karvande, Shubham Patel, Abhayraj Lodha

Abstract:

Gravity filters are one of the most commonly used, economically viable and moderately efficient water purification systems. Their efficiency is mainly determined by the type of filter media installed and its location within the filter mass. Several researchers provide valuable input into the choice of filter media; however, the choice is mainly restricted to chemical combinations of different substances, which makes it dependent on factory-made filter media, with no cheap alternatives available. This paper presents the use, in a conventional gravity filter, of disinfectants and filter media that are either available naturally or can be prepared from natural resources. A small-scale laboratory investigation was made with variation in filter media thickness and its location from the top surface of the filter. A rigid steel-frame-based, custom-fabricated test setup was used to facilitate placement of the filter media at different heights within the filter mass. Finely ground, sun-dried Neem (Azadirachta indica) extracts and porous burnt-clay pads were used as two distinct filter media, placed in isolation as well as in combination with each other. Groundwater available in the Marathwada region of Maharashtra, India, which mainly contains harmful constituents such as arsenic, chlorides, iron, magnesium and manganese, was treated in the filters fabricated in the present study. The evaluation was made mainly in terms of input/output water quality assessed through laboratory tests. The present paper offers a cheap and eco-friendly solution for preparing a gravity filter using household skills and locally available resources.

Keywords: filter media, gravity filters, natural disinfectants, porous clay pads

Procedia PDF Downloads 244
2485 Exploring Time-Series Phosphoproteomic Datasets in the Context of Network Models

Authors: Sandeep Kaur, Jenny Vuong, Marcel Julliard, Sean O'Donoghue

Abstract:

Time-series data are useful for modelling as they enable model evaluation. However, when reconstructing models from phosphoproteomic data, non-exact methods are often utilised, as the knowledge regarding the network structure, such as which kinases and phosphatases lead to the observed phosphorylation state, is incomplete. Thus, such reactions are often hypothesised, which gives rise to uncertainty. Here, we propose a framework, implemented via a web-based tool (as an extension to Minardo), which, given time-series phosphoproteomic datasets, can generate κ models. The incompleteness and uncertainty in the generated model and reactions are clearly presented to the user visually. Furthermore, we demonstrate, via a toy EGF signalling model, the use of algorithmic verification to verify κ models. Manually formulated requirements were evaluated against the model, leading to the highlighting of the nodes causing unsatisfiability (i.e., error-causing nodes). We aim to integrate such methods into our web-based tool and demonstrate how the identified erroneous nodes can be presented to the user visually. Thus, in this research we present a framework that enables a user to explore phosphoproteomic time-series data in the context of models. The observer can visualise which reactions in the model are highly uncertain and which nodes cause incorrect simulation outputs. A tool such as this enables an end-user to determine which empirical analysis to perform to reduce uncertainty in the presented model, thus enabling a better understanding of the underlying system.

Keywords: κ-models, model verification, time-series phosphoproteomic datasets, uncertainty and error visualisation

Procedia PDF Downloads 238
2484 Management Effects on Different Sustainable Agricultural Practices with Diverse Topography

Authors: Kusay Wheib, Alexandra Krvchenko

Abstract:

Crop yields are influenced by many factors, including natural ones, such as the soil and environmental characteristics of the agricultural land, as well as man-made ones, such as management applications. One of the factors that frequently affects crop yields in undulating Midwest landscapes is topography, which controls the movement of water and nutrients necessary for plant life. The main objective of this study is to examine how field topography influences the performance of different management practices in the undulating terrain of southwest Michigan. A total of 26 agricultural fields, ranging in size from 1.1 to 7.4 ha, from the Scale-Up experiment at Kellogg Biological Station were included in the study. The two studied factors were crop species with three levels, i.e., corn (Zea mays L.), soybean (Glycine max L.), and wheat (Triticum aestivum L.), and management practice with three levels, i.e., conventional, low-input, and organic management. They were compared under three contrasting topographical settings, namely, summit (including summits and shoulders), slope (including backslopes), and depression (including footslopes and toeslopes). Yield data for the years 2007 through 2012 were processed, cleaned, and filtered; the average yield was then calculated for each field, topographic setting, and year. Topography parameters, including terrain, slope, curvature, flow direction, and wetness index, were computed in an ArcGIS environment for each topographic class of each field to examine their effects on yield. Results showed that topographical depressions produced the greatest yields in most studied fields, while managements with chemical inputs, both low-input and conventional, resulted in higher yields than the organic management.
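The wetness index mentioned above is commonly computed as the topographic wetness index, TWI = ln(a / tan β), where a is the upslope contributing area per unit contour width and β is the local slope. A minimal sketch under that standard definition (the example values are illustrative, not from the study):

```python
import math

# Topographic wetness index: TWI = ln(a / tan(beta)).
# Depressions accumulate large upslope areas at low slopes, giving high TWI;
# summits and backslopes give low TWI.

def wetness_index(upslope_area, slope_deg):
    """upslope_area: contributing area per unit contour width (m); slope in degrees."""
    return math.log(upslope_area / math.tan(math.radians(slope_deg)))

# Illustrative comparison: a depression cell vs. a backslope cell
print(wetness_index(500.0, 2.0), wetness_index(50.0, 10.0))
```

This is why depressions, which collect water and nutrients, tend toward the highest wetness-index values and, per the results above, the greatest yields.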

Keywords: sustainable agriculture, precision agriculture, topography, yield

Procedia PDF Downloads 102
2483 Effect of Mach Number on Gust-Airfoil Interaction Noise

Authors: ShuJiang Jiang

Abstract:

The interaction of turbulence with an airfoil is an important noise source in many engineering fields, including helicopters, turbofans, and contra-rotating open rotor engines, where turbulence generated in the wake of upstream blades interacts with the leading edge of downstream blades and produces aerodynamic noise. One approach to studying turbulence-airfoil interaction noise is to model the oncoming turbulence as harmonic gusts. A compact noise source produces a dipole-like sound directivity pattern. However, when the acoustic wavelength is much smaller than the airfoil chord length, the airfoil needs to be treated as a non-compact source, the gust-airfoil interaction becomes more complicated, and multiple lobes appear in the radiated sound directivity. Capturing the short acoustic wavelength is a challenge for numerical simulations. In this work, simulations are performed for gust-airfoil interaction at different Mach numbers, using a high-fidelity direct Computational AeroAcoustic (CAA) approach based on a spectral/hp element method, verified against a CAA benchmark case. It is found that the squared sound pressure varies approximately as the 5th power of the Mach number, changing slightly with observer location. This scaling law gives a better sound prediction than flat-plate theory for thicker airfoils. In addition, another prediction method, based on flat-plate theory and CAA simulation, is proposed to give better predictions than the scaling law for thicker airfoils.
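The M^5 scaling reported above has a simple back-of-envelope consequence: if mean-square sound pressure scales as the 5th power of Mach number, the sound pressure level difference between two Mach numbers is 10 · 5 · log10(M2/M1) dB. A minimal sketch (the Mach values are illustrative, not from the paper):

```python
import math

# If squared sound pressure ~ M^5, the SPL change between two Mach numbers is
# delta_SPL = 10 * 5 * log10(M2 / M1) dB.

def spl_increase_db(m1, m2, exponent=5.0):
    """SPL change (dB) implied by a p^2 ~ M^exponent scaling law."""
    return 10.0 * exponent * math.log10(m2 / m1)

# Doubling the Mach number under the M^5 law:
print(round(spl_increase_db(0.3, 0.6), 1))  # 15.1 dB
```

For comparison, the classical M^6 dipole scaling for compact sources would give 10 · 6 · log10(2) ≈ 18.1 dB for the same doubling, so the fitted exponent is a substantive result.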

Keywords: aeroacoustics, gust-airfoil interaction, CFD, CAA

Procedia PDF Downloads 58
2482 Credit Card Fraud Detection with Ensemble Model: A Meta-Heuristic Approach

Authors: Gong Zhilin, Jing Yang, Jian Yin

Abstract:

The purpose of this paper is to develop a novel system for credit card fraud detection based on sequential modeling of data using hybrid deep learning models. The projected model encapsulates five major phases: pre-processing, imbalanced-data handling, feature extraction, optimal feature selection, and fraud detection with an ensemble classifier. The collected raw data (input) are pre-processed to enhance data quality through alleviation of missing data, noisy data, and null values. The pre-processed data are class-imbalanced in nature and are therefore handled effectively with the K-means-clustering-based SMOTE model. From the class-balanced data, the most relevant features are extracted: improved Principal Component Analysis (PCA) features, statistical features (mean, median, standard deviation), and higher-order statistical features (skewness and kurtosis). Among the extracted features, the most optimal features are selected with the Self-improved Arithmetic Optimization Algorithm (SI-AOA), a conceptual improvement of the standard Arithmetic Optimization Algorithm. The detection stage employs deep learning models: Long Short-Term Memory (LSTM), a Convolutional Neural Network (CNN), and an optimized Quantum Deep Neural Network (QDNN). The LSTM and CNN are trained with the selected optimal features, and their outcomes enter as input to the optimized QDNN, which provides the final detection outcome. Since the QDNN is the ultimate detector, its weight function is fine-tuned with the SI-AOA.
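The statistical feature extraction step described above (mean, median, standard deviation, skewness, kurtosis) can be sketched in pure Python; this is only that one step, under textbook moment definitions, and does not reproduce the paper's full pipeline (K-means SMOTE, SI-AOA, LSTM/CNN/QDNN):

```python
import statistics

# Per-transaction-window statistical features: mean, median, population
# standard deviation, plus the third and fourth standardised central
# moments (skewness and kurtosis).

def extract_features(xs):
    n = len(xs)
    mean = sum(xs) / n
    std = statistics.pstdev(xs)
    skew = sum((x - mean) ** 3 for x in xs) / (n * std ** 3)
    kurt = sum((x - mean) ** 4 for x in xs) / (n * std ** 4)
    return {"mean": mean, "median": statistics.median(xs),
            "std": std, "skewness": skew, "kurtosis": kurt}

# Illustrative transaction amounts: the one outlier inflates skewness/kurtosis
feats = extract_features([12.0, 15.0, 11.0, 90.0, 14.0, 13.0])
print(sorted(feats))
```

High skewness and kurtosis in a window of transaction amounts are exactly the kind of signal such features contribute toward flagging anomalous spending.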

Keywords: credit card, data mining, fraud detection, money transactions

Procedia PDF Downloads 115
2481 Beyond Adoption: Econometric Analysis of Impacts of Farmer Innovation Systems and Improved Agricultural Technologies on Rice Yield in Ghana

Authors: Franklin N. Mabe, Samuel A. Donkoh, Seidu Al-Hassan

Abstract:

In order to increase rice yield and bridge differences in yield, many farmers have resorted to adopting Farmer Innovation Systems (FISs) and Improved Agricultural Technologies (IATs). This study econometrically analysed the impacts of adopting FISs and IATs on rice yield using multinomial endogenous switching regression (MESR). Nine hundred and seven (907) rice farmers from the Guinea Savannah Zone (GSZ), Forest Savannah Transition Zone (FSTZ) and Coastal Savannah Zone (CSZ) were used for the study, which drew on both primary and secondary data. FBO advice, rice farming experience and distance from farming communities to input markets increase farmers' adoption of only FISs. Factors that increase farmers' probability of adopting only IATs are access to extension advice, credit, improved seeds and contract farming. Farmers located in the CSZ have a higher probability of adopting only IATs than their counterparts in other agro-ecological zones. Age and access to input subsidies increase the probability of jointly adopting FISs and IATs. FISs and IATs have heterogeneous impacts on rice yield, with adoption of only IATs having the highest impact, followed by joint adoption of FISs and IATs. It is important for stakeholders in the rice subsector to champion the provision of improved rice seeds, the intensification of agricultural extension services, and the contract farming concept. Researchers should endeavour to research FISs further.

Keywords: farmer innovation systems, improved agricultural technologies, multinomial endogenous switching regression, treatment effect

Procedia PDF Downloads 405
2480 From Transference Love to Self Alienation in the Therapeutic Relationship: A Case Study

Authors: Efi Koutantou

Abstract:

The foundation of successful therapy, psychoanalysis would argue, is the bond between the psychotherapist and the patient. The present study explores lived experiences of a psychotherapeutic relationship at different moments, initial and final, with special reference to the transference love developed through the process. The fight between moments of 'leaving a self' behind and following 'lines of flight' in the process of creating a new subjectivity and 'becoming-other' will be explored. Moments between de-territorialisation (surpassing given constraints such as gender, family, religion, and kinship bonds) and the freeing of space in favor of re-territorialisation (the creation of oneself) will also be analyzed. The generation of new possibilities of being and new ways of self-actualization for this patient will be discussed. The second part of this study explores the extent to which this 'transference love' leads this specific patient to become 'the discourse of the other'; it remains an open question whether the patient finally becomes a subject of his/her own through self-exploration of new possibilities of existence, or becomes alienated within the thought of the therapist. The way in which the patient uses, or is (ab)used by, the transference love in order to experience and undergo alienation from an 'authority', which may or may not sacrifice the patient's own thought in favor of satisfying the therapist, will be investigated. Finally, from an observer's perspective and from the analysis of the results of this therapeutic relationship, the counter-transference will also be analyzed, in terms of an attempt by the analyst to relive and satisfy his/her own desires through the life of the analysand. The rise and fall of an idealized self will be analyzed, and the turn of transference love into 'hate' will conclude this case study, drawn from lived experience in the therapeutic procedure: a relationship that can be described as a mixture of a real relationship and remnants of a past object relationship.

Keywords: alienation, authority, counter-transference, hate, transference love

Procedia PDF Downloads 196
2479 Artificial Intelligence in Melanoma Prognosis: A Narrative Review

Authors: Shohreh Ghasemi

Abstract:

Introduction: Melanoma is a complex disease with various clinical and histopathological features that impact prognosis and treatment decisions. Traditional methods of melanoma prognosis involve manual examination and interpretation of clinical and histopathological data by dermatologists and pathologists. However, the subjective nature of these assessments can lead to inter-observer variability and suboptimal prognostic accuracy. AI, with its ability to analyze vast amounts of data and identify patterns, has emerged as a promising tool for improving melanoma prognosis. Methods: A comprehensive literature search was conducted to identify studies that employed AI techniques for melanoma prognosis. The search included databases such as PubMed and Google Scholar, using keywords such as "artificial intelligence," "melanoma," and "prognosis." Studies published between 2010 and 2022 were considered. The selected articles were critically reviewed, and relevant information was extracted. Results: The review identified various AI methodologies utilized in melanoma prognosis, including machine learning algorithms, deep learning techniques, and computer vision. These techniques have been applied to diverse data sources, such as clinical images, dermoscopy images, histopathological slides, and genetic data. Studies have demonstrated the potential of AI in accurately predicting melanoma prognosis, including survival outcomes, recurrence risk, and response to therapy. AI-based prognostic models have shown comparable or even superior performance compared to traditional methods.

Keywords: artificial intelligence, melanoma, accuracy, prognosis prediction, image analysis, personalized medicine

Procedia PDF Downloads 62
2478 Diagnostic Efficacy and Usefulness of Digital Breast Tomosynthesis (DBT) in Evaluation of Breast Microcalcifications as a Pre-Procedural Study for Stereotactic Biopsy

Authors: Okhee Woo, Hye Seon Shin

Abstract:

Purpose: To investigate the diagnostic power of digital breast tomosynthesis (DBT) in the evaluation of breast microcalcifications, and its usefulness as a pre-procedural study for stereotactic biopsy, in comparison with full-field digital mammography (FFDM) and FFDM plus magnification images (FFDM+MAG). Methods and Materials: An IRB-approved retrospective observer performance study of DBT, FFDM, and FFDM+MAG was done. Image quality was rated on a 5-point scoring system for lesion clarity (1, very indistinct; 2, indistinct; 3, fair; 4, clear; 5, very clear) and compared by the Wilcoxon test. Diagnostic power was compared by diagnostic values and AUC with 95% confidence intervals. Additionally, procedural reports of the biopsies were analysed for patient positioning and adequacy of instruments. Results: DBT showed higher lesion clarity (median 5, interquartile range 4-5) than FFDM (3, 2-4, p-value < 0.0001), and no statistically significant difference from FFDM+MAG (4, 4-5, p-value = 0.3345). The diagnostic sensitivity and specificity of DBT were 86.4% and 92.5%; of FFDM, 70.4% and 66.7%; of FFDM+MAG, 93.8% and 89.6%. The AUCs of DBT (0.88) and FFDM+MAG (0.89) were larger than that of FFDM (0.59, p-values < 0.0001), but there was no statistically significant difference between DBT and FFDM+MAG (p-value = 0.878). In 2 cases with DBT, a petite needle could be appropriately prepared; in the other 3 cases without DBT, patient repositioning was needed. Conclusion: DBT showed better image quality and diagnostic values than FFDM and was equivalent to FFDM+MAG in the evaluation of breast microcalcifications. Evaluation with DBT as a pre-procedural study for breast stereotactic biopsy can lead to more accurate localization and successful biopsy, and may also waive the need for additional magnification images.
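The diagnostic values quoted above follow from standard confusion-matrix definitions. As a minimal sketch, the counts below are invented to reproduce DBT's reported 86.4% sensitivity and 92.5% specificity and are not taken from the study:

```python
# Sensitivity = TP / (TP + FN): fraction of malignant lesions detected.
# Specificity = TN / (TN + FP): fraction of benign lesions correctly ruled out.

def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

# Hypothetical counts: 70 of 81 malignant lesions detected,
# 62 of 67 benign lesions correctly ruled out.
print(round(100 * sensitivity(70, 11), 1),  # 86.4
      round(100 * specificity(62, 5), 1))   # 92.5
```

The AUC comparison in the abstract summarises the same trade-off across all possible rating thresholds rather than at a single operating point.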

Keywords: DBT, breast cancer, stereotactic biopsy, mammography

Procedia PDF Downloads 288
2477 Application of Digital Tools for Improving Learning

Authors: José L. Jiménez

Abstract:

The use of technology in the classroom is an issue that is constantly evolving. Digital-age students learn differently than their teachers did, so teachers should constantly evolve their methods and teaching techniques to stay in touch with the student. This paper presents a case study of how some of these technologies were used to accompany a classroom course, in order to provide students with a different and innovative experience from the way their teacher usually presented the activities. As the students worked on the various activities, they increased their digital skills by employing previously unknown tools that helped them in their professional training. The twenty-first-century teacher should consider the use of Information and Communication Technologies in the classroom with the skills that digital-age students should possess in mind. The paper also takes a brief look at the history of distance education and highlights the importance of integrating technology into students' training.

Keywords: digital tools, on-line learning, social networks, technology

Procedia PDF Downloads 383
2476 Integrated Simulation and Optimization for Carbon Capture and Storage System

Authors: Taekyoon Park, Seokgoo Lee, Sungho Kim, Ung Lee, Jong Min Lee, Chonghun Han

Abstract:

CO2 capture and storage/sequestration (CCS) is a key technology for addressing the global warming issue. This paper proposes an integrated model of the whole CCS chain, from the power plant to the reservoir. The integrated model is then used to determine optimal operating conditions and to study responses to various changes in input variables.
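A minimal sketch of the simulate-then-optimize loop such an integrated model enables: a surrogate cost function over a single operating variable, minimized by grid search. The cost model, variable, and numbers are invented for illustration and are not from the paper.

```python
# Toy simulate-then-optimize loop for a CCS chain: a surrogate cost model
# over one hypothetical operating variable (solvent circulation rate),
# minimized by grid search. All quantities are illustrative.

def ccs_cost(circulation_rate):
    """Surrogate total cost: the energy penalty rises with the rate,
    while the penalty for un-captured CO2 falls (an invented trade-off)."""
    energy_penalty = 0.8 * circulation_rate ** 2
    capture_shortfall = 50.0 / circulation_rate
    return energy_penalty + capture_shortfall

rates = [r / 10 for r in range(10, 51)]  # candidate rates 1.0 .. 5.0 (relative units)
best = min(rates, key=ccs_cost)
print(best, round(ccs_cost(best), 2))
```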

Keywords: CCS, carbon dioxide, carbon capture and storage, simulation, optimization

Procedia PDF Downloads 335
2475 Justitium: Endangered Species and Humanitarian Interventions in the Anthropocene Era

Authors: Eleni Panagiotarakou

Abstract:

This paper argues that humans have a collective moral responsibility to help wild animals during the Anthropocene era. Seen from the perspective of deontic logic, this moral responsibility did not exist in the Holocene era (ca. 11,700 BC-1945 AD) on account of humanity’s limited impact on the natural environment. By contrast, in the Anthropocene, human activities are causing significant disturbances to planetary ecosystems and, by inference, to wildlife communities. Under these circumstances, controversial and deeply regrettable interventional methods such as managed relocations (MR) and synthetic biology should be expanded and become policy measures despite their known and unknown risks. The main rationale for the above stems from the fact that traditional management strategies are simply insufficient in the Anthropocene. If the same anthropogenic activities continue unabated, they risk triggering a sixth mass species extinction.

Keywords: anthropocene, humanitarian interventions, managed relocations, species extinctions, synthetic biology

Procedia PDF Downloads 231
2474 Individualisation of the Donor Subject by Determination of Serological Markers from Biological Fluids Obtained at the Crime Scene

Authors: Arun Kumar, Ravindra Pal Verma, Harsh Sharma, Shani Kumar

Abstract:

For the present study, samples were collected from 20 donors with unknown blood groups, and secretor status was determined from saliva. ABO typing of the concentrated samples was successfully performed after one month of storage. Urine-stained clothing samples are often submitted to forensic science laboratories for ABH blood group antigen determination. The serogenetic markers of submitted semen stains can be used to determine the origin of any of these samples. ABH blood group substances have previously been identified in urine, although their concentration in urine is low in comparison with other body fluids.

Keywords: ABH blood group, crime scene, serological markers, body fluids, urine

Procedia PDF Downloads 568
2473 Application of Semantic Technologies in Rapid Reconfiguration of Factory Systems

Authors: J. Zhang, K. Agyapong-Kodua

Abstract:

The digital factory, based on visual design and simulation, has emerged as a mainstream approach to shortening the digital development life cycle. Some basic industrial systems are being integrated via semantic modelling, and products (P) matching process (P)-resource (R) requirements are designed to fulfil current customer demands. Nevertheless, product design is still limited to fixed product models and the known knowledge of product engineers. Therefore, this paper presents a rapid reconfiguration method based on semantic technologies with PPR ontologies to reuse both known and unknown knowledge. To cope with large data volumes, our system uses cloud manufacturing and a distributed database to improve the efficiency of querying for matching PPR requirements.
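As an illustration of PPR-style querying, the sketch below matches a product to the resources it needs by following product-process-resource links in a toy in-memory triple store. A real system would use an ontology store with SPARQL-like queries; every name here is hypothetical.

```python
# Minimal sketch of product (P) - process (P) - resource (R) matching over
# a toy in-memory triple store. All entity names are invented examples.
triples = {
    ("Bracket", "requiresProcess", "Milling"),
    ("Bracket", "requiresProcess", "Drilling"),
    ("Milling", "requiresResource", "CNC_Mill"),
    ("Drilling", "requiresResource", "DrillPress"),
    ("CNC_Mill", "availableAt", "CellA"),
    ("DrillPress", "availableAt", "CellA"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern (None acts as a wildcard)."""
    return [(s, p, o) for (s, p, o) in triples
            if (subject is None or s == subject)
            and (predicate is None or p == predicate)
            and (obj is None or o == obj)]

def resources_for_product(product):
    """Follow product -> process -> resource links, as a PPR ontology query would."""
    processes = [o for _, _, o in query(product, "requiresProcess")]
    return {o for proc in processes for _, _, o in query(proc, "requiresResource")}

print(resources_for_product("Bracket"))
```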

Keywords: semantic technologies, factory system, digital factory, cloud manufacturing

Procedia PDF Downloads 472
2472 Application of Response Surface Methodology to Assess the Impact of Aqueous and Particulate Phosphorus on Diazotrophic and Non-Diazotrophic Cyanobacteria Associated with Harmful Algal Blooms

Authors: Elizabeth Crafton, Donald Ott, Teresa Cutright

Abstract:

Harmful algal blooms (HABs), most notably cyanobacteria-dominated HABs, compromise water quality, jeopardize access to drinking water, and pose a risk to public health and safety. HABs are representative of ecosystem imbalance, largely caused by environmental changes, such as eutrophication, that are associated with the globally expanding human population. Cyanobacteria-dominated HABs are anticipated to increase in frequency and magnitude and are predicted to plague a larger geographical area as a result of climate change. Weather patterns are important, as storm-driven pulse inputs of nutrients have been correlated with cyanobacteria-dominated HABs. The mobilization of aqueous and particulate nutrients and the response of the phytoplankton community form an important relationship in this complex phenomenon. This relationship is most apparent in high-impact areas with adequate sunlight, temperatures above 20°C, excessive nutrients, and quiescent water, conditions that correspond to ideal HAB growth. Typically, the impact of particulate phosphorus is dismissed as an insignificant contribution, which is true for areas that are not considered high-impact. The objective of this study was to assess the impact of a simulated storm-driven pulse input of reactive phosphorus and the response of three different cyanobacteria assemblages (~5,000 cells/mL). The aqueous and particulate sources of phosphorus and the changes in the HABs were tracked weekly for 4 weeks. The first cyanobacteria composition consisted of Planktothrix sp., Microcystis sp., Aphanizomenon sp., and Anabaena sp., with 70% of the total population being non-diazotrophic and 30% diazotrophic. The second comprised Anabaena sp., Planktothrix sp., and Microcystis sp., with 87% diazotrophic and 13% non-diazotrophic. The third composition has yet to be determined, as these experiments are ongoing. Preliminary results suggest that both aqueous and particulate sources are contributors of total reactive phosphorus in high-impact areas.
The results further highlight shifts in the cyanobacteria assemblage after the simulated pulse input. In the controls, the reactors dosed with aqueous reactive phosphorus maintained a constant concentration for the duration of the experiment, whereas the reactors that were dosed with aqueous reactive phosphorus and contained soil decreased from 1.73 mg/L to 0.25 mg/L of reactive phosphorus between time zero and day 7; this was still higher than the blank (0.11 mg/L). This suggests binding of aqueous reactive phosphorus to sediment, which is further supported by the positive correlation observed between total reactive phosphorus concentration and turbidity. The experiments are nearly complete, and a full statistical analysis of the results will be performed prior to the conference.

Keywords: Anabaena, cyanobacteria, harmful algal blooms, Microcystis, phosphorus, response surface methodology

Procedia PDF Downloads 149
2471 General Principles of Accident Prevention in Built Environment Rehabilitation

Authors: Alfredo Soeiro

Abstract:

Rehabilitation of the built environment is a particular type of operation where the prevention of accidents is concerned; in fact, it is also a distinct type of task within construction itself. Due to the complex characteristics of construction rehabilitation tasks and the intrinsic difficulty of preventing accidents in construction, implementing adequate safety levels in this type of safety management poses a major challenge. This paper proposes a set of generic measures for facing the unknown characteristics of the built environment in terms of stability, materials, and the actual performance of buildings or other constructions. It also addresses the necessary adaptation of preventive guidelines to this delicate type of refurbishment and renovation of existing facilities. Training, observation, and reflective approaches are necessary to perform safety management in the rehabilitation of the built environment.

Keywords: built environment, rehabilitation, construction safety, accident prevention, safety plan

Procedia PDF Downloads 198
2470 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data

Authors: Kai Warsoenke, Maik Mackiewicz

Abstract:

To achieve high quality in assembled car body structures, tolerancing is used to ensure the geometric accuracy of the single car body parts. There are two main techniques for determining the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances needed to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example, in the manufacturing of forming tools as operating equipment or at the higher level of car body assembly. As part of metrological process monitoring, manufactured individual parts and assemblies are measured, and the measurement results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes to the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data. They offer the potential to extend tolerancing methods through data analysis and machine learning models.
The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. To this end, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. A database suitable for developing machine learning models is then created. The objective is an intelligent way to determine the position and number of measurement points as well as the local tolerance range. For this, a number of different model types are compared and evaluated. The models with the best results are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, some areas of the car body parts behave more sensitively than the part overall, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently.
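One simple data-driven way to assign a local tolerance range per measurement point, sketched here with synthetic historical deviations; the approach and all numbers are illustrative assumptions, not the authors' actual models.

```python
import numpy as np

# Hypothetical sketch: derive a local tolerance band for each measurement
# point from historical deviations instead of a single global tolerance.
rng = np.random.default_rng(0)
# rows = manufactured parts, columns = measurement points (deviation in mm);
# the middle point is deliberately given more scatter (a "sensitive" area).
history = rng.normal(loc=0.0, scale=[0.05, 0.20, 0.08], size=(200, 3))

def local_tolerance(history, coverage=3.0):
    """Per-point tolerance band: mean deviation +/- coverage * std deviation."""
    mean = history.mean(axis=0)
    std = history.std(axis=0)
    return np.stack([mean - coverage * std, mean + coverage * std], axis=1)

bands = local_tolerance(history)
# Sensitive points (larger historical scatter) receive wider tolerance bands.
print(bands.round(3))
```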

Keywords: automotive production, machine learning, process optimization, smart tolerancing

Procedia PDF Downloads 99
2469 Analyzing the Influence of Hydrometeorological Extremes, Geological Setting, and Social Demographics on Public Health

Authors: Irfan Ahmad Afip

Abstract:

The main research objective is to accurately identify the possible severity of a Leptospirosis outbreak in a given area based on the input features to a multivariate regression model. The research question is whether the possibility of an outbreak in a specific area is influenced by features such as social demographics and hydrometeorological extremes. If the occurrence of an outbreak depends on these features, then the epidemic severity will differ from area to area depending on the environmental setting, because the features influence both the possibility and the severity of an outbreak. Specifically, the research objectives were three-fold: (a) to identify the relevant multivariate features and visualize patterns in the data, (b) to develop a multivariate regression model based on the selected features and determine the possibility of a Leptospirosis outbreak in an area, and (c) to compare the predictive ability of the multivariate regression model with that of machine learning algorithms. Several secondary data features were collected for locations in the state of Negeri Sembilan, Malaysia, based on their likely relevance to determining outbreak severity in the area. The relevant features then become inputs to a multivariate regression model; linear regression is a simple and quick solution for creating prognostic capabilities, and a multivariate regression model has proven to have more precise prognostic capabilities than univariate models. The expected outcome of this research is to establish a correlation between social demographic and hydrometeorological features and the Leptospirosis bacteria; it will also contribute to understanding the underlying relationship between the pathogen and the ecosystem. The relationship established can be beneficial for health departments or urban planners in inspecting and preparing for future outcomes in event detection and system health monitoring.
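The multivariate regression step can be sketched as an ordinary least-squares fit of severity against several features. The feature names and the synthetic data below are invented for illustration; they are not the study's variables or measurements.

```python
import numpy as np

# Hedged sketch of multivariate regression for outbreak severity.
# Features and data are synthetic stand-ins, not the study's dataset.
rng = np.random.default_rng(1)
n = 120
rainfall = rng.uniform(50, 400, n)        # monthly rainfall, mm (assumed feature)
pop_density = rng.uniform(100, 2000, n)   # persons per km^2 (assumed feature)
flood_days = rng.integers(0, 10, n)       # flood days per month (assumed feature)
severity = (0.01 * rainfall + 0.002 * pop_density + 1.5 * flood_days
            + rng.normal(0, 2, n))        # synthetic target with noise

# Design matrix with an intercept column; solve by least squares.
X = np.column_stack([np.ones(n), rainfall, pop_density, flood_days])
coef, *_ = np.linalg.lstsq(X, severity, rcond=None)

predicted = X @ coef
r2 = 1 - ((severity - predicted) ** 2).sum() / ((severity - severity.mean()) ** 2).sum()
print(coef.round(3), round(r2, 3))
```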

Keywords: geographical information system, hydrometeorological, leptospirosis, multivariate regression

Procedia PDF Downloads 102
2468 Use of Satellite Imaging to Understand Earth’s Surface Features: A Roadmap

Authors: Sabri Serkan Gulluoglu

Abstract:

With Geographic Information Systems (GIS), information about all natural and artificial resources on the earth can be obtained from satellite images acquired by remote sensing techniques. However, determining unknown resources, mapping their distribution, and evaluating them efficiently may not be possible with the original image. For this reason, processing steps such as transformation, pre-processing, image enhancement, and classification are needed to provide the most accurate numerical and visual assessment. Many studies presenting the phases of obtaining and processing satellite images are examined in the literature review. The review shows that defining a common, coherent sequence of process steps in this field can help the necessary future studies progress more rapidly.
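The processing chain named above can be sketched end to end on a synthetic band: enhancement by linear contrast stretch, then a simple threshold classification. A real workflow would read georeferenced bands with GDAL or rasterio; everything here is an illustrative stand-in.

```python
import numpy as np

# Illustrative sketch of the chain: pre-processing -> enhancement
# (contrast stretch) -> classification (thresholding). Synthetic data only.
rng = np.random.default_rng(2)
band = rng.uniform(0.1, 0.6, size=(64, 64))  # synthetic reflectance band
band[20:40, 20:40] += 0.3                     # a brighter "feature" region

def stretch(img):
    """Linear contrast stretch to the full 0..1 range (enhancement step)."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)

def classify(img, threshold=0.7):
    """Binary classification: 1 where the enhanced value exceeds the threshold."""
    return (img > threshold).astype(np.uint8)

enhanced = stretch(band)
classes = classify(enhanced)
print(classes.sum(), "pixels classified as the bright feature")
```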

Keywords: remote sensing, satellite imaging, GIS, computer science, information

Procedia PDF Downloads 304
2467 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics

Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima

Abstract:

This study outlines a method for developing a surrogate life cycle model based on fuzzy logic, using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) a hybrid of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering to define the membership functions, and (3) the Adaptive Neuro-Fuzzy Inference System (ANFIS), a combination of fuzzy inference and artificial neural networks. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using solar irradiation, module efficiency, and performance ratio as inputs. The effects of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were subsequently examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors than FIS. Increasing the number of input membership functions helped with error reduction in some cases but at times resulted in the opposite. Sugeno-type models gave errors slightly lower than those of the Mamdani type. While ANFIS is superior in terms of error minimization, it can generate questionable solutions, e.g., negative GWP values for the solar PV system when the inputs are all at the upper end of their ranges. This shows that the applicability of ANFIS models depends strongly on the range of cases on which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS exhibited an optimal design point beyond which increasing the input values no longer improves the GWP and LCOE.
In the absence of data that could be used for calibration, conventional FIS provides a knowledge-based model that can be used for prediction. In the PV case study, conventional FIS generated errors only slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in industry and policy-making. While the methodology does not guarantee a more accurate result than the conventional life cycle methodology, it provides a relatively simple way of generating knowledge- and data-based estimates that can be used during the initial design of a system.
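The Sugeno-type inference discussed above can be sketched with a single input and two rules. The membership functions, ranges, and rule outputs below are invented for illustration and are not taken from the paper's calibrated models.

```python
# Minimal zero-order Sugeno fuzzy inference sketch (one input, two rules):
# LOW irradiation implies high LCOE, HIGH irradiation implies low LCOE.
# All shapes and constants are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sugeno_lcoe(irradiation):
    """Weighted average of the rule outputs (zero-order Sugeno)."""
    w_low = tri(irradiation, 0.0, 2.0, 5.0)   # "LOW" in kWh/m^2/day, assumed range
    w_high = tri(irradiation, 3.0, 6.0, 8.0)  # "HIGH", assumed range
    lcoe_low, lcoe_high = 0.25, 0.08          # $/kWh, illustrative constants
    if w_low + w_high == 0:
        return None
    return (w_low * lcoe_low + w_high * lcoe_high) / (w_low + w_high)

print(sugeno_lcoe(2.0))  # fully "LOW"  -> 0.25
print(sugeno_lcoe(6.0))  # fully "HIGH" -> 0.08
```

In between the two peaks the output interpolates smoothly, which is what gives Sugeno-type models their typically lower errors on continuous targets.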

Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks

Procedia PDF Downloads 150
2466 A Review of Self-Healing Concrete and Various Methods of Its Scientific Implementation

Authors: Davoud Beheshtizadeh, Davood Jafari

Abstract:

Concrete, with its special properties and advantages, is widely and increasingly used in the construction industry, especially in the country's infrastructure. On the other hand, some defects of concrete, most importantly the micro-cracks that form after setting, drive up the cost of repairing and maintaining infrastructure; therefore, self-healing concretes have attracted attention in other countries in recent years. These concretes heal through general mechanisms, namely physical, chemical, biological, and combined mechanisms, each of which has different subsets and methods of execution and operation. Some of these mechanisms are of particular importance and have led to special production methods. As this subject is new in Iran, this knowledge is almost unknown, or at least parts of it have not been considered at all. The present article comprehensively reviews the various self-healing mechanisms and presents the disadvantages and advantages of each method along with its scope of application.

Keywords: micro-cracks, self-healing concrete, microcapsules, concrete, cement, self-sensitive

Procedia PDF Downloads 123
2465 Structure of Consciousness According to Deep Systemic Constellations

Authors: Dmitry Ustinov, Olga Lobareva

Abstract:

The method of Deep Systemic Constellations is based on a phenomenological approach. Using the phenomenon of substitutive perception, it was established that human consciousness has a hierarchical structure, where deeper levels govern more superficial ones (the reactive level, the energy or ancestral level, the spiritual level, the magical level, and deeper levels of consciousness). Every human possesses a depth of consciousness down to the spiritual level; however, deeper levels of consciousness are not found in every person. It was found that the spiritual level of consciousness is not homogeneous and has its own internal hierarchy of sublevels (the level of formation of spiritual values, the level of the 'inner observer', the level of the 'path', the level of 'God', etc.). The depth of a person's spiritual level defines the paradigm of all his internal processes and the main motives of his movement through life. Disturbances can occur at any level of consciousness. Disturbances at a deeper level cause disturbances at more superficial levels and are manifested in a person's daily life in feelings, behavioral patterns, psychosomatics, etc. Without removing the deepest source of a disturbance, it is impossible to completely correct its manifestation in the present moment. Thus, a destructive pattern of feeling and behavior in the present moment can exist because of a disturbance at, for example, the spiritual level of a person (although in most cases the source is at the energy level). Psychological work with superficial levels that does not remove the source of a disturbance cannot fully solve the problem. The method of Deep Systemic Constellations allows one to work effectively with the source of a problem located at any depth. The methodology has confirmed its effectiveness in work with more than a thousand people.

Keywords: constellations, spiritual psychology, structure of consciousness, transpersonal psychology

Procedia PDF Downloads 235
2464 Enhancement of Cross-Linguistic Effect with the Increase in the Multilingual Proficiency during Early Childhood: A Case Study of English Language Acquisition by a Pre-School Child

Authors: Anupama Purohit

Abstract:

The paper is a study of the inevitable cross-linguistic effects found in early multilingual learners. Cross-linguistic behaviours such as code-mixing, code-switching, foreign accent, literal translation, redundancy, and syntactic manipulation, caused by the influence of other languages on the English output of a non-native pre-school child, are discussed here. A case study method is adopted to support the claim of the title. The language behaviour of a simultaneously tetralingual pre-school child (between ages 1;3 and 4;0) is analysed. Sample output data of the child were gathered from diary entries maintained by her family, regular observations, and video recordings made since her birth. She receives input in her mother tongue, Sambalpuri, from her grandparents only; in Hindi, the local language, from her play-school and the neighbourhood; in English only from her mother and occasional visits of family friends; and in Odia only during the reading of Odia story books. The child was exposed to code-mixing of all these languages throughout her childhood, yet code-mixing, literal translation, redundancy, and duplication were absent in her initial stage of multilingual acquisition. As the child was more proficient in English than in her other first languages and had never heard code-mixing in English, it was expected from her input pattern for English (one parent, English only) that she would maintain purity in her use of English when talking to English-language interlocutors. But with the gradual increase in her proficiency in each of the languages, her handling of the multiple codes becomes cross-linguistically deft. It can be deduced from the case study that, after attaining a certain milestone proficiency in each language, the child's linguistic faculty can operate at a metalinguistic level.
The functional use of each morpheme, the arrangement of morphemes in words and sentences, the suprasegmental features, lexical-semantic mapping, the culture-specific use of a language, and pragmatic skills converge to give a typical childlike multilingual output that is intelligible to multilingual people (with the same set of languages in combination). The result is appealing because, for the same ideas that the child used to express (perhaps with grammatically wrong expressions) in one language, she gradually starts showing cross-linguistic effects in her expressions. The paper therefore argues for the separatist view from the very beginning of the holophrastic phase (as the child expresses herself in addressee-specific language); but the development of a metalinguistic ability that helps the child communicate in a sophisticated way according to the linguistic status of the addressee is unique to the multilingual child. This metalinguistic ability is independent of the mode of input of a multilingual child.

Keywords: code-mixing, cross-linguistic effect, early multilingualism, literal translation

Procedia PDF Downloads 287
2463 Design of a Fuzzy Expert System for the Impact of Diabetes Mellitus on Cardiac and Renal Impediments

Authors: E. Rama Devi Jothilingam

Abstract:

Diabetes mellitus is now one of the most common non-communicable diseases globally. India leads the world with the largest number of diabetic subjects, earning the title "diabetes capital of the world". In order to reduce the mortality rate, a fuzzy expert system is designed to predict the severity of the cardiac and renal problems of diabetic patients using fuzzy logic. Since uncertainty is inherent in medicine, fuzzy logic is used in this research work to remove the inherent fuzziness of linguistic concepts and the uncertain status in diabetes mellitus, which is the prime cause of cardiac arrest and renal failure. In this work, the controllable risk factors (blood sugar, insulin, ketones, lipids, obesity, blood pressure, and protein/creatinine ratio) are considered as input parameters, and the stage of cardiac impairment (SOC) and the stage of renal impairment (SORD) are considered as output parameters. Triangular membership functions are used to model the input and output parameters. The rule base for the proposed expert system is constructed from the knowledge of medical experts. A Mamdani inference engine is used to infer information from the rule base and take the major decisions in diagnosis. Mean-of-maximum defuzzification is used to obtain a non-fuzzy control action that best represents the possibility distribution of an inferred fuzzy control action. The proposed system also classifies patients into high-risk and low-risk groups using fuzzy c-means clustering, so that high-risk patients are treated immediately. The system is validated in MATLAB and serves as an accurate and robust tracking system.
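The Mamdani pipeline described above, triangular memberships, rule firing, aggregation, and mean-of-maximum defuzzification, can be sketched with two toy rules. The variable ranges and rules below are invented examples, not the system's medically derived rule base.

```python
# Hedged sketch of Mamdani inference with triangular membership functions
# and mean-of-maximum defuzzification. Ranges and rules are illustrative.

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def mamdani_stage(blood_sugar, blood_pressure):
    """Two toy rules: (sugar HIGH and bp HIGH) -> stage SEVERE;
    (sugar NORMAL) -> stage MILD. Output MFs live on a 0..10 'stage' axis."""
    fire_severe = min(tri(blood_sugar, 150, 250, 350),     # sugar HIGH (mg/dL)
                      tri(blood_pressure, 130, 170, 210))  # bp HIGH (mmHg)
    fire_mild = tri(blood_sugar, 60, 100, 160)             # sugar NORMAL

    axis = [i / 10 for i in range(101)]  # discretized stage axis, 0..10
    # Aggregate the clipped consequents: max over the two rules.
    agg = [max(min(fire_mild, tri(s, 0, 2, 4)),
               min(fire_severe, tri(s, 6, 8, 10))) for s in axis]

    peak = max(agg)
    if peak == 0:
        return None
    # Mean of maximum: average of all stage values attaining the peak membership.
    maxima = [s for s, m in zip(axis, agg) if m == peak]
    return sum(maxima) / len(maxima)

print(mamdani_stage(blood_sugar=300, blood_pressure=180))  # ~8.0 (severe region)
```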

Keywords: Diabetes mellitus, fuzzy expert system, Mamdani, MATLAB

Procedia PDF Downloads 278