Search results for: feature detection and description
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5433

813 Academic Mobility within EU as a Voluntary or a Necessary Move: The Case of German Academics in the UK

Authors: Elena Samarsky

Abstract:

According to German national records and willingness-to-migrate surveys, emigration is much more attractive to better-educated citizens employed in white-collar positions, with academics displaying the highest migration rate. The case of academic migration from Germany is all the more intriguing given the country's financial power, competitive labour market, and relatively good living standards, working conditions, and wages. Investigating such mobility challenges the traditional economic view of migration, as it raises the question of why people choose to leave a highly industrialized country known for its high living standards, stable political scene, and prosperous economy. Within the regional domain, examining the mobility of Germans contributes to the ongoing debate over the extent to which the EU mobility principle influences migration decisions. The latter is of particular interest, as it may shed light on the extent to which it frames individual migration paths, defines motivations, and colours the experience of the migration itself. The paper is based on an analysis of migration decisions obtained through in-depth interviews with German academics employed in the UK. These retrospective interviews were conducted with German academics across selected UK universities, in a variety of academic fields and at different career stages. The interviews provide a detailed description of what motivated people to search for a post in another country and which attributes of such a job needed to be satisfied in order to facilitate migration, as well as general information on the particularities of an academic career and the institutions involved. In the course of the project, it became evident that although securing financial stability was a non-negotiable factor in migration (e.g., a work contract signed before relocation), non-pecuniary motivations played a significant role as well.
The migration narratives of this group - the highly skilled, whose human capital is transferable and whose expertise is valued by receiving countries - are characterised mainly by a search for personal development and career advancement rather than a direct increase in income. The records are also consistent in showing that, for academics, scientific freedom and independence are the main attributes of an ideal job and a substantial motivator. On the micro level, migration is depicted as an opportunistic action, framed as a voluntary rather than an imposed decision. On the macro level, however, the findings suggest that such opportunities are an outcome embedded in the peculiarities of academia and its historical and structural development, which in turn contributes significantly to the emergence of the scene in which migration takes place. The paper suggests further comparative research at the intersection of the macro and micro levels, in particular on how both national academic institutions and the EU mobility principle shape the migration of academics. In light of continuous attempts to make the European labour market more mobile and attractive, such findings ought to have direct implications for policy.

Keywords: migration, EU, academics, highly skilled labour

Procedia PDF Downloads 249
812 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
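The modeling step described above can be sketched in a few lines. The code below is an illustrative outline only, not the study's implementation: it trains a Random Forest on synthetic stand-ins for the cost drivers the abstract names (scope changes, material-delivery delays) and reads off feature importances.

```python
# Illustrative sketch: Random Forest for cost-overrun prediction with feature
# importances. All feature names and data are synthetic, not from the study.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
scope_changes = rng.poisson(2, n)                 # number of scope changes
material_delay_days = rng.exponential(5, n)       # delays in material delivery
crew_size = rng.integers(5, 50, n)                # unrelated to overrun here
overrun_pct = 1.5 * scope_changes + 0.8 * material_delay_days + rng.normal(0, 2, n)

X = np.column_stack([scope_changes, material_delay_days, crew_size])
X_train, X_test, y_train, y_test = train_test_split(X, overrun_pct, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Feature importances identify the dominant cost drivers.
importances = dict(zip(["scope_changes", "material_delay_days", "crew_size"],
                       model.feature_importances_))
```

On this synthetic data the delay and scope features dominate the importance ranking, mirroring the kind of risk-factor identification the abstract reports.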

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 25
811 Rumination Time and Reticuloruminal Temperature around Calving in Eutocic and Dystocic Dairy Cows

Authors: Levente Kovács, Fruzsina Luca Kézér, Ottó Szenci

Abstract:

Prediction of the onset of calving and recognition of difficulties at calving are of great importance in decreasing neonatal losses and reducing the risk of health problems in the early postpartum period. In this study, changes in rumination time, reticuloruminal pH, and temperature were investigated in eutocic (EUT, n = 10) and dystocic (DYS, n = 8) dairy cows around parturition. Rumination time was recorded continuously using an acoustic biotelemetry system, whereas reticuloruminal pH and temperature were recorded using an indwelling, wireless data-transmitting system. The recording period lasted from 3 d before calving until 7 days in milk. For the comparison of rumination time and reticuloruminal characteristics between groups, time to return to baseline (the time interval required to return to baseline after the delivery of the calf) and area under the curve (AUC, for both the prepartum and postpartum periods) were calculated for each parameter. Rumination time decreased from baseline 28 h before calving in both EUT and DYS cows (P = 0.023 and P = 0.017, respectively). From 20 h before calving, it decreased further to reach 32.4 ± 2.3 and 13.2 ± 2.0 min/4 h between 8 and 4 h before delivery in EUT and DYS cows, respectively, and then fell below 10 and 5 min during the last 4 h before calving (P = 0.003 and P = 0.008, respectively). By 12 h after delivery, rumination time had reached 42.6 ± 2.7 and 51.0 ± 3.1 min/4 h in DYS and EUT dams, respectively; however, AUC and time to return to baseline suggested lower rumination activity in DYS cows than in EUT dams over the 168-h postpartum observational period (P = 0.012 and P = 0.002, respectively). Reticuloruminal pH decreased from baseline 56 h before calving in both EUT and DYS cows (P = 0.012 and P = 0.016, respectively) but did not differ between groups before delivery.
In DYS cows, reticuloruminal temperature decreased from baseline 32 h before calving by 0.23 ± 0.02 °C (P = 0.012), whereas in EUT cows such a decrease was found only 20 h before delivery (0.48 ± 0.05 °C, P < 0.01). The AUC of reticuloruminal temperature calculated for the prepartum period was greater in EUT cows than in DYS cows (P = 0.042). During the first 4 h after calving, it decreased from 39.7 ± 0.1 to 39.0 ± 0.1 °C and from 39.8 ± 0.1 to 38.8 ± 0.1 °C in EUT and DYS cows, respectively (P < 0.01 for both groups), and returned to baseline 35.4 ± 3.4 and 37.8 ± 4.2 h after calving in EUT and DYS cows, respectively. Based on our results, continuous monitoring of changes in rumination time and reticuloruminal temperature seems promising for the early detection of cows at higher risk of dystocia. The depressed postpartum rumination time of DYS cows highlights the importance of monitoring cows that experience difficulties at calving.
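The two summary measures used for the group comparison, AUC and time to return to baseline, can be computed straightforwardly. The sketch below applies the trapezoidal rule to a synthetic postpartum rumination series (values are illustrative, not the study's data).

```python
# Illustrative computation of AUC (trapezoidal rule) and time to return to
# baseline for a synthetic postpartum rumination-time series (min/4 h).
import numpy as np

hours = np.arange(0, 48, 4)                       # time after calving, h
rumination = np.array([5, 12, 20, 28, 35, 42, 48, 52, 55, 56, 56, 56])
baseline = 56.0                                    # assumed prepartum baseline

# Trapezoidal area under the curve over the observation window.
auc = float(np.sum((rumination[1:] + rumination[:-1]) / 2 * np.diff(hours)))

# First time point at which the series is back at (or above) baseline.
time_to_baseline = float(hours[np.argmax(rumination >= baseline)])
```

A lower AUC or longer time to baseline for one group would indicate depressed rumination activity, which is the comparison made between DYS and EUT cows above.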

Keywords: reticuloruminal pH, reticuloruminal temperature, rumination time, dairy cows, dystocia

Procedia PDF Downloads 308
810 Implementation of Correlation-Based Data Analysis as a Preliminary Stage for the Prediction of Geometric Dimensions Using Machine Learning in the Forming of Car Seat Rails

Authors: Housein Deli, Loui Al-Shrouf, Hammoud Al Joumaa, Mohieddine Jelali

Abstract:

When forming metallic materials, fluctuations in material properties, process conditions, and wear lead to deviations in the component geometry. Several hundred features sometimes need to be measured, especially in the case of functional and safety-relevant components. Due to the large number of features and the accuracy requirements, these can only be measured offline. The risk of producing components outside the tolerances is minimized, but not eliminated, by the statistical evaluation of process capability and control measurements. The inspection intervals are based on the acceptable risk and come at the expense of productivity, but they remain reactive and, in some cases, considerably delayed. Due to the considerable progress made in the field of condition monitoring and measurement technology, permanently installed sensor systems, in combination with machine learning and artificial intelligence in particular, offer the potential to independently derive forecasts for component geometry and thus eliminate the risk of defective products - actively and preventively. The reliability of the forecasts depends on the quality, completeness, and timeliness of the data. Measuring all geometric characteristics is neither sensible nor technically possible. This paper therefore uses the example of car seat rail production to discuss the necessary first step of feature selection and reduction by correlation analysis, as otherwise it would not be possible to forecast components in real time and inline. Four different car seat rails with an average of 130 features were selected and measured using a coordinate measuring machine (CMM). Running such a measuring program alone takes up to 20 minutes. In practice, this results in the risk of faulty production of at least 2000 components that have to be sorted or scrapped if the measurement results are negative.
Over a period of 2 months, all measurement data (>200 measurements per variant) were collected and evaluated using correlation analysis. As part of this study, the number of characteristics to be measured for all 6 car seat rail variants was reduced by over 80%. Specifically, of an average of 125 characteristics per product, direct correlations were proven for almost 100 characteristics across the 4 different products. A further 10 features correlate via indirect relationships, so the number of features required for a prediction could be reduced to fewer than 20. A correlation coefficient of >0.8 was required for all correlations.
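The reduction step described above can be sketched as follows: compute the pairwise correlation matrix and keep one representative per group of features whose |r| exceeds the 0.8 threshold. The data and feature names below are synthetic stand-ins for CMM measurements, not the study's data.

```python
# Illustrative correlation-based feature reduction: keep one representative
# per group of features with pairwise |r| > 0.8. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
n_parts = 200
base = rng.normal(0, 1, n_parts)
features = {
    "width_front": base + rng.normal(0, 0.05, n_parts),  # strongly correlated pair
    "width_rear": base + rng.normal(0, 0.05, n_parts),
    "hole_diameter": rng.normal(0, 1, n_parts),          # independent feature
}

names = list(features)
X = np.column_stack([features[k] for k in names])
corr = np.corrcoef(X, rowvar=False)

kept = []
for i, name in enumerate(names):
    # Drop the feature if it correlates strongly with one we already keep.
    if all(abs(corr[i, names.index(k)]) <= 0.8 for k in kept):
        kept.append(name)
```

Only the retained features would then need to be measured inline; the dropped ones can be predicted from their correlated partners.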

Keywords: long-term SHM, condition monitoring, machine learning, correlation analysis, component prediction, wear prediction, regression analysis

Procedia PDF Downloads 26
809 Non-Destructive Testing of Carbon Fiber Reinforced Plastic by Infrared Thermography Methods

Authors: W. Swiderski

Abstract:

Composite materials are one answer to the growing demand for materials with better construction and exploitation parameters. Composite materials also permit the deliberate shaping of desirable properties beyond the extent reachable with metals, ceramics, or polymers. In recent years, composite materials have been used widely in aerospace, energy, transportation, medicine, etc. Fiber-reinforced composites, including carbon fiber, glass fiber, and aramid fiber, have become a major structural material. A typical defect arising during manufacture and operation is delamination damage of layered composites. When delamination damage of the composite spreads, it may lead to composite fracture. One of the many methods used in the non-destructive testing of composites is active infrared thermography. In active thermography, it is necessary to deliver energy to the examined sample in order to obtain significant temperature differences indicating the presence of subsurface anomalies. To detect possible defects in composite materials, different methods of thermal stimulation can be applied to the tested material; these include heating lamps, lasers, eddy currents, microwaves, and ultrasound. The use of a suitable source of thermal stimulation on the test material can have a decisive influence on whether defects are detected. Samples of multilayer carbon composites were prepared with deliberately introduced defects for comparative purposes. Very thin defects of different sizes and shapes, made of Teflon or copper with a thickness of 0.1 mm, were embedded. Non-destructive testing was carried out using the following sources of thermal stimulation: a heating lamp, a flash lamp, ultrasound, and eddy currents. The results are reported in the paper.

Keywords: non-destructive testing, IR thermography, composite material, thermal stimulation

Procedia PDF Downloads 252
808 The Associations between Ankle and Brachial Systolic Blood Pressures with Obesity Parameters

Authors: Matei Tudor Berceanu, Hema Viswambharan, Kirti Kain, Chew Weng Cheng

Abstract:

Background - Obesity parameters, particularly visceral obesity as measured by the waist-to-height ratio (WHtR), correlate with insulin resistance. The metabolic microvascular changes associated with insulin resistance cause increased peripheral arteriolar resistance, primarily in the lower-limb vessels. We hypothesize that ankle systolic blood pressures (SBPs) are more significantly associated with visceral obesity than brachial SBPs. Methods - 1098 adults, from a cohort enriched in South Asians or Europeans with diabetes (T2DM), were recruited from a primary care practice in West Yorkshire. Their medical histories, including T2DM and cardiovascular disease (CVD) status, were gathered from an electronic database. The brachial, dorsalis pedis, and posterior tibial SBPs were measured using a Doppler machine. Body mass index (BMI) and WHtR were calculated after measuring weight, height, and waist circumference. Linear regressions were performed between the 6 SBPs and both obesity parameters, after adjusting for covariates. Results - Overall, the left posterior tibial SBP (P=4.559*10⁻¹⁵) and right posterior tibial SBP (P=1.114*10⁻¹³) were the pressures most significantly associated with BMI, both in South Asians (P < 0.001) and in Europeans (P < 0.001). In South Asians, although the left (P=0.032) and right brachial SBPs (P=0.045) were associated with the WHtR, the left posterior tibial SBP (P=0.023) showed the strongest association. Conclusion - Regardless of ethnicity, ankle SBPs are more significantly associated with generalized obesity than brachial SBPs, suggesting their potential for screening for the early detection of T2DM and CVD. A combination of ankle SBPs with WHtR is proposed for South Asians.
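The covariate-adjusted regression step described above can be sketched with ordinary least squares: regress the obesity parameter on the SBP of interest plus covariates and read off the adjusted slope. The variables and synthetic data below are illustrative only, not the study's dataset or covariate set.

```python
# Hedged sketch of an adjusted linear regression: BMI on ankle SBP with age
# as a covariate, via least squares. Synthetic, illustrative data.
import numpy as np

rng = np.random.default_rng(2)
n = 300
age = rng.uniform(30, 75, n)
ankle_sbp = 100 + 0.3 * age + rng.normal(0, 8, n)           # mmHg
bmi = 18 + 0.05 * ankle_sbp + 0.02 * age + rng.normal(0, 1, n)

# Design matrix: intercept, predictor of interest, covariate (age).
X = np.column_stack([np.ones(n), ankle_sbp, age])
beta, *_ = np.linalg.lstsq(X, bmi, rcond=None)
slope_for_ankle_sbp = beta[1]   # age-adjusted association of ankle SBP with BMI
```

Comparing such adjusted slopes (and their P-values) across the six pressure sites is what identifies which SBP is most strongly associated with each obesity parameter.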

Keywords: ankle blood pressures, body mass index, insulin resistance, waist-to-height ratio

Procedia PDF Downloads 133
807 Water Supply and Demand Analysis for Ranchi City under Climate Change Using Water Evaluation and Planning System Model

Authors: Pappu Kumar, Ajai Singh, Anshuman Singh

Abstract:

Different water-user sectors - rural, urban, mining, subsistence and commercial irrigated agriculture, commercial forestry, industry, and power generation - are present in the catchment of the Subarnarekha River Basin and Ranchi city, and there is an inequity issue in access to water. The development of rural areas, the construction of new power generation plants, population growth, the need to meet unmet water demand, the consideration of environmental flows, and the revitalization of small-scale irrigation schemes are going to increase water demands in almost all water-stressed catchments. The WEAP model was developed by the Stockholm Environment Institute (SEI) to enable the evaluation of planning and management issues associated with water resources development. The WEAP model can be used for both urban and rural areas and can address a wide range of issues, including sectoral demand analyses, water conservation, water rights and allocation priorities, river flow simulation, reservoir operation, ecosystem requirements, and project cost-benefit analyses. The model is a tool for integrated water resource management and planning, e.g., forecasting water demand, supply, inflows, outflows, water use, reuse, water quality, priority areas, and hydropower generation. In the present study, efforts have been made to assess the utility of the WEAP model for water supply and demand analysis for Ranchi city. Detailed work was carried out to ascertain whether the WEAP model could be used for generating different scenarios of water requirements, which could help in future water planning. The water supplied to Ranchi city is mostly contributed by the study river, the Hatiya reservoir, and groundwater. Data were collected from various agencies, such as PHE Ranchi, the 2011 census, the Doranda reservoir, and the meteorology department, and the collected and generated data were given as input to the WEAP model.
The model generated trends for the discharge of the study river up to 2050 and, at the same time, generated scenarios calculating demand and supply for the future. The model outputs predict a water requirement of 12 million litres. The results will help in drafting future policies regarding water supply and demand under changing climatic scenarios.
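At its core, a WEAP demand scenario encodes back-of-the-envelope logic of this kind: demand = population × per-capita use, projected under assumed growth rates. The sketch below illustrates the arithmetic only; all figures (population, litres per capita per day, growth rate) are assumptions, not values from the study or from WEAP itself.

```python
# Illustrative demand-projection arithmetic of the kind a WEAP scenario
# encodes. All input figures below are assumed, not from the study.
def project_demand(pop0, lpcd, pop_growth, years):
    """Annual water demand (million litres/day) for each projection year."""
    out = []
    pop = pop0
    for _ in range(years):
        pop *= 1 + pop_growth
        out.append(pop * lpcd / 1e6)   # litres/day -> million litres/day
    return out

demands = project_demand(pop0=1_100_000, lpcd=135, pop_growth=0.02, years=5)
```

A real WEAP run layers sectoral demands, supply constraints, and allocation priorities on top of this basic projection.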

Keywords: WEAP model, water demand analysis, Ranchi, scenarios

Procedia PDF Downloads 414
806 Molecular Epidemiology of Anthrax in Georgia

Authors: N. G. Vepkhvadze, T. Enukidze

Abstract:

Anthrax is a fatal zoonotic disease of animals and humans caused by strains of Bacillus anthracis, a spore-forming, gram-positive bacillus, and is also well recognized as a potential agent of bioterrorism. Infection in humans is extremely rare in the developed world and is generally due to contact with infected animals or contaminated animal products. Testing for this zoonotic disease began in 1907 in Georgia and continues routinely, providing accurate information and efficient testing results at the State Laboratory of Agriculture of Georgia. Each clinical sample is analyzed by RT-PCR and bacteriological methods; this study used Real-Time PCR assays for the detection of B. anthracis that rely on plasmid-encoded targets together with a chromosomal marker to correctly differentiate pathogenic strains from non-anthracis Bacillus species. During the period 2015-2022, the State Laboratory of Agriculture (SLA) tested 250 clinical and environmental (soil) samples from several regions of Georgia. In total, 61 of the 250 samples were positive. Based on the results, anthrax cases are mostly present in Eastern Georgia, where livestock density is high, specifically in the regions of Kakheti and Kvemo Kartli. All laboratory activities are performed in accordance with international quality standards, adhering to biosafety and biosecurity rules, by qualified and experienced personnel handling pathogenic agents. Laboratory testing plays the largest role in diagnosing animals with anthrax, helping pertinent institutions quickly confirm a diagnosis and evaluate the epidemiological situation, which generates important data for further responses.
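The marker-interpretation logic described above can be sketched as a simple decision rule: a pathogenic call requires both a plasmid target and the chromosomal marker. pXO1 and pXO2 are the known B. anthracis virulence plasmids, but the exact targets and call wording used by the SLA assays are assumptions here, not taken from the study.

```python
# Hedged sketch of Real-Time PCR result interpretation for B. anthracis.
# Marker names (pXO1/pXO2, "chromosomal") are illustrative assumptions.
def interpret_pcr(markers):
    """markers: set of targets detected by Real-Time PCR for one sample."""
    if "chromosomal" in markers and {"pXO1", "pXO2"} & markers:
        # Plasmid target(s) plus chromosomal marker -> pathogenic strain.
        return "pathogenic B. anthracis"
    if "chromosomal" in markers:
        return "B. anthracis chromosome, plasmid targets not detected"
    return "non-anthracis Bacillus species"
```

The chromosomal marker is what prevents plasmid-carrying near-neighbour Bacillus species from being miscalled as B. anthracis.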

Keywords: animal disease, bacillus anthracis, edp, laboratory molecular diagnostics

Procedia PDF Downloads 76
805 Co-Creational Model for Blended Learning in a Flipped Classroom Environment Focusing on the Combination of Coding and Drone-Building

Authors: A. Schuchter, M. Promegger

Abstract:

The outbreak of the COVID-19 pandemic has shown us that online education is so much more than just a cool feature for teachers - it is an essential part of modern teaching. In online math teaching, it is common to use tools to share screens and to compute and calculate mathematical examples while the students watch the process. On the other hand, flipped classroom models are on the rise, with their focus on how students can gather knowledge by watching videos and on the teacher's use of technological tools for information transfer. This paper proposes a co-educational teaching approach for coding and engineering subjects with the help of drone-building to spark interest in technology and create a platform for knowledge transfer. The project combines aspects from mathematics (matrices, vectors, shaders, trigonometry), physics (force, pressure, and rotation) and coding (computational thinking, block-based programming, JavaScript and Python) and makes use of collaborative shared 3D modeling with clara.io, where students create mathematics know-how. The instructor follows a problem-based learning approach and encourages students to find solutions in their own time and in their own way, which will help them develop new skills intuitively and boost logically structured thinking. The collaborative aspect of working in groups will help the students develop communication skills as well as structural and computational thinking. Students are not just listeners, as in traditional classroom settings, but play an active part in creating content together by compiling a Handbook of Knowledge (called an “open book”) with examples and solutions. Before students start calculating, they have to write down all their ideas and working steps in full sentences so other students can easily follow their train of thought.
Students will therefore learn to formulate goals, solve problems, and create a ready-to-use product with the help of “reverse engineering”, cross-referencing, and creative thinking. The work on drones gives students the opportunity to create a real-life application with a practical purpose while going through all stages of product development.

Keywords: flipped classroom, co-creational education, coding, making, drones, co-education, ARCS-model, problem-based learning

Procedia PDF Downloads 110
804 Legal Study on the Construction of Olympic and Paralympic Soft Law about Manipulation of Sports Competition

Authors: Clemence Collon, Didier Poracchia

Abstract:

The manipulation of sports competitions is a new type of sports integrity problem. While the fight against doping has become an organized, institutionalized struggle, the fight against the manipulation of sports competitions is only gradually building up. This study aims to describe and understand how the soft Olympic and Paralympic law was gradually built. It also summarizes the legal tools for prevention, detection, and sanction developed by the international Olympic movement. It then analyzes the impact of this soft law on the law of the states, in particular on French law. This study is mainly based on an analysis of the existing legal literature and non-binding law in the international Olympic and Paralympic movement and the French National Olympic Committee. Interviews were carried out with experts from the Olympic movement or experts working on combating the manipulation of sports competitions; their answers are also used in this article. The International Olympic Committee has created a supranational legal basis to fight against the manipulation of sports competitions, which must be respected by sports organizations. The Olympic Charter, the Olympic Code of Ethics, the Olympic Movement Code on the Prevention of the Manipulation of Competitions, standards, basic universal principles, manuals, and declarations have been published in this perspective. This sports soft law has influences or repercussions in each state. Many states take this new form of integrity problem into account by creating state laws or measures supporting the fight against sports manipulation. France so far has a legal basis only for manipulation related to betting on sports competitions, through the offence of sports corruption included in the penal code, and has also created a national platform with various actors to combat this cheating.
This legal study highlights the progressive construction of the sports law rules of the Olympic movement in the fight against the manipulation of sports competitions linked to sports betting and their impact on the law of the states.

Keywords: integrity, law and ethics, manipulation of sports competitions, olympic, sports law

Procedia PDF Downloads 147
803 Scalable and Accurate Detection of Pathogens from Whole-Genome Shotgun Sequencing

Authors: Janos Juhasz, Sandor Pongor, Balazs Ligeti

Abstract:

Next-generation sequencing, especially whole-genome shotgun sequencing, is becoming a common approach for gaining insight into microbiomes in a culture-independent way, even in clinical practice. It not only gives us information about the species composition of an environmental sample but also opens the possibility of detecting antimicrobial resistance and novel, or currently unknown, pathogens. Accurately and reliably detecting microbial strains is a challenging task. Here we present a sensitive approach for detecting pathogens in metagenomics samples, with special regard to detecting novel variants of known pathogens. We have developed a pipeline that uses fast short-read aligners (i.e., Bowtie2/BWA) and comprehensive nucleotide databases. Taxonomic binning is based on the lowest common ancestor (LCA) principle: each read is assigned to the taxon covering its most significantly hit taxa. This approach helps in balancing sensitivity against running time. The program was tested on both experimental and synthetic data. The results indicate that our method performs as well as the state-of-the-art BLAST-based ones; furthermore, in some cases it even proves to be better, while running two orders of magnitude faster. It is sensitive and capable of identifying taxa present only in small abundance. Moreover, it needs two orders of magnitude fewer reads to complete the identification than MetaPhlAn2 does. We analyzed an experimental anthrax dataset (B. anthracis strain BA104). The majority of the reads (96.50%) were classified as Bacillus anthracis; a small portion, 1.2%, was classified as other species from the Bacillus genus. We demonstrate that the evaluation of high-throughput sequencing data is feasible in a reasonable time with good classification accuracy.
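The LCA principle mentioned above can be sketched in a few lines: given the root-to-leaf lineages of all taxa a read hits, the read is assigned to the deepest taxon shared by all of them. The taxon names below are illustrative, not the pipeline's actual database entries.

```python
# Minimal sketch of lowest-common-ancestor (LCA) read assignment.
def lca(lineages):
    """Return the deepest taxon shared by all lineages (root-to-leaf lists)."""
    common = []
    for level in zip(*lineages):
        if len(set(level)) == 1:   # all lineages agree at this rank
            common.append(level[0])
        else:
            break
    return common[-1] if common else None

# A read hitting two Bacillus species is assigned at the genus level:
hits = [
    ["Bacteria", "Firmicutes", "Bacillaceae", "Bacillus", "Bacillus anthracis"],
    ["Bacteria", "Firmicutes", "Bacillaceae", "Bacillus", "Bacillus cereus"],
]
assignment = lca(hits)
```

Assigning ambiguous reads higher up the tree in this way is what trades a little resolution for robustness against spurious species-level calls.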

Keywords: metagenomics, taxonomy binning, pathogens, microbiome, B. anthracis

Procedia PDF Downloads 124
802 Emotion Detection in Twitter Messages Using Combination of Long Short-Term Memory and Convolutional Deep Neural Networks

Authors: Bahareh Golchin, Nooshin Riahi

Abstract:

One of the most significant issues to have received much attention in recent years is recognizing the sentiments and emotions in social media texts. The analysis of sentiments and emotions is intended to recognize conceptual information such as the opinions, feelings, attitudes, and emotions of people towards products, services, organizations, people, topics, events, and features in written text. This indicates the breadth of the problem space. In the real world, businesses and organizations are always looking for tools to gather the ideas, emotions, and opinions of people about their products, services, or related events. This article uses the Twitter social network, one of the most popular social networks, with about 420 million active users, to extract data. Using this social network, users can share their information and opinions about personal issues, policies, products, events, etc. Thanks to the availability of its data, it is well suited to the classification of emotional states. In this study, supervised learning and deep neural network algorithms are used to classify the emotional states of Twitter users. The use of deep learning methods to increase the learning capacity of the model is an advantage, given the large amount of available data. Tweets collected on various topics are classified into four classes using a combination of two Bidirectional Long Short-Term Memory networks and a Convolutional network. The results of this study, with an average accuracy of 93%, demonstrate the effectiveness of the proposed framework and improved accuracy compared to previous work.

Keywords: emotion classification, sentiment analysis, social networks, deep neural networks

Procedia PDF Downloads 131
801 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data

Authors: Huinan Zhang, Wenjie Jiang

Abstract:

Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery data, and the research and utilization of the inner thermal core structure characteristics of tropical cyclones still pose challenges. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse- and fine-grained features from microwave brightness temperature data. The data used in this network are obtained from the thermal core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. First, the thermal core information in the pressure direction is comprehensively expressed through the maximum intensity projection (MIP) method, constructing coarse-grained thermal core images that represent the tropical cyclone. These images yield a coarse-grained wind speed estimate in the first stage. Then, based on this result, fine-grained features are extracted by combining thermal core information from multiple view profiles with a distributed network and fused with the coarse-grained features from the first stage to obtain the final two-stage wind speed estimate. Furthermore, to better capture the long-tail distribution characteristics of tropical cyclones, focal loss is used in the coarse-grained loss function of the first stage, and ordinal regression loss is adopted in the second stage to replace traditional single-value regression. The selected tropical cyclones span 2012 to 2021 and are distributed in the North Atlantic (NA) region. The training set covers 2012 to 2017, the validation set 2018 to 2019, and the test set 2020 to 2021.
Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper categorizes tropical cyclones into three major categories: pre-hurricane, minor hurricane, and major hurricane, achieving a classification accuracy of 86.18% and an intensity estimation error of 4.01 m/s for the NA region. The results indicate that thermal core data can effectively represent the level and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes with these data.
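The focal loss mentioned above down-weights well-classified examples so the long tail (rare, intense cyclones) dominates training. The numpy sketch below shows the binary form; the focusing parameter gamma = 2 is a common default and an assumption here, not the paper's reported setting.

```python
# Illustrative numpy sketch of binary focal loss: -(1 - p_t)^gamma * log(p_t).
import numpy as np

def focal_loss(p, y, gamma=2.0, eps=1e-9):
    """p: predicted probabilities of class 1; y: labels in {0, 1}."""
    p_t = np.where(y == 1, p, 1.0 - p)           # probability of the true class
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t + eps)))

# A confident correct prediction contributes far less than a hard one:
easy = focal_loss(np.array([0.95]), np.array([1]))
hard = focal_loss(np.array([0.30]), np.array([1]))
```

With gamma = 0 the expression reduces to ordinary cross-entropy; increasing gamma sharpens the down-weighting of easy examples.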

Keywords: artificial intelligence, deep learning, data mining, remote sensing

Procedia PDF Downloads 45
800 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data

Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L. Duan

Abstract:

The conditional density characterizes the distribution of a response variable y given a predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, the authors extend NF neural networks to the case where an external predictor x is present. Specifically, they use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zₚ, zₙ]. The zₚ component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zₙ component is a high-dimensional independent Gaussian vector, which explains the variation in y that is unrelated or only weakly related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework while significantly improving the interpretability of the latent component, since zₚ represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variation, due to factors such as lighting condition and subject identity, from the other random variation. Further, the experiments show that an unconditional NF neural network based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.
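The latent split at the heart of AP-CDE can be illustrated with a deliberately minimal sketch. Here the "flow" is just a fixed invertible linear map and the predictive model tying zₚ to x is a toy linear regression; both are stand-ins for the trained networks the abstract describes, and all numbers are hypothetical:

```python
import math

def gauss_logpdf(v, mean=0.0, var=1.0):
    """Log-density of a scalar Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (v - mean) ** 2 / var)

# Toy invertible "flow": a fixed lower-triangular linear map z = L @ y.
L = [[2.0, 0.0],
     [0.5, 1.0]]
LOG_DET_L = math.log(abs(L[0][0] * L[1][1]))  # triangular determinant

def conditional_logdensity(y, x, w=1.5):
    z_p = L[0][0] * y[0] + L[0][1] * y[1]   # supervised latent component
    z_n = L[1][0] * y[0] + L[1][1] * y[1]   # x-independent Gaussian component
    # z_p is tied to x through a toy linear predictive model mu(x) = w * x,
    # while z_n absorbs the variation in y unrelated to x. The conditional
    # log-density follows from the change-of-variables formula.
    return gauss_logpdf(z_p, mean=w * x) + gauss_logpdf(z_n) + LOG_DET_L

print(conditional_logdensity(y=[0.4, -0.2], x=0.5))
```

The real method replaces the linear map with a learned NF transform and the scalar mean model with the posterior of a logistic/linear regression, but the additive decomposition of the log-density is the same.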

Keywords: conditional density estimation, image generation, normalizing flow, supervised dimension reduction

Procedia PDF Downloads 84
799 Personalized Infectious Disease Risk Prediction System: A Knowledge Model

Authors: Retno A. Vinarti, Lucy M. Hederman

Abstract:

This research describes a knowledge model for a system which gives personalized alerts to users about infectious disease risks in the context of weather, location and time. The knowledge model is based on established epidemiological concepts augmented by information gleaned from infection-related data repositories. Existing disease risk prediction research has focused more on utilizing raw historical data to yield seasonal patterns of infectious disease risk emergence. This research incorporates both data and epidemiological concepts gathered from the Atlas of Human Infectious Disease (AHID) and the Centers for Disease Control and Prevention (CDC) as the basis for reasoning about infectious disease risk prediction. Using the CommonKADS methodology, the disease risk prediction task is modelled as an assignment synthetic task, proceeding from knowledge identification through specification and refinement to implementation. First, knowledge is gathered from the AHID, primarily from the epidemiology and risk group chapters for each infectious disease. The result of this stage is five major elements (Person, Infectious Disease, Weather, Location and Time) and their properties. At the knowledge specification stage, the initial tree model of each element and the detailed relationships are produced. This research also includes a validation step as part of knowledge refinement: on the basis that the best model is formed using the most common features, Frequency-based Selection (FBS) is applied. The portion of the Infectious Disease risk model relating to Person comes out strongest, with Location next, and Weather weaker. For the Person element, Age is strongest, Activity and Habits are moderate, and Blood type is weakest. For the Location element, the General category (e.g., continent, region, country, and island) comes out much stronger than the Specific category (i.e., terrain feature). For the Weather element, the Less Precise category (i.e., season) comes out stronger than the Precise category (i.e., an exact temperature or humidity interval). 
However, given that some infectious diseases are significantly more serious than others, a frequency-based metric may not be appropriate. Future work will incorporate epidemiological measurements of disease seriousness (e.g., odds ratio, hazard ratio and fatality rate) into the validation metrics. This research is limited to modelling existing knowledge about epidemiology and chain-of-infection concepts. A further step, verification in the knowledge refinement stage, might cause minor changes to the shape of the tree.
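As a minimal illustration, frequency-based selection over per-disease attribute sets can be sketched like this. The attribute names and the two-thirds threshold are hypothetical, not taken from the AHID/CDC models:

```python
from collections import Counter

def frequency_based_selection(models, threshold):
    """Keep the attributes that appear in at least `threshold` fraction
    of the per-disease knowledge models (the 'most common features')."""
    counts = Counter(attr for model in models for attr in set(model))
    n = len(models)
    return {attr for attr, c in counts.items() if c / n >= threshold}

# Hypothetical attribute sets gleaned from three disease descriptions.
disease_models = [
    {"age", "activity", "season", "region"},
    {"age", "blood_type", "region"},
    {"age", "activity", "region", "habits"},
]
print(frequency_based_selection(disease_models, threshold=2 / 3))
# age and region appear in all three models; activity in two of three
```

As the abstract notes, such a purely frequency-based criterion ignores disease seriousness, which is why the authors plan to weight it with epidemiological measures.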

Keywords: epidemiology, knowledge modelling, infectious disease, prediction, risk

Procedia PDF Downloads 229
798 Cas9-Assisted Direct Cloning and Refactoring of a Silent Biosynthetic Gene Cluster

Authors: Peng Hou

Abstract:

Natural products produced by marine bacteria serve as an immense reservoir of anti-infective drugs and therapeutic agents. Nowadays, heterologous expression of gene clusters of interest has been widely adopted as an effective strategy for natural product discovery. Briefly, the heterologous expression workflow is: biosynthetic gene cluster identification, pathway construction and expression, and product detection. However, gene cluster capture using the traditional transformation-associated recombination (TAR) protocol is inefficient (0.5% positive colony rate). To make things worse, most of these putative new natural products are only predicted by bioinformatics analysis such as antiSMASH, and their corresponding biosynthetic pathways are either not expressed or expressed at very low levels under laboratory conditions. Those setbacks have inspired us to seek new technologies to efficiently edit and refactor biosynthetic gene clusters. Recently, two cutting-edge techniques have attracted our attention: CRISPR-Cas9 and Gibson Assembly. So far, we have pretreated Brevibacillus laterosporus genomic DNA with CRISPR-Cas9 nucleases that specifically generate breaks near the gene cluster of interest. This trial resulted in an increase in the efficiency of gene cluster capture (9%). Moreover, using Gibson Assembly to add/delete certain operons and tailoring enzymes regardless of end compatibility, the silent construct (~80 kb) has been successfully refactored into an active one, yielding a series of expected analogs. With the appearance of these novel molecular tools, we are confident that developing a mature high-throughput pipeline for DNA assembly, transformation, product isolation and identification is no longer a daydream for marine natural product discovery.

Keywords: biosynthesis, CRISPR-Cas9, DNA assembly, refactor, TAR cloning

Procedia PDF Downloads 267
797 Receptor-Independent Effects of Endocannabinoid Anandamide on Contractility and Electrophysiological Properties of Rat Ventricular Myocytes

Authors: Lina T. Al Kury, Oleg I. Voitychuk, Ramiz M. Ali, Sehamuddin Galadari, Keun-Hang Susan Yang, Frank Christopher Howarth, Yaroslav M. Shuba, Murat Oz

Abstract:

A role for anandamide (N-arachidonoyl ethanolamide; AEA), a major endocannabinoid, in the cardiovascular system under various pathological conditions has been reported in earlier studies. In the present work, we hypothesized that the antiarrhythmic effects reported for AEA are due to its negative inotropic effect and altered action potential (AP) characteristics. Therefore, we tested the effects of AEA on the contractility and electrophysiological properties of rat ventricular myocytes. Video edge detection was used to measure myocyte shortening. Intracellular Ca2+ was measured in cells loaded with the fluorescent indicator fura-2 AM. The whole-cell patch-clamp technique was employed to investigate the effect of AEA on the characteristics of APs. AEA (1 μM) caused a significant decrease in the amplitudes of electrically evoked myocyte shortening and Ca2+ transients and significantly decreased the duration of the AP. The effect of AEA on myocyte shortening and AP characteristics was not altered in the presence of pertussis toxin (PTX, 2 µg/ml for 4 h), AM251 and SR141716 (cannabinoid type 1 receptor antagonists), or AM630 and SR144528 (cannabinoid type 2 receptor antagonists). Furthermore, AEA inhibited voltage-activated inward Na+ (INa) and Ca2+ (IL,Ca) currents, the major ionic currents shaping APs in ventricular myocytes, in a voltage- and PTX-independent manner. Collectively, the results suggest that AEA depresses ventricular myocyte contractility by decreasing the action potential duration (APD) and inhibits the function of voltage-dependent Na+ and L-type Ca2+ channels in a manner independent of cannabinoid receptors. This mechanism may be importantly involved in the antiarrhythmic effects of anandamide.

Keywords: action potential, anandamide, cannabinoid receptor, endocannabinoid, ventricular myocytes

Procedia PDF Downloads 347
796 Sensor and Actuator Fault Detection in Connected Vehicles under a Packet Dropping Network

Authors: Z. Abdollahi Biron, P. Pisu

Abstract:

Connected vehicles are one of the promising technologies for future Intelligent Transportation Systems (ITS). A connected vehicle system is essentially a set of vehicles communicating through a network to exchange their information with each other and the infrastructure. Although this interconnection of vehicles can be potentially beneficial in creating an efficient, sustainable, and green transportation system, a set of safety and reliability challenges accompanies this technology. The first challenge arises from information loss due to an unreliable communication network, which affects the control/management systems of the individual vehicles and of the overall system. Such a scenario may lead to degraded or even unsafe operation, which could be potentially catastrophic. Secondly, faulty sensors and actuators can affect the individual vehicle's safe operation and in turn create a potentially unsafe node in the vehicular network. Further, sending faulty sensor information to other vehicles and failures in actuators may significantly affect the safe operation of the overall vehicular network. Therefore, it is of utmost importance to take these issues into consideration while designing the control/management algorithms of the individual vehicles as part of a connected vehicle system. In this paper, we consider a connected vehicle system under Co-operative Adaptive Cruise Control (CACC) and propose a fault diagnosis scheme that addresses the aforementioned challenges. Specifically, the conventional CACC algorithm is modified by adding a Kalman filter-based estimation algorithm to suppress the effect of lost information under an unreliable network. Further, a sliding mode observer-based algorithm is used to improve sensor reliability under faults. The effectiveness of the overall diagnostic scheme is verified via simulation studies.
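A minimal sketch of the packet-drop handling idea: a one-dimensional Kalman filter that skips the measurement update whenever a packet is lost, so the estimate coasts on the model while its uncertainty grows. The scalar constant-state model, noise values, and spacing measurements below are illustrative assumptions, not the paper's CACC formulation:

```python
def kalman_with_dropouts(measurements, q=0.01, r=0.25):
    """1-D constant-state Kalman filter under packet drops: when a
    measurement is None (dropped packet), run the prediction step only."""
    x, p = 0.0, 1.0          # state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q            # predict: uncertainty grows by process noise
        if z is not None:    # update only when the packet arrived
            k = p / (p + r)  # Kalman gain
            x = x + k * (z - x)
            p = (1 - k) * p
        estimates.append(x)
    return estimates

# Spacing measurements from a preceding vehicle; None marks dropped packets.
est = kalman_with_dropouts([10.2, None, 9.9, None, None, 10.1])
```

During drops the estimate stays at its last value with inflated variance, so the next received packet is weighted more heavily, which is the coasting behaviour the modified CACC relies on.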

Keywords: fault diagnostics, communication network, connected vehicles, packet drop out, platoon

Procedia PDF Downloads 230
795 Interdisciplinary Method Development - A Way to Realize the Full Potential of Textile Resources

Authors: Nynne Nørup, Julie Helles Eriksen, Rikke M. Moalem, Else Skjold

Abstract:

Despite a growing focus on the high environmental impact of textiles, textile waste has only recently been considered part of the waste field. Consequently, there is a general lack of knowledge and data within this field. In particular, the lack of a common perception of textiles generates several problems, e.g., a failure to recognize the full material potential the fraction contains, which is crucial if textiles are to enter the circular economy. This study aims to qualify a method to make the resources in textile waste visible in a way that makes it possible to move them as high up the waste hierarchy as possible. Textiles are complex and cover many different types of products, fibers and combinations of fibers and production methods. In garments alone there is great variety, even when narrowing the scope to undergarments only. However, textile waste is often reduced to one fraction, assessed solely by quantity, and compared to quantities of other waste fractions. Disregarding this complexity and reducing textiles to a single fraction that covers everything made of textiles increases the risk of neglecting the value of the materials, both with regard to their properties and their economic value. Instead of trying to fit textile waste into the current, primarily linear waste system, where volume is a key part of the business models, this study focused on integrating textile waste as a resource in the design and production phase. The study combined interdisciplinary methods, namely the determination of replacement rates as used in Life Cycle Assessment and Mass Flow Analysis, with the designer's toolbox to activate the properties of textile waste in a way that can unleash its potential optimally. It was hypothesized that by drawing on Denmark's tradition of design and high level of craftsmanship, it is possible to find solutions that can be used today and create circular resource models that reduce the use of virgin fibers. 
Through waste samples, case studies, and testing of various design approaches, this study explored how to operationalize the method so that a product, after its end-use, is first kept as a material and only then processed at the fiber level to obtain the best environmental utilization. The study showed that the designers' ability to decode the properties of the materials and their understanding of craftsmanship were decisive for how well the materials could be utilized today. The later in the life cycle the textiles appeared as waste, the more demanding it became to describe the materials sufficiently, especially in order to achieve the best possible use of the resources and thus a higher replacement rate. In addition, it also required adaptation of current production because the materials often varied more. The study found good indications that part of the solution is to use geodata, i.e., where in the life cycle the materials were discarded. An important conclusion is that a fully developed method can help support better utilization of textile resources. However, it still requires a better understanding of materials by designers, as well as structural changes in business and society.

Keywords: circular economy, development of sustainable processes, environmental impacts, environmental management of textiles, environmental sustainability through textile recycling, interdisciplinary method development, resource optimization, recycled textile materials and the evaluation of recycling, sustainability and recycling opportunities in the textile and apparel sector

Procedia PDF Downloads 85
794 Detection and Expression of Peroxidase Genes in Trichoderma harzianum KY488466 and Its Response to Crude Oil Degradation

Authors: Michael Dare Asemoloye, Segun Gbolagade Jonathan, Rafiq Ahmad, Odunayo Joseph Olawuyi, D. O. Adejoye

Abstract:

Fungi have the potential to degrade hydrocarbons through the secretion of different enzymes. Crude oil tolerance and degradation by Trichoderma harzianum were investigated in this study, along with its ability to produce peroxidase enzymes (LiP and MnP). Many fungal strains were isolated from the rhizosphere of grasses growing on a crude oil spilled site, and the most frequent strain based on percentage incidence was further characterized using morphological and molecular characteristics. Molecular characterization was done through the amplification of the ribosomal RNA regions of 18S (1609-1627) and 28S (287-266) using the ITS1 and ITS4 primer combination, and the strain was identified using the NCBI BLAST tool. The selected fungus was also subjected to an in vitro tolerance test at crude oil concentrations of 5, 10, 15, 20 and 25%, while 0% served as the control. In addition, lignin peroxidase genes (lig1-6) and the manganese peroxidase gene (mnp) were detected and expressed in this strain using the RT-PCR technique, and its peroxidase-producing activity was also studied in aliquots (U/ml). This strain had the highest incidence of 80% and was registered in NCBI as Trichoderma harzianum asemoJ KY488466. The strain KY488466 responded to increasing crude oil concentration: the dose inhibition response percentage (DIRP) increased from 41.67 to 95.41 as the concentration rose from 5 to 25%. All the peroxidase genes are present in KY488466 and were expressed, with amplicons of 900-1000 bp obtained through the RT-PCR technique. In this strain, the lig2, lig4 and mnp genes were over-expressed, lig6 was moderately expressed, while none of the genes was under-expressed. The strain also produced 90±0.87 U/ml lignin peroxidase and 120±1.23 U/ml manganese peroxidase in aliquots. These results imply that KY488466 can tolerate and survive high crude oil concentrations and could be exploited for the bioremediation of oil-spilled soils; the produced peroxidase enzymes could also be exploited for other biotechnological experiments.
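A dose inhibition response percentage of this kind is commonly computed as the relative reduction in growth versus an untreated control. The formula and the growth figures below are assumptions for illustration, not values reported by the authors; the figures are merely chosen so that the first dose reproduces the abstract's DIRP of 41.67:

```python
def dose_inhibition_response_percentage(control_growth, treated_growth):
    """Percentage reduction in fungal growth relative to the untreated
    control; assumes DIRP is defined as relative growth inhibition."""
    return (control_growth - treated_growth) / control_growth * 100.0

# Hypothetical radial growth (mm) for the control and two crude oil doses.
control = 72.0
print(round(dose_inhibition_response_percentage(control, 42.0), 2))  # 41.67
print(round(dose_inhibition_response_percentage(control, 3.3), 2))
```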

Keywords: crude oil, enzymes, expression, peroxidase genes, tolerance, Trichoderma harzianum

Procedia PDF Downloads 214
793 On Grammatical Metaphors: A Corpus-Based Reflection on the Academic Texts Written in the Field of Environmental Management

Authors: Masoomeh Estaji, Ahdie Tahamtani

Abstract:

Considering the necessity of conducting research and publishing academic papers during Master's and Ph.D. programs, graduate students are in dire need of improving their writing skills through either writing courses or self-study planning. One key feature that can make academic papers look more sophisticated is the application of grammatical metaphors (GMs). These types of metaphors represent the 'non-congruent' and 'implicit' ways of encoding meaning, through which one grammatical category is replaced by another, more implicit counterpart, which can alter the readers' understanding of the text as well. Although a number of studies have been conducted on the application of GMs across various disciplines, almost none has been devoted to the field of environmental management, and the scope of the previous studies has been relatively limited compared to the present work. In the current study, attempts were made to analyze the different types of GMs used in academic papers published in top-tier journals in the field of environmental management and to compile a list of the most frequently used GMs based on their functions in this particular discipline, so as to make the teaching of academic writing courses more explicit and the composition of academic texts better structured. To fulfill these purposes, a corpus-based analysis based on the two theoretical models of Martin et al. (1997) and Liardet (2014) was run. Through two stages of manual analysis and concordance software, ten recent academic articles comprising 132,490 words published in two prestigious journals were scrutinized. The results showed that, across the IMRaD sections of the articles, material processes were the most frequent type of ideational GM, with the relational and mental categories ranking second and third, respectively. Regarding the use of interpersonal GMs, objective expanding metaphors were the most numerous. 
In contrast, subjective interpersonal metaphors, whether expanding or contracting, were the least significant. This would suggest that scholars in the field of environmental management tend to focus on the main procedures and explain technical phenomena in detail, rather than compare and contrast other statements and subjective beliefs. Moreover, since no instances of verbal ideational metaphors were detected, it could be deduced that the act of 'saying or articulating' something might be against the standards of the academic genre. Another assumption would be that the application of ideational GMs is context-embedded and that the more technical they are, the less frequent they become. For further studies, it is suggested that the employment of GMs be studied across a wider scope and in other disciplines, and that the third type of GM, known as 'textual' metaphors, be included as well.

Keywords: English for specific purposes, grammatical metaphor, academic texts, corpus-based analysis

Procedia PDF Downloads 159
792 Cross-Sectional Study of Critical Parameters on RSET and Decision-Making of At-Risk Groups in Fire Evacuation

Authors: Naser Kazemi Eilaki, Ilona Heldal, Carolyn Ahmer, Bjarne Christian Hagen

Abstract:

Elderly people and people with disabilities are recognized as at-risk groups when it comes to egress and travel from a hazard zone to a safe place. A disability can negatively influence a person's escape time, and this becomes even more important when people from this target group live alone. While earlier studies have frequently addressed quantitative measurements regarding at-risk groups' physical characteristics (e.g., their speed of travel), this paper considers the influence of at-risk groups' characteristics on their decisions and on determining better escape routes. Most evacuation models are based on mapping people's movement and behaviour onto summed times for common activity types on a timeline. Usually, timeline models estimate the required safe egress time (RSET) as a sum of four timespans: detection, alarm, premovement, and movement time, and compare this with the available safe egress time (ASET) to determine what is influencing the margin of safety. This paper presents a cross-sectional study for identifying the most critical items affecting RSET and people's decision-making, with possibilities to include safety knowledge regarding people with physical or cognitive functional impairments. The results will contribute to increased knowledge on considering at-risk groups and disabilities when designing and developing safe escape routes. The expected results can be an asset in predicting the probabilistic behavioural patterns of at-risk groups and in defining the components of a framework for understanding how stakeholders can consider various disabilities when determining the margin of safety for a safe escape route.
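The RSET timeline sum and its comparison with ASET can be sketched directly; the span durations below are hypothetical numbers for illustration (in a real analysis, premovement and movement times would be adjusted for the at-risk group under study):

```python
def required_safe_egress_time(detection, alarm, premovement, movement):
    """RSET as the timeline sum of its four spans (all in seconds)."""
    return detection + alarm + premovement + movement

def safety_margin(aset, rset):
    """Positive margin means occupants clear the hazard zone in time."""
    return aset - rset

# Hypothetical spans for an occupant with reduced mobility: longer
# premovement and movement times shrink the margin for the same ASET.
rset = required_safe_egress_time(detection=30, alarm=15,
                                 premovement=120, movement=180)
margin = safety_margin(aset=420, rset=rset)
print(rset, margin)  # 345 75
```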

Keywords: fire safety, evacuation, decision-making, at-risk groups

Procedia PDF Downloads 92
791 Complaint Management Mechanism: A Workplace Solution in Development Sector of Bangladesh

Authors: Nusrat Zabeen Islam

Abstract:

Partnership between local non-government organizations (NGOs) and international development organizations has become an important feature of the development sector of Bangladesh. It is an important challenge for international development organizations to work with local NGOs that maintain proper HR practice. Local NGOs often lack a quality working environment, and this affects employees' work experiences and overall performance at the individual, partnership and organizational levels. Many local development organizations, due to their size and scope, do not have a human resource (HR) unit. Inadequate human resource policies, skills and leadership, and a lack of effective strategy, are now a common scenario in the non-government organization sector of Bangladesh. As a result, corruption, nepotism, fraud, the risk of political contributions in the office/workspace, sexual and gender-based abuse, and insecurity occur in development-sector workplaces. A Complaint Management Mechanism (CMM) in human resource management could be one way to improve human resource competence in these organizations. The responsibility of the Complaint Management Unit (CMU) of an international development organization is to keep the workplace free of maltreatment and discrimination. Information on the impact of the CMM was collected through a case study of an international organization and some of its partner national organizations in Bangladesh that are engaged in different projects/programs. In this mechanism, the international development organization collects complaints from beneficiaries and staff through the complaint management unit, investigates them by segregating the type and nature of the complaint, and finds a solution to improve the situation within a very short period. A complaint management committee is formed jointly with HR and management personnel. Concerned focal points collect complaints and share them with the CM unit. 
Through investigation, review of findings, replies back to the CM unit, and implementation of resolutions, this mechanism establishes a successful bridge of communication and feedback among beneficiaries, staff and upper management. The overall results of applying the complaint management mechanism indicate that the CMM can significantly increase the accountability and transparency of the workplace and workforce in development organizations. Evaluations based on outcomes, measuring indicators such as productivity, satisfaction, retention, gender equity and proper judgment, will guide organizations in building a healthy workforce and will also clearly articulate the return on investment and justify any need for further funding.

Keywords: human resource management in NGOs, challenges in human resource, workplace environment, complaint management mechanism

Procedia PDF Downloads 313
790 Interpersonal Variation of Salivary Microbiota Using Denaturing Gradient Gel Electrophoresis

Authors: Manjula Weerasekera, Chris Sissons, Lisa Wong, Sally Anderson, Ann Holmes, Richard Cannon

Abstract:

The aim of this study was to characterize the bacterial populations and yeasts in saliva by polymerase chain reaction followed by denaturing gradient gel electrophoresis (PCR-DGGE) and to measure yeast levels by culture. PCR-DGGE was performed to identify oral bacteria and yeasts in 24 saliva samples. DNA was extracted and used to generate DNA amplicons of the V2-V3 hypervariable region of the bacterial 16S rDNA gene using PCR. Universal primers targeting the large-subunit rDNA gene (25S-28S) of fungi were further used to amplify the yeasts present in human saliva. The resulting PCR products were subjected to denaturing gradient gel electrophoresis using a universal mutation detection system. DGGE bands were extracted and sequenced using the Sanger method. A potential relationship was evaluated between groups of bacteria, identified by cluster analysis of DGGE fingerprints, and the yeast levels and diversity. Significant interpersonal variation of the salivary microbiome was observed. Cluster and principal component analysis of the bacterial DGGE patterns yielded three significant major clusters, plus outliers. Seventeen of the 24 (71%) saliva samples were yeast-positive, with counts of up to 10³ cfu/mL. Predominantly C. albicans, and six other species of yeast, were detected. The presence, amount and species of yeast showed no clear relationship to the bacterial clusters. The microbial community in saliva showed significant variation between individuals. The lack of association between yeasts and the bacterial fingerprints in saliva suggests significant ecological, person-specific independence in highly complex oral biofilm systems under normal oral conditions.

Keywords: bacteria, denaturing gradient gel electrophoresis, oral biofilm, yeasts

Procedia PDF Downloads 215
789 Learning with Music: The Effects of Musical Tension on Long-Term Declarative Memory Formation

Authors: Nawras Kurzom, Avi Mendelsohn

Abstract:

The effects of background music on learning and memory are inconsistent, partly due to the intrinsic complexity and variety of music and partly to individual differences in music perception and preference. A prominent musical feature that is known to elicit strong emotional responses is musical tension. Musical tension can be brought about by building anticipation of rhythm, harmony, melody, and dynamics. Delaying the resolution of dominant-to-tonic chord progressions, as well as using dissonant harmonics, can elicit feelings of tension, which can, in turn, affect memory formation of concomitant information. The aim of the presented studies was to explore how forming declarative memory is influenced by musical tension, brought about within continuous music as well as in the form of isolated chords with varying degrees of dissonance/consonance. The effects of musical tension on long-term memory of declarative information were studied in two ways: 1) by evoking tension within continuous music pieces by delaying the release of harmonic progressions from dominant to tonic chords, and 2) by using isolated single complex chords with various degrees of dissonance/roughness. Musical tension was validated through subjective reports of tension, as well as physiological measurements of skin conductance response (SCR) and pupil dilation responses to the chords. In addition, music information retrieval (MIR) was used to quantify musical properties associated with tension and its release. Each experiment included an encoding phase, wherein individuals studied stimuli (words or images) with different musical conditions. Memory for the studied stimuli was tested 24 hours later via recognition tasks. In three separate experiments, we found positive relationships between tension perception and physiological measurements of SCR and pupil dilation. As for memory performance, we found that background music, in general, led to superior memory performance as compared to silence. 
We detected a trade-off effect between tension perception and memory, such that individuals who perceived musical tension as such displayed reduced memory performance for images encoded during musical tension, whereas tense music benefited memory in those who were less sensitive to the perception of musical tension. Musical tension thus interacts in complex ways with perception, emotional responses, and cognitive performance in individuals with and without musical training. Delineating the conditions and mechanisms that underlie the interactions between musical tension and memory can benefit our understanding of musical perception at large and of the diverse effects that music has on the ongoing processing of declarative information.

Keywords: musical tension, declarative memory, learning and memory, musical perception

Procedia PDF Downloads 89
788 Effect of the Diverse Standardized Patient Simulation Cultural Competence Education Strategy on Nursing Students' Transcultural Self-Efficacy Perceptions

Authors: Eda Ozkara San

Abstract:

Nurse educators have been charged by several nursing organizations and accrediting bodies to provide innovative and evidence-based educational experiences, both didactic and clinical, to help students develop the knowledge, skills, and attitudes needed to provide culturally competent nursing care. Clinical simulation offers the opportunity for students to practice nursing skills in a risk-free, controlled environment and helps develop self-efficacy (confidence) within the nursing role. As one simulation method, standardized patient (SP) simulation helps educators teach students a variety of skills in nursing, medicine, and other health professions. It can be a helpful tool for nurse educators to enhance the cultural competence of nursing students. An alarming gap exists in the literature concerning the effectiveness of the SP strategy in enhancing the cultural competence development of diverse student groups, who must work with patients from various backgrounds. This grant-supported, longitudinal, one-group, pretest and post-test educational intervention study aimed to examine the effect of the Diverse Standardized Patient Simulation (DSPS) cultural competence education strategy on students' (n = 53) transcultural self-efficacy (TSE). The researcher-developed, multidimensional DSPS strategy involved the careful integration of transcultural nursing skills guided by the Cultural Competence and Confidence (CCC) model. As a carefully orchestrated teaching and learning strategy specifically utilizing SP pedagogy, the DSPS also followed international guidelines and standards for design, implementation, evaluation, and SP training, and underwent content validity review. 
The DSPS strategy involved two simulation scenarios targeting underrepresented patient populations (a Muslim immigrant woman with limited English proficiency, and an Irish-Italian American gay man with his Puerto Rican partner) and was utilized in a second-semester, nine-credit, 15-week medical-surgical nursing course at an urban public US university. Five doctorally prepared content experts reviewed the DSPS strategy for content validity; item-level content validity index (I-CVI) scores on the evaluation forms ranged from .80 to 1.0. Jeffreys’ Transcultural Self-Efficacy Tool (TSET) was administered as a pretest and post-test to assess changes in the cognitive, practical, and affective dimensions of students’ TSE. The results support that the DSPS cultural competence education strategy helped students develop cultural competence and produced statistically significant increases in students’ TSE perceptions. The results also support that all students, regardless of their background, benefit from (and require) well-designed cultural competence education strategies. The multidimensional DSPS strategy was found to be an effective way to foster nursing students’ cultural competence development, and its step-by-step description allows easy adaptation of the strategy to different student populations and settings.

Keywords: cultural competence development, the cultural competence and confidence model, CCC model, educational intervention, transcultural self-efficacy, TSE, transcultural self-efficacy tool, TSET

Procedia PDF Downloads 141
787 Automated Transformation of 3D Point Cloud to BIM Model: Leveraging Algorithmic Modeling for Efficient Reconstruction

Authors: Radul Shishkov, Orlin Davchev

Abstract:

The digital era has revolutionized architectural practices, with building information modeling (BIM) emerging as a pivotal tool for architects, engineers, and construction professionals. However, the transition from traditional methods to BIM-centric approaches poses significant challenges, particularly in the context of existing structures. This research introduces a technical approach to bridge this gap through the development of algorithms that facilitate the automated transformation of 3D point cloud data into detailed BIM models. The core of this research lies in the application of algorithmic modeling and computational design methods to interpret and reconstruct point cloud data (a collection of data points in space, typically produced by 3D scanners) into comprehensive BIM models. This process involves complex stages of data cleaning, feature extraction, and geometric reconstruction, which are traditionally time-consuming and prone to human error. By automating these stages, our approach significantly enhances the efficiency and accuracy of creating BIM models for existing buildings. The proposed algorithms are designed to identify key architectural elements within point clouds, such as walls, windows, doors, and other structural components, and to translate these elements into their corresponding BIM representations. This includes the integration of parametric modeling techniques to ensure that the generated BIM models are not only geometrically accurate but also embedded with essential architectural and structural information. Our methodology has been tested on several real-world case studies, demonstrating its capability to handle diverse architectural styles and complexities. The results showcase a substantial reduction in time and resources required for BIM model generation while maintaining high levels of accuracy and detail. 
This research contributes significantly to the field of architectural technology by providing a scalable and efficient solution for the integration of existing structures into the BIM framework. It paves the way for more seamless and integrated workflows in renovation and heritage conservation projects, where the accuracy of existing conditions plays a critical role. The implications of this study extend beyond architectural practices, offering potential benefits in urban planning, facility management, and historic preservation.
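Detecting planar elements such as walls and floors in a point cloud is commonly built on robust plane fitting. As a minimal illustrative sketch of that step (not the authors' implementation; `n_iters`, `tol`, and `seed` are hypothetical parameters), a RANSAC plane fit could look like:

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.02, seed=0):
    """Fit the dominant plane (e.g. a wall or floor) in a point cloud.

    Returns ((unit_normal, d), inlier_mask) for the plane n.x + d = 0
    that has the most points within distance `tol`.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Hypothesise a plane from 3 random points.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:  # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        # Count points close to the hypothesised plane.
        inliers = np.abs(points @ normal + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (normal, d), inliers
    return best_model, best_inliers
```

In a full pipeline, the dominant plane would be removed and the fit repeated to peel off successive walls, floors, and ceilings before classifying openings such as doors and windows.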

Keywords: BIM, 3D point cloud, algorithmic modeling, computational design, architectural reconstruction

Procedia PDF Downloads 44
786 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: Gaelle Candel, David Naccache

Abstract:

t-SNE is an embedding method widely used by the data science community. It serves two main tasks: displaying results by coloring items according to their class or feature value, and forensics, where it gives a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, in which all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the area of a cluster is proportional to its size in number, and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped to exactly the same position, making them indistinguishable, and such a model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, each time using the newly obtained embedding as the support. 
The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. The method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing observation of the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, empowering the monitoring of high-dimensional datasets’ dynamics.
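As an illustration of the support-embedding idea (a sketch under our own assumptions, not the authors' code), each new point can be initialised at the mean low-dimensional position of its nearest high-dimensional neighbours in the previous subset, so clusters reappear where they were before:

```python
import numpy as np

def support_init(X_new, X_prev, Y_prev, k=5):
    """Initialise an embedding for X_new from a support embedding.

    X_prev are the high-dimensional points already embedded at Y_prev.
    Each new point starts at the mean embedding position of its k
    nearest neighbours in high-dimensional space, which keeps cluster
    positions coherent between successive embeddings.
    """
    Y0 = np.empty((len(X_new), Y_prev.shape[1]))
    for i, x in enumerate(X_new):
        dists = np.linalg.norm(X_prev - x, axis=1)
        nearest = np.argsort(dists)[:k]
        Y0[i] = Y_prev[nearest].mean(axis=0)
    return Y0
```

The returned array could then be handed to an off-the-shelf optimizer, e.g. scikit-learn's `TSNE(init=Y0)`, which accepts an ndarray initialisation; the paper's actual cost function additionally penalises drift away from the support embedding during optimization.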

Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning

Procedia PDF Downloads 135
785 Synthesis of Pd@Cu Core−Shell Nanowires by Galvanic Displacement of Cu by Pd²⁺ Ions as a Modified Glassy Carbon Electrode for the Simultaneous Determination and Speciation of Dihydroxybenzene Isomers

Authors: Majid Farsadrouh Rashti, Parisa Jahani, Amir Shafiee, Mehrdad Mofidi

Abstract:

The dihydroxybenzene isomers hydroquinone (HQ), catechol (CC), and resorcinol (RS) have been widely recognized as important environmental pollutants due to their toxicity and low degradability in the ecological environment. Speciation of HQ, CC, and RS is very important for environmental analysis because these isomers co-exist in environmental samples and are difficult to degrade. Many analytical methods have been reported for detecting these isomers, such as spectrophotometry, fluorescence, high-performance liquid chromatography (HPLC), and electrochemical methods. Among these, electrochemical methods offer attractive advantages such as simple and fast response, low maintenance costs, a wide linear analysis range, high efficiency, excellent selectivity, and high sensitivity. A novel glassy carbon electrode (GCE) modified with Pd@Cu/CNT core−shell nanowires for the simultaneous determination of HQ, CC, and RS is described. A detailed investigation by field emission scanning electron microscopy and electrochemistry was performed in order to elucidate the preparation process and properties of the GCE/Pd/CuNWs-CNTs. The electrochemical response characteristics of the modified electrode toward HQ, CC, and RS were investigated by cyclic voltammetry, differential pulse voltammetry (DPV), and chronoamperometry. Under optimum conditions, the calibration curves were linear up to 228 µM for each isomer, with detection limits of 0.4, 0.6, and 0.8 µM for HQ, CC, and RS, respectively. The diffusion coefficients for the oxidation of HQ, CC, and RS at the modified electrode were calculated as 6.5×10⁻⁵, 1.6×10⁻⁵, and 8.5×10⁻⁵ cm² s⁻¹, respectively. DPV was used for the simultaneous determination of HQ, CC, and RS at the modified electrode, and the relative standard deviations were 2.1%, 1.9%, and 1.7% for HQ, CC, and RS, respectively. 
Moreover, the GCE/Pd/CuNWs-CNTs electrode was successfully used for the determination of HQ, CC, and RS in real samples.
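Diffusion coefficients like those reported above are conventionally extracted from chronoamperometry via the Cottrell equation, i = nFAc·sqrt(D/(πt)). A minimal sketch of that calculation (with illustrative electrode parameters, not the values used in this study) could be:

```python
import numpy as np

F = 96485.0  # Faraday constant, C/mol

def diffusion_coefficient(t, i, n, A, c):
    """Estimate D (cm^2/s) from chronoamperometric data.

    Cottrell equation: i = n*F*A*c*sqrt(D/(pi*t)), so the slope of
    current vs. t^(-1/2) equals n*F*A*c*sqrt(D/pi); solving for D
    gives (slope*sqrt(pi)/(n*F*A*c))**2.
    """
    slope = np.polyfit(t ** -0.5, i, 1)[0]
    return (slope * np.sqrt(np.pi) / (n * F * A * c)) ** 2
```

Here `t` is the time array (s), `i` the measured current (A), `n` the electron count, `A` the electrode area (cm²), and `c` the bulk concentration (mol/cm³); all consistent CGS-style units as the Cottrell equation assumes.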

Keywords: dihydroxybenzene isomers, galvanized copper nanowires, electrochemical sensor, palladium, speciation

Procedia PDF Downloads 124
784 Comparative Electrochemical Studies of Enzyme-Based and Enzyme-less Graphene Oxide-Based Nanocomposite as Glucose Biosensor

Authors: Chetna Tyagi, G. B. V. S. Lakshmi, Ambuj Tripathi, D. K. Avasthi

Abstract:

Graphene oxide provides a good host matrix for preparing nanocomposites due to the different functional groups attached to its edges and planes, and, being biocompatible, it is used in therapeutic applications. Because enzyme-based biosensors require a complicated enzyme purification procedure, high fabrication costs, and special storage conditions, enzyme-less biosensors are needed for use even in harsh environments, such as high temperature or varying pH. In this work, we prepared both enzyme-based and enzyme-less graphene oxide-based biosensors for glucose detection, using glucose oxidase as the enzyme and gold nanoparticles, respectively. The samples were characterized using X-ray diffraction, UV-visible spectroscopy, scanning electron microscopy, and transmission electron microscopy to confirm the successful synthesis of the working electrodes. Electrochemical measurements were performed for both working electrodes using a 3-electrode electrochemical cell. Cyclic voltammetry curves showed homogeneous electron transfer on the electrodes in the scan range from -0.2 V to 0.6 V. Sensing measurements were performed using differential pulse voltammetry for glucose concentrations varying from 0.01 mM to 20 mM, and sensing towards glucose improved in the presence of gold nanoparticles. Gold nanoparticles in the graphene oxide nanocomposite played an important role in sensing glucose in the absence of the enzyme glucose oxidase, as evident from these measurements. Selectivity was tested by measuring the current response of the working electrode towards glucose in the presence of common interfering agents such as cholesterol, ascorbic acid, citric acid, and urea. The enzyme-less working electrode also showed storage stability for up to 15 weeks, making it a suitable glucose biosensor.
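Detection limits for calibration-based sensors such as this one are commonly estimated as 3·σ(blank) divided by the slope of the linear calibration curve. A small sketch of that arithmetic, with assumed sensitivity and blank-noise values rather than the measured ones, might look like:

```python
import numpy as np

def detection_limit(conc, current, blank_sd):
    """Limit of detection from a linear calibration curve,
    using the common LOD = 3 * sigma(blank) / slope criterion.

    conc: analyte concentrations (e.g. mM), current: sensor response,
    blank_sd: standard deviation of the blank signal.
    """
    slope = np.polyfit(conc, current, 1)[0]  # sensitivity of the sensor
    return 3.0 * blank_sd / slope
```

With a hypothetical sensitivity of 2.5 µA/mM and a blank noise of 0.01 µA, this criterion would yield an LOD of 0.012 mM; the actual figures depend on the measured DPV calibration data.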

Keywords: electrochemical, enzyme-less, glucose, gold nanoparticles, graphene oxide, nanocomposite

Procedia PDF Downloads 134