Search results for: cluster model approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26820

18210 Engineering of Reagentless Fluorescence Biosensors Based on Single-Chain Antibody Fragments

Authors: Christian Fercher, Jiaul Islam, Simon R. Corrie

Abstract:

Fluorescence-based immunodiagnostics are an emerging field in biosensor development and exhibit several advantages over traditional detection methods. While various affinity biosensors have been developed to generate a fluorescence signal upon sensing varying concentrations of analytes, reagentless, reversible, and continuous monitoring of complex biological samples remains challenging. Here, we aimed to genetically engineer biosensors based on single-chain antibody fragments (scFv) that are site-specifically labeled with environmentally sensitive fluorescent unnatural amino acids (UAA). A rational design approach resulted in quantifiable analyte-dependent changes in peak fluorescence emission wavelength and enabled antigen detection in vitro. Incorporation of a polarity indicator within the topological neighborhood of the antigen-binding interface generated a titratable wavelength blueshift with nanomolar detection limits. In order to ensure continuous analyte monitoring, scFv candidates with fast binding and dissociation kinetics were selected from a genetic library employing a high-throughput phage display and affinity screening approach. Initial rankings were further refined towards rapid dissociation kinetics using bio-layer interferometry (BLI) and surface plasmon resonance (SPR). The most promising candidates were expressed, purified to homogeneity, and tested for their potential to detect biomarkers in a continuous microfluidic-based assay. Variations of dissociation kinetics within an order of magnitude were achieved without compromising the specificity of the antibody fragments. This approach is generally applicable to numerous antibody/antigen combinations and currently awaits integration in a wide range of assay platforms for one-step protein quantification.

Keywords: antibody engineering, biosensor, phage display, unnatural amino acids

Procedia PDF Downloads 131
18209 Coupling Large Language Models with Disaster Knowledge Graphs for Intelligent Construction

Authors: Zhengrong Wu, Haibo Yang

Abstract:

In the context of escalating global climate change and environmental degradation, the complexity and frequency of natural disasters are continually increasing. Confronted with an abundance of information regarding natural disasters, traditional knowledge graph construction methods, which rely heavily on grammatical rules and prior knowledge, perform suboptimally when processing complex, multi-source disaster information. Drawing upon past natural disaster reports, disaster-related literature in both English and Chinese, and data from various disaster monitoring stations, this study constructs question-answer templates for large language models. Using the P-Tuning method, the ChatGLM2-6B model is fine-tuned, leading to the development of a disaster knowledge graph based on large language models. The resulting graph serves as a knowledge base supporting disaster emergency response.
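For illustration only, the triple-extraction step of such a pipeline might look like the following sketch. The query_llm stub, the prompt template, and the JSON answer format are hypothetical stand-ins for a call to a fine-tuned model such as ChatGLM2-6B; the graph itself is assembled with networkx.

```python
# Illustrative sketch only: query_llm is a hypothetical stand-in for a call to a
# fine-tuned model (e.g., ChatGLM2-6B); the real pipeline, templates, and P-Tuning
# step are not reproduced here.
import json
import networkx as nx

TEMPLATE = (
    "Extract disaster knowledge triples from the report below. "
    "Answer with a JSON list of [head, relation, tail] triples.\n\nReport: {report}"
)

def query_llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for the fine-tuned model call")

def build_graph(reports):
    graph = nx.DiGraph()
    for report in reports:
        answer = query_llm(TEMPLATE.format(report=report))
        for head, relation, tail in json.loads(answer):
            graph.add_edge(head, tail, relation=relation)  # one edge per triple
    return graph
```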

Keywords: large language model, knowledge graph, disaster, deep learning

Procedia PDF Downloads 37
18208 Rethink Urban Resilience: An Introductory Study Towards Resilient Spatial Structure of Refugee Neighborhoods

Authors: Salwa Mohammad Alawneh

Abstract:

The ongoing humanitarian crises spur rapid and unpredicted refugee influxes, resulting in demographic changes in cities, where different urban systems are vulnerable in refugee neighborhoods. Given the consequent social, economic, and spatial challenges, cities must respond with a more durable and sustainable approach based on urban resilience. The paper systematically approaches urban resilience to contribute to refugee spaces by reflecting on the overall urban systems of their neighborhoods. The research reviews the urban resilience literature to develop an evaluation framework. The developed framework applies urban resilience more holistically in refugee neighborhoods and extends to the social, economic, and spatial urban systems. The main highlight of this paper, however, is the resilient spatial structure of refugee neighborhoods in the face of the internal and complex stress of refugee waves and the demographic changes they bring. Identifying a set of resilient spatial measurements and focusing on urban form at the neighborhood scale reduce vulnerability and enhance adaptive capacity. As a model example, the paper applies these measurements, facilitated by geospatial technologies, to one of the refugee neighborhoods in Amman, Jordan, namely Al-Jubilee. The application in Al-Jubilee helps to demonstrate a road map towards a developmental pattern in design and planning for different decision-makers in inter-governmental and humanitarian organizations. In this regard, urban resilience improves the humanitarian assistance of refugee settings beyond providing essential needs. In conclusion, urban resilience responds to the different challenges of refugee neighborhoods by supporting urban stability, improving livability, and maintaining both urban functions and security.

Keywords: urban resilience of refugees, resilient urban form, refugee neighborhoods, humanitarian assistance, refugees in Jordan

Procedia PDF Downloads 144
18207 Effect of Non-Tariff Measures on Indonesian Shrimp Exports in the International Market: The Case of Sanitary and Phytosanitary Measures and Technical Barriers to Trade

Authors: Muhammad Khaliqi, Amzul Rifin, Andriyono Kilat Adhi

Abstract:

Non-tariff measures can reduce Indonesian shrimp exports in the international market. This research aimed to analyze the factors affecting Indonesia's shrimp exports and the impact of SPS and TBT measures on Indonesian shrimp. The factors affecting Indonesian shrimp exports were estimated using a gravity model. The results showed that the exporter's GDP and the exchange rate have a negative influence on Indonesia's shrimp exports, while the importers' GDP and trade costs have a positive influence; the SPS and TBT measures do not significantly affect Indonesia's shrimp exports in the international market.
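A minimal log-linear gravity specification, in the spirit of the study, could be estimated as sketched below; the file name, column names, and the use of plain OLS with SPS/TBT dummy variables are assumptions, not the authors' exact setup.

```python
# Hedged sketch of a log-linear gravity estimation; column names and estimator
# choice are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("shrimp_trade.csv")  # hypothetical: one row per importer-year

X = pd.DataFrame({
    "ln_gdp_exporter": np.log(df["gdp_exporter"]),
    "ln_gdp_importer": np.log(df["gdp_importer"]),
    "ln_exchange_rate": np.log(df["exchange_rate"]),
    "ln_trade_cost": np.log(df["trade_cost"]),
    "sps": df["sps"],  # dummy: 1 if an SPS measure applies to the flow
    "tbt": df["tbt"],  # dummy: 1 if a TBT measure applies to the flow
})
y = np.log(df["shrimp_exports"])

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())  # coefficient signs give the direction of influence
```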

Keywords: gravity model, international trade, non-tariff measure, sanitary and phytosanitary, shrimp, technical barriers to trade

Procedia PDF Downloads 181
18206 Basic Study on a Thermal Model for Evaluating the Environment of Infant Facilities

Authors: Xin Yuan, Yuji Ryu

Abstract:

The indoor environment has a significant impact on occupants, and a suitable indoor thermal environment can improve children's physical health and study efficiency during school hours. In this study, we explored the thermal environment in infant facility classrooms for infants and children aged 1-5 and evaluated their thermal comfort. An infant facility in Fukuoka, Japan was selected for a case study to capture the thermal comfort characteristics of infants and children in summer and winter, from August 2019 to February 2020. Previous studies have pointed out that using PMV indices to evaluate thermal comfort for children can create errors that may lead to misleading results. Thus, to grasp the actual thermal environment and thermal comfort characteristics of infants and children, we retrieved the operative temperature of each child through a thermal model based on the sensible heat transfer from the skin to the environment, together with the measured classroom indoor temperature, relative humidity, and the pocket temperature of the children's shorts. The statistical and comparative analysis of the results shows that (1) the operative temperature showed large individual differences among children, with a maximum difference of 6.25 °C; (2) the children might feel slightly cold in the classrooms in summer, with frequencies of operative temperature within the 26-28 ºC interval of only 5.33% and 16.6%, respectively; (3) the thermal environment around children is more complicated in winter, when the operative temperature could exceed or fall short of the thermal comfort temperature zone (the 20-23 ºC interval); and (4) the environmental conditions surrounding the children may account for the reduction of their thermal comfort. The findings contribute to improving the understanding of the thermal comfort of infants and children and provide valuable information for designers and governments to develop effective strategies for the indoor thermal environment from the perspective of children.
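For context, a standard definition of the operative temperature, which the study's thermal model refines using measured data and skin-to-environment sensible heat transfer, weights the air and mean radiant temperatures by the convective and radiative heat transfer coefficients:

```latex
t_o = \frac{h_r \, t_{mr} + h_c \, t_a}{h_r + h_c}
```

where t_a is the air temperature, t_mr the mean radiant temperature, and h_c, h_r the convective and radiative heat transfer coefficients.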

Keywords: infant and children, thermal environment, thermal model, operative temperature

Procedia PDF Downloads 104
18205 Impact of Simulated Brain Interstitial Fluid Flow on the Chemokine CXC-Chemokine-Ligand-12 Release From an Alginate-Based Hydrogel

Authors: Wiam El Kheir, Anais Dumais, Maude Beaudoin, Bernard Marcos, Nick Virgilio, Benoit Paquette, Nathalie Faucheux, Marc-Antoine Lauzon

Abstract:

The highly infiltrative pattern of glioblastoma multiforme (GBM) cells is the main cause of the failure of current standard treatments. The tumor's high heterogeneity, the interstitial fluid flow (IFF), and chemokines guide GBM cell migration in the brain parenchyma, resulting in tumor recurrence. Drug delivery systems have emerged as an alternative approach to developing effective treatments for the disease. Some recent studies have proposed to harness the effect of CXC-chemokine-ligand-12 (CXCL12) to direct and control cancer cell migration through a delivery system. However, the effect of the dynamic brain environment on such delivery systems remains poorly understood. Nanoparticles (NPs) and hydrogels are known as good carriers for the encapsulation of different agents and the control of their release. We studied the release of CXCL12 (free or loaded into NPs) from an alginate-based hydrogel under static and indirect perfusion (IP) conditions. Under static conditions, the main phenomenon driving CXCL12 release from the hydrogel was diffusion, with strong interactions between the positively charged CXCL12 and the negatively charged alginate. CXCL12 release profiles were independent of the initial mass loadings. Afterwards, we demonstrated that the release could be tuned by loading CXCL12 into alginate/chitosan nanoparticles (Alg/Chit-NPs) embedded in the alginate hydrogel. The initial burst release was substantially attenuated, and overall cumulative release percentages of 21%, 16%, and 7% were observed for initial mass loadings of 0.07, 0.13, and 0.26 µg, respectively, suggesting stronger electrostatic interactions. Results were mathematically modeled within a previously developed framework based on Fick's second law of diffusion to estimate the effective diffusion coefficient (Deff) and the mass transfer coefficient. Embedding the CXCL12 into NPs decreased the Deff by an order of magnitude, which was coherent with the experimental data. Thereafter, we developed an in-vitro 3D model that takes into consideration the convective contribution of the brain IFF to study CXCL12 release in an in-vitro microenvironment that mimics the human brain as faithfully as possible. Through its unique design, the model also allowed us to understand the effect of IP on CXCL12 release with respect to time and space. Four flow rates (0.5, 3, 6.5, and 10 µL/min), which may increase CXCL12 release in-vivo depending on the tumor location, were assessed. Under IP, cumulative release percentages varying between 4.5-7.3%, 23-58.5%, 77.8-92.5%, and 89.2-95.9% were observed at the four flow rates for the three initial mass loadings of 0.08, 0.16, and 0.33 µg. As the flow rate increased, IP culture conditions resulted in a higher release of CXCL12 than static conditions, as convection became the main driving mass transport phenomenon. Further, depending on the flow rate, IP had a direct impact on CXCL12 distribution within the simulated brain tissue, which illustrates the importance of developing such 3D in-vitro models to assess the efficiency of a delivery system targeting the brain. In future work, using this very model, we aim to understand the impact of the different phenomena occurring on GBM cell behavior in response to the resulting chemokine gradient under various flows, while allowing the cells to express their invasive characteristics in an in-vitro microenvironment that mimics the in-vivo brain parenchyma.
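For reference, the diffusion framework mentioned above rests on Fick's second law; for a thin film of thickness L releasing from both faces, the early-time fractional release takes the familiar square-root form below. The authors' full model also includes a mass transfer coefficient, which this simplified form omits.

```latex
\frac{\partial C}{\partial t} = D_{\mathrm{eff}} \, \frac{\partial^{2} C}{\partial x^{2}},
\qquad
\frac{M_t}{M_\infty} \approx 4 \sqrt{\frac{D_{\mathrm{eff}} \, t}{\pi L^{2}}}
\quad \left( M_t / M_\infty \lesssim 0.6 \right)
```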

Keywords: 3D culture system, chemokines gradient, glioblastoma multiforme, kinetic release, mathematical modeling

Procedia PDF Downloads 69
18204 Goals, Rights and Obligations, and Moral Order: An Evaluation Approach to Chinese-Kenyan Relating Experience

Authors: Zhaohui Tian

Abstract:

China's growing and deepening engagement in Africa has attracted numerous controversial debates on Chinese-African social-racial relations, both in the media and in academia. Most research tends to discuss this issue and the tensions involved at the state level, but limited attention has been given to the individual relating processes of these two racial groups from an intercultural politeness evaluation angle. Thus, taking Kenya as the country focus and drawing on recent perspectives in pragmatics and politeness research, this study explores the Chinese-Kenyan workplace relating experience in Chinese-owned companies, with the aim of offering new insights on Chinese-African social-racial tensions. The original data were collected through 25 interviews with 29 Chinese and Kenyan participants working in different Chinese companies and industries, some of which were later converted into 182 short-story data items in order to better capture the process and content dimensions of their experiences using Spencer & Kádár's politeness evaluation model. Both the interview and story data were analyzed in MAXQDA to understand the personal relating process and the criteria the participants were drawing on when making evaluative judgements of their relations. The results particularly draw attention to tensions around goals, rights and obligations, and social-moral dimensions that have been underrepresented in the intercultural and pragmatics literature. The study offers alternative empirical insights into Chinese-Kenyan relations from an intercultural politeness management perspective, and into the possible mismatches of evaluative criteria that potentially cause tension in this context.

Keywords: Chinese-Kenyan, evaluation, relating, workplace

Procedia PDF Downloads 87
18203 Effects of Different Meteorological Variables on Reference Evapotranspiration Modeling: Application of Principal Component Analysis

Authors: Akinola Ikudayisi, Josiah Adeyemo

Abstract:

The correct estimation of reference evapotranspiration (ETₒ) is required for effective irrigation water resources planning and management. However, several variables must be considered when estimating and modeling ETₒ. This study therefore performs a multivariate analysis of the correlated variables involved in the estimation and modeling of ETₒ at the Vaalharts irrigation scheme (VIS) in South Africa using the Principal Component Analysis (PCA) technique. Weather and meteorological data between 1994 and 2014 were obtained from both the South African Weather Service (SAWS) and the Agricultural Research Council (ARC) in South Africa for this study. Average monthly data of minimum and maximum temperature (°C), rainfall (mm), relative humidity (%), and wind speed (m/s) were the inputs to the PCA-based model, while ETₒ was the output. The PCA technique was adopted to extract the most important information from the dataset and to analyze the relationship between the five variables and ETₒ, in order to determine the most significant variables affecting ETₒ estimation at VIS. From the model performance, two principal components explaining 82.7% of the variance were retained after the eigenvector extraction. The results of the two principal components were compared, and the model output shows that minimum temperature, maximum temperature, and wind speed are the most important variables in ETₒ estimation and modeling at VIS. In other words, ETₒ increases with temperature and wind speed. Other variables such as rainfall and relative humidity are less important and cannot provide enough information about ETₒ estimation at VIS. The outcome of this study has helped to reduce the input variable dimensionality from five to the three most significant variables in ETₒ modeling at VIS, South Africa.
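The PCA step described above can be reproduced in outline with scikit-learn; the file and column names below are assumptions for illustration, not the actual SAWS/ARC dataset schema.

```python
# Minimal sketch of the PCA step; variable names are illustrative assumptions.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("vis_weather_monthly.csv")  # hypothetical monthly averages, 1994-2014
features = ["t_min", "t_max", "rainfall", "rel_humidity", "wind_speed"]

X = StandardScaler().fit_transform(df[features])  # PCA is sensitive to scale
pca = PCA().fit(X)

print(pca.explained_variance_ratio_.cumsum())  # retain the PCs covering ~82.7%
loadings = pd.DataFrame(pca.components_[:2].T, index=features, columns=["PC1", "PC2"])
print(loadings)  # variables with large |loadings| dominate ETo estimation
```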

Keywords: irrigation, principal component analysis, reference evapotranspiration, Vaalharts

Procedia PDF Downloads 237
18202 Corpus Stylistics and Multidimensional Analysis for English for Specific Purposes Teaching and Assessment

Authors: Svetlana Strinyuk, Viacheslav Lanin

Abstract:

Academic English has become the lingua franca of the international scientific community, which stimulates universities to introduce English for Academic Purposes (EAP) courses into the curriculum. Teaching L2 EAP students can be supported by corpus technologies and digital stylistics. Special software was developed to address the manifold task of teaching, assessing, and researching the academic writing of L2 students on the basis of digital stylistics and multidimensional analysis. A set of annotations (style markers) was built, covering the grammatical, lexical, and syntactic features most significant for academic writing. A contrastive comparison of two corpora, a "model corpus" (subject-domain-limited papers published by competent writers in leading academic journals) and a "students' corpus" (subject-domain-limited papers written by final-year students), yields data about the features of academic writing underused or overused by L2 EAP students. Both corpora are tagged with software created in GATE Developer. Style markers within the framework of the research may be replaced depending on the relevance and validity of the results obtained from the research corpora. Thus, by selecting relevant (high-frequency) style markers and excluding less relevant, i.e., less frequent, annotations, high validity of the model is achieved. The software compares the data from the model corpus to the students' corpus and produces reports that can be used in teaching and assessment: the less students' writing deviates from the model corpus, the higher their academic writing skill acquisition. The research showed that several style markers (hedging devices) were underused by L2 EAP students, whereas lexical linking devices were used excessively. The software, implemented in the teaching of EAP courses, serves as a successful visual aid, makes assessment more valid, is indicative of the degree of writing skill acquisition, and provides data for further research.
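The contrastive frequency comparison at the core of the method can be illustrated with a toy computation; the marker names, counts, and corpus sizes below are invented for the example, not the study's actual annotation set.

```python
# Toy illustration: normalized style-marker frequencies in the model corpus vs.
# the students' corpus. All numbers are invented for the example.
def per_mille(counts, total_words):
    return {m: 1000 * c / total_words for m, c in counts.items()}

model_counts = {"hedging": 420, "lexical_linking": 310}    # tagged model corpus
student_counts = {"hedging": 120, "lexical_linking": 540}  # students' corpus

model_f = per_mille(model_counts, 150_000)
student_f = per_mille(student_counts, 80_000)

for marker in model_f:
    ratio = student_f[marker] / model_f[marker]
    label = "underused" if ratio < 1 else "overused"
    print(f"{marker}: students/model = {ratio:.2f} ({label})")
```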

Keywords: corpus technologies in EAP teaching, multidimensional analysis, GATE Developer, corpus stylistics

Procedia PDF Downloads 178
18201 Information Exchange Process Analysis between Authoring Design Tools and Lighting Simulation Tools

Authors: Rudan Xue, Annika Moscati, Rehel Zeleke Kebede, Peter Johansson

Abstract:

Successful building simulation and analysis inevitably require information exchange between multiple building information modeling (BIM) software tools. BIM information exchange based on IFC is widely used. However, Industry Foundation Classes (IFC) files are not always reliable, and information can get lost when using different software for modeling and simulations. In this research, interviews with lighting simulation experts and a case study provided by a company producing lighting devices were the research methods used to identify the necessary steps and data for successful information exchange between lighting simulation tools and authoring design tools. Model creation, information exchange, and model simulation have been identified as key aspects for the success of information exchange. The paper concludes with recommendations for improved information exchange and more reliable simulations that take all the needed parameters into consideration.

Keywords: BIM, data exchange, interoperability issues, lighting simulations

Procedia PDF Downloads 220
18200 The Robotic Factor in Left Atrial Myxoma

Authors: Abraham J. Rizkalla, Tristan D. Yan

Abstract:

Atrial myxoma is the most common primary cardiac tumor and can result in cardiac failure secondary to obstruction, or in systemic embolism due to fragmentation. Traditionally, excision of an atrial myxoma has been performed through a median sternotomy; however, the robotic approach offers several advantages, including less pain, improved cosmesis, and faster recovery. Here, we highlight the less well recognized advantages and the technical aspects of robotic myxoma resection. This video presentation demonstrates the resection of a papillary-subtype left atrial myxoma using the da Vinci Xi surgical robot. The 10x magnification and 3D vision allow the interface between the tumor and the interatrial septum to be accurately dissected without the need to patch the septum. Several techniques to avoid tumor fragmentation and embolization are demonstrated throughout the procedure. The tumor was completely excised with clear margins. There was no atrial septal defect or mitral valve injury on postoperative transesophageal echocardiography, and the patient was discharged home on the fourth postoperative day. This video presentation highlights the advantages of the robotic approach to atrial myxoma resection compared with sternotomy, as well as emphasizing several technical considerations for avoiding potential complications.

Keywords: cardiac surgery, left atrial myxoma, cardiac tumour, robotic resection

Procedia PDF Downloads 56
18199 A Physiological Approach for Early Detection of Hemorrhage

Authors: Rabie Fadil, Parshuram Aarotale, Shubha Majumder, Bijay Guargain

Abstract:

Hemorrhage is the loss of blood from the circulatory system and a leading cause of battlefield- and postpartum-related deaths. Early detection of hemorrhage remains the most effective strategy to reduce the mortality rate caused by traumatic injuries. In this study, we investigated the physiological changes via non-invasive cardiac signals at rest and under different hemorrhage conditions simulated through graded lower-body negative pressure (LBNP). Simultaneous electrocardiogram (ECG), photoplethysmogram (PPG), blood pressure (BP), impedance cardiogram (ICG), and phonocardiogram (PCG) signals were acquired from 10 participants (age: 28 ± 6 years, weight: 73 ± 11 kg, height: 172 ± 8 cm). The LBNP protocol consisted of applying -20, -30, -40, -50, and -60 mmHg pressure to the lower half of the body. Beat-to-beat heart rate (HR), systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP) were extracted from the ECG and blood pressure. Systolic amplitude (SA), systolic time (ST), diastolic time (DT), and left ventricular ejection time (LVET) were extracted from the PPG during each stage. Preliminary results showed that the application of -40 mmHg, i.e., a moderate simulated hemorrhage, resulted in significant changes in HR (85 ± 4 bpm vs 68 ± 5 bpm, p < 0.01), ST (191 ± 10 ms vs 253 ± 31 ms, p < 0.05), LVET (350 ± 14 ms vs 479 ± 47 ms, p < 0.05), and DT (551 ± 22 ms vs 683 ± 59 ms, p < 0.05) compared to rest, while no change was observed in SA (p > 0.05) as a consequence of LBNP application. These findings demonstrate the potential of cardiac signals in detecting moderate hemorrhage. In the future, we will analyze all the LBNP stages and investigate the feasibility of other physiological signals to develop a predictive machine learning model for early detection of hemorrhage.
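As an illustration of the beat-to-beat HR extraction step, R-peaks can be located with SciPy and converted to instantaneous heart rate; the sampling rate and peak-detection parameters below are assumptions, not the study's settings.

```python
# Illustrative beat-to-beat HR extraction from the ECG; parameters are assumptions.
import numpy as np
from scipy.signal import find_peaks

def beat_to_beat_hr(ecg: np.ndarray, fs: float = 1000.0) -> np.ndarray:
    """Return instantaneous heart rate (bpm) from successive R-R intervals."""
    # R peaks: prominent deflections at least 0.4 s apart (caps HR near 150 bpm)
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                          height=np.percentile(ecg, 95))
    rr = np.diff(peaks) / fs  # R-R intervals in seconds
    return 60.0 / rr          # beats per minute, one value per interval
```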

Keywords: blood pressure, hemorrhage, lower-body negative pressure, LBNP, machine learning

Procedia PDF Downloads 153
18198 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of traditional technologies employed in energy and chemical infrastructure poses a big challenge for our society. In decisions related to the safety of industrial infrastructure, accidental risk values are becoming relevant points for discussion. However, the challenge is the reliability of the models employed to obtain the risk data: such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome these problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as Neuro-Fuzzy algorithms. This paper therefore aims to introduce a hybrid algorithm for risk assessment trained using near-miss accident data. Beyond sustainability, the adaptation of these technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. The social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition, considering the industrial sector as critical infrastructure due to its large impact on the economy in case of failure, industrial safety has become a critical issue for today's society. Pipeline operators and regulators have accordingly been performing risk assessments in attempts to accurately evaluate the probabilities of failure of the infrastructure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. This paper therefore introduces a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with complexity and uncertainty. The advantage of deep learning on near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of the Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines is to improve the validity of risk values by learning from near-miss accidents and imitating human expertise in scoring risks and setting tolerance levels. In summary, the method involves a regression analysis called the group method of data handling (GMDH), which consists in determining the optimal configuration of the risk assessment model and its parameters employing polynomial theory.
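A compact sketch of one GMDH layer, the regression machinery named above, is given below; it fits an Ivakhnenko quadratic polynomial to every pair of inputs and keeps the candidates with the lowest validation error. The paper's neuro-fuzzy coupling and near-miss features are not modeled here.

```python
# Minimal GMDH layer: pairwise quadratic polynomial fits selected by an external
# (validation) criterion. Illustrative only.
import itertools
import numpy as np

def quad_design(xi, xj):
    # a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=4):
    candidates = []
    for i, j in itertools.combinations(range(X_tr.shape[1]), 2):
        A = quad_design(X_tr[:, i], X_tr[:, j])
        coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)  # least-squares fit
        pred = quad_design(X_va[:, i], X_va[:, j]) @ coef
        mse = float(np.mean((pred - y_va) ** 2))         # external criterion
        candidates.append((mse, (i, j), coef))
    candidates.sort(key=lambda c: c[0])
    return candidates[:keep]  # survivors become inputs to the next layer
```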

Keywords: deep learning, risk assessment, neuro-fuzzy, pipelines

Procedia PDF Downloads 282
18197 An Approach to Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data

Authors: Kai Warsoenke, Maik Mackiewicz

Abstract:

To achieve a high quality of assembled car body structures, tolerancing is used to ensure the geometric accuracy of the individual car body parts. There are two main techniques to determine the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example in the manufacture of forming tools as production equipment, or at the higher level of car body assembly. As part of metrological process monitoring, manufactured individual parts and assemblies are recorded and the measurement results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes to the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not currently used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data, yet they offer the potential to extend tolerancing methods through data analysis and machine learning models. The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. For this reason, the measurement data are first analyzed descriptively in order to characterize their behavior and determine possible correlations. Subsequently, a database suitable for developing machine learning models is created. The objective is an intelligent way to determine the position and number of measurement points as well as the local tolerance range. To this end, a number of different model types are compared and evaluated; the best-performing models are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them, as sketched below. This investigation is still in progress. However, some areas of the car body parts behave more sensitively than the overall part, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently.
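Purely as an illustration, such a model could be a regressor trained on historical measurement-point features to predict a local tolerance range; the file and feature names below are hypothetical, not the paper's feature set.

```python
# Hypothetical sketch: learn a local tolerance range from historical measurements.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("measurement_history.csv")  # hypothetical metrology export

X = df[["x", "y", "z", "curvature", "sheet_thickness"]]  # per-point features
y = df["observed_deviation_range"]                       # local spread across parts

model = RandomForestRegressor(n_estimators=300, random_state=0)
print(cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error"))
model.fit(X, y)  # then predict tolerance ranges for points on new part geometries
```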

Keywords: automotive production, machine learning, process optimization, smart tolerancing

Procedia PDF Downloads 98
18196 Causal-Explanatory Model of Academic Performance in Socially Anxious Adolescents

Authors: Beatriz Delgado

Abstract:

Although social anxiety is one of the most prevalent disorders in adolescents and causes considerable difficulties and social distress in those with the disorder, to date very few studies have explored the impact of social anxiety on academic adjustment in student populations. The aim of this study was to analyze the effect of social anxiety on school functioning in Secondary Education. Specifically, we examined the relationship between social anxiety and self-concept, academic goals, causal attributions, intellectual aptitudes, learning strategies, personality traits, and academic performance, with the purpose of creating a causal-explanatory model of academic performance. The sample consisted of 2,022 students in the seventh to tenth grades of Compulsory Secondary Education in Spain (M = 13.18; SD = 1.35; 51.1% boys). We found that: (a) social anxiety has a direct positive effect on internal attributional style and a direct negative effect on self-concept, as well as an indirect negative effect on internal causal attributions; (b) prior performance (first academic trimester) exerts a direct positive effect on intelligence, achievement goals, academic self-concept, and final academic performance (third academic trimester), a direct negative effect on internal causal attributions, and an indirect positive effect on causal attributions (internal and external), learning goals, achievement goals, and study strategies; (c) intelligence has a direct positive effect on learning goals and academic performance (third academic trimester); (d) academic self-concept has a direct positive effect on internal and external attributional style, and an indirect effect on learning goals, achievement goals, and learning strategies; (e) internal attributional style has a direct positive effect on learning strategies and learning goals, and a positive but indirect effect on achievement goals and learning strategies; (f) external attributional style has a direct negative effect on learning strategies and learning goals and a direct positive effect on internal causal attributions; (g) learning goals have a direct positive effect on learning strategies and achievement goals. The structural equation model fit the data well (CFI = .91; RMSEA = .04), explaining 93.8% of the variance in academic performance. Finally, we emphasize that the new causal-explanatory model proposed in the present study represents a significant contribution in that it includes social anxiety as an explanatory variable for cognitive-motivational constructs.

Keywords: academic performance, adolescence, cognitive-motivational variables, social anxiety

Procedia PDF Downloads 315
18195 Numerical Modelling of Dry Stone Masonry Structures Based on Finite-Discrete Element Method

Authors: Ž. Nikolić, H. Smoljanović, N. Živaljić

Abstract:

This paper presents a numerical model based on the finite-discrete element method for analysis of the structural response of dry stone masonry structures under static and dynamic loads. More precisely, each discrete stone block is discretized by finite elements. Material non-linearity, including fracture and fragmentation of the discrete elements, as well as cyclic behavior during dynamic load, is considered through contact elements implemented within the finite element mesh. The model was applied to several examples of such structures. The performed analyses show high accuracy of the numerical results in comparison with experimental ones and demonstrate the potential of the finite-discrete element method for modelling the response of dry stone masonry structures.

Keywords: dry stone masonry structures, dynamic load, finite-discrete element method, static load

Procedia PDF Downloads 395
18194 Direct Displacement-Based Design Procedure for Performance-Based Seismic Design of Structures

Authors: Haleh Hamidpour

Abstract:

Since the seismic damageability of structures is controlled by the inelastic deformation capacities of structural elements, seismic design of structures based on force analogy methods is not appropriate. In recent years, the basic approach of design codes has shifted from force-based to displacement-based. In this regard, a Direct Displacement-Based Design (DDBD) method and a Performance-Based Plastic Design (PBPD) method have been proposed. In this study, the effect of these two methods on the seismic performance of structures is evaluated through a sample 12-story reinforced concrete moment frame. The building is designed separately based on the DDBD and PBPD methods, and once again by the traditional force analogy method according to the FEMA P695 regulation. The different design methods result in different structural elements. The seismic performance of the three resulting structures is evaluated through nonlinear static and nonlinear dynamic analysis. The results show that the displacement-based design methods accommodate the intended performance objectives better than the traditional force analogy method.

Keywords: direct performance-based design, ductility demands, inelastic seismic performance, yield mechanism

Procedia PDF Downloads 316
18193 Association of Phosphorus and Magnesium with Fat Indices in Children with Metabolic Syndrome

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Metabolic syndrome (MetS) is a disease associated with obesity. It is a complicated clinical problem possibly affecting body composition as well as macrominerals. These parameters gain further attention, particularly in the pediatric population. The aim of this study is to investigate the amounts of discrete body composition fractions in groups that differ in the severity of obesity, and to examine possible associations with calcium (Ca), phosphorus (P), and magnesium (Mg). The study population was divided into four groups: 28, 29, 34, and 34 children were involved in Group 1 (healthy), Group 2 (obese), Group 3 (morbidly obese), and Group 4 (MetS), respectively. The Institutional Ethics Committee approved the study protocol, and informed consent forms were obtained from the participants. The classification of the obese groups was performed based upon the recommendations of the World Health Organization, and the metabolic syndrome components were defined. Serum Ca, P, and Mg concentrations were measured. Within the scope of body composition, fat mass, fat-free mass, protein mass, and mineral mass were determined by a body composition monitor using bioelectrical impedance analysis technology. Weight, height, waist circumference, hip circumference, head circumference, and neck circumference values were recorded. Body mass index, diagnostic obesity notation model assessment index, fat mass index, and fat-free mass index values were calculated. The data were statistically evaluated and interpreted. There was no statistically significant difference among the groups in terms of Ca and P concentrations. Magnesium concentrations differed between Group 1 and Group 4. Strong negative correlations were detected between P as well as Mg and the fat mass index as well as the diagnostic obesity notation model assessment index in Group 4, the group comprising morbidly obese children with MetS. This study emphasized unique associations of the minerals P and Mg with the diagnostic obesity notation model assessment index and fat mass index during the evaluation of morbidly obese children with MetS. It was also concluded that the diagnostic obesity notation model assessment index and fat mass index are more proper indices than body mass index and fat-free mass index for the purpose of defining body composition in children.

Keywords: children, fat mass, fat-free mass, macrominerals, obesity

Procedia PDF Downloads 136
18192 Unsupervised Part-of-Speech Tagging for Amharic Using K-Means Clustering

Authors: Zelalem Fantahun

Abstract:

Part-of-speech tagging is the process of assigning a part-of-speech or other lexical class marker to each word in naturally occurring text. It is one of the most fundamental tasks in natural language processing. In natural language processing, the need for large amounts of manually annotated data is a knowledge acquisition bottleneck. Since Amharic is an under-resourced language, the lack of a tagged corpus is the bottleneck for natural language processing, especially for POS tagging. A promising direction to tackle this problem is to provide a system that does not require manually tagged data. In unsupervised learning, the learner is not provided with classifications; unsupervised algorithms seek out similarities between pieces of data in order to determine whether they can be characterized as forming a group. This paper describes the development of an unsupervised part-of-speech tagger for Amharic using K-means clustering, exploiting the large amounts of raw text produced in day-to-day activities. The development of the tagger follows four phases. First, the unlabeled data (raw text) is divided into 10 folds and tokenized: the raw text is chunked at the sentence level and then into words. The second phase is feature extraction, which includes word frequency and the syntactic and morphological features of a word. The third phase is clustering; among the different clustering algorithms, K-means was selected and implemented in this study to bring groups of similar words together. The fourth phase is mapping, in which each cluster is examined carefully and the most common tag is assigned to the group. This study identifies two features capable of distinguishing one part of speech from another, morphological features and positional information, and shows that it is possible to use unsupervised learning for Amharic POS tagging. To increase the performance of the unsupervised part-of-speech tagger, there is a need to incorporate other features not included in this study, such as semantic information. Finally, based on the experimental results, the system achieves a maximum accuracy of 81%.
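The cluster-then-map idea can be illustrated in a few lines with scikit-learn; the toy features below (frequency, a suffix hash, and mean relative position) are simplified stand-ins for the study's morphological and positional features, and the English sentence stands in for Amharic text.

```python
# Minimal cluster-then-map illustration; features and data are toy stand-ins.
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

def featurize(tokens):
    freq = Counter(tokens)
    positions = {w: [] for w in freq}
    for i, w in enumerate(tokens):
        positions[w].append(i / len(tokens))  # relative position in the text
    vocab = sorted(freq)
    feats = [[freq[w], hash(w[-2:]) % 97, np.mean(positions[w])] for w in vocab]
    return vocab, np.array(feats, dtype=float)

tokens = "the dog saw the cat and the cat ran".split()
vocab, X = featurize(tokens)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
clusters = {k: [w for w, l in zip(vocab, labels) if l == k] for k in set(labels)}
print(clusters)  # mapping step: assign each cluster its most common gold tag
```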

Keywords: POS tagging, Amharic, unsupervised learning, k-means

Procedia PDF Downloads 427
18191 Segmentation of Arabic Handwritten Numeral Strings Based on Watershed Approach

Authors: Nidal F. Shilbayeh, Remah W. Al-Khatib, Sameer A. Nooh

Abstract:

Offline Arabic handwriting recognition systems are considered one of the most challenging research topics. Arabic handwritten numeral strings are used to automate systems that deal with numbers, such as postal codes, bank account numbers, and numbers on car plates. Segmentation of connected numerals is the main bottleneck in a handwritten numeral recognition system, and improving it can in turn increase the speed and efficiency of the recognition system. In this paper, we propose algorithms for the automatic segmentation and feature extraction of Arabic handwritten numeral strings based on the watershed approach. The algorithms have been designed and implemented to achieve the main goal of segmenting and extracting strings of numeral digits written by hand, especially in the courtesy amount field of bank checks. The segmentation algorithm partitions the string into multiple regions that can be associated with the properties of one or more criteria. The numeral extraction algorithm then separates the numeral string into individual digits. Both the segmentation and feature extraction algorithms have been tested successfully and efficiently on all types of numerals.
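A typical watershed pipeline for touching digits can be sketched with OpenCV as below; the thresholds are assumptions, and the paper's exact marker construction may differ.

```python
# Standard OpenCV watershed recipe; thresholds are illustrative assumptions.
import cv2
import numpy as np

img = cv2.imread("numeral_string.png")  # hypothetical courtesy-amount crop
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)  # digit cores
sure_bg = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=3)
unknown = cv2.subtract(sure_bg, np.uint8(sure_fg))          # uncertain band

_, markers = cv2.connectedComponents(np.uint8(sure_fg))
markers = markers + 1            # background becomes 1, digit cores 2, 3, ...
markers[unknown == 255] = 0      # 0 marks the region to be flooded
markers = cv2.watershed(img, markers)  # digit boundaries are labeled -1
```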

Keywords: handwritten numerals, segmentation, courtesy amount, feature extraction, numeral recognition

Procedia PDF Downloads 370
18186 An Analysis of Uncoupled Designs in the Chicken Egg

Authors: Pratap Sriram Sundar, Chandan Chowdhury, Sagar Kamarthi

Abstract:

Nature has perfected her designs over 3.5 billion years of evolution. Research fields such as biomimicry, biomimetics, bionics, bio-inspired computing, and nature-inspired design have explored nature-made artifacts and systems to understand nature's mechanisms and intelligence. Learning from nature, researchers have generated sustainable designs and innovation in a variety of fields such as energy, architecture, agriculture, transportation, communication, and medicine. Axiomatic design offers a method to judge whether a design is good. This paper analyzes the design aspects of one of nature's amazing objects: the chicken egg. The functional requirements (FRs) of the components of the object are tabulated and mapped onto nature-chosen design parameters (DPs). The 'independence axiom' of the axiomatic design methodology is applied to analyze couplings and to evaluate whether the egg's design is good (i.e., uncoupled) or bad (i.e., coupled). The analysis revealed that the egg's design is a good, i.e., uncoupled, design. This approach can be applied to any of nature's artifacts to judge whether their design is good or bad, which makes the methodology valuable for biomimicry studies. It can also be a very useful teaching aid for design considerations in biology and bio-inspired innovation.
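In axiomatic design notation, the independence axiom is checked on the design matrix that maps DPs to FRs: a diagonal matrix means an uncoupled design, a triangular matrix a decoupled one, and a full matrix a coupled one. For a two-FR fragment, the uncoupled case reads:

```latex
\begin{Bmatrix} FR_1 \\ FR_2 \end{Bmatrix}
=
\begin{bmatrix} X & 0 \\ 0 & X \end{bmatrix}
\begin{Bmatrix} DP_1 \\ DP_2 \end{Bmatrix}
```

where X marks a nonzero dependency; the abstract's conclusion corresponds to the egg's FR-DP matrix having this diagonal structure.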

Keywords: uncoupled design, axiomatic design, nature design, design evaluation

Procedia PDF Downloads 157
18189 Molding Properties of Cobalt-Chrome-Based Feedstocks Used in Low-Pressure Powder Injection Molding

Authors: Ehsan Gholami, Vincent Demers

Abstract:

Low-pressure powder injection molding is an emerging technology for cost-effectively producing complex-shaped metallic parts with the proper dimensional tolerances, in either high or low production volumes. In this study, the molding properties of cobalt-chrome-based feedstocks were evaluated for use in a low-pressure powder injection molding process. The rheological properties of the feedstock formulations were obtained by mixing metallic powder with a proprietary wax-based binder system. Rheological parameters such as the reference viscosity, the shear rate sensitivity index, and the activation energy for viscous flow were extracted from the viscosity profiles and introduced into the Weir model to calculate the moldability index. Feedstocks were experimentally injected into a spiral mold cavity to validate the injection performance calculated with the model.
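A representative functional form that combines the three parameters named above, though not necessarily the exact model fitted by the authors, is a power-law viscosity with Arrhenius temperature dependence:

```latex
\eta(\dot{\gamma}, T) =
\eta_0 \left( \frac{\dot{\gamma}}{\dot{\gamma}_0} \right)^{n-1}
\exp\!\left[ \frac{E}{R} \left( \frac{1}{T} - \frac{1}{T_0} \right) \right]
```

where η₀ is the reference viscosity at the reference shear rate γ̇₀ and temperature T₀, n is the shear rate sensitivity index, E is the activation energy for viscous flow, and R is the gas constant; parameters of this kind feed the Weir moldability index.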

Keywords: binder, feedstock, moldability, powder injection molding, viscosity

Procedia PDF Downloads 263
18188 Air-Coupled Ultrasonic Testing for Non-Destructive Evaluation of Various Aerospace Composite Materials by Laser Vibrometry

Authors: J. Vyas, R. Kazys, J. Sestoke

Abstract:

Air-coupled ultrasonics is a contactless ultrasonic measurement approach that has become widespread for material characterization in the aerospace industry, where the lightest possible weight is essential without compromising durability. To achieve these requirements, composite materials are widely used. This paper presents an analysis of air-coupled ultrasonics for composite materials such as CFRP (carbon fibre reinforced polymer), GLARE (a glass fiber metal laminate), and honeycombs used in the design of modern aircraft. Laser vibrometry could be a key characterization tool for aerospace components. The fundamentals of air-coupled ultrasonics, including the principles, working modes, and transducer arrangements used for this purpose, are also recounted in brief. The emphasis of this paper is on NDT techniques based on ultrasonic guided wave applications and the possibilities of using laser vibrometry for non-contact measurement of guided waves in different materials. A 3D assessment technique is described that employs a single-point laser head with automatic scanning relocation of the material to assess the mechanical displacement, along with the pros and cons of composite materials for aerospace applications with defects and delaminations.

Keywords: air-coupled ultrasonics, contactless measurement, laser interferometry, NDT, ultrasonic guided waves

Procedia PDF Downloads 225
18187 Residential and Care Model for Elderly People Based on “Internet Plus”

Authors: Haoyi Sheng

Abstract:

China's aging tendency is becoming increasingly severe, leading to the awkward situation of "getting old before getting wealthy". The traditional pension model no longer meets today's needs. Relying on "Internet Plus", elderly care can efficiently integrate information and resources and meet the personalized needs of the elderly. It can reduce the operating cost of community elderly care facilities and lay a technical foundation for providing better services for the elderly. The key to helping the elderly in the future is to effectively integrate and make good use of technology to improve the efficiency of elderly care services. The effective integration of traditional home care, community care, intelligent elderly care equipment, and medical resources to create an "Internet Plus" community intelligent pension service mode has become the future development trend of aging care. The research method of this paper is, first, to collect literature and conduct theoretical research on community pensions; second, to elaborate the combination of age-friendly design and "Internet Plus"; and finally, to state the current level of intelligent technology in old-age care and look into the future through an understanding of the multiple levels of "Internet Plus". The development of a community intelligent pension mode and its content under "Internet Plus" has enormous potential. In addition to the characteristics and functions of ordinary houses, the residential design of endowment housing has higher requirements for comfort and personalization, and people-orientation is the guiding principle of the design.

Keywords: ageing tendency, 'Internet Plus', community intelligent elderly care, elderly care service model, technology

Procedia PDF Downloads 122
18186 Smart Transportation: Bringing Back Sunshine City Harare

Authors: R. Shayamapiki

Abstract:

This study explores the applicability of new urbanism principles in cities of developing countries as a path towards building sustainable cities through the implementation of Smart Transportation. The Smart Transportation approach to planning has grown remarkably around the globe in the past decade. In the quest to curb traffic congestion and reduce automobile dependency in inner-city Harare, Smart Transportation has been a strong driver towards building sustainable cities. Conceptually, Smart Transportation rests on principles that include walking, cycling, and mass transit. The approach has been a success story in cities of the developed world, but its application in cities of developing countries has been doubtful. As cities of developing countries are multifaceted, with several urban sustainability challenges, the study finds that there are no robust policy, legislative, and institutional frameworks to govern the application of Smart Transportation in urban planning, and hence no clear road towards its success story. Questions proliferate: how capable are cities of developing countries of turning Smart Transportation principles into a success story? What victories can Smart Transportation bring to sustainable urban development? What are the constraints on embracing the principles, and how can they be addressed? Methodologically, a case study of the urban syntax of the Harare Central Business District and the arterial roads of the city, together with its legislation and institutional settings, underpins the research outcomes. The study identifies the hindrances of policy, legislative, and institutional incapacities, coupled with economic constraints, a lack of political will, and technically inflexible zoning regulations. It also elucidates the need to adopt a localized approach to Smart Transportation. The paper then calls for a strengthening of institutional and legal reform in order to embrace the concept: policy and legislative support, feasible financial mechanisms, coordination of responsible stakeholders, and reform of planning standards and regulatory frameworks to realize the success story of Smart Transportation in the developing world.

Keywords: inner-city Harare, new urbanism, smart transportation, sustainable cities

Procedia PDF Downloads 461
18185 Job Satisfaction and Career Choices: A Study Using Schein's Career Anchor Model

Authors: Rosana Silvina Codaro, Patricia Amelia Tomei

Abstract:

This study explores the relationship between job satisfaction and the alignment between an individual's current occupation and their talents, needs, and values, namely their 'career anchors'. With this purpose in mind, a quantitative survey was performed on a non-probabilistic sample of graduate business management students at a private university in Rio de Janeiro. The results of the survey showed no significant association between satisfaction at work and alignment with the individual's career anchor. The most frequent career anchor found for both genders was lifestyle, showing a trend towards seeking a career that allows some balance between professional and personal life. The study also showed that self-employed individuals are more satisfied with their work than individuals employed by a company, and men are more satisfied at work than women. Individuals who are aligned but not satisfied tend to be those with fewer years of work experience, while individuals who are not aligned but satisfied tend to be older.

Keywords: careers, career anchors, job satisfaction, Schein's career anchor model

Procedia PDF Downloads 353
18184 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading

Authors: Robert Caulk

Abstract:

A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training dataset and using that parameter space to identify and remove outliers from prediction data points; a sketch of this idea follows below. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. It also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive-training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is discussed. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g., TA-Lib, pandas-ta). The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides a road map for future development in FreqAI.
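The parameter-space idea, flagging prediction points that fall outside the training data's feature distribution, can be sketched with scikit-learn as below; this illustrates the concept only and is not FreqAI's actual implementation.

```python
# Conceptual sketch: drop prediction points outside the training feature space.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 20))  # stand-in for engineered feature rows
X_pred = rng.normal(size=(50, 20))
X_pred[0] += 8.0                       # one point far outside the training space

scaler = StandardScaler().fit(X_train)
detector = OneClassSVM(nu=0.01).fit(scaler.transform(X_train))

keep = detector.predict(scaler.transform(X_pred)) == 1  # +1 inlier, -1 outlier
X_pred_clean = X_pred[keep]  # out-of-distribution points dropped before inference
```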

Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration

Procedia PDF Downloads 75
18183 Code Embedding for Software Vulnerability Discovery Based on Semantic Information

Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson

Abstract:

Deep learning methods have seen increasing application to the long-standing security research goal of automatic vulnerability detection in source code. Attention, however, must still be paid to the task of producing vector representations of source code (code embeddings) as input for these deep learning models. Graphical representations of code, most predominantly Abstract Syntax Trees and Code Property Graphs, have received some use in this task of late; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information; however, little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in the graph's nodes. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph in order to select the features most indicative of the presence or absence of vulnerabilities. The model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It improves on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.

Keywords: code representation, deep learning, source code semantics, vulnerability discovery

Procedia PDF Downloads 141
18182 Osteogenesis in Thermo-Sensitive Hydrogel Using Mesenchymal Stem Cell Derived from Human Turbinate

Authors: A. Reum Son, Jin Seon Kwon, Seung Hun Park, Hai Bang Lee, Moon Suk Kim

Abstract:

Stem cell therapy is currently in focus as a promising source of treatment for human clinical disease. As supports for stem cells, in situ-forming hydrogels carrying growth factors and cells appear to be a promising approach in tissue engineering. To examine the in vivo osteogenic differentiation of hTMSCs, a type of mesenchymal stem cell, in an injectable hydrogel, we used a methoxy polyethylene glycol-polycaprolactone block copolymer (MPEG-PCL) solution with osteogenic factors. We synthesized the MPEG-PCL hydrogel and measured its viscosity to verify the sol-gel transition. To demonstrate the osteogenic ability of hTMSCs, we conducted an in vitro osteogenesis experiment. To assess cell cytotoxicity, we performed a WST-1 assay with hTMSCs and MPEG-PCL. Following the in vitro experiments, we implanted the cell-hydrogel mixture into an animal model and assessed the degree of osteogenesis by histological analysis and gene expression. These experimental data show that the MPEG-PCL hydrogel undergoes a sol-gel transition with temperature change and is biocompatible with stem cells. The histological analysis and gene expression show that hTMSCs are a very good source for osteogenesis within the hydrogel and may be used in tissue engineering as an important treatment method. hTMSCs are a good adult stem cell source given their ease of isolation and high proliferation. When hTMSCs are used in a cell therapy approach with an in situ-forming hydrogel, they may provide various benefits as a noninvasive alternative for bone tissue engineering applications.

Keywords: injectable hydrogel, stem cell, osteogenic differentiation, tissue engineering

Procedia PDF Downloads 434
18181 Big Data Analytics and Data Security in the Cloud via Fully Homomorphic Encryption

Authors: Waziri Victor Onomza, John K. Alhassan, Idris Ismaila, Noel Dogonyaro Moses

Abstract:

This paper addresses the problem of building secure computational services for encrypted information in cloud computing without decrypting the encrypted data; it thereby meets the aspiration for a computational encryption model that can enhance the security of big data with respect to the privacy, confidentiality, and availability of the users. The cryptographic model applied for the computational processing of the encrypted data is the Fully Homomorphic Encryption scheme. We contribute theoretical presentations of high-level computational processes, based on number theory and algebra, that can easily be integrated and leveraged in cloud computing, with detailed mathematical concepts underlying the fully homomorphic encryption models. This contribution supports the full implementation of a big data analytics-based cryptographic security algorithm.
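As a small, hedged illustration of computing on ciphertexts, the sketch below uses the multiplicative homomorphism of textbook RSA; this is not a fully homomorphic scheme, but it shows the core idea that operations on ciphertexts map to operations on plaintexts.

```python
# Toy illustration of a homomorphic property, not full FHE: textbook RSA is
# multiplicatively homomorphic, so the product of ciphertexts decrypts to the
# product of plaintexts. FHE extends this to both addition and multiplication.
n, e, d = 3233, 17, 2753  # tiny textbook RSA key (p=61, q=53); insecure, demo only

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

c1, c2 = enc(6), enc(7)
product = dec(c1 * c2 % n)  # multiply under encryption, then decrypt
assert product == (6 * 7) % n
print(product)  # -> 42, computed without ever decrypting the inputs
```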

Keywords: big data analytics, security, privacy, bootstrapping, homomorphic, homomorphic encryption scheme

Procedia PDF Downloads 366