Search results for: domain
519 Inter-Annual Variations of Sea Surface Temperature in the Arabian Sea
Authors: K. S. Sreejith, C. Shaji
Abstract:
Though both the Arabian Sea and its counterpart, the Bay of Bengal, are forced primarily by the semi-annually reversing monsoons, the spatio-temporal variations of surface waters are much stronger in the Arabian Sea than in the Bay of Bengal. This study focuses on the inter-annual variability of Sea Surface Temperature (SST) in the Arabian Sea by analysing the ERSST dataset, which covers 152 years of SST (January 1854 to December 2005) based on the ICOADS in situ observations. To capture the dominant SST oscillations and to understand the inter-annual SST variations at various local regions of the Arabian Sea, wavelet analysis was performed on this long time-series SST dataset. This tool has an advantage over other signal-analysis tools such as Fourier analysis in that it unfolds a time series (signal) in both the frequency and time domains, making it easier to determine the dominant modes of variability and to explain how those modes vary in time. The analysis revealed that pentadal SST oscillations predominate at most of the analysed local regions of the Arabian Sea. From the time information of the wavelet analysis, it was interpreted that cold and warm events of large amplitude occurred during the periods 1870-1890, 1890-1910, 1930-1950, 1980-1990 and 1990-2005. SST oscillations with peaks having periods of ~2-4 years were found to be significant in the central and eastern regions of the Arabian Sea. This indicates that the inter-annual SST variation in the Indian Ocean is affected by El Niño-Southern Oscillation (ENSO) and Indian Ocean Dipole (IOD) events.
Keywords: Arabian Sea, ICOADS, inter-annual variation, pentadal oscillation, SST, wavelet analysis
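Illustrative sketch (not the authors' pipeline): the continuous wavelet transform described above can be reproduced in Python with the PyWavelets library; the synthetic SST-like series and the Morlet wavelet choice are assumptions for demonstration.

```python
import numpy as np
import pywt

# Synthetic monthly SST anomaly series standing in for an ERSST grid point:
# a ~3 yr ENSO-like signal plus a ~5 yr pentadal signal and noise.
t = np.arange(152 * 12) / 12.0                  # time in years
sst = (0.5 * np.sin(2 * np.pi * t / 3.0)
       + 0.8 * np.sin(2 * np.pi * t / 5.0)
       + 0.2 * np.random.randn(t.size))

# Continuous wavelet transform with a Morlet wavelet (assumed choice).
scales = np.arange(1, 256)
coefs, freqs = pywt.cwt(sst, scales, 'morl', sampling_period=1.0 / 12.0)

# Wavelet power as a function of period (years) and time; dominant modes
# appear as ridges, here at the pentadal and ~3 yr periods.
power = np.abs(coefs) ** 2
periods = 1.0 / freqs
print("periods with highest mean power (yr):",
      periods[np.argsort(power.mean(axis=1))[-3:]])
```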
Procedia PDF Downloads 276
518 Genome-Wide Identification and Characterization of MLO Family Genes in Pumpkin (Cucurbita maxima Duch.)
Authors: Khin Thanda Win, Chunying Zhang, Sanghyeob Lee
Abstract:
Mildew resistance locus o (Mlo), a plant-specific gene family encoding proteins with seven transmembrane (TM) domains, plays an important role in plant resistance to powdery mildew (PM). PM, caused by Podosphaera xanthii, is a widespread plant disease and probably represents the major fungal threat to many cucurbits. The recent Cucurbita maxima genome sequence data provide an opportunity to identify and characterize the MLO gene family in this species. A total of twenty genes (designated CmaMLO1 through CmaMLO20) have been identified by using an in silico cloning method with the MLO gene sequences of Cucumis sativus, Cucumis melo, Citrullus lanatus and Cucurbita pepo as probes. These CmaMLOs were evenly distributed on 15 of the 20 C. maxima chromosomes without any obvious clustering. Multiple sequence alignment showed that the common structural features of the MLO gene family, such as the TM domains, a calmodulin-binding domain and 30 amino acid residues important for MLO function, were well conserved. Phylogenetic analysis of the CmaMLO genes and those of other plant species reveals seven different clades (I through VII), of which only clade IV is specific to monocots (rice, barley, and wheat). Phylogenetic and structural analyses provided preliminary evidence that five genes belonging to clade V could be susceptibility genes, which may play an important role in PM resistance. This study is, to our knowledge, the first comprehensive report on MLO genes in C. maxima. These findings will facilitate the functional analysis of the MLOs related to PM susceptibility and are valuable resources for the development of disease resistance in pumpkin.
Keywords: Mildew resistance locus o (Mlo), powdery mildew, phylogenetic relationship, susceptibility genes
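A hedged sketch of the phylogenetic step (clades are read off a tree built from a multiple sequence alignment) using Biopython; the alignment file and the identity-distance/neighbor-joining choices are illustrative assumptions, not the authors' exact workflow.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Multiple sequence alignment of MLO protein sequences (hypothetical file).
aln = AlignIO.read("mlo_proteins.aln", "clustal")

# Pairwise distances from percent identity, then a neighbor-joining tree;
# clades such as I-VII can then be read off the tree topology.
dm = DistanceCalculator("identity").get_distance(aln)
tree = DistanceTreeConstructor().nj(dm)
Phylo.draw_ascii(tree)
```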
Procedia PDF Downloads 181
517 Multiscale Hub: An Open-Source Framework for Practical Atomistic-To-Continuum Coupling
Authors: Masoud Safdari, Jacob Fish
Abstract:
Despite the vast amount of existing theoretical knowledge, the implementation of a universal multiscale modeling, analysis, and simulation software framework remains challenging. Existing multiscale software and solutions are often domain-specific and closed-source, and they demand a high level of experience and skill in both multiscale analysis and programming. Furthermore, tools currently available for Atomistic-to-Continuum (AtC) multiscaling are developed under assumptions such as the users' access to high-performance computing facilities. These and many other challenges have reduced the adoption of multiscale methods in academia and especially in industry. In the current work, we introduce Multiscale Hub (MsHub), an effort towards making AtC more accessible through cloud services. As a joint effort between academia and industry, MsHub provides a universal web-enabled framework for practical multiscaling. Developed on top of the universally acclaimed scientific programming language Python, the package currently provides an open-source, comprehensive, easy-to-use framework for AtC coupling. MsHub offers an easy-to-use interface to prominent molecular dynamics and multiphysics continuum mechanics packages such as LAMMPS and MFEM (a free, lightweight, scalable C++ library for finite element methods). In this work, we first report on the design philosophy of MsHub and the challenges and issues faced in its implementation. MsHub takes advantage of a comprehensive set of tools and algorithms developed for AtC that can be used for a variety of governing physics. We then briefly report key AtC algorithms implemented in MsHub. Finally, we conclude with a few examples illustrating the capabilities of the package and its future directions.
Keywords: atomistic, continuum, coupling, multiscale
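The abstract does not expose MsHub's API, so the following is a self-contained toy illustration, in plain Python/NumPy, of the kind of atomistic-to-continuum handshake such a framework orchestrates: a 1D atom chain (standing in for an MD code like LAMMPS) exchanging interface displacements with a quasi-static elastic bar (standing in for a continuum solver like MFEM). All parameter values are invented.

```python
import numpy as np

k, m, dt = 1.0, 1.0, 0.05           # spring stiffness, mass, time step
na = 20                              # atoms (unit spacing)
u = np.zeros(na); v = np.zeros(na)   # atomic displacements / velocities
u[0] = 0.5                           # initial perturbation at the free end

nc = 10                              # continuum nodes (unit spacing)
uc = np.zeros(nc)                    # continuum displacements

for step in range(200):
    # Atomistic side: explicit dynamics of the spring chain; the last
    # atom is coupled to the first continuum node.
    f = np.zeros(na)
    f[1:] += k * (u[:-1] - u[1:])
    f[:-1] += k * (u[1:] - u[:-1])
    f[-1] += k * (uc[0] - u[-1])
    v += dt * f / m
    u += dt * v

    # Continuum side: quasi-static elastic bar solved each step with
    # Dirichlet data taken from the interface atom (far end fixed).
    A = (np.diag(2.0 * np.ones(nc)) - np.diag(np.ones(nc - 1), 1)
         - np.diag(np.ones(nc - 1), -1))
    b = np.zeros(nc)
    A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = u[-1]
    A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = 0.0
    uc = np.linalg.solve(A, b)

print("interface displacement after coupling:", u[-1])
```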
Procedia PDF Downloads 177
516 Implementation of Free-Field Boundary Condition for 2D Site Response Analysis in OpenSees
Authors: M. Eskandarighadi, C. R. McGann
Abstract:
It is observed from past earthquakes that local site conditions can significantly affect the strong ground motion characteristics experienced at a site. One-dimensional seismic site response analysis is the most common approach for investigating site response. This approach assumes that the soil is homogeneous and infinitely extended in the horizontal direction. Therefore, tying the side boundaries together is one way to model this behavior, as the wave passage is assumed to be only vertical. However, 1D analysis cannot capture the 2D nature of wave propagation, soil heterogeneity, or 2D soil profiles with features such as inclined layer boundaries. In contrast, 2D seismic site response modeling can consider all of these factors to better understand local site effects on strong ground motions. 2D wave propagation, and the fact that the soil profiles on the two sides of the model may not be identical, clarify the importance of a boundary condition on each side that can minimize unwanted reflections from the edges of the model and input appropriate loading conditions. Ideally, the model should be sufficiently large to minimize wave reflection; however, due to computational limitations, increasing the model size is impractical in some cases. Another approach is to employ free-field boundary conditions that take into account the free-field motion that would exist far from the model domain and apply it to the sides of the model. This research focuses on implementing free-field boundary conditions in OpenSees for 2D site response analysis. Comparisons are made between 1D models and 2D models with various boundary conditions, and the details and limitations of the developed free-field boundary modeling approach are discussed.
Keywords: boundary condition, free-field, opensees, site response analysis, wave propagation
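For concreteness, the "tied side boundaries" idea mentioned above (the simpler alternative to the free-field condition, whose implementation is not reproduced here) can be expressed in OpenSeesPy with equalDOF; the node numbering and column dimensions below are assumptions.

```python
import openseespy.opensees as ops

ops.wipe()
ops.model('basic', '-ndm', 2, '-ndf', 2)

# A one-element-wide soil column: left/right node pairs at each elevation.
n_layers, dy, width = 10, 1.0, 1.0
for i in range(n_layers + 1):
    ops.node(2 * i + 1, 0.0, i * dy)       # left boundary node
    ops.node(2 * i + 2, width, i * dy)     # right boundary node

ops.fix(1, 1, 1)                           # fixed base
ops.fix(2, 1, 1)

# Tie horizontal and vertical DOFs of each left/right pair so the column
# deforms in pure shear, mimicking 1D free-field response.
for i in range(1, n_layers + 1):
    ops.equalDOF(2 * i + 1, 2 * i + 2, 1, 2)
```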
Procedia PDF Downloads 158
515 Development of Medical Intelligent Process Model Using Ontology Based Technique
Authors: Emmanuel Chibuogu Asogwa, Tochukwu Sunday Belonwu
Abstract:
An urgent demand for creative solutions has been created by the rapid expansion of medical knowledge, the complexity of patient care, and the requirement for more precise decision-making. The creation of a Medical Intelligent Process Model (MIPM) utilizing ontology-based techniques appears to be a promising way to overcome this obstacle and unleash the full potential of healthcare systems. The development of the MIPM is motivated by a lack of quick access to relevant medical information and of advanced tools for treatment planning and clinical decision-making, which ontology-based techniques can provide. The aim of this work is to develop a structured and knowledge-driven framework that leverages ontology, a formal representation of domain knowledge, to enhance various aspects of healthcare. The Object-Oriented Analysis and Design Methodology (OOADM) was adopted in the design of the system, as we desired to build a usable and evolvable application. For effective implementation of this work, we used the following materials, methods and tools: the medical dataset used to test our model was obtained from Kaggle, and the ontology-based technique was used with a confusion matrix, MySQL, Python, Hypertext Markup Language (HTML), Hypertext Preprocessor (PHP), Cascading Style Sheets (CSS), JavaScript, Dreamweaver, and Fireworks. According to test results on the new system using the confusion matrix, both the accuracy and the overall effectiveness of the medical intelligent process improved significantly, by 20% compared to the previous system. Therefore, the model is recommended for use by healthcare professionals.
Keywords: ontology-based, model, database, OOADM, healthcare
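A minimal sketch of the confusion-matrix evaluation described, using scikit-learn; the label vectors are placeholders, not the Kaggle dataset used in the study.

```python
from sklearn.metrics import confusion_matrix, accuracy_score

# Placeholder ground-truth and predicted diagnoses (illustrative only).
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

print(confusion_matrix(y_true, y_pred))      # rows: true class, cols: predicted
print("accuracy:", accuracy_score(y_true, y_pred))
```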
Procedia PDF Downloads 78
514 Facile Fabrication of TiO₂NT/Fe₂O₃@Ag₂CO₃ Nanocomposite and Its Highly Efficient Visible Light Photocatalytic and Antibacterial Activity
Authors: Amal A. Al-Kahlawy, Heba H. El-Maghrabi
Abstract:
Due to the increasing need for environmental protection and for new energy materials, such materials are under extensive investigation. Among others, TiO2 nanotube (TNT) nanocomposites with iron oxide and silver carbonate are promising alternatives as high-efficiency visible light photocatalysts due to their unique properties and superior charge transport. Our efforts in this domain aim at the construction of a novel nanocomposite of TiO2NT/Fe2O3@Ag2CO3. The structure, surface morphology, chemical composition and optical properties were characterized by X-ray diffraction (XRD), Raman, Fourier-transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), energy dispersive X-ray spectrometry (EDS), transmission electron microscopy (TEM), selected area electron diffraction (SAED) and UV-vis diffuse reflectance spectroscopy (DRS). The XRD results confirm the interaction of TiO2-NT with iron oxide. This novel nanocomposite shows remarkably enhanced performance for the photodegradation of phenol compounds. The experimental data show promising photocatalytic activity. In particular, a maximum of 450 mg/g was removed within 60 min under solar light irradiation, with a degradation efficiency of 99.5%. The high photocatalytic activity of the nanocomposite is found to be related to its increased adsorption toward chemical species, enhanced light absorption, and efficient charge separation and transfer. Finally, the designed TiO2NT/Fe2O3@Ag2CO3 nanocomposite has a great degree of sustainability and could have potential application in the industrial treatment of wastewater containing toxic organic materials.
Keywords: nanocomposite, photocatalyst, solar energy, titanium dioxide nanotubes
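The reported 99.5% efficiency follows the standard relation η = (C0 − Ct)/C0 × 100; a small sketch, assuming pseudo-first-order kinetics and illustrative concentrations:

```python
import numpy as np

# Illustrative initial/final phenol concentrations (mg/L) and time (min).
c0, ct, t = 200.0, 1.0, 60.0

efficiency = (c0 - ct) / c0 * 100.0      # ~99.5 % as reported
k = np.log(c0 / ct) / t                  # pseudo-first-order rate constant
print(f"degradation efficiency: {efficiency:.1f} %, k = {k:.3f} 1/min")
```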
Procedia PDF Downloads 247
513 Awareness among Medical Students and Faculty about Integration of Artificial Intelligence Literacy in Medical Curriculum
Authors: Fatima Faraz
Abstract:
BACKGROUND: While artificial intelligence (AI) provides new opportunities across a wide variety of industries, healthcare is no exception. AI can lead to advancements in how the healthcare system functions and can improve the quality of patient care. Developing countries like Pakistan are lagging in the implementation of AI-based solutions in healthcare. This demands increased knowledge and AI literacy among healthcare professionals. OBJECTIVES: To assess the level of awareness among medical students and faculty about AI in preparation for teaching AI basics and data science applications in clinical practice in an integrated medical curriculum. METHODS: An online 15-question semi-structured questionnaire, previously tested and validated, was delivered to participants through convenience sampling. The questionnaire was composed of 3 parts: the participant's background knowledge, AI awareness, and attitudes toward AI applications in medicine. RESULTS: A total of 182 students and 39 faculty members from Rawalpindi Medical University, Pakistan, participated in the study. Only 26% of students and 46.2% of faculty members responded that they were aware of AI topics in clinical medicine. The major source of AI knowledge was social media (35.7%) for students and professional talks and colleagues (43.6%) for faculty members. 23.5% of participants answered that they personally had a basic understanding of AI. Students and faculty (60.1%) were interested in AI in the patient care and teaching domains. These findings parallel similar published AI survey results. CONCLUSION: This survey shows interest among students and faculty in AI developments and technology applications in healthcare. Further studies are required in order to fit AI correctly into the integrated modular curriculum of medical education.
Keywords: medical education, data science, artificial intelligence, curriculum
Procedia PDF Downloads 101
512 Exploring the Role of Building Information Modeling for Delivering Successful Construction Projects
Authors: Muhammad Abu Bakar Tariq
Abstract:
The construction industry plays a crucial role in the progress of societies and economies. Furthermore, construction projects have social as well as economic implications; thus, their success or failure has wider impacts. However, the industry lags behind in terms of efficiency and productivity. Building Information Modeling (BIM) is recognized as a revolutionary development in the Architecture, Engineering and Construction (AEC) industry. There are numerous interest groups around the world providing definitions of BIM, proponents describing its advantages, and opponents identifying challenges and barriers to the adoption of BIM. This research aims to determine what BIM actually is, along with its potential role in delivering successful construction projects. The methodology is a critical analysis of secondary data sources, i.e., information in the public domain, including peer-reviewed journal articles, industry and government reports, conference papers, books, case studies, etc. It is found that clash detection and visualization are two major advantages of BIM. Clash detection identifies clashes among structural, architectural and MEP designs before construction actually commences, which saves time as well as cost and ensures quality during the execution phase of a project. Visualization is a powerful tool that facilitates rapid decision-making in addition to communication and coordination among stakeholders throughout a project's life cycle. By eliminating inconsistencies that consume time and cost during actual construction and by improving collaboration among stakeholders throughout the project's life cycle, BIM can play a positive role in achieving the efficiency and productivity that deliver successful construction projects.
Keywords: building information modeling, clash detection, construction project success, visualization
Procedia PDF Downloads 260
511 Hand Symbol Recognition Using Canny Edge Algorithm and Convolutional Neural Network
Authors: Harshit Mittal, Neeraj Garg
Abstract:
Hand symbol recognition is a pivotal component of computer vision, with far-reaching applications spanning sign language interpretation, human-computer interaction, and accessibility. This research paper discusses an approach integrating the Canny edge algorithm and a convolutional neural network. The significance of this study lies in its potential to enhance communication and accessibility for individuals with hearing impairments or those engaged in gesture-based interactions with technology. In the experiment, the data were manually collected by the authors from a webcam using Python code; to enlarge the dataset, augmentation was applied to the original images, which makes the model more robust. The dataset of about 6000 colour images, distributed equally among 5 classes (i.e., 1, 2, 3, 4, 5), is first pre-processed to grayscale and then passed through the Canny edge algorithm with thresholds 1 and 2 set to 150 each. After the dataset is built, it is used to train a convolutional neural network model, giving accuracy: 0.97834, precision: 0.97841, recall: 0.9783, and F1 score: 0.97832. For user purposes, a block of Python code provides a window for hand symbol recognition. This research, at its core, seeks to advance the field of computer vision by providing an advanced perspective on hand sign recognition. By leveraging the capabilities of the Canny edge algorithm and a convolutional neural network, this study contributes to the ongoing efforts to create more accurate, efficient, and accessible solutions for individuals with diverse communication needs.
Keywords: hand symbol recognition, computer vision, Canny edge algorithm, convolutional neural network
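A condensed sketch of the described pipeline — grayscale conversion, Canny with both thresholds at 150, and a small Keras CNN for the 5 classes; the abstract does not give the architecture, so the layers below are an assumption.

```python
import cv2
from tensorflow import keras

def preprocess(img_bgr):
    # Grayscale, then Canny with threshold1 = threshold2 = 150 (as in the paper).
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 150, 150)
    return edges.astype("float32")[..., None] / 255.0

# Small CNN over 64x64 edge maps, 5 output classes (architecture assumed).
model = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# edge = preprocess(cv2.imread("sample.jpg"))  # example usage (file assumed)
```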
Procedia PDF Downloads 65
510 Possible Mechanism of DM2 Development in OSA Patients Mediated via Rev-Erb-Alpha and NPAS2 Proteins
Authors: Filip Franciszek Karuga, Szymon Turkiewicz, Marta Ditmer, Marcin Sochal, Piotr Białasiewicz, Agata Gabryelska
Abstract:
The circadian rhythm, an internal coordinator of physiological processes, is composed of a set of semi-autonomous clocks. Clocks are regulated through the expression of circadian clock genes, which form feedback loops creating an oscillator. The primary loop consists of the activators CLOCK and BMAL1 and the repressors CRY and PER. CLOCK can be substituted by the Neuronal PAS Domain Protein 2 (NPAS2). The orphan nuclear receptor REV-ERB-α is a component of the secondary major loop, modulating the expression of BMAL1. Circadian clocks might be disrupted by obstructive sleep apnea (OSA), which has also been associated with type II diabetes mellitus (DM2). Interestingly, studies suggest that dysregulation of NPAS2 and REV-ERB-α might contribute to the pathophysiology of DM2 as well. The goal of our study was to examine the role of NPAS2 and REV-ERB-α in DM2 in OSA patients. After examination of the clinical data, all participants underwent polysomnography (PSG) to assess their apnea-hypopnea index (AHI). Based on the acquired data, participants were assigned to one of 3 groups: OSA (AHI>30, no DM2; n=17 for NPAS2 and 34 for REV-ERB-α), DM2 (AHI>30 + DM2; n=7 for NPAS2 and 15 for REV-ERB-α) and a control group (AHI<5, no DM2; n=16 for NPAS2 and 31 for REV-ERB-α). An ELISA immunoassay was performed to assess the serum protein levels of REV-ERB-α and NPAS2. The only statistically significant difference between groups was observed in the NPAS2 protein level (p=0.037); post-hoc analysis showed a significant difference between the OSA and the control group (p=0.017). AHI and NPAS2 level were significantly correlated (r=-0.478, p=0.002) across all groups. A significant correlation was also observed between the REV-ERB-α level and sleep efficiency (r=0.617, p=0.005) as well as sleep maintenance efficiency (r=0.645, p=0.003) in the OSA group. We conclude that NPAS2 is associated with OSA severity and might contribute to the metabolic sequelae of this disease. REV-ERB-α, on the other hand, can influence sleep continuity and efficiency.
Keywords: OSA, diabetes mellitus, endocrinology, chronobiology
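The group comparison and correlations reported above can be reproduced with SciPy; the value arrays are placeholders and the omnibus test choice is an assumption, since the abstract does not name it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder serum NPAS2 levels for the three groups (values illustrative).
osa = rng.normal(4.0, 1.0, 17)
dm2 = rng.normal(4.5, 1.0, 7)
control = rng.normal(5.5, 1.0, 16)

# Omnibus comparison across the three groups (Kruskal-Wallis is an
# assumption), then the AHI-NPAS2 correlation reported as r = -0.478.
print(stats.kruskal(osa, dm2, control))
ahi = np.concatenate([np.full(17, 40.0), np.full(7, 45.0), np.full(16, 3.0)])
npas2 = np.concatenate([osa, dm2, control])
print(stats.pearsonr(ahi, npas2))
```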
Procedia PDF Downloads 155
509 Recent Developments in the Application of Deep Learning to Stock Market Prediction
Authors: Shraddha Jain Sharma, Ratnalata Gupta
Abstract:
Predicting stock movements in the financial market is both difficult and rewarding. Analysts and academics are increasingly using advanced approaches such as machine learning to anticipate stock price patterns, thanks to the expanding capacity of computing and the recent advent of graphics processing units and tensor processing units. Stock market prediction is a type of time-series prediction that is incredibly difficult, since stock prices are influenced by a variety of financial, socioeconomic, and political factors. Furthermore, even minor mistakes in stock market price forecasts can result in significant losses for companies that employ the findings of stock market price prediction for financial analysis and investment. Soft computing techniques are increasingly being employed for stock market prediction due to their better accuracy compared with traditional statistical methodologies. The proposed research looks at the need for soft computing techniques in stock market prediction, the numerous soft computing approaches important to the field, past work in the area with its prominent features, and the significant problems or issues the domain involves. For constructing a predictive model, the major focus is on neural networks and fuzzy logic. The stock market is extremely unpredictable, and it is unquestionably tough to predict it correctly based on certain characteristics. This study provides a complete overview of the numerous strategies investigated for high-accuracy prediction, with a focus on the most important characteristics.
Keywords: stock market prediction, artificial intelligence, artificial neural networks, fuzzy logic, accuracy, deep learning, machine learning, stock price, trading volume
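As a minimal illustration of the neural-network family of approaches the review focuses on, here is a hedged Keras sketch of an LSTM predicting the next closing price from a sliding window; the window size, architecture and synthetic series are assumptions.

```python
import numpy as np
from tensorflow import keras

# Synthetic closing-price series and sliding windows of 30 days.
prices = np.cumsum(np.random.randn(1000)) + 100.0
w = 30
X = np.array([prices[i:i + w] for i in range(len(prices) - w)])[..., None]
y = prices[w:]

model = keras.Sequential([
    keras.Input(shape=(w, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),        # next-day price regression head
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```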
Procedia PDF Downloads 90
508 Cyber Security and Risk Assessment of the e-Banking Services
Authors: Aisha F. Bushager
Abstract:
Today we are more exposed than ever to cyber threats and attacks at the personal, community, organizational, national, and international levels. More aspects of our lives are operating on computer networks simply because we are living in the fifth domain, which is called cyberspace. One of the most sensitive areas vulnerable to cyber threats and attacks is Electronic Banking (e-Banking), where the banking sector provides online banking services to its clients. To gain clients' trust and encourage them to practice e-Banking, and to maintain the services provided by the banks and ensure safety, cyber security and risk control should be given high priority in the e-banking area. The aim of the study is to carry out a risk assessment of the e-banking services and determine the cyber threats, cyber attacks, and vulnerabilities facing the e-banking area, specifically in the Kingdom of Bahrain. To collect relevant data, structured interviews took place with e-banking experts in different banks. The collected data were then used as input to the risk management framework provided by the National Institute of Standards and Technology (NIST), which was the model used in the study to assess the risks associated with e-banking services. The findings of the study showed that the most common cyber threats are human errors, technical software or hardware failures, and hackers, while the most common attacks facing the e-banking sector were phishing, malware attacks, and denial-of-service. The risks associated with the e-banking services were around the moderate level; however, more controls and countermeasures must be applied to maintain this moderate level of risk. The results of the study will help banks discover their vulnerabilities and maintain their online services; in addition, they will enhance cyber security and contribute to the management and control of the risks facing the e-banking sector.
Keywords: cyber security, e-banking, risk assessment, threats identification
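A toy sketch of the qualitative risk scoring implied by the NIST framework (risk = likelihood × impact); the threat entries, scale values and level thresholds are illustrative assumptions, not the study's data.

```python
# Qualitative 1-5 scales for likelihood and impact (values assumed).
threats = {
    "phishing":          (4, 4),
    "malware":           (3, 5),
    "denial-of-service": (3, 4),
    "human error":       (4, 3),
    "hardware failure":  (2, 3),
}

def level(score):
    # Illustrative cutoffs mapping a 1-25 score onto three risk levels.
    return "low" if score <= 6 else "moderate" if score <= 14 else "high"

for name, (likelihood, impact) in sorted(threats.items(),
                                         key=lambda kv: -kv[1][0] * kv[1][1]):
    score = likelihood * impact
    print(f"{name:18s} risk = {score:2d} ({level(score)})")
```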
Procedia PDF Downloads 350
507 The Relevance of Shared Cultural Leadership in the Survival of the Language and of the Francophone Culture in a Minority Language Environment
Authors: Lyne Chantal Boudreau, Claudine Auger, Arline Laforest
Abstract:
As an English-speaking country, Canada faces challenges in French-language education. During both editions of a provincial congress on education planned and conducted under shared cultural leadership, three organizers created a Francophone space where, for the first time in the province of New Brunswick (the only officially bilingual province in Canada), a group of stakeholders from the school, post-secondary and community sectors succeeded in contributing to reflections on specific topics by sharing winning practices to meet the challenges of learning in a minority Francophone environment. Shared cultural leadership is a hybrid between theories of leadership styles in minority communities and theories of shared leadership. Its goal is simply to guide leadership in minority contexts through a shared-leadership approach. This leadership style requires leaders to transition from a hierarchical to a horizontal approach, that is, to an approach where every individual is at the same level. In this exploratory research, it has been demonstrated that shared leadership exercised under the T-learning model best fosters the mobilization of all partners in advancing in-depth knowledge in a particular field while simultaneously allowing learning of the elements related to the domain in question. This session will present how it is possible to mobilize the whole community through leaders who continually develop their knowledge and skills in their specific field but also in related fields. Leaders in this style of management associated with shared cultural leadership acquire the ability to consider solutions to problems from a holistic perspective and to develop a collective power derived from the leadership of each and every one, in a space where all are rallied to promote the ultimate advancement of society.
Keywords: education, minority context, shared leadership, t-learning
Procedia PDF Downloads 247
506 Multidimensional Inequality and Deprivation Among Tribal Communities of Andhra Pradesh, India
Authors: Sanjay Sinha, Mohd Umair Khan
Abstract:
The level of income inequality in India is worrisome; the World Inequality Report termed it a "poor and unequal country, with an affluent elite". As important as income is to understanding inequality and deprivation, it is just one dimension. The historical roots and current realities of inequality and deprivation in India lie in many non-income dimensions, such as housing, nutrition, education, agency, and sense of inclusion, which are often ignored, especially in solution-oriented research. The level of inequality and deprivation among tribal communities is one such case. There is a corpus of literature establishing that tribal communities in India are disadvantaged on various grounds. Given their rural geography, issues of access to and quality of basic facilities such as education and healthcare are often unaddressed. COVID-19 has further exacerbated this challenge, and climate change will make it even more worrying. With this background, a succinct measurement tool at the village level is necessary to design short- to medium-term actions for risk mitigation for tribal communities. This research paper examines the level of inequality and deprivation among the tribal communities in the rural areas of the Andhra Pradesh state of India using a Multidimensional Inequality and Deprivation Index based on the Alkire-Foster methodology. The methodology is theoretically grounded in the capability approach propounded by Amartya Sen, emphasizing the achievement of the "beings and doings" (functionings) an individual has reason to value. The index has five domains, including livelihood, food security, education, health and housing, and these domains are divided into sixteen indicators. This assessment is followed by domain-wise short-term and long-term solutions.
Keywords: Andhra Pradesh, Alkire-Foster methodology, deprivation, inequality, multidimensionality, poverty, tribal
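A minimal sketch of the Alkire-Foster computation underpinning such an index — deprivation matrix, weighted score, poverty cutoff k, then the adjusted headcount ratio M0 = H × A; the weights, cutoff and data below are illustrative, not the study's.

```python
import numpy as np

# Rows = households, columns = indicators (1 = deprived); data assumed.
g = np.array([[1, 0, 1, 1],
              [0, 0, 1, 0],
              [1, 1, 1, 1],
              [0, 0, 0, 0]])
w = np.array([0.25, 0.25, 0.25, 0.25])   # indicator weights, sum to 1
k = 0.33                                  # poverty cutoff

score = g @ w                             # weighted deprivation score c_i
poor = score >= k                         # identification step
H = poor.mean()                           # headcount ratio
A = score[poor].mean()                    # average intensity among the poor
M0 = H * A                                # adjusted headcount ratio
print(f"H={H:.2f}, A={A:.2f}, M0={M0:.2f}")
```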
Procedia PDF Downloads 160
505 Fast Bayesian Inference of Multivariate Block-Nearest Neighbor Gaussian Process (NNGP) Models for Large Data
Authors: Carlos Gonzales, Zaida Quiroz, Marcos Prates
Abstract:
Several spatial variables collected at the same location that share a common spatial distribution can be modeled simultaneously through a multivariate geostatistical model that takes into account both the correlation between these variables and the spatial autocorrelation. The main goal of this model is to perform spatial prediction of these variables in the region of study. Here we focus on a geostatistical multivariate formulation that relies on sharing common spatial random effect terms. In particular, the first response variable can be modeled by a mean that incorporates a shared random spatial effect, while the other response variables depend on this shared spatial term in addition to specific random spatial effects. Each spatial random effect is defined through a Gaussian process with a valid covariance function, but in order to improve computational efficiency when the data are large, each Gaussian process is approximated by a Gaussian Markov random field (GMRF), specifically by the block nearest neighbor Gaussian process (Block-NNGP). This approach involves dividing the spatial domain into several dependent blocks under certain constraints, where the cross-blocks allow capturing the spatial dependence on a large scale, while each individual block captures the spatial dependence on a smaller scale. The multivariate geostatistical model belongs to the class of latent Gaussian models; thus, to achieve fast Bayesian inference, the integrated nested Laplace approximation (INLA) method is used. The good performance of the proposed model is shown through simulations and applications to massive data.
Keywords: Block-NNGP, geostatistics, Gaussian process, GMRF, INLA, multivariate models
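A hedged numpy sketch of the point-wise NNGP (Vecchia-type) approximation that Block-NNGP generalizes from single points to blocks: the joint density is replaced by a product of conditionals, each given a few nearest previously ordered neighbors. The 1D locations, covariance parameters and neighbor count are illustrative.

```python
import numpy as np

def expcov(d, sigma2=1.0, phi=2.0):
    # Exponential covariance function of distance d.
    return sigma2 * np.exp(-d / phi)

def nngp_loglik(y, locs, m=10):
    # p(y) ~ prod_i p(y_i | y over its m nearest preceding neighbors).
    ll = 0.0
    for i in range(len(y)):
        nb = np.arange(max(0, i - m), i)        # preceding-neighbor set
        if nb.size:
            C = expcov(np.abs(locs[nb, None] - locs[None, nb]))
            c = expcov(np.abs(locs[nb] - locs[i]))
            w = np.linalg.solve(C, c)
            mean, var = w @ y[nb], expcov(0.0) - c @ w
        else:
            mean, var = 0.0, expcov(0.0)
        ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mean) ** 2 / var)
    return ll

rng = np.random.default_rng(0)
locs = np.sort(rng.uniform(0, 50, 300))         # ordered 1D locations
y = rng.standard_normal(300)                    # placeholder observations
print("approximate log-likelihood:", nngp_loglik(y, locs))
```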
Procedia PDF Downloads 97
504 Dynamic Analysis of Mono-Pile: Spectral Element Method
Authors: Rishab Das, Arnab Banerjee, Bappaditya Manna
Abstract:
Mono-pile foundations are often used in soft soils to support heavy mega-structures, whereby these deep footings may undergo dynamic excitation from many sources, such as earthquake, wind or wave loads acting on the superstructure, blasting, and unbalanced machines. A comprehensive analytical study is performed on the dynamics of a mono-pile system embedded in cohesionless soil. The soil is considered homogeneous and visco-elastic in nature and is analytically modeled using complex springs. Considering the pile as N elements, the final global stiffness matrix is obtained by using the theory of the spectral element method. Further, statically condensing the intermediate internal nodes of the global stiffness matrix results in a smaller sub-matrix containing the nodes experiencing external translation and rotation, from which the stiffness and damping functions (impedance functions) of the embedded pile are determined. Plots showing the variation of the real and imaginary parts of these impedance functions with the dimensionless frequency parameter are obtained. The plots obtained from this study are validated against those provided by Novak (1974). Further, the dynamic analysis of a resonator-impregnated pile is proposed within this study. Moreover, with the aid of Wood's 1g laboratory scaling law, a properly scaled-down resonator-pile model is 3D printed using PLA material. Dynamic analysis of the scaled model is carried out in the time domain, whereby lateral loads are imposed on the pile head. The response obtained from the sensors through the LabView software is compared with the proposed theoretical data.
Keywords: mono-pile, visco-elastic, impedance, LabView
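The static condensation step described above can be sketched in a few lines of numpy; the 4-DOF complex-valued stiffness below is illustrative, with the imaginary parts standing in for the visco-elastic soil damping.

```python
import numpy as np

# Partition the global dynamic stiffness K (complex-valued at a given
# frequency) into retained head DOFs 'a' and internal DOFs 'b':
# K_cond = K_aa - K_ab @ inv(K_bb) @ K_ba  (illustrative 4-DOF example).
K = np.array([[4 + 0.4j, -2 - 0.1j, 0, 0],
              [-2 - 0.1j, 5 + 0.5j, -2 - 0.1j, 0],
              [0, -2 - 0.1j, 5 + 0.5j, -2 - 0.1j],
              [0, 0, -2 - 0.1j, 4 + 0.4j]])
a, b = [0], [1, 2, 3]                    # head DOF vs condensed internal DOFs

K_cond = K[np.ix_(a, a)] - K[np.ix_(a, b)] @ np.linalg.solve(
    K[np.ix_(b, b)], K[np.ix_(b, a)])

# Real part ~ stiffness, imaginary part ~ damping (the impedance function).
print("head impedance:", K_cond[0, 0])
```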
Procedia PDF Downloads 118
503 Faster Pedestrian Recognition Using Deformable Part Models
Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia
Abstract:
Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach it is not necessary to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon; supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best-performing DPM-based method. By allowing a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
Keywords: autonomous vehicles, deformable part model, dpm, pedestrian detection, real time
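The frequency-domain convolution speed-up mentioned above is illustrated below with SciPy's FFT-based convolution; the feature-map and filter sizes are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

# HOG-like feature map and a part filter (sizes illustrative).
features = np.random.rand(120, 160)
part_filter = np.random.rand(6, 6)

# Correlation via FFT: flip the filter, then fftconvolve. For many
# filters applied over a feature pyramid this is much faster than
# direct spatial-domain loops.
response = fftconvolve(features, part_filter[::-1, ::-1], mode='valid')
print(response.shape)   # score map of part responses
```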
Procedia PDF Downloads 281
502 Understanding Cyber Kill Chains: Optimal Allocation of Monitoring Resources Using Cooperative Game Theory
Authors: Roy. H. A. Lindelauf
Abstract:
Cyberattacks are complex processes consisting of multiple interwoven tasks conducted by a set of agents. Interdiction of and defense against such attacks often rely on cyber kill chain (CKC) models. A CKC is a framework that tries to capture the actions taken by a cyber attacker. There exists a growing body of literature on CKCs. Most of this work either (a) describes the CKC with respect to one or more specific cyberattacks or (b) discusses the tools and technologies used by the attacker at each stage of the CKC. Defenders, facing scarce resources, have to decide where to allocate their resources given the CKC and partial knowledge of the tools and techniques attackers use. In this presentation, CKCs are analyzed through the lens of covert projects, i.e., interrelated tasks that have to be conducted by agents (human and/or computer) with the aim of going undetected. Various aspects of covert project models have been studied abundantly in the operations research and game theory domains; think of resource-limited interdiction actions that maximally delay the completion time of a weapons project, for instance. This presentation investigates both cooperative and non-cooperative game-theoretic covert project models and elucidates their relation to CKC modelling. To view a CKC as a covert project, each step in the CKC is broken down into tasks, and there are players, each of whom is capable of executing a subset of the tasks. Additionally, task inter-dependencies are represented by a schedule. Using multi-glove cooperative games, it is shown how a defender can optimize the allocation of his scarce resources (what, where and how to monitor) against an attacker scheduling a CKC. This study presents and compares several cooperative game-theoretic solution concepts as metrics for assigning resources to the monitoring of agents.
Keywords: cyber defense, cyber kill chain, game theory, information warfare techniques
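A small sketch of the multi-glove idea via the classic glove game: a coalition's value is the number of left/right glove pairs it can form, and the Shapley value shows where the bargaining (here, monitoring) power concentrates; the three-player split is an illustrative assumption.

```python
from itertools import permutations

players = ["L1", "L2", "R1"]               # two left gloves, one right

def value(coalition):
    # Pairs of matching gloves the coalition can assemble.
    left = sum(p.startswith("L") for p in coalition)
    right = sum(p.startswith("R") for p in coalition)
    return min(left, right)

# Shapley value: average marginal contribution over all player orderings.
shapley = {p: 0.0 for p in players}
orders = list(permutations(players))
for order in orders:
    seen = set()
    for p in order:
        shapley[p] += (value(seen | {p}) - value(seen)) / len(orders)
        seen.add(p)
print(shapley)   # the scarce right glove captures most of the value
```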
Procedia PDF Downloads 140
501 Regression Analysis in Estimating Stream-Flow and the Effect of Hierarchical Clustering Analysis: A Case Study in Euphrates-Tigris Basin
Authors: Goksel Ezgi Guzey, Bihrat Onoz
Abstract:
The scarcity of streamflow gauging stations and the increasing effects of global warming make designing water management systems very difficult. This study is a significant contribution to assessing regional regression models for estimating streamflow. In this study, simulated meteorological data were related to observed streamflow data from 1971 to 2020 for 33 stream gauging stations of the Euphrates-Tigris Basin. Ordinary least squares regression was used to predict flow for 2020-2100 with the simulated meteorological data. The CORDEX-EURO and CORDEX-MENA domains were used, with 0.11° and 0.22° grids respectively, to estimate climate conditions under certain climate scenarios. Twelve meteorological variables simulated by two regional climate models, RCA4 and RegCM4, were used as independent variables in the ordinary least squares regression, where the observed streamflow was the dependent variable. The variability of streamflow was then explained with 5-6 meteorological variables and watershed characteristics such as area and elevation prior to the application. After the regression analysis of the 31 stream gauging stations' data, the stations were subjected to a clustering analysis, which grouped the stations into two clusters in terms of their hydrometeorological properties. Two streamflow equations were found for the two clusters of stream gauging stations for every domain and every regional climate model, which increased the efficiency of streamflow estimation by 10-15% for all the models. This study underlines the importance of the homogeneity of a region in estimating streamflow, not only in terms of geographical location but also in terms of the meteorological characteristics of that region.
Keywords: hydrology, streamflow estimation, climate change, hydrologic modeling, HBV, hydropower
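A condensed sketch of the two-step procedure — hierarchical clustering of stations on hydrometeorological features, then one ordinary least squares streamflow equation per cluster; the feature matrix below is a random placeholder.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.random((31, 6))                  # 31 stations x 6 met./basin features
q = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 31)  # streamflow

# Step 1: group stations into two hydrometeorologically similar clusters.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)

# Step 2: fit one OLS streamflow equation per cluster.
for c in (0, 1):
    mask = labels == c
    model = LinearRegression().fit(X[mask], q[mask])
    print(f"cluster {c}: R^2 = {model.score(X[mask], q[mask]):.2f}")
```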
Procedia PDF Downloads 129
500 Prompt Design for Code Generation in Data Analysis Using Large Language Models
Authors: Lu Song Ma Li Zhi
Abstract:
With the rapid advancement of artificial intelligence technology, large language models (LLMs) have become a milestone in the field of natural language processing, demonstrating remarkable capabilities in semantic understanding, intelligent question answering, and text generation. These models are gradually penetrating various industries, showing particularly significant application potential in the data analysis domain. However, retraining or fine-tuning these models requires substantial computational resources and ample downstream task datasets, which poses a significant challenge for many enterprises and research institutions. Without modifying the internal parameters of the large models, prompt engineering techniques can rapidly adapt them to new domains. This paper proposes a prompt design strategy aimed at leveraging the capabilities of large language models to automate the generation of data analysis code. By carefully designing prompts, data analysis requirements can be described in natural language, which the large language model can then understand and convert into executable data analysis code, thereby greatly enhancing the efficiency and convenience of data analysis. This strategy not only lowers the threshold for using large models but also significantly improves the accuracy and efficiency of data analysis. Our approach includes requirements for the precision of natural language descriptions, coverage of diverse data analysis needs, and mechanisms for immediate feedback and adjustment. Experimental results show that with this prompt design strategy, large language models perform exceptionally well in multiple data analysis tasks, generating high-quality code and significantly shortening the data analysis cycle. This method provides an efficient and convenient tool for the data analysis field and demonstrates the enormous potential of large language models in practical applications.
Keywords: large language models, prompt design, data analysis, code generation
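A minimal sketch of the kind of prompt template the strategy describes; the wording is our own illustration, and the actual LLM client call is omitted since the paper does not specify one.

```python
def build_prompt(request: str, schema: dict) -> str:
    """Wrap a natural-language analysis request in a code-generation prompt."""
    cols = ", ".join(f"{k} ({v})" for k, v in schema.items())
    return (
        "You are a data analyst. Write runnable Python (pandas) code only.\n"
        f"DataFrame `df` has columns: {cols}.\n"
        f"Task: {request}\n"
        "Return one code block, no explanations; raise ValueError on "
        "missing columns so errors surface immediately for feedback."
    )

prompt = build_prompt(
    "compute monthly average sales per region and plot the top 5 regions",
    {"date": "datetime", "region": "str", "sales": "float"},
)
print(prompt)   # send this string to the LLM of choice
```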
Procedia PDF Downloads 42
499 Non-Invasive Data Extraction from Machine Display Units Using Video Analytics
Authors: Ravneet Kaur, Joydeep Acharya, Sudhanshu Gaur
Abstract:
Artificial Intelligence (AI) has the potential to transform manufacturing by improving shop floor processes such as production, maintenance and quality. However, industrial datasets are notoriously difficult to extract in a real-time, streaming fashion, thus negating potential AI benefits. A main example is specialized industrial controllers that are operated by custom software, which complicates the process of connecting them to an Information Technology (IT) based data acquisition network. Security concerns may also limit direct physical access to these controllers for data acquisition. To connect the Operational Technology (OT) data stored in these controllers to an AI application in a secure, reliable and available way, we propose a novel Industrial IoT (IIoT) solution in this paper. In this solution, we demonstrate how video cameras can be installed on a factory shop floor to continuously obtain images of the controller HMIs. We propose image pre-processing to segment the HMI into regions of streaming data and regions of fixed meta-data. We then evaluate the performance of multiple Optical Character Recognition (OCR) technologies, such as Tesseract and Google Vision, in recognizing the streaming data, and test them on typical factory HMIs under realistic lighting conditions. Finally, we use the meta-data to match the OCR output with the temporal, domain-dependent context of the data to improve the accuracy of the output. Our IIoT solution enables reliable and efficient data extraction, which will improve the performance of subsequent AI applications.
Keywords: human machine interface, industrial internet of things, internet of things, optical character recognition, video analytics
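The segmentation-plus-OCR step can be sketched with OpenCV and pytesseract; the frame file and ROI coordinates standing in for the "streaming data" region are assumptions.

```python
import cv2
import pytesseract

frame = cv2.imread("hmi_frame.png")           # one captured camera frame

# Pre-segmented region of the HMI that shows streaming values
# (coordinates illustrative; in practice they come from the image
# pre-processing step described above).
x, y, w, h = 220, 140, 180, 40
roi = frame[y:y + h, x:x + w]

# Binarize to help OCR under factory lighting, then read the value.
gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
text = pytesseract.image_to_string(binary, config="--psm 7")  # single line
print("streaming value:", text.strip())
```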
Procedia PDF Downloads 109
498 Ionic Liquid and Chemical Denaturants Effects on the Fluorescence Properties of the Laccase
Authors: Othman Saoudi
Abstract:
In this work, we have investigated the effects of chemical denaturants and synthesized ionic liquids on the fluorescence properties of the laccase from Trametes versicolor. The fluorescence properties of the laccase result from the presence of tryptophan, which has an aromatic core responsible for absorption in the ultraviolet domain and the emission of fluorescence photons. The effects of Pyrrolidinium Formate ([pyrr][F]) and Morpholinium Formate ([morph][F]) ionic liquids on the laccase's behavior at various volumetric fractions were studied. We have shown that the fluorescence spectrum relative to [pyrr][F] presents a single band with a maximum around 340 nm and a secondary peak at 361 nm for a volumetric fraction of 20% v/v. For concentrations above 40%, the fluorescence intensity decreases and a displacement of the peaks toward higher wavelengths occurs. For [morph][F], the fluorescence spectrum shows a single band around 340 nm, and the intensity of the principal peak decreases for concentrations above 20% v/v. From the plot representing the variation of λmax versus the volumetric concentration, we have determined the half-transition concentrations C1/2, equal to 42.62% and 40.91% v/v in the presence of [pyrr][F] and [morph][F], respectively. For chemical denaturation, we have shown that the fluorescence intensity decreases with increasing denaturant concentration, while the maximum emission wavelength shifts toward higher wavelengths. We have also determined, from the spectra relative to urea and GdmCl, the unfolding energy ΔG_D. The results show that the variation of the unfolding energy as a function of the denaturant concentration follows a linear regression model. We have also demonstrated that the half-transitions C1/2 occur at urea and GdmCl concentrations around 3.06 and 3.17 M, respectively.
Keywords: laccase, fluorescence, ionic liquids, chemical denaturants
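The linear dependence of ΔG_D on denaturant concentration allows extrapolation to ΔG(H2O) and to the half-transition point where ΔG_D = 0; a small numpy sketch with illustrative data chosen to land near the reported C1/2:

```python
import numpy as np

# Illustrative unfolding free energies (kJ/mol) vs. urea concentration (M).
conc = np.array([1.0, 2.0, 2.5, 3.5, 4.0])
dG = np.array([10.2, 5.4, 2.8, -2.1, -4.6])

# Linear model dG = dG(H2O) - m * [denaturant]; C1/2 is where dG = 0.
slope, intercept = np.polyfit(conc, dG, 1)
c_half = -intercept / slope
print(f"dG(H2O) = {intercept:.1f} kJ/mol, m = {-slope:.1f}, "
      f"C1/2 = {c_half:.2f} M")
```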
Procedia PDF Downloads 507
497 A Rapid Prototyping Tool for Suspended Biofilm Growth Media
Authors: Erifyli Tsagkari, Stephanie Connelly, Zhaowei Liu, Andrew McBride, William Sloan
Abstract:
Biofilms play an essential role in treating water in biofiltration systems. Biofilm morphology and function are inextricably linked to the hydrodynamics of flow through a filter, and yet engineers rarely explicitly engineer this interaction. We develop a system that links computer simulation and 3D printing to optimize and rapidly prototype filter media, under the hypothesis that biofilm function is intimately linked to the flow passing through the filter. A computational model that numerically solves the incompressible time-dependent Navier-Stokes equations coupled to a model for biofilm growth and function is developed. The model is embedded in an optimization algorithm that allows the model domain to adapt until criteria on biofilm functioning are met. This is applied to optimize the shape of filter media in a simple flow channel so as to promote biofilm formation. The computer code links directly to a 3D printer, which allows us to prototype the design rapidly. Its validity is tested in flow visualization experiments and by microscopy. As proof of concept, the code was constrained to explore a small range of potential filter media, where the medium acts as an obstacle in the flow that sheds a von Karman vortex street, which was found to enhance the deposition of bacteria on surfaces downstream. The flow visualization and microscopy in the 3D-printed realization of the flow channel validated the predictions of the model and hence its potential as a design tool. Overall, it is shown that the combination of our computational model and 3D printing can be effectively used as a design tool to prototype filter media that optimize biofilm formation.
Keywords: biofilm, biofilter, computational model, von Karman vortices, 3-D printing
Procedia PDF Downloads 142
496 Evaluation of Model-Based Code Generation for Embedded Systems–Mature Approach for Development in Evolution
Authors: Nikolay P. Brayanov, Anna V. Stoynova
Abstract:
The model-based development approach is gaining support and acceptance. Its higher abstraction level brings a simplification of system description that allows domain experts to do their best without particular knowledge of programming. The different levels of simulation support rapid prototyping, verifying and validating the product even before it exists physically. Nowadays the model-based approach is beneficial for modelling complex embedded systems as well as for generating code for many different hardware platforms. Moreover, it can be applied in safety-relevant industries like automotive, which brings extra automation to the expensive device certification process and especially to software qualification. Using it, some companies report cost savings and quality improvements, but others claim no major changes or even cost increases. This publication demonstrates the level of maturity and autonomy of the model-based approach for code generation. It is based on a real-life automotive seat heater (ASH) module, developed using The MathWorks, Inc. tools. The model, created with Simulink, Stateflow and Matlab, is used for automatic generation of C code with Embedded Coder. To prove the maturity of the process, the Code Generation Advisor is used for automatic configuration. All additional configuration parameters are set to auto, when applicable, leaving the generation process to function autonomously. As a result of the investigation, the publication compares the quality of the generated embedded code with that of manually developed code. The measurements show that, in general, the code generated by the automatic approach is no worse than the manual one. A deeper analysis of the technical parameters enumerates the disadvantages, some of which are identified as topics for our future work.
Keywords: embedded code generation, embedded C code quality, embedded systems, model-based development
Procedia PDF Downloads 244
495 Modelling of a Biomechanical Vertebral System for Seat Ejection in Aircrafts Using Lumped Mass Approach
Authors: R. Unnikrishnan, K. Shankar
Abstract:
In high-speed fighter aircraft, seat ejection is designed mainly for the safety of the pilot in case of an emergency. A strong windblast due to the high velocity of flight is one main difficulty in clearing the tail of the aircraft, and the excessive G-forces generated can immobilize the pilot and prevent escape. In most cases, seats are ejected out of the aircraft by explosives or by rocket motors attached to the bottom of the seat. The ejection forces are primarily in the vertical direction, with the objective of attaining the maximum possible velocity in a specified period of time. Safe ejection parameters are studied to estimate the critical time of ejection for various geometries and velocities of flight. An equivalent analytical 2-dimensional biomechanical model of the human spine has been developed, consisting of vertebrae and intervertebral discs, using a lumped mass approach. The 24 vertebrae, comprising the cervical, thoracic and lumbar regions, together with the head mass and the pelvis, are modelled as 26 rigid structures, and the 25 intervertebral discs as flexible joints. The rigid structures are modelled as mass elements and the flexible joints as spring and damper elements. Here, motion is restricted to the mid-sagittal plane to form a 26 degree-of-freedom system. The equations of motion are derived for the translational movement of the spinal column. An ejection force with a linearly increasing acceleration profile is applied as a vertical base excitation to the pelvis. The dynamic vibrational response of each vertebra is estimated in the time domain.
Keywords: biomechanical model, lumped mass, seat ejection, vibrational response
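A compressed sketch of the lumped-parameter idea — a mass-spring-damper chain under vertical base excitation — with only 5 masses instead of 26 and placeholder (non-physiological) parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

n, m, k, c = 5, 1.0, 5.0e4, 50.0   # masses, kg, N/m, N·s/m (placeholders)

def base_accel(t):
    # Linearly increasing ejection acceleration pulse (profile assumed).
    return 1500.0 * t if t < 0.2 else 0.0

def rhs(t, z):
    # u, v: displacements/velocities relative to the moving pelvis (base).
    u, v = z[:n], z[n:]
    stretch = np.diff(np.concatenate(([0.0], u)))      # disc deformation
    rate = np.diff(np.concatenate(([0.0], v)))
    fs = k * stretch + c * rate                        # disc forces
    f = -fs                                            # disc below each mass
    f[:-1] += fs[1:]                                   # disc above each mass
    return np.concatenate((v, f / m - base_accel(t)))

sol = solve_ivp(rhs, (0.0, 0.5), np.zeros(2 * n), max_step=1e-3)
print("peak relative displacement per segment (m):",
      np.abs(sol.y[:n]).max(axis=1))
```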
Procedia PDF Downloads 231
494 [Keynote Talk]: Caught in the Tractorbeam of Larger Influences: The Filtration of Innovation in Education Technology Design
Authors: Justin D. Olmanson, Fitsum Abebe, Valerie Jones, Eric Kyle, Xianquan Liu, Katherine Robbins, Guieswende Rouamba
Abstract:
The history of education technology, and of designing, adapting, and adopting technologies for use in educational spaces, is nuanced, complex, and dynamic. Yet, despite a range of continually emerging technologies, the design and development process often yields results that appear quite similar in terms of affordances and interactions. Through this study we (1) verify the extent to which designs have been constrained, (2) consider what might account for it, and (3) offer a way forward in terms of how we might identify and strategically sidestep these influences, thereby increasing the diversity of our designs with a given technology or within a particular learning domain. We begin our inquiry from the perspective that a host of co-influencing elements, fields, and metanarratives converge on the education technology design process to exert a tangible, often homogenizing effect on the resultant designs. We identify several elements that influence design in often implicit or unquestioned ways (e.g. curriculum, learning theory, economics, learning context, pedagogy), describe our methodology for identifying the elemental positionality embedded in a design, direct our analysis to a particular subset of technologies in the field of literacy, and unpack our findings. Our early analysis suggests that the majority of education technologies designed for or used in US public schools are heavily influenced by a handful of mainstream theories and metanarratives. These findings have implications for how we approach the education technology design process, which we use to suggest alternative methods for designing and developing with emerging technologies. Our analytical process and reconceptualized design process hold the potential to diversify the ways emerging and established technologies get incorporated into our designs.
Keywords: curriculum, design, innovation, meta narratives
Procedia PDF Downloads 509
493 A Mixed Methods Research Design for the Development of the Xenia Higher Education Institutions' Inclusiveness Index
Authors: Achilles Kameas, Eleni Georgakakou, Anna Lisa Amodeo, Aideen Quilty, Aisling Malone, Roberta Albertazzi, Moises Carmona, Concetta Esposito, Ruben David Fernandez Carrasco, Carmela Ferrara, Francesco Garzillo, Mojca Pusnik, Maria Cristina Scarano
Abstract:
While researchers, especially in academia, study the phenomena of inclusion of sexual-minority and gender-marginalized groups, European Higher Education Institutions (HEIs) seldom act on lowering the cultural and educational barriers to their proactive inclusion. The challenge in European HEIs is that gender and sexual orientation discrimination remains an issue not adequately addressed. The XENIA HEI Inclusiveness Index follows a mixed methods research design of quantitative and qualitative techniques and tools, applied in five (5) European countries (Italy, Greece, Ireland, Slovenia, and Spain), that combines desk research, evaluation and weighting processes for a matrix based on objective indicators, and a survey of HEI students and staff gauging the perception of inclusiveness in the HEI context. The index is an instrument that will allow universities to gauge and assess their inclusiveness in the domain of discrimination and exclusion based on gender identity and sexual orientation. It will allow capturing the depth and reach of policies, programmes, and initiatives of HEIs in tackling the phenomena and dynamics of exclusion of LGBT+ people (lesbian, gay, bisexual, trans, and other groups marginalized on the basis of gender and sexual identity) and cisgender women exposed to the risk of discrimination.
Keywords: gender identity, higher education, LGBT+ rights, XENIA inclusiveness index
Procedia PDF Downloads 163
492 Bilingual Siblings and Dynamic Family Language Policies in Italian/English Families
Authors: Daniela Panico
Abstract:
Framed by language socialization and family language policy theories, the present study explores the ways in which the language choice patterns of bilingual siblings contribute to shaping the language environment and the language practices of Italian/English families residing in Sydney. The main source of data is video recordings of naturally occurring parent-child and child-to-child interactions during everyday routines (i.e., family mealtimes and sibling playtime) in the home environment. Recurrent interactional practices are analyzed in detail through a conversation-analytic approach. This presentation focuses on the interactional trajectories that develop during the negotiation of language choices between all family members and between siblings in face-to-face interactions. Fine-grained analysis is performed on language negotiation sequences of multiparty bilingual conversations in order to uncover the sequential patterns through which (a) the children respond to parental strategies aiming at minority language maintenance, and (b) the siblings influence each other's language use and choice (e.g., older siblings positioning themselves as language teachers and language brokers, younger siblings accepting the role of apprentices). The findings show that, along with the parents, children are active socializing agents in the family, and, with their linguistic behavior, they contribute to the establishment of a bilingual or a monolingual context in the home. Moreover, by orienting themselves towards the use of one or the other language in family talk, bilingual siblings are a major internal micro-force in the language ecology of a bilingual family and can strongly support language maintenance or language shift processes in this domain. Overall, the study provides insights into the dynamic ways in which family language policy is interactionally negotiated and instantiated in bilingual homes, as well as the challenges of intergenerational language transmission.
Keywords: bilingual siblings, family interactions, family language policy, language maintenance
Procedia PDF Downloads 191
491 Towards Designing of a Potential New HIV-1 Protease Inhibitor Using Quantitative Structure-Activity Relationship Study in Combination with Molecular Docking and Molecular Dynamics Simulations
Authors: Mouna Baassi, Mohamed Moussaoui, Hatim Soufi, Sanchaita Rajkhowa, Ashwani Sharma, Subrata Sinha, Said Belaaouad
Abstract:
Human Immunodeficiency Virus type 1 protease (HIV-1 PR) is one of the most challenging targets of the antiretroviral therapy used in the treatment of people living with HIV/AIDS. The performance of protease inhibitors (PIs) is limited by the development of protease mutations that can promote resistance to the treatment. The current study was carried out using statistical and bioinformatics tools. A series of thirty-three compounds with known enzymatic inhibitory activities against HIV-1 protease was used in this paper to build a mathematical model relating structure to biological activity. These compounds were designed in software; their descriptors were computed using various tools such as Gaussian, Chem3D, ChemSketch and MarvinSketch. Computational methods generated the best model based on its statistical parameters. The model's applicability domain (AD) was elaborated. Furthermore, one compound has been proposed as efficient against HIV-1 protease, with biological activity comparable to that of the existing inhibitors; this drug candidate was evaluated using ADMET properties and Lipinski's rule. Molecular docking performed on wild-type and mutant HIV-1 proteases allowed the investigation of the interaction types displayed between the proteases and the ligands, Darunavir (DRV) and the new drug (ND). Molecular dynamics simulation was also used in order to investigate the stability of the complexes, allowing a comparative study of the performance of both ligands (DRV & ND). Our study suggested that the new molecule shows results comparable to those of Darunavir and may be used for further experimental studies. Our study may also be used as a pipeline to search for and design new potential inhibitors of the HIV-1 protease.
Keywords: QSAR, ADMET properties, molecular docking, molecular dynamics simulation
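The QSAR step (descriptors → regression → predicted activity) can be sketched with RDKit and scikit-learn; the SMILES strings, activities and descriptor choice are illustrative, not the paper's 33-compound series.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.linear_model import LinearRegression

# Illustrative training pairs: SMILES and pIC50 values (not the paper's data).
smiles = ["CCO", "CCN", "c1ccccc1O", "CC(=O)O", "CCCCO", "c1ccccc1N"]
pic50 = [4.1, 4.3, 5.0, 3.9, 4.5, 5.2]

def descriptors(smi):
    # A small, assumed descriptor set computed from the 2D structure.
    mol = Chem.MolFromSmiles(smi)
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol), Descriptors.NumHDonors(mol)]

X = np.array([descriptors(s) for s in smiles])
model = LinearRegression().fit(X, pic50)
print("predicted pIC50:", model.predict(X).round(2))
```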
Procedia PDF Downloads 40
490 The Study of Self-Care Regarding to the Valuable Living in Thai Elderly
Authors: Pannathorn Chachvarat, Smarnjit Piromrun
Abstract:
Aging is a reality for the future world, and an urgent priority is the development of quality living for the elderly. Promoting quality of life so that the elderly live longer with dignity and independence is essential. The objective of this descriptive research was to study self-care with regard to valuable living among Thai elderly. The randomized sample comprised 100 elderly people living in the Muang district of Phayao province. The tools included 2 parts: 1) personal data (gender, age, income, occupation, marital status, living condition and disease), and 2) a questionnaire on self-care regarding valuable living, consisting of 3 domains: physical (21 items), spiritual (13 items) and social (12 items). The content validity of the tool was tested, with IOC ranging between 0.60 and 1.00, and the reliability (Cronbach's alpha) was 0.82. The research found that most participants were female (60%) and farmers (37%), and 65% had an underlying disease. The mean age was 68 years. Overall, self-care regarding valuable living in the physical, spiritual and social domains was at a high level. The highest-rated physical activities were taking a bath twice a day (morning and evening) and sleeping at least 5-6 hours at night. The highest-rated spiritual activities were being a good member of the family, contributing to persons in the family, and maintaining good emotions; additional items were being joyful, accepting changes in the body such as dry skin and blurred vision, accepting the roles and duties of taking care of the house and grandchildren, and selecting applicable activities and practising according to religious Buddhist teaching for a happy and meditative life. The highest-rated social activities were good relationships with other elderly people and family members, being happy to help with social activities according to their capacity, and being happy to help other people who have problems.
Keywords: self-care, valuable living, elderly, Thai
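The reported reliability (Cronbach's alpha = 0.82) comes from the standard item/total-variance formula; a small numpy sketch with placeholder Likert responses:

```python
import numpy as np

# Placeholder Likert responses: rows = respondents, columns = items.
items = np.array([[3, 4, 4, 5],
                  [2, 3, 3, 4],
                  [4, 4, 5, 5],
                  [3, 3, 4, 4],
                  [2, 2, 3, 3]])

k = items.shape[1]
item_var = items.var(axis=0, ddof=1).sum()       # sum of item variances
total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
alpha = k / (k - 1) * (1 - item_var / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```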
Procedia PDF Downloads 286