Search results for: computer aided engineering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5218

4318 Awareness and Utilization of Social Network Tools among Agricultural Science Students in Colleges of Education in Ogun State, Nigeria

Authors: Adebowale Olukayode Efunnowo

Abstract:

This study was carried out to assess the awareness and utilization of Social Network Tools (SNTs) among agricultural science students in Colleges of Education in Ogun State, Nigeria. A simple random sampling technique was used to select 280 respondents from the study area. Descriptive statistics were used to address the objectives, while Pearson Product Moment Correlation (PPMC) was used to test the hypothesis. The results showed that the majority (71.8%) of the respondents were single, with a mean age of 20 years. Almost all (95.7%) of the respondents were aware of Facebook and 2go as Social Network Tools, while 85.0% were not aware of Blackplanet, LinkedIn, MyHeritage and Bebo. Many (41.1%) of the respondents held the view that using SNTs can enhance extensive literature surveys, increase internet browsing potential, promote teaching proficiency, and provide updates on research outcomes. However, 51.4% of the respondents perceived SNT usage as meant for lecturers/adults only, while 16.1% considered it as mainly used by internet fraudsters. Findings revealed that about 50.0% of the respondents browsed Facebook and 2go daily, while more than 80% used Blackplanet, MyHeritage, Skyrock, Bebo, LinkedIn and MyYearBook only as the need arose. Major constraints to the awareness and utilization of SNTs were the high cost and poor quality of ICT facilities (77.1%), epileptic power supply (75.0%), inadequate telecommunication infrastructure (71.1%), low technical know-how (62.9%) and inadequate computer knowledge (61.1%). The result of the PPMC analysis showed an inverse relationship between constraints and utilization of SNTs at p < 0.05. It can be concluded that these constraints hinder the efficient and effective utilization of SNTs in the study area. It is recommended that the management of colleges of education and agricultural institutes provide good internet connectivity, computer facilities, and alternative power supply in order to increase the awareness and utilization of SNTs among students.

Keywords: awareness, utilization, social network tools, constraints, students

Procedia PDF Downloads 336
4317 Proposal for Knowledge-Based Virtual Community System (KBVCS) for Enhancing Knowledge Sharing in Mechatronics System Diagnostic and Repair

Authors: Adetoba B. Tiwalola, Adedeji W. Oyediran, Yekini N. Asafe, Akinwole A. Kikelomo

Abstract:

Mechatronics is the synergistic integration of mechanical engineering with electronics and intelligent computer control in the design and manufacture of industrial products and processes. The automobile (auto car, motor car or car: a wheeled motor vehicle used for transporting passengers that carries its own engine or motor) is a mechatronic system that serves as a major means of transportation around the world. Virtually every community has a need for automobiles, which makes issues related to automobile diagnostics and repair of interest to all communities. Given the diversity of skills in diagnosing automobile faults, the variety of approaches to solving problems, and the pace of innovation in the automobile industry, it is appropriate to say that automobile repair and diagnostics would be better enhanced if communities had the opportunity to share knowledge and ideas globally. This paper discusses the desirable elements of the automobile as a mechatronic system and presents a conceptual framework of a virtual community model for knowledge sharing among automobile users.

Keywords: automobile, automobile users, knowledge sharing, mechatronics system, virtual community

Procedia PDF Downloads 427
4316 Parametric Study of Vertical Diffusion Stills for Water Desalination

Authors: A. Seleem, M. Mortada, M. El-Morsi, M. Younan

Abstract:

Diffusion stills have been effective in water desalination. The present work presents a model of the distillation process in vertical single-effect diffusion stills. A semi-analytical model has been developed, and a computer code using the Engineering Equation Solver (EES) software has been written to solve the equations of the developed model. An experimental setup has been constructed and used for validation of the model. The model is also validated against earlier results from the literature. The results obtained from the present experimental test rig, and the data from the literature, have been compared with the results of the code to find its best range of validity. In addition, a parametric analysis of the system has been carried out using the model to determine the effect of operating conditions on the system's performance. The dominant parameters that affect the productivity of the still are the hot plate temperature, ranging from 55 to 90 °C, and the feed flow rate, in the range of 0.00694 to 0.0211 kg/m²-s.
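The parametric analysis amounts to sweeping the model over these two operating parameters; the following sketch only shows the structure of such a sweep, with a hypothetical productivity function standing in for the EES model:

```python
# Hypothetical sketch of the parametric sweep; productivity() is a placeholder
# standing in for the semi-analytical still model solved in EES.
import numpy as np

def productivity(t_hot_c, feed_rate):          # placeholder model, not the EES code
    return 0.02 * (t_hot_c - 50.0) * np.exp(-feed_rate / 0.01)

t_hot = np.linspace(55.0, 90.0, 8)             # hot plate temperature, deg C
feed = np.linspace(0.00694, 0.0211, 6)         # feed flow rate, kg/m^2-s

for t in t_hot:
    for f in feed:
        print(f"T_hot = {t:5.1f} C, feed = {f:7.5f} kg/m^2-s -> "
              f"productivity = {productivity(t, f):.4f} kg/m^2-s")
```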

Keywords: analytical model, solar distillation, sustainable water systems, vertical diffusion still

Procedia PDF Downloads 393
4315 Improving Lane Detection for Autonomous Vehicles Using Deep Transfer Learning

Authors: Richard O’Riordan, Saritha Unnikrishnan

Abstract:

Autonomous Vehicles (AVs) are incorporating an increasing number of ADAS features, including automated lane-keeping systems. In recent years, many research papers on lane detection algorithms have been published, ranging from computer vision techniques to deep learning methods. The transition from the lower levels of autonomy defined in the SAE framework to higher autonomy levels requires increasingly complex models and algorithms that must be highly reliable in their operation and functional capacity; at high levels of autonomy, these algorithms have no room for error. Existing research details current computer vision and deep learning algorithms, their methodologies and individual results, as well as the challenges they face, the resources they need to operate, and the shortcomings they exhibit when detecting lanes in certain weather and lighting conditions. This paper explores these shortcomings and implements a lane detection algorithm intended to improve AV lane detection systems. A pre-trained LaneNet model is used to classify lane and non-lane pixels through binary segmentation, first on the existing BDD100k dataset and then on a custom, locally generated dataset. The first set of selected roads are modern, well-laid roads with up-to-date infrastructure and lane markings, while the second road network is an older one whose infrastructure and lane markings reflect its age. The performance of the proposed method is evaluated on the custom dataset and compared with its performance on the BDD100k dataset. In summary, this paper uses transfer learning to provide a fast and robust lane detection algorithm that can handle various road conditions and provide accurate lane detection.
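A hedged sketch of such a transfer-learning setup is given below; since LaneNet itself is not distributed with torchvision, a pretrained FCN-ResNet50 stands in as the segmentation backbone, and the data loader and dataset paths are assumed:

```python
# Hedged sketch of transfer learning for binary lane segmentation.
# FCN-ResNet50 is a stand-in for LaneNet; the loader is assumed to exist.
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50, FCN_ResNet50_Weights

model = fcn_resnet50(weights=FCN_ResNet50_Weights.DEFAULT)
model.classifier[4] = nn.Conv2d(512, 2, kernel_size=1)       # 2 classes: lane / non-lane
if model.aux_classifier is not None:
    model.aux_classifier[4] = nn.Conv2d(256, 2, kernel_size=1)

for p in model.backbone.parameters():                          # freeze pretrained backbone,
    p.requires_grad = False                                    # fine-tune the heads only

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_one_epoch(loader):
    """loader yields (images [B,3,H,W], masks [B,H,W] with values 0 or 1)."""
    model.train()
    for images, masks in loader:
        optimizer.zero_grad()
        out = model(images)["out"]                             # logits [B,2,H,W]
        loss = criterion(out, masks.long())
        loss.backward()
        optimizer.step()
```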

Keywords: ADAS, autonomous vehicles, deep learning, LaneNet, lane detection

Procedia PDF Downloads 83
4314 Impact of PV Distributed Generation on Loop Distribution Network at Saudi Electricity Company Substation in Riyadh City

Authors: Mohammed Alruwaili

Abstract:

Nowadays, renewable energy resources are playing an important role in replacing traditional energy resources, such as fossil fuels, by integrating solar energy with conventional energy. Concerns about the environment have led to an intensive search for renewable energy sources. The rapid growth of distributed energy resources will prompt increasing interest in integrated distribution networks in the Kingdom of Saudi Arabia over the next few years, especially after the adoption of new laws and regulations in this regard. Photovoltaic energy is one of the promising renewable energy sources that has grown rapidly worldwide in the past few years and can be used to produce electrical energy through the photovoltaic process. The main objective of this research is to study the impact of PV in distribution networks based on real data and details. In this research, a site survey and computer simulation are carried out using the well-known ETAP software to simulate the electrical distribution lines together with other variable inputs, such as the levels of solar radiation, and a field study representing the prevailing conditions in Diriyah, Riyadh region, Saudi Arabia. In addition, the impact of adding distributed generation (DG) units, including solar photovoltaics (PV), to the distribution network is studied and assessed for different added power capacities. The results show a reduction of power loss in the loop distribution network of more than 69% compared with the current condition. The studied network contains 78 buses. It is hoped that this research will improve the efficiency, performance, quality and reliability of the distribution networks in Riyadh City through enhanced power loss and voltage profiles. Simulation results show that the applied method can illustrate the positive impact of PV on loop distribution networks.

Keywords: renewable energy, smart grid, efficiency, distribution network

Procedia PDF Downloads 120
4313 Innovation Outputs from Higher Education Institutions: A Case Study of the University of Waterloo, Canada

Authors: Wendy De Gomez

Abstract:

The University of Waterloo is situated in central Canada, in the Province of Ontario, one hour from the metropolitan city of Toronto. For over 30 years, it has held Canada's top spot as the most innovative university and has been consistently ranked among the top 25 computer science and top 50 engineering schools in the world. Waterloo benefits from the federal government's more than 100 domestic innovation policies, which have assisted in the country's 15th-place global ranking in the World Intellectual Property Organization's (WIPO) 2022 Global Innovation Index. Yet undoubtedly, it is the University of Waterloo's unique characteristics that propel its innovative creativity forward. This paper provides a contextual definition of innovation in higher education and then demonstrates the five operational attributes that contribute to the University of Waterloo's innovative reputation. The methodology is based on statistical analyses obtained from ranking bodies such as the QS World University Rankings, a secondary literature review related to higher education innovation in Canada, and case studies that exhibit the operationalization of the attributes outlined below. The first attribute is geography; specifically, the paper investigates the network structure effect of the Toronto-Waterloo high-tech corridor and the resultant industrial relationships built there. The second attribute is University Policy 73 (Intellectual Property Rights). This creator-owned policy grants all ownership to the creator/inventor regardless of the use of University of Waterloo property or funding; essentially, by incentivizing IP ownership by all researchers, further commercialization and entrepreneurship are fostered. Third, this IP policy works hand in hand with world-renowned business incubators such as the Accelerator Centre in the dedicated research and technology park and Velocity, a 14-year-old facility that equips and guides founders to build and scale companies. Communitech, a 25-year-old provincially backed facility in the region, also works closely with the University of Waterloo to build strong teams, access capital, and commercialize products. Fourth, Waterloo's co-operative education program contributes 31% of all co-op participants to the Canadian economy. Home to the world's largest co-operative education program, data show that over 7,000 employers from around the world recruit Waterloo students for short- and long-term placements, directly contributing to the students' ability to learn and hone essential employment skills by the time they graduate. Finally, the students themselves at Waterloo are exceptional. The entrance average ranges from the low 80s to the mid-90s depending on the program; in computer, electrical, mechanical, mechatronics, and systems design engineering, to have a 66% chance of acceptance, the applicant's average must be 95% or above. Individually, none of these five attributes could account for the university's outstanding track record of innovative creativity, but when bundled into a 1,000-acre, 100-building main campus with 6 academic faculties, 40,000+ students, and over 1,300 world-class faculty members, the recipe for success becomes quite evident.

Keywords: IP policy, higher education, economy, innovation

Procedia PDF Downloads 55
4312 Calculation Analysis of an Axial Compressor Supersonic Stage Impeller

Authors: Y. Galerkin, E. Popova, K. Soldatova

Abstract:

There is an evident trend toward elevating the pressure ratio of a single stage of turbo compressors, axial compressors in particular. While the recent opinion was that a pressure ratio of 1.9 was a reasonable limit, information later appeared on the successful modeling and testing of stages with pressure ratios up to 2.8. The authors reckon that the lack of information on high-pressure stages makes a study of the rational choice of design parameters relevant before solving high supersonic flow problems. A computer program of an engineering type was developed. Below is presented a sample of its application to study possible parameters of the impeller of a stage with pressure ratio π* = 3.0. The influence of two main design parameters on expected efficiency, peripheral blade speed and flow structure is demonstrated. The results led to the choice of a variant for further analysis and improvement by CFD methods.

Keywords: supersonic stage, impeller, efficiency, flow rate coefficient, work coefficient, loss coefficient, oblique shock, direct shock

Procedia PDF Downloads 449
4311 Parallel Multisplitting Methods for Differential Systems

Authors: Malika El Kyal, Ahmed Machmoum

Abstract:

We prove the superlinear convergence of asynchronous multisplitting methods applied to differential equations. This study is based on the technique of nested sets, which permits the kind of convergence obtained in the asynchronous mode to be specified. The main characteristic of an asynchronous mode is that the local algorithm does not have to wait for predetermined messages to become available. We allow some processors to communicate more frequently than others, and we allow the communication delays to be substantial and unpredictable. Note that synchronous algorithms, in the computer science sense, are particular cases of our formulation of asynchronous ones.
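As a minimal illustration of the multisplitting idea itself (a hypothetical linear-system example, not the asynchronous ODE scheme analyzed in the paper), two overlapping block splittings, each of which could in principle be assigned to a different processor, can be combined as follows:

```python
# Toy multisplitting iteration x^{m+1} = sum_k E_k M_k^{-1} (N_k x^m + b)
# for a diagonally dominant system A x = b, using two overlapping block
# splittings A = M_k - N_k and diagonal weights E_k with sum(E_k) = I.
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = np.eye(n) * 10 + rng.uniform(-1, 1, (n, n))    # diagonally dominant
b = rng.uniform(-1, 1, n)

blocks = [range(0, 5), range(3, 8)]                # overlapping index blocks
M, E = [], []
for blk in blocks:
    Mk = np.diag(np.diag(A)).astype(float)         # diagonal of A outside the block
    idx = np.ix_(blk, blk)
    Mk[idx] = A[idx]                               # full sub-block of A inside
    M.append(Mk)
    Ek = np.zeros(n)
    Ek[list(blk)] = 1.0
    E.append(Ek)
E = [e / sum(E) for e in E]                        # normalize so the weights sum to 1

x = np.zeros(n)
for m in range(50):
    # each splitting could be handled by a different processor; here sequential
    x = sum(np.diag(Ek) @ np.linalg.solve(Mk, (Mk - A) @ x + b)
            for Mk, Ek in zip(M, E))

print("residual:", np.linalg.norm(A @ x - b))
```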

Keywords: parallel methods, asynchronous mode, multisplitting, ODE

Procedia PDF Downloads 508
4310 High Performance Computing and Big Data Analytics

Authors: Branci Sarra, Branci Saadia

Abstract:

Because of the rapid growth of data, many computer science tools have been developed to process and analyze these Big Data. High-performance computing architectures have been designed to meet the processing needs of Big Data, viewed from the standpoints of transaction processing and of strategic and tactical analytics. The purpose of this article is to provide a historical and global perspective on recent trends in high-performance computing architectures, especially as they relate to analytics and data mining.

Keywords: high performance computing, HPC, big data, data analysis

Procedia PDF Downloads 500
4309 Scheduling Residential Daily Energy Consumption Using Bi-criteria Optimization Methods

Authors: Li-hsing Shih, Tzu-hsun Yen

Abstract:

Because of the long-term commitment to net zero carbon emissions, utility companies include more renewable energy supply, which generates electricity with time and weather restrictions. This leads to time-of-use electricity pricing to reflect the actual cost of energy supply. From an end-user point of view, better residential energy management is needed to incorporate the time-of-use prices and assist end users in scheduling their daily use of electricity. This study uses bi-criteria optimization methods to schedule daily energy consumption by minimizing the electricity cost and maximizing the comfort of end users. Different from most previous research, this study schedules users' activities rather than household appliances in order to obtain better measures of users' comfort/satisfaction. The relation between each activity and the use of different appliances can be defined by users. The comfort level is at its highest when the time and duration of an activity completely meet the user's expectation, and it decreases when the time and duration do not meet expectations. A questionnaire survey was conducted to collect data for establishing regression models that describe users' comfort levels when the execution time and duration of activities differ from user expectations. Six regression models, representing the comfort levels for six types of activities, were established using the responses to the questionnaire survey. A computer program is developed to evaluate the electricity cost and the comfort level for each feasible schedule and then find the non-dominated schedules. The epsilon-constraint method is used to find the optimal schedule among the non-dominated schedules. A hypothetical case is presented to demonstrate the effectiveness of the proposed approach and the computer program. Using the program, users can obtain the optimal schedule of daily energy consumption by inputting the intended time and duration of activities and the given time-of-use electricity prices.
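As a hedged illustration of the selection step (hypothetical cost and comfort values, not the study's data), non-dominated filtering followed by an epsilon-constraint choice could look like this:

```python
# Hypothetical sketch of the bi-criteria selection step: each candidate daily
# schedule has an electricity cost (to minimize) and a comfort score (to
# maximize). Keep the non-dominated schedules, then apply an epsilon
# constraint (cost budget) and pick the most comfortable remaining schedule.
schedules = [                       # (name, cost in currency units, comfort 0-1)
    ("A", 3.20, 0.95), ("B", 2.10, 0.80), ("C", 2.60, 0.90),
    ("D", 2.05, 0.60), ("E", 3.50, 0.93),
]

def dominates(s, t):
    # s dominates t if it is no worse in both criteria and strictly better in one
    return (s[1] <= t[1] and s[2] >= t[2]) and (s[1] < t[1] or s[2] > t[2])

pareto = [s for s in schedules if not any(dominates(t, s) for t in schedules)]

epsilon = 2.70                      # cost budget chosen by the decision maker
feasible = [s for s in pareto if s[1] <= epsilon]
best = max(feasible, key=lambda s: s[2])
print("non-dominated:", [s[0] for s in pareto], "-> chosen:", best[0])
```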

Keywords: bi-criteria optimization, energy consumption, time-of-use price, scheduling

Procedia PDF Downloads 43
4308 Image Based Landing Solutions for Large Passenger Aircraft

Authors: Thierry Sammour Sawaya, Heikki Deschacht

Abstract:

In commercial aircraft operations, almost half of all accidents happen during the approach or landing phases. Automatic guidance and automatic landing have proven to add significant safety value in this challenging phase of flight. This is why Airbus and ScioTeq have decided to work together to explore the capability of image-based landing solutions as additional landing aids, to further expand the possibility of performing automatic approach and landing on runways where the current guidance systems are either not fitted or not optimal. Current systems for automated landing often depend on radio signals provided by ground infrastructure at the airport or on satellite coverage. In addition, these radio signals may not always be available with the integrity and performance required for safe automatic landing. Being independent of these radio signals would widen the operational possibilities and increase the number of automated landings. Airbus and ScioTeq are joining their expertise in the field of computer vision in the European programme Clean Sky 2 Large Passenger Aircraft, in which they are leading the IMBALS (IMage BAsed Landing Solutions) project. The ultimate goal of this project is to demonstrate, develop, validate and verify a certifiable automatic landing system that guides an airplane during the approach and landing phases based on an onboard camera system capturing images, enabling automatic landing independent of radio signals and without precision landing instruments. In the frame of this project, ScioTeq is responsible for the development of the Image Processing Platform (IPP), while Airbus is responsible for defining the functional and system requirements as well as for the testing and integration of the developed equipment in a Large Passenger Aircraft representative environment. The aim of this paper is to describe the system as well as the associated methods and tools developed for validation and verification.

Keywords: aircraft landing system, aircraft safety, autoland, avionic system, computer vision, image processing

Procedia PDF Downloads 78
4307 Seashore Debris Detection System Using Deep Learning and Histogram of Gradients-Extractor Based Instance Segmentation Model

Authors: Anshika Kankane, Dongshik Kang

Abstract:

Marine debris has a significant influence on coastal environments, damaging biodiversity and causing losses to the marine and ocean sectors. A functional, cost-effective and automatic approach has been adopted to address this problem. Computer vision combined with a deep learning-based model is proposed to identify and categorize marine debris of seven kinds at different beach locations in Japan. This research compares state-of-the-art deep learning models with a suggested model architecture that is utilized as a feature extractor for debris categorization. The model is proposed to detect seven categories of litter using a manually constructed debris dataset, with the help of Mask R-CNN for instance segmentation and a shape-matching network called HOGShape, so that detected debris can be cleaned up promptly by clean-up organizations notified through the system's warnings. The manually constructed dataset for this system is created by annotating images taken by a fixed KaKaXi camera, using the CVAT annotation tool with seven category labels. A HOG feature extractor trained with LIBSVM is used, along with multiple template matching between HOG maps of images and HOG maps of templates, to improve the predicted masks obtained from Mask R-CNN training. This system intends to alert clean-up organizations in a timely manner with warning notifications based on live recorded beach debris data. The suggested network improves misclassified debris masks for objects with different illuminations, shapes and viewpoints, and for litter with occlusions and poor visibility.
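As a rough, hypothetical illustration of the template-matching idea on HOG maps (the HOGShape network and the LIBSVM stage of the paper are not reproduced here), a Python sketch might look like this:

```python
# Hedged sketch: score a candidate debris crop, e.g. the content of a
# Mask R-CNN box, against HOG descriptors of per-class template images and
# keep the best-matching class; inputs are 2-D grayscale arrays.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def hog_descriptor(img, size=(128, 128)):
    img = resize(img, size, anti_aliasing=True)       # common size so descriptors align
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def best_template_match(crop, templates):
    """templates: dict mapping class name -> list of grayscale template images."""
    d = hog_descriptor(crop)
    scores = {}
    for name, imgs in templates.items():
        best = 0.0
        for t in imgs:
            dt = hog_descriptor(t)
            sim = float(np.dot(d, dt) /
                        (np.linalg.norm(d) * np.linalg.norm(dt) + 1e-9))
            best = max(best, sim)                      # best cosine similarity per class
        scores[name] = best
    return max(scores, key=scores.get), scores
```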

Keywords: computer vision, debris, deep learning, fixed live camera images, histogram of gradients feature extractor, instance segmentation, manually annotated dataset, multiple template matching

Procedia PDF Downloads 88
4306 Sentiment Analysis in Social Networks Sites Based on a Bibliometrics Analysis: A Comprehensive Analysis and Trends for Future Research Planning

Authors: Jehan Fahim M. Alsulami

Abstract:

Academic research on sentiment analysis in social networks has advanced significantly over recent years and is flourishing on the collective knowledge provided by various academic disciplines. In the current study, the status and development trend of the field of sentiment analysis in social networks is evaluated through a bibliometric analysis of academic publications. In particular, the distributions of publications and citations, the distribution of subjects, and the predominant journals, authors and countries are analyzed. The collaboration degree is applied to measure scientific connections from different aspects. Moreover, keyword co-occurrence analysis is used to identify the major research topics and their evolution over the time span. The area of sentiment analysis in social networks has gained growing attention in academia, with computer science and engineering as the top research subjects. China and the USA contribute the most to the development of the area. Authors prefer to collaborate with others within the same nation. Among the research topics, newly risen topics such as COVID-19 and customer satisfaction are discovered.

Keywords: bibliometric analysis, sentiment analysis, social networks, social media

Procedia PDF Downloads 190
4305 Ethical Issues around Online Marketing to Children

Authors: Chris Preston

Abstract:

As we devise ever more sophisticated methods of online marketing, building systems that are able to reach into the everyday lives of consumers, we are confronted by a generation of children who face unprecedented intervention by commercial organisations into their young minds via electronic devices. Whether by computer, tablet or phone, such children have been somehow reduced to the status of their devices, with little regard for their well-being as individuals. This discussion paper seeks to draw attention to such practice and questions the ethics of digital marketing methods.

Keywords: online marketing to children, online research of children, online targeting of children, consumer rights, ethics

Procedia PDF Downloads 370
4304 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution

Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone

Abstract:

The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically minimizing a reconstruction error such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation; we considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
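A minimal sketch of the detection idea is shown below; it is not the authors' multi-modal spectral autoencoder, but a plain convolutional autoencoder trained only on benign images, with inputs flagged as suspicious when their reconstruction MSE exceeds a threshold calibrated on held-out benign data:

```python
# Hedged sketch: train an autoencoder on benign images and use high
# reconstruction error as an adversarial-input flag (threshold assumed to be
# calibrated separately on held-out benign data).
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # H/2
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # H/4
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_error(model, x):
    with torch.no_grad():
        return ((model(x) - x) ** 2).flatten(1).mean(dim=1)        # per-image MSE

def train(model, benign_loader, epochs=10, lr=1e-3):
    """benign_loader is assumed to yield (images, labels) of benign data only."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, _ in benign_loader:
            opt.zero_grad()
            loss = loss_fn(model(x), x)
            loss.backward()
            opt.step()

def is_adversarial(model, x, threshold):
    # flag inputs whose error exceeds the benign-calibrated threshold
    return reconstruction_error(model, x) > threshold
```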

Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder

Procedia PDF Downloads 88
4303 Relevance of Technology on Education

Authors: Felicia K. Oluwalola

Abstract:

This paper examines the relevance of technology in education. It identifies the concept of technology in education, ways of bringing real-world learning into the classroom situation, and examples of where technology can be used. The study establishes that technology facilitates student learning compared with traditional methods of teaching. It is recommended that teachers use technology to supplement, not replace, other instructional modes, and that it be used in conjunction with hands-on labs and activities that also address the concepts targeted by the technology. Technology should also be student-centered rather than teacher-centered.

Keywords: computer, simulation, classroom teaching, education

Procedia PDF Downloads 437
4302 Factors Affecting Physical Activity among University Students of Different Fields of Study

Authors: Robert Dutkiewicz, Monika Szpringer, Mariola Wojciechowska

Abstract:

Physical activity is one of the factors greatly influencing a healthy lifestyle. Recent research into the physical activity of Polish society reveals that the contribution of physical culture to a healthy lifestyle is insufficient. Students, regardless of age, spend most of their free time in front of a TV or computer. The research attempted to identify the level of physical activity and healthy lifestyle among students of medical sciences and other students pursuing teaching degrees. The findings of physical activity research conducted in 2014, which covered 364 students of medical sciences and future teachers from the Jan Kochanowski University in Kielce, were analysed. The research involved the method of a diagnostic survey based on a questionnaire. It attempted to establish to what extent such factors as the field of study, the place of residence and BMI affect students' physical activity. Empirical material was analysed by means of the SPSS/PC statistical software. The field of study significantly influences the physical activity of the respondents. Students of physiotherapy and public health tend to be more physically active than students of biology and geography: 46.8% of geography students and 51.8% of biology students seldom take up physical activity. Obesity and overweight are currently serious problems among university students: 6.6% of them are obese and 19% overweight. It is alarming that these students are not willing to find ways to be more physically active. Most of the obese and overweight respondents study biology or geography and live in rural areas. Unequal chances in terms of youth physical culture are determined by the differences between rural and urban environments. Young people living in rural areas are less physically active, particularly in terms of the frequency and the amount of time devoted to physical activity. This is caused by poor infrastructure for physical activity and the lack of, or limited number of, sports clubs and centres. It is thought-provoking that most of the students claim they do not have enough time to do sports or other activities, yet at the same time spend a lot of time at a computer or watching TV.

Keywords: BMI, healthy lifestyle, sports activity, students

Procedia PDF Downloads 476
4301 Brain-Computer Interfaces That Use Electroencephalography

Authors: Arda Ozkurt, Ozlem Bozkurt

Abstract:

Brain-computer interfaces (BCIs) are devices that output commands by interpreting data collected from the brain. Electroencephalography (EEG) is a non-invasive method of measuring the brain's electrical activity. Since it was invented by Hans Berger in 1929, it has led to many neurological discoveries and has become one of the essential non-invasive measuring methods. Although it has a low spatial resolution (meaning it is able to detect activity when a group of neurons fires at the same time), it is a non-invasive method, making it easy to use without posing any risks. In EEG, electrodes are placed on the scalp, and the voltage difference between a minimum of two electrodes is recorded, which is then used to accomplish the intended task. The recordings of EEGs include, but are not limited to, the currents along dendrites from synapses to the soma, the action potentials along the axons connecting neurons, and the currents through the synaptic clefts connecting axons with dendrites. However, because it is a non-invasive method, there are some sources of noise that may affect the reliability of the EEG signals. For instance, noise from the EEG equipment, the leads, and signals coming from the subject (such as heart activity or muscle movements) affect the signals detected by the electrodes. New techniques have, however, been developed to differentiate between such noise and the intended signals. Furthermore, an EEG device alone is not enough to analyze the data from the brain for BCI applications. Because the EEG signal is very complex, artificial intelligence algorithms are required to analyze it. These algorithms convert complex data into meaningful and useful information that neuroscientists use to design BCI devices. Even though invasive BCIs are needed for neurological diseases that require highly precise data, non-invasive BCIs, such as those based on EEG, are used in many cases to help disabled people or to ease everyday life by assisting with basic tasks. For example, EEG can be used to detect that a seizure is about to occur in epilepsy patients, so that the seizure can be prevented with the help of a BCI device. Overall, EEG is a commonly used non-invasive BCI technique that has helped develop BCIs and will continue to be used to collect data that eases people's lives as more BCI techniques are developed in the future.

Keywords: BCI, EEG, non-invasive, spatial resolution

Procedia PDF Downloads 52
4300 Computer Based Identification of Possible Molecular Targets for Induction of Drug Resistance Reversion in Multidrug Resistant Mycobacterium Tuberculosis

Authors: Oleg Reva, Ilya Korotetskiy, Marina Lankina, Murat Kulmanov, Aleksandr Ilin

Abstract:

Molecular docking approaches are widely used for the design of new antibiotics and for modeling the antibacterial activities of numerous ligands that bind specifically to the active centers of indispensable enzymes and/or key signaling proteins of pathogens. Widespread drug resistance among pathogenic microorganisms calls for the development of new antibiotics specifically targeting important metabolic and information pathways. A generally recognized problem is that almost all molecular targets have already been identified, and it is becoming more and more difficult to design innovative antibacterial compounds to combat drug resistance. A promising way to overcome the drug resistance problem is the induction of drug resistance reversion by supplementary medicines to improve the efficacy of conventional antibiotics. In contrast to well-established computer-based drug design, modeling of drug resistance reversion is still in its infancy. In this work, we propose an approach to the identification of compensatory genetic variants that reduce the fitness cost associated with the acquisition of drug resistance by pathogenic bacteria. The approach is based on an analysis of the population genetics of Mycobacterium tuberculosis and on the results of experimental modeling of drug resistance reversion induced by a new anti-tuberculosis drug, FS-1. The latter is an iodine-containing nanomolecular complex that passed clinical trials and was admitted as a new medicine against MDR-TB in Kazakhstan. Isolates of M. tuberculosis obtained at different stages of the clinical trials, and also from laboratory animals infected with an MDR-TB strain, were characterized for antibiotic resistance, and their genomes were sequenced with paired-end Illumina HiSeq 2000 technology. A steady increase in sensitivity to conventional anti-tuberculosis antibiotics was registered in the series of isolates treated with FS-1, despite the fact that the canonical drug resistance mutations identified in the genomes of these isolates remained intact. It was hypothesized that the drug resistance phenotype in M. tuberculosis requires an adjustment of the activities of many genes to compensate for the fitness cost of the drug resistance mutations. FS-1 caused an aggravation of this fitness cost and the removal of drug-resistant variants of M. tuberculosis from the population. This process caused a significant increase in the genetic heterogeneity of the Mtb population that was not observed in the positive and negative controls (infected laboratory animals left untreated or treated solely with the antibiotics). A large-scale search for linkage disequilibrium associations between the drug resistance mutations and genetic variants in other genomic loci allowed the identification of target proteins that could be influenced by supplementary drugs to increase the fitness cost of drug resistance and deprive the drug-resistant bacterial variants of their competitiveness in the population. The approach will be used to improve the efficacy of FS-1 and also for the computer-based design of new drugs to combat drug-resistant infections.
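As a purely illustrative sketch of the kind of linkage disequilibrium screen described above (synthetic random genotypes, not the study's sequencing data), r² between a known resistance mutation and every other locus could be computed as follows:

```python
# Hedged illustration (not the authors' pipeline): given a binary variant
# matrix (isolates x loci, 1 = variant present), compute pairwise linkage
# disequilibrium r^2 between a known drug-resistance mutation and every other
# locus, to shortlist candidate compensatory variants.
import numpy as np

def ld_r2(x, y):
    """r^2 between two binary variant vectors observed across isolates."""
    pa, pb, pab = x.mean(), y.mean(), (x * y).mean()
    denom = pa * (1 - pa) * pb * (1 - pb)
    if denom == 0:
        return 0.0
    d = pab - pa * pb
    return d * d / denom

rng = np.random.default_rng(1)
genotypes = rng.integers(0, 2, size=(60, 200))         # 60 isolates, 200 loci (synthetic)
resistance_locus = 0                                   # index of the known mutation

scores = np.array([ld_r2(genotypes[:, resistance_locus], genotypes[:, j])
                   for j in range(genotypes.shape[1])])
scores[resistance_locus] = -1.0                        # ignore the self-comparison
top = np.argsort(scores)[::-1][:5]
print("top candidate loci (by r^2):", top, scores[top])
```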

Keywords: complete genome sequencing, computational modeling, drug resistance reversion, Mycobacterium tuberculosis

Procedia PDF Downloads 247
4299 Application of an Optical Method for the Calculation of Deformed Object Samples

Authors: R. Daira

Abstract:

The electronic speckle interferometry technique used to measure the deformation of scattering objects is based on the subtraction of interference patterns. A speckle image is first recorded in the RAM of a computer before deformation of the object, and a second image is recorded after deformation. The square of the difference between the two images shows correlation fringes that are observable in real time directly on the monitor. These fringes are interpreted to determine the deformation. In this paper, we present experimental results on the out-of-plane deformation of samples in aluminum, electronic boards and stainless steel.
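A minimal, hypothetical sketch of the fringe-formation step described above (file names are placeholders) is:

```python
# Subtract the reference speckle image from the deformed-state image, square
# the difference, and display the resulting correlation fringes.
import numpy as np
import cv2

ref = cv2.imread("speckle_before.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
cur = cv2.imread("speckle_after.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

fringes = (cur - ref) ** 2                      # squared difference -> fringe pattern
fringes = cv2.normalize(fringes, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

cv2.imshow("correlation fringes", fringes)      # dark fringes where correlation is high
cv2.waitKey(0)
```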

Keywords: optical method, holography, interferometry, deformation

Procedia PDF Downloads 386
4298 Optimal Pressure Control and Burst Detection for Sustainable Water Management

Authors: G. K. Viswanadh, B. Rajasekhar, G. Venkata Ramana

Abstract:

Water distribution networks play a vital role in ensuring a reliable supply of clean water to urban areas. However, they face several challenges, including pressure control, pump speed optimization, and burst event detection. This paper combines insights from two studies to address these critical issues in water distribution networks, focusing on the specific context of Kapra Municipality, India. The first part of this research concentrates on optimizing pressure control and pump speed in complex water distribution networks. It utilizes the EPANET-MATLAB Toolkit to integrate EPANET functionalities into the MATLAB environment, offering a comprehensive approach to network analysis. By optimizing Pressure Reducing Valves (PRVs) and variable speed pumps (VSPs), this study achieves remarkable results. In the benchmark water distribution system (WDS), the proposed PRV optimization algorithm reduces average leakage by 20.64%, surpassing the previous achievement of 16.07%. When applied to the South-Central and East zone WDS of Kapra Municipality, it identifies PRV locations that were previously missed by existing algorithms, resulting in average leakage reductions of 22.04% and 10.47%. These reductions translate to significant daily water savings, enhancing water supply reliability and reducing energy consumption. The second part of this research addresses the pressing issue of burst event detection and localization within the water distribution system. Burst events are a major contributor to water losses and repair expenses. The study employs wireless sensor technology to monitor pressure and flow rate in real time, enabling the detection of pipeline abnormalities, particularly burst events. The methodology relies on transient analysis of pressure signals, utilizing Cumulative Sum (CUSUM) and wavelet analysis techniques to robustly identify burst occurrences. To enhance precision, burst event localization is achieved through careful analysis of the time differentials in the arrival of negative pressure waveforms at distinct pressure sensing points, aided by nodal matrix analysis. To evaluate the effectiveness of this methodology, a PVC water pipeline test bed is employed, demonstrating the algorithm's success in detecting pipeline burst events at flow rates of 2-3 l/s. Remarkably, the algorithm achieves a localization error of merely 3 meters, outperforming previously established algorithms. This research presents a significant advancement in efficient burst event detection and localization within water pipelines, holding the potential to markedly curtail water losses and the concomitant financial implications. In conclusion, this combined research addresses critical challenges in water distribution networks, offering solutions for optimizing pressure control, pump speed, burst event detection, and localization. These findings contribute to the enhancement of water distribution systems, resulting in improved water supply reliability, reduced water losses, and substantial cost savings. The integrated approach presented in this paper holds promise for municipalities and utilities seeking to improve the efficiency and sustainability of their water distribution networks.
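As a hedged illustration of the CUSUM part of the burst-detection methodology (synthetic pressure data and illustrative thresholds, not the study's values):

```python
# One-sided CUSUM on a pressure signal: a burst typically appears as a
# sustained negative pressure shift, so we accumulate downward deviations
# from a baseline and raise an alarm when the sum exceeds a threshold h.
import numpy as np

rng = np.random.default_rng(0)
baseline, noise = 30.0, 0.05                     # pressure in metres head (synthetic)
pressure = baseline + rng.normal(0, noise, 600)
pressure[400:] -= 0.4                            # simulated burst at sample 400

k, h = 0.1, 1.0                                  # slack and decision threshold (illustrative)
s, alarm_at = 0.0, None
for i, p in enumerate(pressure):
    s = max(0.0, s + (baseline - p) - k)         # downward (negative-shift) CUSUM
    if s > h:
        alarm_at = i
        break

print("burst detected at sample:", alarm_at)
```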

Keywords: pressure reducing valve, complex networks, variable speed pump, wavelet transform, burst detection, CUSUM (cumulative sum), water pipeline monitoring

Procedia PDF Downloads 66
4297 Using Eye-Tracking Technology to Understand Consumers’ Comprehension of Multimedia Health Information

Authors: Samiullah Paracha, Sania Jehanzeb, M. H. Gharanai, A. R. Ahmadi, H.Sokout, Toshiro Takahara

Abstract:

The purpose of this study is to examine how health consumers utilize pictures when developing an understanding of multimedia health documents, and whether attentional processes, measured by eye-tracking, relate to differences in health-related cognitive resources and passage comprehension. To investigate these issues, we will present health-related text-picture passages to elders and collect eye movement data to measure readers’ looking behaviors.

Keywords: multimedia, eye-tracking, consumer health informatics, human-computer interaction

Procedia PDF Downloads 313
4296 Exploring the Applications of Neural Networks in the Adaptive Learning Environment

Authors: Baladitya Swaika, Rahul Khatry

Abstract:

Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), in which items are selected using maximum-information selection or selection from the posterior, and ability is estimated using maximum-likelihood (ML) or maximum a posteriori (MAP) estimators, respectively. This study aims at combining the classical and Bayesian approaches to IRT to create a dataset which is then fed to a neural network that automates the process of ability estimation, and at comparing the result to traditional CAT models designed using IRT. The study uses Python as the base coding language, pymc for statistical modelling of the IRT and scikit-learn for the neural network implementation. On creation of the model and on comparison, it is found that the neural-network-based model performs 7-10% worse than the IRT model for score estimation. Although it performs worse than the IRT model, the neural network model can be beneficially used in back-ends to reduce time complexity, since the IRT model has to re-calculate the ability every time it receives a request, whereas an already trained regressor can produce a prediction in a single step. This study also proposes a new kind of framework whereby the neural network model incorporates feature sets beyond the normal IRT feature set, using a neural network's capacity for learning unknown functions to give rise to better CAT models. Categorical features, such as test type, could be learnt and incorporated into IRT functions with the help of techniques like logistic regression, allowing relationships to be learnt and expressed as models that are not trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments. This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher-quality testing.
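A hedged sketch of the core idea, using a simplified 2PL IRT simulator and a scikit-learn regressor in place of the study's exact pymc model, might look like this:

```python
# Simulate 2PL item responses, then train a neural network to map a response
# pattern directly to the latent ability used to generate it (illustrative
# parameters and data sizes; not the authors' exact model or dataset).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_people, n_items = 2000, 30
theta = rng.normal(0, 1, n_people)            # latent abilities
a = rng.uniform(0.5, 2.0, n_items)            # item discriminations
b = rng.normal(0, 1, n_items)                 # item difficulties

# 2PL model: P(correct) = 1 / (1 + exp(-a_j * (theta_i - b_j)))
p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
responses = (rng.uniform(size=p.shape) < p).astype(float)

X_tr, X_te, y_tr, y_te = train_test_split(responses, theta, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
net.fit(X_tr, y_tr)
print("ability-estimation R^2 on held-out data:", round(net.score(X_te, y_te), 3))
```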

Keywords: computer adaptive tests, item response theory, machine learning, neural networks

Procedia PDF Downloads 160
4295 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface

Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto

Abstract:

Motor imagery (MI) based brain-computer interfaces (BCIs) use event-related (de)synchronization (ERS/ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and noise in EEG signal measurements, methods based on band-pass filters defined over a specific frequency band (e.g., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variance of the filtered signal and extract features that characterize the imagined movement. The effectiveness of CSP depends on the subject's discriminative frequency, and approaches based on the decomposition of the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on the representation of EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, each processed in parallel by a CSP filter and an LDA classifier. A Bayesian meta-classifier is then used to represent the LDA outputs of each sub-band as scores and to organize them into a single vector, which is used as a training vector for a global SVM classifier. Initially, the public EEG dataset IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (the resulting FFT matrix has a 68% smaller dimension than the original signal), it retains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach improves the overall system classification rate significantly compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement of more than 10% and the computational cost reduction indicate the potential of the FFT in EEG signal filtering applied to MI-based BCIs implementing SBCSP. Tests with other datasets are currently being performed to reinforce these conclusions.
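The following hedged Python sketch illustrates only two core steps of such a pipeline, sub-band selection from FFT coefficients and CSP filter computation; the full 33-sub-band LDA/Bayesian/SVM chain of the paper is not reproduced:

```python
# Keep the FFT coefficients of one sub-band for each EEG trial, then compute
# CSP spatial filters for that sub-band from two classes of trials
# (sampling rate, data shapes, and trial counts are assumed).
import numpy as np
from scipy.linalg import eigh

fs = 250.0                                      # sampling rate, assumed

def subband_fft(trial, f_lo, f_hi):
    """trial: channels x samples; keep only FFT bins inside [f_lo, f_hi)."""
    freqs = np.fft.rfftfreq(trial.shape[1], d=1.0 / fs)
    coeffs = np.fft.rfft(trial, axis=1)
    return coeffs[:, (freqs >= f_lo) & (freqs < f_hi)]

def csp_filters(trials_a, trials_b, n_pairs=2):
    """CSP on sub-band coefficient matrices; returns 2*n_pairs spatial filters."""
    def mean_cov(trials):
        covs = []
        for x in trials:
            c = np.real(x @ x.conj().T)         # Hermitian covariance of the coefficients
            covs.append(c / np.trace(c))
        return np.mean(covs, axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    reg = 1e-9 * np.eye(ca.shape[0])            # tiny ridge for numerical stability
    vals, vecs = eigh(ca, ca + cb + reg)        # generalized eigenvalue problem
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, picks].T                     # most discriminative spatial directions

# Typical use: features = log-variance of each spatially filtered trial, per sub-band.
```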

Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns

Procedia PDF Downloads 108
4294 Effect of Double-Skin Facade Configuration on the Energy Performance of Office Building in Maritime Desert Climate

Authors: B. Umaru Mohammed, Faris A. Al-Maziad, Mohammad Y. Numan

Abstract:

One of the most important factors affecting the energy performance of a building is a carefully and efficiently designed facade. The primary aim of this research was to identify and present the potential of utilising Double-Skin Facade (DSF) construction and to critically examine its effect on the energy consumption of an office building located in a maritime desert climate, compared with a conventional single-skin curtain wall system. A comparative analysis of the effect on the overall energy consumption of an office building was carried out using combinations of various Double-Skin Facade configurations, systems, cavity depths, glazing types and orientations. Dynamic computer modelling was utilised to ensure accurate calculations and efficient simulation of the various DSF systems, given the complex nature of the processes within the facade cavity. Through the dynamic thermal modelling simulations, the best cavity size, glazing type and orientation were determined, leading to a detailed analysis of the efficiency of each combination of Double-Skin Facade construction. In this way, the optimal facade combination for an office building located in a maritime desert climate was identified. Results demonstrated that a multi-storey facade, depending on its configuration, saves up to 5% on annual cooling loads with respect to a corridor facade and, when vented, can save up to 12% of the annual cooling load in the maritime desert climate when compared to the single-skin facade. The selected DSF configuration saves 32% of the overall annual cooling load relative to the SSF.

Keywords: computer dynamics modelling, comparative analysis, energy computation, double skin facade, single skin curtain wall, maritime desert climate

Procedia PDF Downloads 325
4293 An Inquiry on 2-Mass and Wheeled Mobile Robot Dynamics

Authors: Boguslaw Schreyer

Abstract:

In this paper, a general dynamical model is derived using the Lagrange formalism. The two masses, sprung and unsprung, are included; a six-degree-of-freedom model is developed for the sprung mass. The unsprung mass is included and shown only in a simplified model, although its equations have also been derived by the author. The simplified equations, more suitable for a computer model of the robot's dynamics, are also shown.
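As a hedged, one-dimensional illustration of the Lagrange formalism (a simple sprung/unsprung two-mass oscillator, not the paper's six-degree-of-freedom model), the equations of motion can be derived symbolically:

```python
# Derive Euler-Lagrange equations for a two-mass (sprung m_s / unsprung m_u)
# system with suspension stiffness k_s and tyre stiffness k_t (illustrative).
import sympy as sp

t = sp.symbols("t")
m_s, m_u, k_s, k_t = sp.symbols("m_s m_u k_s k_t", positive=True)
z_s, z_u = sp.Function("z_s")(t), sp.Function("z_u")(t)   # sprung / unsprung displacements

T = sp.Rational(1, 2) * (m_s * sp.diff(z_s, t) ** 2 + m_u * sp.diff(z_u, t) ** 2)
V = sp.Rational(1, 2) * (k_s * (z_s - z_u) ** 2 + k_t * z_u ** 2)
L = T - V

eqs = []
for q in (z_s, z_u):
    dq = sp.diff(q, t)
    # d/dt(dL/d(q')) - dL/dq; setting each expression to zero gives the equation of motion
    eqs.append(sp.simplify(sp.diff(sp.diff(L, dq), t) - sp.diff(L, q)))

sp.pprint(eqs)
```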

Keywords: dynamics, mobile, robot, wheeled mobile robots

Procedia PDF Downloads 320
4292 Development of a Paediatric Head Model for the Computational Analysis of Head Impact Interactions

Authors: G. A. Khalid, M. D. Jones, R. Prabhu, A. Mason-Jones, W. Whittington, H. Bakhtiarydavijani, P. S. Theobald

Abstract:

Head injury in childhood is a common cause of death or permanent disability from injury. However, despite its frequency and significance, there is little understanding of how a child's head responds during injurious loading. Whilst infant Post Mortem Human Subject (PMHS) experimentation is a logical approach to understanding injury biomechanics, it is the authors' opinion that a lack of subject availability is hindering potential progress. Computer modelling adds great value when considering adult populations; however, its potential remains largely untapped for infant surrogates. The complexities of child growth and development, which result in age-dependent changes in anatomy, geometry and physical response characteristics, present new challenges for computational simulation. Further geometric challenges are presented by the intricate infant cranial bones, which are separated by sutures and fontanelles and demonstrate a visible fibre orientation. This study presents an FE model of a newborn infant's head, developed from high-resolution computed tomography scans and informed by published tissue material properties. To mimic the fibre orientation of immature cranial bone, anisotropic properties were applied to the FE cranial bone model, with elastic moduli representing the bone response both parallel and perpendicular to the fibre orientation. Biofidelity of the computational model was confirmed by global validation against published PMHS data, replicating experimental impact tests with a series of computational simulations and comparing head kinematic responses. Numerical results confirm that the FE head model's mechanical response is in favourable agreement with the PMHS drop test results.

Keywords: finite element analysis, impact simulation, infant head trauma, material properties, post mortem human subjects

Procedia PDF Downloads 311
4291 Computation of Radiotherapy Treatment Plans Based on CT to ED Conversion Curves

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Radiotherapy treatment planning computers use the CT data of the patient. For the computation of a treatment plan, the treatment planning system must have information on the electron densities of the tissues scanned by the CT. This information is given by the CT number to electron density (CT to ED) conversion curve, or simply the calibration curve. Every treatment planning system (TPS) has built-in default CT to ED conversion curves for the CT scanners of different manufacturers. However, it is always recommended to verify the CT to ED conversion curve before actual clinical use. The objective of this study was to check how well the default curve provided matches the curve actually measured on a specific CT scanner, and how much this influences the calculations of the treatment planning computer. The examined CT scanners were from the same manufacturer, but comprised four different scanners from three generations. The measurements of all calibration curves were done with the dedicated CIRS 062M Electron Density Phantom. The phantom was scanned, and according to the real HU values read at the CT console computer, CT to ED conversion curves were generated for different materials at the same tube voltage of 140 kV. Another phantom, the CIRS Thorax 002 LFC, which represents an average human torso in proportion, density and two-dimensional structure, was used for verification. Treatment planning was done on CT slices of the scanned CIRS LFC 002 phantom for selected cases. Interest points were set in the lungs and in the spinal cord, and doses were recorded in the TPS. The overall calculated treatment times for the four scanners and the default data did not differ by more than 0.8%. The overall interest point dose in bone differed by at most 0.6%, while for single fields the maximum difference was 2.7% (lateral field). The overall interest point dose in the lungs differed by at most 1.1%, while for single fields the maximum difference was 2.6% (lateral field). It is known that the user should verify the CT to ED conversion curve, but developing countries often face a lack of QA equipment and use the default data provided. We have concluded that the CT to ED curves obtained differ at certain points of the curve, generally in the region of higher densities. The resulting influence on the treatment planning result is not significant, but it definitely does make a difference in the calculated dose.
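A hedged numerical sketch of the comparison step (illustrative HU/ED pairs, not the measured clinical curves) is:

```python
# Compare a default CT-to-ED calibration curve with a measured one by
# interpolating both at the same CT numbers and reporting the relative
# difference in electron density (differences tend to grow at high density).
import numpy as np

hu_default = np.array([-1000, -500, 0, 300, 800, 1500])
ed_default = np.array([0.00, 0.50, 1.00, 1.15, 1.45, 1.85])    # relative ED (illustrative)

hu_measured = np.array([-1000, -480, 0, 320, 830, 1500])
ed_measured = np.array([0.00, 0.52, 1.00, 1.17, 1.50, 1.92])

hu_grid = np.linspace(-1000, 1500, 26)
ed_d = np.interp(hu_grid, hu_default, ed_default)
ed_m = np.interp(hu_grid, hu_measured, ed_measured)

rel_diff = 100 * (ed_m - ed_d) / np.maximum(ed_d, 1e-6)        # percent difference
for hu, rd in zip(hu_grid[::5], rel_diff[::5]):
    print(f"HU {hu:7.0f}: ED difference {rd:5.2f} %")
```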

Keywords: computation of treatment plan, conversion curve, radiotherapy, electron density

Procedia PDF Downloads 457
4290 Seismic Behavior of Concrete Filled Steel Tube Reinforced Concrete Column

Authors: Raghabendra Yadav, Baochun Chen, Huihui Yuan, Zhibin Lian

Abstract:

The pseudo-dynamic test (PDT) method is an advanced seismic test method that combines loading technology with computer technology. Large-scale models or full-scale seismic tests can be carried out using this method. CFST-RC columns are used in civil engineering structures because of their better seismic performance. A CFST-RC column is composed of four CFST limbs which are connected by RC webs in the longitudinal direction and by steel tubes in the transverse direction. For this study, a CFST-RC pier is tested under four different earthquake time histories with a scaled PGA of 0.05 g. From the experiment, acceleration, velocity, displacement and load time histories are obtained. The dynamic magnification factors for acceleration due to the El Centro, Chi-Chi, Imperial Valley and Kobe ground motions are observed to be 15, 12, 17 and 14, respectively. The natural frequency of the pier is found to be 1.40 Hz. The results show that this type of pier has excellent static and earthquake-resistant properties.

Keywords: bridge pier, CFST-RC pier, pseudo dynamic test, seismic performance, time history

Procedia PDF Downloads 168
4289 A 3D Bioprinting System for Engineering Cell-Embedded Hydrogels by Digital Light Processing

Authors: Jimmy Jiun-Ming Su, Yuan-Min Lin

Abstract:

Bioprinting has been applied to produce 3D cellular constructs for tissue engineering. Microextrusion printing is the most commonly used method; however, printing low-viscosity bioink is a challenge for this method. Herein, we developed a new 3D printing system to fabricate cell-laden hydrogels via a DLP-based projector. The bioprinter is assembled from affordable equipment, including a stepper motor, a screw, an LED-based DLP projector, and open-source computer hardware and software. The system can use low-viscosity, photo-polymerized bioink to fabricate 3D tissue mimics in a layer-by-layer manner. In this study, we used gelatin methacrylate (GelMA) as the bioink for stem cell encapsulation. In order to reinforce the printed construct, surface-modified hydroxyapatite was added to the bioink. We demonstrated that silanization of hydroxyapatite could improve crosslinking at the interface between hydroxyapatite and GelMA. The results showed that the incorporation of silanized hydroxyapatite into the bioink enhanced the mechanical properties of the printed hydrogel. In addition, the hydrogel had low cytotoxicity and promoted the differentiation of embedded human bone marrow stem cells (hBMSCs) and retinal pigment epithelium (RPE) cells. Moreover, this bioprinting system can generate microchannels inside the engineered tissues to facilitate the diffusion of nutrients. We believe this 3D bioprinting system has the potential to fabricate various tissues for clinical applications and regenerative medicine in the future.

Keywords: bioprinting, cell encapsulation, digital light processing, GelMA hydrogel

Procedia PDF Downloads 158