Search results for: neural networks multi-layer perceptron
1589 Analysis of Waiting Time and Drivers Fatigue at Manual Toll Plaza and Suggestion of an Automated Toll Tax Collection System
Authors: Muhammad Dawood Idrees, Maria Hafeez, Arsalan Ansari
Abstract:
Toll tax collection is one of the earliest methods of tax collection and revenue generation. This revenue is used to develop and maintain road networks and to connect roads and highways across the country. Pakistan is a large country covering a wide area of land, and its road networks and motorways are an important means of connecting cities. Every day millions of people use the motorways and must stop at toll plazas to pay toll tax, as the majority of plazas still collect toll manually. The purpose of this study is to calculate the waiting time of vehicles on the Karachi–Hyderabad (M-9) motorway; Karachi is the largest city of Pakistan, and hundreds of thousands of people use this route to reach other cities. Currently, toll collection is a manual system, which is a major cause of long waiting times at the toll plaza. This study calculates the waiting time of vehicles, the fuel consumed during waiting, and the manpower employed at the toll plaza, since the entire process is manual and also leads to mental and physical fatigue of drivers. All wastages of resources are calculated, and a feasible automatic toll tax collection system is proposed that not only reduces waiting time but also reduces fuel consumption, the manpower employed, and physical and mental fatigue. A cost comparison of these wastages between the manual and automatic toll tax collection system (E-Z Pass) is also presented. Results reveal that, if an automatic toll collection system is implemented on the Karachi–Hyderabad motorway (M-9), there will be a significant reduction in the waiting time of vehicles, leading to reductions in fuel consumption, environmental pollution, and the mental and physical fatigue of drivers. All these reductions are also expressed in monetary terms (Pakistani rupees), showing that millions of rupees can be saved by using an automatic toll collection system, which would help improve the economy of the country. Keywords: toll tax collection, waiting time, wastages, driver fatigue
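To make the waiting-time argument concrete, the sketch below estimates queueing delay and idle-fuel waste at a manual plaza with a standard M/M/c (Erlang-C) queueing model. All numbers (arrival rate, service time per booth, number of booths, and idle fuel burn) are illustrative assumptions, not figures from the study.

```python
# Illustrative M/M/c queueing estimate of waiting time at a manual toll plaza.
# All numbers below (arrival rate, service time, fuel burn) are hypothetical
# placeholders, not data from the M-9 study.
import math

def erlang_c(c, a):
    """Probability an arriving vehicle must wait (Erlang-C), offered load a = lam/mu."""
    num = (a ** c / math.factorial(c)) * (c / (c - a))
    den = sum(a ** k / math.factorial(k) for k in range(c)) + num
    return num / den

lam = 1200 / 3600.0        # arrivals per second (assumed 1200 vehicles/hour)
mu = 1 / 12.0              # service rate per booth (assumed 12 s per manual transaction)
c = 6                      # number of manual booths (assumed)

a = lam / mu                                   # offered load in Erlangs
wq = erlang_c(c, a) / (c * mu - lam)           # mean wait in queue (seconds)
idle_fuel_lph = 0.8                            # litres/hour burned while idling (assumed)
fuel_per_vehicle = idle_fuel_lph * wq / 3600   # litres wasted per vehicle while queueing

print(f"Mean queueing delay: {wq:.1f} s, idle fuel per vehicle: {fuel_per_vehicle*1000:.1f} ml")
```

Replacing the assumed 12-second manual transaction with a sub-second electronic read in the same formula shows how sharply the queueing delay, and hence the fuel and fatigue cost, collapses, which is the core of the cost comparison described above.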
Procedia PDF Downloads 151
1588 DeClEx-Processing Pipeline for Tumor Classification
Authors: Gaurav Shinde, Sai Charan Gongiguntla, Prajwal Shirur, Ahmed Hambaba
Abstract:
Health issues are increasing significantly, putting a substantial strain on healthcare services, and the integration of machine learning in healthcare has accelerated, particularly following the COVID-19 pandemic. We introduce DeClEx, a pipeline that ensures data mirrors real-world settings by incorporating Gaussian noise and blur and that employs autoencoders to learn intermediate feature representations. Subsequently, our convolutional neural network, paired with spatial attention, provides accuracy comparable to state-of-the-art pre-trained models while achieving a threefold improvement in training speed. Furthermore, we provide interpretable results using explainable AI techniques. Denoising and deblurring, classification, and explainability are thus integrated in a single pipeline called DeClEx. Keywords: machine learning, healthcare, classification, explainability
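As a rough illustration of the kind of pipeline described, the PyTorch sketch below degrades images with Gaussian blur and additive noise and runs them through a small CNN block with spatial attention. The layer sizes, kernel choices, and binary output are assumptions; the abstract does not specify the actual architecture.

```python
# Minimal PyTorch sketch of the pipeline idea: synthetic Gaussian noise/blur,
# then a CNN block with spatial attention. Layer sizes are assumptions.
import torch
import torch.nn as nn
import torchvision.transforms as T

degrade = T.Compose([
    T.GaussianBlur(kernel_size=5, sigma=(0.5, 2.0)),          # simulate blur
    T.Lambda(lambda x: x + 0.05 * torch.randn_like(x)),       # additive Gaussian noise
])

class SpatialAttention(nn.Module):
    """Weights each spatial location by a sigmoid mask built from channel statistics."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask

backbone = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    SpatialAttention(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                      # tumor vs. non-tumor (assumed binary task)
)

x = degrade(torch.rand(4, 3, 224, 224))    # toy batch standing in for scan images
logits = backbone(x)
```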
Procedia PDF Downloads 55
1587 Transnational Initiatives, Local Perspectives: The Potential of Australia-Asia BRIDGE School Partnerships Project to Support Teacher Professional Development in India
Authors: Atiya Khan
Abstract:
Recent research on the condition of school education in India has reaffirmed the importance of quality teacher professional development, especially in light of the rapid changes in teaching methods, learning theories, curriculum, and major shifts in information and technology that education systems are experiencing around the world. However, the quality of programs of teacher professional development in India is often uneven, in some cases non-existing. The educational authorities in India have long recognized this and have developed a range of programs to assist in-service teacher education. But, these programs have been mostly inadequate at improving the quality of teachers in India. Policy literature and reports indicate that the unevenness of these programs and more generally the lack of quality teacher professional development in India are due to factors such as a large number of teachers, budgetary constraints, top-down decision making, teacher overload, lack of infrastructure, and little or no follow-up. The disparity between the government stated goals for quality teacher professional development in India and its inability to meet the learning needs of teachers suggests that new interventions are needed. The realization that globalization has brought about an increase in the social, cultural, political and economic interconnectedness between countries has also given rise to transnational opportunities for education systems, such as India’s, aiming to build their capacity to support teacher professional development. Moreover, new developments in communication technologies seem to present a plausible means of achieving high-quality professional development for teachers through the creation of social learning spaces, such as transnational learning networks. This case study investigates the potential of one such transnational learning network to support the quality of teacher professional development in India, namely the Australia-Asia BRIDGE School Partnerships Project. It explores the participation of some fifteen teachers and their principals from BRIDGE participating schools in Delhi region of India; focusing on their professional development expectations from the BRIDGE program and account for their experiences in the program, in order to determine the program’s potential for the professional development of teachers in this study.Keywords: case study, Australia-Asia BRIDGE Project, teacher professional development, transnational learning networks
Procedia PDF Downloads 266
1586 Inter-Complex Dependence of Production Technique and Preforms Construction on the Failure Pattern of Multilayer Homo-Polymer Composites
Authors: Ashraf Nawaz Khan, R. Alagirusamy, Apurba Das, Puneet Mahajan
Abstract:
Thermoplastic-based fibre composites are acquiring a share of the market held by conventional thermoset composites. However, replacing a thermoset with a thermoplastic composite has never been an easy task: the inherently high viscosity of thermoplastic resin leads to poor interface properties. In this work, a homo-polymer towpreg is produced through an electrostatic powder spray coating methodology. The resulting flexible towpreg offers a low melt-flow distance during consolidation of the laminate, and this reduced melt-flow distance yields a homogeneous fibre/matrix distribution (and low void content) on consolidation. The composite laminate has been fabricated with two manufacturing techniques, the conventional film stack (FS) and the powder-coated (PC) technique. This helps in understanding the distinct response of the produced laminates under load, since the laminates produced through the two techniques comprise the same constituent fibre and matrix (constant fibre volume fraction). The changed behaviour is observed mainly due to the different fibre/matrix configurations within the laminate. The interface adhesion influences load transfer between fibre and matrix and therefore influences the elastic, plastic, and failure patterns of the laminates. Moreover, the effect of preform geometry (plain weave and satin weave structures) is also studied for the corresponding composite laminates in terms of various mechanical properties. Fracture analysis is carried out to study the effect of resin at the interlacement points through micro-CT analysis. The PC laminate reveals a considerably smaller matrix-rich and matrix-deficient zone in comparison to the FS laminate. Different loads (tensile, shear, fracture toughness, and drop weight impact tests) are applied to the laminates, and the corresponding damage behaviour is analysed in the successive stages of failure. The PC composite shows superior mechanical properties in comparison to the FS composite. The damage that occurs in the laminate is captured through SEM analysis to identify the prominent modes of failure, such as matrix cracking, fibre breakage, delamination, debonding, and other phenomena. Keywords: composite, damage, fibre, manufacturing
Procedia PDF Downloads 137
1585 Application of an Artificial Neural Network to Determine the Risk of Malignant Tumors from the Images Resulting from the Asymmetry of Internal and External Thermograms of the Mammary Glands
Authors: Amdy Moustapha Drame, Ilya V. Germashev, E. A. Markushevskaya
Abstract:
Breast cancer is among the main problems of medicine, and a significant number of women around the world die from it every year. The detection of malignant breast tumors is therefore an urgent task. For many years, various technologies have been used to detect these tumors, in particular thermal imaging, in order to determine different stages of breast cancer development. These periodic screening methods are a diagnostic tool for women and may become an alternative to older methods such as mammography. This article proposes a model for the identification of malignant neoplasms of the mammary glands based on the asymmetry of internal and external thermal imaging fields. Keywords: asymmetry, breast cancer, tumors, deep learning, thermogram, convolutional transformation, classification
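A minimal sketch of how such an asymmetry-based input could be formed and classified is given below, assuming for illustration that the model consumes the absolute difference between a thermogram and its horizontal mirror; the actual asymmetry computation between internal and external fields in the paper may differ, and the small CNN is not the authors' exact design.

```python
# Illustrative sketch: build an asymmetry map from a thermogram by differencing it
# with its horizontal mirror, then classify with a small CNN. Architecture and
# preprocessing are assumptions, not the authors' model.
import torch
import torch.nn as nn

def asymmetry_map(thermograms: torch.Tensor) -> torch.Tensor:
    """thermograms: (N, 1, H, W) tensor; returns |T(x, y) - T(W - x, y)| as model input."""
    return (thermograms - torch.flip(thermograms, dims=[-1])).abs()

classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 2),      # benign vs. suspicious (assumed labels)
)

x = asymmetry_map(torch.rand(8, 1, 128, 128))   # toy batch of thermograms
risk_logits = classifier(x)
```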
Procedia PDF Downloads 60
1584 A Bottom-Up Approach for the Synthesis of Highly Ordered Fullerene-Intercalated Graphene Hybrids
Authors: A. Kouloumpis, P. Zygouri, G. Potsi, K. Spyrou, D. Gournis
Abstract:
Much of the research effort on graphene focuses on its use as building block for the development of new hybrid nanostructures with well-defined dimensions and behavior suitable for applications among else in gas storage, heterogeneous catalysis, gas/liquid separations, nanosensing and biology. Towards this aim, here we describe a new bottom-up approach, which combines the self-assembly with the Langmuir Schaefer technique, for the production of fullerene-intercalated graphene hybrid materials. This new method uses graphene nanosheets as a template for the grafting of various fullerene C60 molecules (pure C60, bromo-fullerenes, C60Br24, and fullerols, C60(OH)24) in a bi-dimensional array, and allows for perfect layer-by-layer growth with control at the molecular level. Our film preparation approach involves a bottom-up layer-by-layer process that includes the formation of a hybrid organo-graphene Langmuir film hosting fullerene molecules within its interlayer spacing. A dilute water solution of chemically oxidized graphene (GO) was used as subphase on the Langmuir-Blodgett deposition system while an appropriate amino surfactant (that binds covalently with the GO) was applied for the formation of hybridized organo-GO. After the horizontal lift of a hydrophobic substrate, a surface modification of the GO platelets was performed by bringing the surface of the transferred Langmuir film in contact with a second amino surfactant solution (capable to interact strongly with the fullerene derivatives). In the final step, the hybrid organo-graphene film was lowered in the solution of the appropriate fullerene derivative. Multilayer films were constructed by repeating this procedure. Hybrid fullerene-based thin films deposited on various hydrophobic substrates were characterized by X-ray diffraction (XRD) and X-ray reflectivity (XRR), FTIR, and Raman spectroscopies, Atomic Force Microscopy, and optical measurements. Acknowledgments. This research has been co‐financed by the European Union (European Social Fund – ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF)‐Research Funding Program: THALES. Investing in knowledge society through the European Social Fund (no. 377285).Keywords: hybrids, graphene oxide, fullerenes, langmuir-blodgett, intercalated structures
Procedia PDF Downloads 327
1583 Human Brain Organoids-on-a-Chip Systems to Model Neuroinflammation
Authors: Feng Guo
Abstract:
Human brain organoids, 3D brain tissue cultures derived from human pluripotent stem cells, hold promising potential in modeling neuroinflammation for a variety of neurological diseases. However, challenges remain in generating standardized human brain organoids that can recapitulate key physiological features of a human brain. Here, this study presents a series of organoids-on-a-chip systems to generate better human brain organoids and model neuroinflammation. By employing 3D printing and microfluidic 3D cell culture technologies, the study’s systems enable the reliable, scalable, and reproducible generation of human brain organoids. Compared with conventional protocols, this study’s method increased neural progenitor proliferation and reduced heterogeneity of human brain organoids. As a proof-of-concept application, the study applied this method to model substance use disorders.Keywords: human brain organoids, microfluidics, organ-on-a-chip, neuroinflammation
Procedia PDF Downloads 202
1582 Automatic Content Curation of Visual Heritage
Authors: Delphine Ribes Lemay, Valentine Bernasconi, André Andrade, Lara DéFayes, Mathieu Salzmann, FréDéRic Kaplan, Nicolas Henchoz
Abstract:
Digitization and preservation of large heritage induce high maintenance costs to keep up with the technical standards and ensure sustainable access. Creating impactful usage is instrumental to justify the resources for long-term preservation. The Museum für Gestaltung of Zurich holds one of the biggest poster collections of the world from which 52’000 were digitised. In the process of building a digital installation to valorize the collection, one objective was to develop an algorithm capable of predicting the next poster to show according to the ones already displayed. The work presented here describes the steps to build an algorithm able to automatically create sequences of posters reflecting associations performed by curator and professional designers. The exposed challenge finds similarities with the domain of song playlist algorithms. Recently, artificial intelligence techniques and more specifically, deep-learning algorithms have been used to facilitate their generations. Promising results were found thanks to Recurrent Neural Networks (RNN) trained on manually generated playlist and paired with clusters of extracted features from songs. We used the same principles to create the proposed algorithm but applied to a challenging medium, posters. First, a convolutional autoencoder was trained to extract features of the posters. The 52’000 digital posters were used as a training set. Poster features were then clustered. Next, an RNN learned to predict the next cluster according to the previous ones. RNN training set was composed of poster sequences extracted from a collection of books from the Gestaltung Museum of Zurich dedicated to displaying posters. Finally, within the predicted cluster, the poster with the best proximity compared to the previous poster is selected. The mean square distance between features of posters was used to compute the proximity. To validate the predictive model, we compared sequences of 15 posters produced by our model to randomly and manually generated sequences. Manual sequences were created by a professional graphic designer. We asked 21 participants working as professional graphic designers to sort the sequences from the one with the strongest graphic line to the one with the weakest and to motivate their answer with a short description. The sequences produced by the designer were ranked first 60%, second 25% and third 15% of the time. The sequences produced by our predictive model were ranked first 25%, second 45% and third 30% of the time. The sequences produced randomly were ranked first 15%, second 29%, and third 55% of the time. Compared to designer sequences, and as reported by participants, model and random sequences lacked thematic continuity. According to the results, the proposed model is able to generate better poster sequencing compared to random sampling. Eventually, our algorithm is sometimes able to outperform a professional designer. As a next step, the proposed algorithm should include a possibility to create sequences according to a selected theme. To conclude, this work shows the potentiality of artificial intelligence techniques to learn from existing content and provide a tool to curate large sets of data, with a permanent renewal of the presented content.Keywords: Artificial Intelligence, Digital Humanities, serendipity, design research
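The final selection step described above (picking, within the RNN-predicted cluster, the poster closest to the previous one by mean squared distance between features) can be sketched as follows; the feature dimensionality, cluster count, and random data are placeholders rather than values from the project.

```python
# Sketch of the final selection step: within the cluster predicted by the RNN,
# choose the poster whose autoencoder features are closest (mean squared distance)
# to the previously shown poster. Feature and cluster sizes are assumptions.
import numpy as np

def next_poster(prev_feat, candidate_feats, candidate_ids):
    """prev_feat: (d,) features of the last poster; candidates come from the predicted cluster."""
    d2 = np.mean((candidate_feats - prev_feat) ** 2, axis=1)   # MSE to each candidate
    return candidate_ids[int(np.argmin(d2))]

rng = np.random.default_rng(0)
features = rng.normal(size=(52000, 64))         # 64-d autoencoder codes (dimension assumed)
clusters = rng.integers(0, 40, size=52000)      # cluster labels from clustering (k assumed)

prev = 123                                      # index of the poster currently on display
predicted_cluster = 17                          # cluster output by the RNN (placeholder)
mask = clusters == predicted_cluster
chosen = next_poster(features[prev], features[mask], np.flatnonzero(mask))
```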
Procedia PDF Downloads 184
1581 The Role of Oral and Intestinal Microbiota in European Badgers
Authors: Emma J. Dale, Christina D. Buesching, Kevin R. Theis, David W. Macdonald
Abstract:
This study investigates the oral and intestinal microbiomes of wild-living European badgers (Meles meles) and will relate inter-individual differences to social contact networks, somatic and reproductive fitness, varying susceptibility to bovine tuberculous (bTB) and to the olfactory advertisement. Badgers are an interesting model for this research, as they have great variation in body condition, despite living in complex social networks and having access to the same resources. This variation in somatic fitness, in turn, affects breeding success, particularly in females. We postulate that microbiota have a central role to play in determining the successfulness of an individual. Our preliminary results, characterising the microbiota of individual badgers, indicate unique compositions of microbiota communities within social groups of badgers. This basal information will inform further questions related to the extent microbiota influence fitness. Hitherto, the potential role of microbiota has not been considered in determining host condition, but also other key fitness variables, namely; communication and resistance to disease. Badgers deposit their faeces in communal latrines, which play an important role in olfactory communication. Odour profiles of anal and subcaudal gland secretions are highly individual-specific and encode information about group-membership and fitness-relevant parameters, and their chemical composition is strongly dependent on symbiotic microbiota. As badgers sniff/ lick (using their Vomeronasal organ) and over-mark faecal deposits of conspecifics, these microbial communities can be expected to vary with social contact networks. However, this is particularly important in the context of bTB, where badgers are assumed to transmit bTB to cattle as well as conspecifics. Interestingly, we have found that some individuals are more susceptible to bTB than are others. As acquired immunity and thus potential susceptibility to infectious diseases are known to depend also on symbiotic microbiota in other members of the mustelids, a role of particularly oral microbiota can currently not be ruled out as a potential explanation for inter-individual differences in infection susceptibility of bTB in badgers. Tri annually badgers are caught in the context of a long-term population study that began in 1987. As all badgers receive an individual tattoo upon first capture, age, natal as well as previous and current social group-membership and other life history parameters are known for all animals. Swabs (subcaudal ‘scent gland’, anal, genital, nose, mouth and ear) and fecal samples will be taken from all individuals, stored at -80oC until processing. Microbial samples will be processed and identified at Wayne State University’s Theis (Host-Microbe Interactions) Lab, using High Throughput Sequencing (16S rRNA-encoding gene amplification and sequencing). Acknowledgments: Gas-Chromatography/ Mass-spectrometry (in the context of olfactory communication) analyses will be performed through an established collaboration with Dr. Veronica Tinnesand at Telemark University, Norway.Keywords: communication, energetics, fitness, free-ranging animals, immunology
Procedia PDF Downloads 187
1580 Neural Synchronization - The Brain’s Transfer of Sensory Data
Authors: David Edgar
Abstract:
To understand how the brain’s subconscious and conscious functions, we must conquer the physics of Unity, which leads to duality’s algorithm. Where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence, we use terms like ‘time is relative,’ but we really do understand the meaning. In the brain, there are different processes and, therefore, different observers. These different processes experience time at different rates. A sensory system such as the eyes cycles measurement around 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycle at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain’s physical process. The entanglements form a synchronous cohesion between the brain components allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain’s linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components. Only unpredictable motion is transferred through the synchronous state because predictable motion already exists in the shared framework. The brain’s synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, the eyes every 33 milliseconds dump their sensory data into the thalamus every day. The thalamus is going to perform a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick. The thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation. Basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Now, synchronous data traveling through a separate faster synchronous process creates a theoretical time tunnel where observation time is tunneled through the synchronous process and is reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It is all just occurring in the time available because other observation times are slower than thalamic measurement time. For life to exist in the physical universe requires a linear measurement process, it just hides by operating at a faster time relativity. What’s interesting is time dilation is not the problem; it’s the solution. Einstein said there was no universal time.Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)
Procedia PDF Downloads 126
1579 Automatic Method for Classification of Informative and Noninformative Images in Colonoscopy Video
Authors: Nidhal K. Azawi, John M. Gauch
Abstract:
Colorectal cancer is one of the leading causes of cancer death in the US and the world, which is why millions of colonoscopy examinations are performed annually. Unfortunately, noise, specular highlights, and motion artifacts corrupt many images in a typical colonoscopy exam. The goal of our research is to produce automated techniques to detect and correct or remove these noninformative images from colonoscopy videos, so physicians can focus their attention on informative images. In this research, we first automatically extract features from images. Then we use machine learning and deep neural network to classify colonoscopy images as either informative or noninformative. Our results show that we achieve image classification accuracy between 92-98%. We also show how the removal of noninformative images together with image alignment can aid in the creation of image panoramas and other visualizations of colonoscopy images.Keywords: colonoscopy classification, feature extraction, image alignment, machine learning
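A hedged sketch of the feature-extraction-plus-classifier idea follows, using two simple hand-crafted cues (Laplacian variance for blur and the fraction of saturated pixels for specular highlights) fed to a random forest. These particular features and the classifier choice are assumptions standing in for the paper's pipeline, which also uses deep neural networks.

```python
# Illustrative feature extraction + classifier for informative vs. noninformative
# colonoscopy frames. The two features and the classifier are stand-ins, not the
# paper's actual feature set or network.
import numpy as np
import cv2
from sklearn.ensemble import RandomForestClassifier

def frame_features(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    blur_score = cv2.Laplacian(gray, cv2.CV_64F).var()       # low variance -> blurry frame
    specular_frac = float((gray > 240).mean())                # saturated pixels -> highlights
    return [blur_score, specular_frac]

# X: features for labelled frames, y: 1 = informative, 0 = noninformative (placeholder labels)
frames = [np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8) for _ in range(100)]
X = np.array([frame_features(f) for f in frames])
y = np.random.randint(0, 2, size=100)

clf = RandomForestClassifier(n_estimators=200).fit(X, y)
print("training accuracy:", clf.score(X, y))
```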
Procedia PDF Downloads 253
1578 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning
Authors: Hossein Havaeji, Tony Wong, Thien-My Dao
Abstract:
1. Research Problems and Research Objectives: Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) is the system using BT to drive SCS transparency, security, durability, and process integrity as SCS data is not always visible, available, or trusted. The costs of operating BT in the SCS are a common problem in several organizations. The costs must be estimated as they can impact existing cost control strategies. To account for system and deployment costs, it is necessary to overcome the following hurdle. The problem is that the costs of developing and running a BT in SCS are not yet clear in most cases. Many industries aiming to use BT have special attention to the importance of BT installation cost which has a direct impact on the total costs of SCS. Predicting BT installation cost in SCS may help managers decide whether BT is to be an economic advantage. The purpose of the research is to identify some main BT installation cost components in SCS needed for deeper cost analysis. We then identify and categorize the main groups of cost components in more detail to utilize them in the prediction process. The second objective is to determine the suitable Supervised Learning technique in order to predict the costs of developing and running BT in SCS in a particular case study. The last aim is to investigate how the running BT cost can be involved in the total cost of SCS. 2. Work Performed: Applied successfully in various fields, Supervised Learning is a method to set the data frame, treat the data, and train/practice the method sort. It is a learning model directed to make predictions of an outcome measurement based on a set of unforeseen input data. The following steps must be conducted to search for the objectives of our subject. The first step is to make a literature review to identify the different cost components of BT installation in SCS. Based on the literature review, we should choose some Supervised Learning methods which are suitable for BT installation cost prediction in SCS. According to the literature review, some Supervised Learning algorithms which provide us with a powerful tool to classify BT installation components and predict BT installation cost are the Support Vector Regression (SVR) algorithm, Back Propagation (BP) neural network, and Artificial Neural Network (ANN). Choosing a case study to feed data into the models comes into the third step. Finally, we will propose the best predictive performance to find the minimum BT installation costs in SCS. 3. Expected Results and Conclusion: This study tends to propose a cost prediction of BT installation in SCS with the help of Supervised Learning algorithms. At first attempt, we will select a case study in the field of BT-enabled SCS, and then use some Supervised Learning algorithms to predict BT installation cost in SCS. We continue to find the best predictive performance for developing and running BT in SCS. Finally, the paper will be presented at the conference.Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning
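As an illustration of the supervised-learning step named above, the sketch below fits a Support Vector Regression model mapping a few hypothetical cost components to total installation cost and reports the mean squared error; the feature list and synthetic data are assumptions for demonstration only, not figures from any case study.

```python
# Minimal SVR sketch: map cost components (hardware, integration, licensing,
# training, maintenance) to total BT installation cost. Data is synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
X = rng.uniform(0, 100, size=(300, 5))        # 5 hypothetical cost-component scores
y = X @ np.array([1.5, 0.8, 2.0, 0.5, 1.0]) + rng.normal(0, 5, 300)   # synthetic totals

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100, epsilon=1.0))
model.fit(X_tr, y_tr)
print("test MSE:", mean_squared_error(y_te, model.predict(X_te)))
```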
Procedia PDF Downloads 122
1577 Building a Blockchain-based Internet of Things
Authors: Rob van den Dam
Abstract:
Today’s Internet of Things (IoT) comprises more than a billion intelligent devices, connected via wired/wireless communications. The expected proliferation of hundreds of billions more places us at the threshold of a transformation sweeping across the communications industry. Yet, we found that the IoT architecture and solutions that currently work for billions of devices won’t necessarily scale to tomorrow’s hundreds of billions of devices because of high cost, lack of privacy, not future-proof, lack of functional value and broken business models. As the IoT scales exponentially, decentralized networks have the potential to reduce infrastructure and maintenance costs to manufacturers. Decentralization also promises increased robustness by removing single points of failure that could exist in traditional centralized networks. By shifting the power in the network from the center to the edges, devices gain greater autonomy and can become points of transactions and economic value creation for owners and users. To validate the underlying technology vision, IBM jointly developed with Samsung Electronics the autonomous decentralized peer-to- peer proof-of-concept (PoC). The primary objective of this PoC was to establish a foundation on which to demonstrate several capabilities that are fundamental to building a decentralized IoT. Though many commercial systems in the future will exist as hybrid centralized-decentralized models, the PoC demonstrated a fully distributed proof. The PoC (a) validated the future vision for decentralized systems to extensively augment today’s centralized solutions, (b) demonstrated foundational IoT tasks without the use of centralized control, (c) proved that empowered devices can engage autonomously in marketplace transactions. The PoC opens the door for the communications and electronics industry to further explore the challenges and opportunities of potential hybrid models that can address the complexity and variety of requirements posed by the internet that continues to scale. Contents: (a) The new approach for an IoT that will be secure and scalable, (b) The three foundational technologies that are key for the future IoT, (c) The related business models and user experiences, (d) How such an IoT will create an 'Economy of Things', (e) The role of users, devices, and industries in the IoT future, (f) The winners in the IoT economy.Keywords: IoT, internet, wired, wireless
Procedia PDF Downloads 336
1576 An Approach towards Smart Future: Ict Infrastructure Integrated into Urban Water Networks
Authors: Ahsan Ali, Mayank Ostwal, Nikhil Agarwal
Abstract:
According to a World Bank report, millions of people across the globe still do not have access to improved water services. With the uninterrupted growth of cities and urban populations, there is a mounting need to safeguard the sustainable expansion of cities, ensuring efficient functioning of urban components and high living standards for residents. The water and sanitation network of an urban development is one of the most essential parts of its critical infrastructure. The growth in urban population is leading to increased water demand, and thus the local water resources are severely strained. 'Smart water' refers to water and wastewater infrastructure that is able to manage the limited resources and the energy used to transport them. It enables the sustainable consumption of water resources through a coordinated water management system, integrating Information and Communication Technology (ICT) solutions aimed at maximizing the socioeconomic benefits without compromising environmental values. This paper presents a case study from a medium-sized city in north-western Pakistan. Currently, water is being contaminated due to the proximity between water and sewer pipelines in the study area, leading to public health issues, and the scarce groundwater is also being polluted by unsafe grey water infiltration. This research addresses the design of a smart urban water network by integrating ICT with the urban water network. The proximity between the existing water supply network and the sewage network is analyzed, and a design for a new water supply system is proposed. Real-time mapping of the existing urban utility networks is produced with the help of GIS applications. The issue of grey water infiltration is addressed by providing sustainable solutions based on locally available materials, keeping in mind the economic condition of the area. To deal with the current growth of the urban population, it is vital to develop new water resources; hence, distinctive and cost-effective procedures to harness rainwater are suggested as part of the research study. Keywords: GIS, smart water, sustainability, urban water management
Procedia PDF Downloads 216
1575 A Semantic E-Learning and E-Assessment System of Learners
Authors: Wiem Ben Khalifa, Dalila Souilem, Mahmoud Neji
Abstract:
The evolution of the Social Web and the Semantic Web leads us to ask how the personalization of learning can be supported by means of intelligent filtering of educational resources published on digital networks. We recommend personalized learning paths articulated around an initial educational course defined upstream. Reviewing the context and stakes of personalization, we also suggest anchoring the personalization of learning in a community of interest within a group of learners enrolled in the same training. This reflection is supported by the presentation of an active, semantic learning system dedicated to building tailor-made personalized courses delivered at the appropriate time. Keywords: Semantic Web, semantic system, ontology, evaluation, e-learning
Procedia PDF Downloads 334
1574 Design of Bidirectional Wavelength Division Multiplexing Passive Optical Network in Optisystem Environment
Authors: Ashiq Hussain, Mahwash Hussain, Zeenat Parveen
Abstract:
Nowadays, the demand for broadband service has increased, and researchers are trying to find solutions to provide a large amount of service. There is a shortage of bandwidth because of the downloading of video, voice, and data. One of the solutions to overcome this shortage of bandwidth is to build the communication system with passive optical components. We have increased the data rate in this system. From the experimental results, we conclude that the quality factor is increased by adding passive optical networks. Keywords: WDM-PON, optical fiber, BER, Q-factor, eye diagram
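For readers relating the reported Q-factor to bit-error rate, the snippet below applies the textbook Gaussian-noise relation BER ≈ 0.5·erfc(Q/√2); this is the standard conversion used when reading eye diagrams, not a reproduction of the OptiSystem analysis itself.

```python
# Standard Q-factor to BER conversion used for eye-diagram results:
# BER ≈ 0.5 * erfc(Q / sqrt(2)). Shown as a quick check only.
from math import erfc, sqrt

def ber_from_q(q):
    return 0.5 * erfc(q / sqrt(2.0))

for q in (6.0, 7.0, 8.0):
    print(f"Q = {q}: BER ≈ {ber_from_q(q):.2e}")
```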
Procedia PDF Downloads 509
1573 Improving the Dissolution Rate of Folic Acid via the Antisolvent Vapour Precipitation
Authors: J. Y. Tan, L. C. Lum, M. G. Lee, S. Mansouri, K. Hapgood, X. D. Chen, M. W. Woo
Abstract:
Folic acid (FA) is known to be an important supplement to prevent neural tube defect (NTD) in pregnant women. Similar to some commercial formulations, sodium bicarbonate solution is used as a solvent for FA. This work uses the antisolvent vapor precipitation (AVP), incorporating ethanol vapor as the convective drying medium in place of air to produce branch-like micro-structure FA particles. Interestingly, the dissolution rate of the resultant particle is 2-3 times better than the particle produce from conventional air drying due to the higher surface area of particles produced. The higher dissolution rate could possibly improve the delivery and absorption of FA in human body. This application could potentially be extended to other commercial products, particularly in less soluble drugs to improve its solubility.Keywords: absorption, antisolvent vapor precipitation, dissolution rate, folic acid
Procedia PDF Downloads 445
1572 Stock Price Prediction Using Time Series Algorithms
Authors: Sumit Sen, Sohan Khedekar, Umang Shinde, Shivam Bhargava
Abstract:
This study investigates whether deep learning models are able to predict future stock prices when trained on historical stock price data. Since this work requires time series analysis, several models are available for the task, such as the Recurrent Neural Network LSTM, ARIMA, and Facebook Prophet. Applying these models, the movement of stock prices is predicted, and a forecast of the future price of a stock is also provided. The final product is a stock price prediction web application developed to give the user easy analysis of stocks and to provide the predicted stock price for the next seven days. Keywords: Autoregressive Integrated Moving Average, Deep Learning, Long Short Term Memory, Time-series
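A minimal sketch of the LSTM variant is given below: a sliding window of past closing prices predicts the next one. The window length, layer width, and synthetic price series are assumptions, not the configuration used in the study.

```python
# Minimal LSTM sketch: predict the next closing price from a sliding window of
# past prices. Window length, layer sizes, and the synthetic series are assumed.
import numpy as np
import tensorflow as tf

def make_windows(series, window=30):
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y          # LSTM expects (samples, timesteps, features)

prices = np.cumsum(np.random.randn(1000)) + 100.0   # synthetic price series
X, y = make_windows(prices)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

next_price = model.predict(X[-1:], verbose=0)[0, 0]   # one-step-ahead forecast
```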
Procedia PDF Downloads 141
1571 Examining Social Connectivity through Email Network Analysis: Study of Librarians' Emailing Groups in Pakistan
Authors: Muhammad Arif Khan, Haroon Idrees, Imran Aziz, Sidra Mushtaq
Abstract:
Social platforms such as online discussion and mailing groups are well aligned with academic as well as professional learning spaces. Professional communities are increasingly moving to online forums for sharing and capturing intellectual output. This study investigated the dynamics of social connectivity in the Yahoo mailing groups of Pakistani Library and Information Science (LIS) professionals using graph theory techniques. Design/Methodology: Social network analysis is a domain of growing concern to scientists in identifying whether people grow together through online social interaction or whether they merely reflect connectivity. We conducted a longitudinal study using network graph theory to analyze a large dataset of email communication. The data were collected from three Yahoo mailing groups using network analysis software over a period of six months, i.e., January to June 2016. Findings of the network analysis were reviewed through a focus group discussion with LIS experts and selected respondents of the study. Data were analyzed in Microsoft Excel, and network diagrams were visualized using NodeXL and the ORA-Net Scene package. Findings: The findings demonstrate that professionals and students exhibit intellectual growth the more they become tied within a network by interacting and participating in communication through online forums. The study reports on the dynamics of the large network by visualizing the email correspondence among group members in a network consisting of vertices (members) and edges (randomized correspondence). Pairwise relationships between group members were illustrated to show the characteristics, reasons, and strength of ties. The connectivity of nodes illustrated the frequency of communication among group members, and node coupling, network diffusion, and node clustering were examined in depth. Network analysis was found to be a useful technique for investigating the dynamics of a large network. Keywords: emailing networks, network graph theory, online social platforms, yahoo mailing groups
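The graph-theoretic core of such an analysis can be sketched with networkx as below: vertices are group members, edges are email exchanges, and degree and clustering summarize connectivity. The toy edge list is a placeholder for the mailing-group data.

```python
# Sketch of the graph analysis: members as vertices, email exchanges as edges,
# with simple connectivity and clustering measures. Edge list is a toy placeholder.
import networkx as nx

emails = [("alice", "lis-group"), ("bob", "lis-group"), ("alice", "bob"),
          ("carol", "alice"), ("bob", "carol"), ("dave", "lis-group")]

G = nx.Graph()
G.add_edges_from(emails)

print("members (vertices):", G.number_of_nodes())
print("correspondence ties (edges):", G.number_of_edges())
print("average clustering:", nx.average_clustering(G))
print("most connected members:", sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:3])
```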
Procedia PDF Downloads 239
1570 Distributed Key Management With Less Transmitted Messages In Rekeying Process To Secure Iot Wireless Sensor Networks In Smart-Agro
Authors: Safwan Mawlood Hussien
Abstract:
The Internet of Things (IoT) is a promising technology that has received considerable attention in different fields such as health, industry, defence, and agriculture. Due to their limited computing, storage, and communication capacity, IoT objects are more vulnerable to attacks. Many solutions have been proposed to address security issues, such as key management using symmetric-key ciphers. This study provides a scalable group key distribution and management scheme based on elliptic-curve cryptography, with fewer transmitted messages in the rekeying process. The method has been validated through simulations in OMNeT++. Keywords: elliptic curves, Diffie–Hellman, discrete logarithm problem, secure key exchange, WSN security, IoT security, smart-agro
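The sketch below shows the elliptic-curve Diffie-Hellman primitive that a group key-management scheme of this kind builds on, using the Python `cryptography` package; it covers only a pairwise exchange and key derivation, not the paper's group rekeying protocol.

```python
# ECDH primitive underlying EC-based key management, using the `cryptography`
# package. Only the pairwise exchange is shown; group rekeying logic is omitted.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

node_a_priv = ec.generate_private_key(ec.SECP256R1())
node_b_priv = ec.generate_private_key(ec.SECP256R1())

shared_a = node_a_priv.exchange(ec.ECDH(), node_b_priv.public_key())
shared_b = node_b_priv.exchange(ec.ECDH(), node_a_priv.public_key())
assert shared_a == shared_b          # both sensor nodes derive the same secret

group_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                 info=b"wsn-group-key").derive(shared_a)
```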
Procedia PDF Downloads 119
1569 Cluster Based Ant Colony Routing Algorithm for Mobile Ad-Hoc Networks
Authors: Alaa Eddien Abdallah, Bajes Yousef Alskarnah
Abstract:
Ant colony based routing algorithms are known to guarantee packet delivery, but they suffer from the huge overhead of control messages needed to discover the route. In this paper, we utilize the positions of network nodes to group the nodes into connected clusters. We use cluster heads only for forwarding the route discovery control messages. Our simulations proved that the new algorithm decreases the overhead dramatically without affecting the delivery rate. Keywords: ad-hoc network, MANET, ant colony routing, position based routing
Procedia PDF Downloads 425
1568 Enhancing Early Detection of Coronary Heart Disease Through Cloud-Based AI and Novel Simulation Techniques
Authors: Md. Abu Sufian, Robiqul Islam, Imam Hossain Shajid, Mahesh Hanumanthu, Jarasree Varadarajan, Md. Sipon Miah, Mingbo Niu
Abstract:
Coronary Heart Disease (CHD) remains a principal cause of global morbidity and mortality, characterized by atherosclerosis—the build-up of fatty deposits inside the arteries. The study introduces an innovative methodology that leverages cloud-based platforms like AWS Live Streaming and Artificial Intelligence (AI) to early detect and prevent CHD symptoms in web applications. By employing novel simulation processes and AI algorithms, this research aims to significantly mitigate the health and societal impacts of CHD. Methodology: This study introduces a novel simulation process alongside a multi-phased model development strategy. Initially, health-related data, including heart rate variability, blood pressure, lipid profiles, and ECG readings, were collected through user interactions with web-based applications as well as API Integration. The novel simulation process involved creating synthetic datasets that mimic early-stage CHD symptoms, allowing for the refinement and training of AI algorithms under controlled conditions without compromising patient privacy. AWS Live Streaming was utilized to capture real-time health data, which was then processed and analysed using advanced AI techniques. The novel aspect of our methodology lies in the simulation of CHD symptom progression, which provides a dynamic training environment for our AI models enhancing their predictive accuracy and robustness. Model Development: it developed a machine learning model trained on both real and simulated datasets. Incorporating a variety of algorithms including neural networks and ensemble learning model to identify early signs of CHD. The model's continuous learning mechanism allows it to evolve adapting to new data inputs and improving its predictive performance over time. Results and Findings: The deployment of our model yielded promising results. In the validation phase, it achieved an accuracy of 92% in predicting early CHD symptoms surpassing existing models. The precision and recall metrics stood at 89% and 91% respectively, indicating a high level of reliability in identifying at-risk individuals. These results underscore the effectiveness of combining live data streaming with AI in the early detection of CHD. Societal Implications: The implementation of cloud-based AI for CHD symptom detection represents a significant step forward in preventive healthcare. By facilitating early intervention, this approach has the potential to reduce the incidence of CHD-related complications, decrease healthcare costs, and improve patient outcomes. Moreover, the accessibility and scalability of cloud-based solutions democratize advanced health monitoring, making it available to a broader population. This study illustrates the transformative potential of integrating technology and healthcare, setting a new standard for the early detection and management of chronic diseases.Keywords: coronary heart disease, cloud-based ai, machine learning, novel simulation techniques, early detection, preventive healthcare
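To illustrate the classification-and-evaluation step, the sketch below trains an ensemble classifier on synthetic vital-sign style features and reports accuracy, precision, and recall, the same metrics quoted above. The data generation and model choice are assumptions, not the study's simulated CHD dataset or its deployed model.

```python
# Ensemble classification with the metrics used above; data is synthetic and the
# features only stand in for heart-rate variability, blood pressure, lipids, ECG.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

X, y = make_classification(n_samples=2000, n_features=12, n_informative=6,
                           weights=[0.8, 0.2], random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)

clf = GradientBoostingClassifier().fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred))
print("recall   :", recall_score(y_te, pred))
```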
Procedia PDF Downloads 64
1567 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling
Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé
Abstract:
Large-size forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing of these large blocks starts with ingot casting, followed by open-die forging and a quench-and-temper heat treatment to achieve the desired mechanical properties, and numerical simulation is now widely used to predict these properties before the experiment. However, the temperature gradient inside the specimen remains a challenge: the temperature inside the material before loading is not uniform, yet simulations typically impose a constant temperature on the assumption that it homogenizes after some holding time. To stay close to the experiment, the real temperature distribution through the specimen is therefore needed before mechanical loading. We present a robust algorithm that calculates the temperature gradient within the specimen, thus representing the real temperature distribution before deformation. Indeed, most numerical simulations assume a uniform temperature, which is not really the case because the surface and core temperatures of the specimen are not identical. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and the type of deformation, such as upsetting or cogging. Upsetting and cogging are the stages where the greatest deformations occur and where many microstructural phenomena, including recrystallization, can be observed and require in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to increase the mechanical properties of the final product, so identifying the conditions for the initiation of dynamic recrystallization remains relevant. The temperature distribution within the sample and the strain rate also influence recrystallization initiation, and developing a technique able to predict this initiation remains challenging. In this perspective, we propose, in addition to the algorithm that provides the temperature distribution before the loading stage, an analytical model that determines the initiation of this recrystallization. These two techniques are implemented in the Abaqus finite element software via the UAMP and VUHARD subroutines and compared with a simulation where an isothermal temperature is imposed. An Artificial Neural Network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. From the simulation, the temperature distribution inside the material and the recrystallization initiation are properly predicted and compared with models from the literature. Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation
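The idea behind the temperature initialisation can be illustrated with a one-dimensional explicit finite-difference sketch: the block's surface cools faster than its core, so the profile computed after a holding time gives a non-uniform starting temperature field instead of an isothermal one. The material properties, boundary temperature, and holding time below are assumed values, and this toy model is not the algorithm implemented in the UAMP subroutine.

```python
# 1D explicit finite-difference sketch of the through-thickness temperature profile
# of a forged block before loading. All properties and times are assumed values.
import numpy as np

alpha = 1.2e-5        # thermal diffusivity of medium-carbon steel, m^2/s (assumed)
L, n = 0.5, 101       # half-thickness 0.5 m (assumed), number of grid points
dx = L / (n - 1)
dt = 0.4 * dx * dx / alpha            # stable explicit time step (Fourier number 0.4)
T = np.full(n, 1200.0)                # initial forging temperature, deg C (assumed)
T_surface = 900.0                     # imposed surface temperature, deg C (assumed)

t_hold = 600.0                        # holding time before loading, s (assumed)
for _ in range(int(t_hold / dt)):
    T[0] = T_surface                  # surface node held at the boundary temperature
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                     # core node: symmetry (zero-flux) boundary

print(f"surface {T[0]:.0f} C, core {T[-1]:.0f} C after {t_hold:.0f} s hold")
```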
Procedia PDF Downloads 80
1566 Facial Recognition on the Basis of Facial Fragments
Authors: Tetyana Baydyk, Ernst Kussul, Sandra Bonilla Meza
Abstract:
There are many articles that attempt to establish the role of different facial fragments in face recognition. Various approaches are used to estimate this role. Frequently, authors calculate the entropy corresponding to the fragment. This approach can only give approximate estimation. In this paper, we propose to use a more direct measure of the importance of different fragments for face recognition. We propose to select a recognition method and a face database and experimentally investigate the recognition rate using different fragments of faces. We present two such experiments in the paper. We selected the PCNC neural classifier as a method for face recognition and parts of the LFW (Labeled Faces in the Wild) face database as training and testing sets. The recognition rate of the best experiment is comparable with the recognition rate obtained using the whole face.Keywords: face recognition, labeled faces in the wild (LFW) database, random local descriptor (RLD), random features
Procedia PDF Downloads 360
1565 Terrain Classification for Ground Robots Based on Acoustic Features
Authors: Bernd Kiefer, Abraham Gebru Tesfay, Dietrich Klakow
Abstract:
The motivation of our work is to detect different terrain types traversed by a robot based on acoustic data from the robot-terrain interaction. Different acoustic features and classifiers were investigated, such as Mel-frequency cepstral coefficient and Gamma-tone frequency cepstral coefficient for the feature extraction, and Gaussian mixture model and Feed forward neural network for the classification. We analyze the system’s performance by comparing our proposed techniques with some other features surveyed from distinct related works. We achieve precision and recall values between 87% and 100% per class, and an average accuracy at 95.2%. We also study the effect of varying audio chunk size in the application phase of the models and find only a mild impact on performance.Keywords: acoustic features, autonomous robots, feature extraction, terrain classification
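A compact sketch of the MFCC-plus-Gaussian-mixture route is shown below, with one GMM per terrain class scored on the MFCC frames of an audio chunk; the file names, chunk handling, and model sizes are placeholders rather than the authors' settings.

```python
# MFCC + per-class GMM sketch for terrain classification from robot-terrain audio.
# File names, sample rate, and GMM size are hypothetical placeholders.
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(path, sr=16000):
    y, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T     # frames x 13 coefficients

# one GMM per terrain class, trained on that class's MFCC frames (paths are hypothetical)
classes = ["gravel", "grass", "asphalt"]
models = {c: GaussianMixture(n_components=8).fit(mfcc_features(f"train_{c}.wav"))
          for c in classes}

test = mfcc_features("unknown_chunk.wav")
scores = {c: m.score(test) for c, m in models.items()}       # mean log-likelihood per frame
print("predicted terrain:", max(scores, key=scores.get))
```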
Procedia PDF Downloads 369
1564 An Evaluation of the Artificial Neural Network and Adaptive Neuro Fuzzy Inference System Predictive Models for the Remediation of Crude Oil-Contaminated Soil Using Vermicompost
Authors: Precious Ehiomogue, Ifechukwude Israel Ahuchaogu, Isiguzo Edwin Ahaneku
Abstract:
Vermicompost is the product of decomposition by various species of worms, creating a mixture of decomposing vegetable or food waste, bedding materials, and vermicast. This process is called vermicomposting, while the rearing of worms for this purpose is called vermiculture. Several works have verified the adsorption of toxic metals using vermicompost, but its application for the retention of organic compounds is still scarce. This research demonstrates the effectiveness of earthworm waste (vermicompost) for the remediation of crude oil contaminated soils. The remediation methods adopted in this study were two soil washing methods, the batch and column processes, which represent laboratory and in-situ remediation, respectively. Characterization of the vermicompost and crude oil contaminated soil was performed before and after the soil washing using Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), X-ray fluorescence (XRF), X-ray diffraction (XRD), and atomic absorption spectrometry (AAS). Optimization of the washing parameters, using response surface methodology (RSM) based on a Box-Behnken design, was performed on the responses from the laboratory experimental results. This study also investigated the application of machine learning models, namely the artificial neural network (ANN) and the adaptive neuro-fuzzy inference system (ANFIS), which were evaluated using the coefficient of determination (R²) and mean square error (MSE). Removal efficiency obtained from the Box-Behnken design experiment ranged from 29% to 98.9% for batch process remediation. Optimization of the experimental factors, carried out with numerical optimization techniques using the desirability function method of RSM, produced the highest removal efficiency of 98.9% at an adsorbent dosage of 34.53 grams, adsorbate concentration of 69.11 g/ml, contact time of 25.96 min, and pH of 7.71. Removal efficiency obtained from the multilevel general factorial design experiment ranged from 56% to 92% for column process remediation. The coefficient of determination (R²) for the ANN was 0.9974 and 0.9852 for the batch and column processes, respectively, showing agreement between experimental and predicted results. For the batch and column processes, respectively, the R² for RSM was 0.9712 and 0.9614, which also demonstrates agreement between experimental and predicted findings. For the batch and column processes, the ANFIS coefficient of determination was 0.7115 and 0.9978, respectively. It can be concluded that machine learning models can predict the removal of crude oil from polluted soil using vermicompost, and their use for this purpose is therefore recommended. Keywords: ANFIS, ANN, crude-oil, contaminated soil, remediation and vermicompost
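The ANN evaluation described above can be sketched as follows: a small multilayer perceptron maps the four washing factors to removal efficiency and is scored with R² and MSE. The synthetic data stands in for the Box-Behnken experimental results and is an assumption for illustration.

```python
# MLP regression sketch: washing factors (adsorbent dosage, adsorbate concentration,
# contact time, pH) -> removal efficiency, scored with R² and MSE. Data is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
X = rng.uniform([10, 20, 5, 4], [50, 90, 40, 9], size=(120, 4))   # dosage, conc., time, pH
y = 30 + 0.9 * X[:, 0] + 0.3 * X[:, 2] + 2.5 * X[:, 3] + rng.normal(0, 3, 120)
y = np.clip(y, 0, 100)                                            # removal efficiency, %

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=1).fit(X_tr, y_tr)
pred = ann.predict(X_te)
print("R² :", r2_score(y_te, pred))
print("MSE:", mean_squared_error(y_te, pred))
```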
Procedia PDF Downloads 111
1563 IT System in the Food Supply Chain Safety, Application in SMEs Sector
Authors: Mohsen Shirani, Micaela Demichela
Abstract:
The food supply chain is one of the most complex supply chain networks due to its perishable nature and customer-oriented products, and food safety is the major concern for this industry. An IT system could help minimize the production and consumption of unsafe food by controlling and monitoring the entire system. However, there have been many issues in the adoption of IT systems in this industry, specifically within the SME sector. In this regard, this study presents a novel approach to using IT and traceability systems in the food supply chain, based on the application of RFID and a central database. Keywords: food supply chain, IT system, safety, SME
Procedia PDF Downloads 477
1562 The Role of Leisure in Older Adults Transitioning to New Homes
Authors: Kristin Prentice, Carri Hand
Abstract:
As the Canadian population ages and chronic health conditions continue to escalate, older adults will require various types of housing, such as long term care or retirement homes. Moving to a new home may require a change in leisure activities and social networks, which could be challenging to maintain identity and create a sense of home. Leisure has been known to help older adults maintain or increase their quality of life and life satisfaction and may help older adults in moving to new homes. Sense of home and identity within older adults' transitions to new homes are concepts that may also relate to leisure engagement. Literature is scant regarding the role of leisure in older adults moving to new homes and how the sense of home and identity inter-relate. This study aims to explore how leisure may play a role in older adults' transitioning to new homes, including how sense of home and identity inter-relate. An ethnographic approach will be used to understand the culture of older adults transitioning to new homes. This study will involve older adults who have recently relocated to a mid-sized city in Ontario, Canada. The study will focus on the older adult’s interactions with and connections to their home environment through leisure. Data collection will take place via video-conferencing and will include a narrative interview and two other interviews to discuss an activity diary of leisure engagement pre and post move and mental maps to capture spaces where participants engaged in leisure. Participants will be encouraged to share photographs of leisure engagement taken inside and outside their home to help understand the social spaces the participants refer to in their activity diaries and mental maps. Older adults attempt to adjust to their new homes by maintaining their identity, developing a sense of home through creating attachment to place, and maintaining social networks, all of which have been linked to engaging in leisure. This research will provide insight into the role of leisure in this transition process and the extent that the home and community can contribute to aiding their transition to the new home. This research will contribute to existing literature on the inter-relationships of leisure, sense of home, and identity and how they relate to older adults moving to new homes. This research also has potential for influencing policy and practice for meeting the housing needs of older adults.Keywords: leisure, older adults, transition, identity
Procedia PDF Downloads 120
1561 Fault Location Detection in Active Distribution System
Authors: R. Rezaeipour, A. R. Mehrabi
Abstract:
The recent increase of DGs and microgrids in distribution systems disturbs the traditional structure of the system, and coordination between protection devices in such a system becomes a concern of the network operators. This paper presents a new method for fault location detection in active distribution networks, independent of the fault type or its resistance. The method uses synchronized voltage and current measurements at the interconnection of DG units and is able to adapt to changes in the topology of the system. The method has been tested on a 38-bus distribution system, with very encouraging results. Keywords: fault location detection, active distribution system, micro grids, network operators
Procedia PDF Downloads 789
1560 Smart Model with the DEMATEL and ANFIS Multistage to Assess the Value of the Brand
Authors: Hamed Saremi
Abstract:
One of the challenges for manufacturing and service companies in providing a product or service is building a brand that is recognized by consumers in target markets. Companies often run their processes at similar capacities, yet the constant threat of internal and external pressures can prevent a brand from rising and can even push companies toward bankruptcy. This paper attempts to identify and analyze effective indicators of brand equity and presents an intelligent model to prevent possible damage. In this study, indicators of brand equity were identified based on a literature review and expert opinions; the set of indicators was then analyzed using the DEMATEL technique, and a multi-stage Adaptive Neuro-Fuzzy Inference System (ANFIS) was used to design an intelligent system for the assessment of brand equity. Keywords: anfis, dematel, brand, cosmetic product, brand value
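The DEMATEL step mentioned above can be sketched numerically: the expert direct-influence matrix D is normalised and the total-relation matrix T = N(I - N)^-1 is computed, from which each indicator's prominence (r + c) and relation (r - c) follow. The 4x4 matrix below is a hypothetical example, not the study's expert data.

```python
# DEMATEL sketch: normalise the direct-influence matrix and compute the
# total-relation matrix, then prominence and relation per indicator.
# The influence scores are a hypothetical example.
import numpy as np

D = np.array([[0, 3, 2, 1],
              [2, 0, 3, 2],
              [1, 2, 0, 3],
              [2, 1, 2, 0]], dtype=float)       # pairwise influence scores (0-4 scale)

s = max(D.sum(axis=1).max(), D.sum(axis=0).max())
N = D / s                                        # normalised direct-influence matrix
T = N @ np.linalg.inv(np.eye(4) - N)             # total-relation matrix

r, c = T.sum(axis=1), T.sum(axis=0)
print("prominence (r + c):", np.round(r + c, 3))
print("relation   (r - c):", np.round(r - c, 3))
```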
Procedia PDF Downloads 409