Search results for: lean tools and techniques
7830 Optimal Operation of a Photovoltaic Induction Motor Drive Water Pumping System
Authors: Nelson K. Lujara
Abstract:
The performance characteristics of a photovoltaic induction motor drive water pumping system, with and without a maximum power tracker, are analyzed and presented. The analysis is done through determination and assessment of critical loss components in the system using computer-aided design (CAD) tools for optimal operation of the system. The results can be used to formulate a well-calibrated computer-aided design package for photovoltaic water pumping systems based on the induction motor drive. The results allow the design engineer to pre-determine the flow rate and efficiency of the system to suit a particular application.
Keywords: photovoltaic, water pumping, losses, induction motor
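To make the loss-chain idea concrete, the sketch below (not the authors' CAD package; the array power, component efficiencies, and head are hypothetical placeholders) estimates the deliverable flow rate from a chain of component efficiencies using the standard hydraulic power relation Q = P/(ρgH).

```python
# A minimal sketch, assuming a simple multiplicative loss chain. All
# efficiency values and the 20 m head below are illustrative only.
RHO, G = 1000.0, 9.81          # water density (kg/m^3), gravity (m/s^2)

def flow_rate(pv_power_w, eta_mppt, eta_inverter, eta_motor, eta_pump, head_m):
    """Flow rate (m^3/s) remaining after each loss component in the chain."""
    hydraulic_power = pv_power_w * eta_mppt * eta_inverter * eta_motor * eta_pump
    return hydraulic_power / (RHO * G * head_m)

# Example: 1 kW array, assumed efficiencies, 20 m total head
q = flow_rate(1000, eta_mppt=0.95, eta_inverter=0.90, eta_motor=0.85,
              eta_pump=0.55, head_m=20)
print(f"{q * 1000:.2f} L/s")   # ~2.04 L/s under these assumptions
```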
Procedia PDF Downloads 304
7829 Thermal Runaway Vehicle Level Investigation and Protection
Authors: Gizem Batman, Mehmet Bora Küçükalpelli, Cenk Dinç
Abstract:
Nowadays, electric trucks are anticipated to become much more prevalent in the foreseeable future. However, the need to investigate the occurrence of the thermal runaway phenomenon in their batteries has arisen, and the safety concerns are supported by past incidents. This article addresses the phenomenon of battery thermal runaway and examines its implications at the vehicle level. Different battery thermal runaway scenarios are evaluated, giving priority to the components that affect customer safety, using CAE tools, regulations, and related tests. This evaluation aims to support the efforts of the trucking industry to attain safer, greener, more sustainable, and more effective energy storage solutions.
Keywords: thermal runaway, EV truck, heat protection, battery
Procedia PDF Downloads 22
7828 A Hybrid Multi-Criteria Hotel Recommender System Using Explicit and Implicit Feedbacks
Authors: Ashkan Ebadi, Adam Krzyzak
Abstract:
Recommender systems, also known as recommender engines, have become an important research area and are now being applied in various fields. In addition, the techniques behind recommender systems have improved over time. In general, such systems help users to find their required products or services (e.g. books, music) by analyzing and aggregating other users’ activities and behavior, mainly in the form of reviews, and making the best recommendations. The recommendations can facilitate the user’s decision-making process. Despite the wide literature on the topic, using multiple data sources of different types as the input has not been widely studied. Recommender systems can benefit from the high availability of digital data to collect input data of different types which implicitly or explicitly help the system to improve its accuracy. Moreover, most of the existing research in this area is based on single rating measures, in which a single rating is used to link users to items. This paper proposes a highly accurate hotel recommender system, implemented in various layers. Using a multi-aspect rating system and benefiting from large-scale data of different types, the recommender system suggests hotels that are personalized and tailored for the given user. The system employs natural language processing and topic modelling techniques to assess the sentiment of the users’ reviews and extract implicit features. The entire recommender engine contains multiple sub-systems, namely users clustering, a matrix factorization module, and a hybrid recommender system. Each sub-system contributes to the final composite set of recommendations by covering a specific aspect of the problem. The accuracy of the proposed recommender system has been tested intensively, and the results confirm the high performance of the system.
Keywords: tourism, hotel recommender system, hybrid, implicit features
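As background for the matrix factorization module the abstract mentions, here is a minimal sketch of plain SGD factorization over (user, hotel, rating) triples. The paper's actual module very likely differs (multi-aspect ratings, implicit features); learning rate, rank, and the toy data are all assumptions.

```python
import numpy as np

def factorize(ratings, n_users, n_items, k=8, lr=0.01, reg=0.05, epochs=50):
    """Learn user/item latent factors by SGD on observed ratings."""
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
    Q = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]                  # prediction error
            P[u] += lr * (err * Q[i] - reg * P[u]) # regularized updates
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 2.0)]  # toy data
P, Q = factorize(ratings, n_users=3, n_items=2)
print(P[1] @ Q[1])  # predicted rating of user 1 for hotel 1
```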
Procedia PDF Downloads 277
7827 Comparative Study of Pixel and Object-Based Image Classification Techniques for Extraction of Land Use/Land Cover Information
Authors: Mahesh Kumar Jat, Manisha Choudhary
Abstract:
Rapid population and economic growth have resulted in large-scale land use/land cover (LULC) changes. Changes in the biophysical properties of the Earth's surface and their impact on climate are of primary concern nowadays. Different approaches, ranging from location-based relationships to modelling Earth surface–atmosphere interaction through techniques like surface energy balance (SEB), have been used in the recent past to examine the relationship between changes in Earth surface land cover and climatic characteristics like temperature and precipitation. A remote sensing-based model, i.e., the Surface Energy Balance Algorithm for Land (SEBAL), has been used to estimate the surface heat fluxes over the Mahi Bajaj Sagar catchment (India) from 2001 to 2020. Landsat ETM and OLI satellite data are used to model the SEB of the area. Changes in observed precipitation and temperature, obtained from the India Meteorological Department (IMD), have been correlated with changes in surface heat fluxes to understand the relative contributions of LULC change in changing these climatic variables. Results indicate a noticeable impact of LULC changes on climatic variables, which is aligned with respective changes in SEB components. Results suggest that precipitation increases at a rate of 20 mm/year. The maximum temperature decreases at 0.007 °C/year, while the minimum temperature increases at 0.02 °C/year. The average temperature increases at 0.009 °C/year. Changes in latent heat flux and sensible heat flux positively correlate with precipitation and temperature, respectively. Variation in surface heat fluxes influences the climate parameters and is an adequate indicator of climate change. Thus, SEB modelling is helpful for understanding LULC change and its impact on climate.
Keywords: remote sensing, GIS, object based, classification
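The core of SEBAL is the per-pixel energy balance closure, in which latent heat flux is obtained as the residual of net radiation after soil and sensible heat fluxes. A minimal sketch follows; the flux values and the soil-heat-flux fraction are hypothetical stand-ins, not values from the study.

```python
import numpy as np

# Energy balance closure: LE = Rn - G - H (standard SEBAL residual).
Rn = np.array([[520.0, 480.0], [500.0, 450.0]])  # net radiation (W/m^2)
G  = 0.15 * Rn                                   # soil heat flux, assumed fraction
H  = np.array([[180.0, 210.0], [160.0, 240.0]])  # sensible heat flux (W/m^2)

LE = Rn - G - H                                  # latent heat flux residual
evap_fraction = LE / (Rn - G)                    # partitioning indicator
print(LE, evap_fraction, sep="\n")
```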
Procedia PDF Downloads 137
7826 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data
Authors: K. Sathishkumar, V. Thiagarasu
Abstract:
Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians in understanding pathophysiological mechanisms, in diagnoses and prognoses, and in choosing treatment plans. DNA microarray technology has now made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenges of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which is essential in the data mining process to reveal natural structures and identify interesting patterns in the underlying data. This work presents an analysis of several clustering algorithms proposed to deal with gene expression data effectively. Existing algorithms such as Support Vector Machines (SVM), the K-means algorithm, and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations. The performance evaluation of the existing algorithms is carried out to determine the best approach. In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior, and processing time, a hybrid clustering-based optimization approach has been proposed.
Keywords: microarray technology, gene expression data, clustering, gene selection
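As a minimal sketch of the clustering step such analyses start from, the snippet below runs k-means over a genes-by-samples expression matrix. Real microarray data would be normalized and filtered first; the matrix here is synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic expression matrix: 150 genes x 10 samples, three latent groups.
rng = np.random.default_rng(1)
expression = np.vstack([rng.normal(loc, 0.5, size=(50, 10))
                        for loc in (-2.0, 0.0, 2.0)])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(expression)
labels = km.labels_                     # cluster id per gene
print(np.bincount(labels))              # number of genes per cluster
```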
Procedia PDF Downloads 327
7825 Waters Colloidal Phase Extraction and Preconcentration: Method Comparison
Authors: Emmanuelle Maria, Pierre Crançon, Gaëtane Lespes
Abstract:
Colloids are ubiquitous in the environment and are known to play a major role in enhancing the transport of trace elements, thus being an important vector for contaminant dispersion. Colloid study and characterization are necessary to improve our understanding of the fate of pollutants in the environment. However, in stream water and groundwater, colloids are often very poorly concentrated. It is therefore necessary to pre-concentrate colloids in order to get enough material for analysis, while preserving their initial structure. Many techniques are used to extract and/or pre-concentrate the colloidal phase from the bulk aqueous phase, yet there is neither a reference method nor an estimate of the impact of these different techniques on the colloid structure, or of the bias introduced by the separation method. In the present work, we have tested and compared several methods of colloidal phase extraction/pre-concentration, and their impact on colloid properties, particularly their size distribution and their elementary composition. Ultrafiltration methods (frontal, tangential and centrifugal) have been considered since they are widely used for the extraction of colloids in natural waters. To compare these methods, a ‘synthetic groundwater’ was used as a reference. The size distribution (obtained by Field-Flow Fractionation (FFF)) and the chemical composition of the colloidal phase (obtained by Inductively Coupled Plasma Mass Spectrometry (ICPMS) and Total Organic Carbon analysis (TOC)) were chosen as comparison factors. In this way, it is possible to estimate the impact of pre-concentration on colloidal phase preservation. It appears that some of these methods preserve the colloidal phase composition more efficiently, while others are easier/faster to use. The choice of the extraction/pre-concentration method is therefore a compromise between efficiency (including speed and ease of use) and impact on the structural and chemical composition of the colloidal phase. In perspective, the use of these methods should enhance the consideration of the colloidal phase in the transport of pollutants in environmental assessment studies and forensics.
Keywords: chemical composition, colloids, extraction, preconcentration methods, size distribution
Procedia PDF Downloads 220
7824 Power Quality Modeling Using Recognition Learning Methods for Waveform Disturbances
Authors: Sang-Keun Moon, Hong-Rok Lim, Jin-O Kim
Abstract:
This paper presents Power Quality (PQ) modeling and filtering processes for distribution system disturbances using recognition learning methods. Typical PQ waveforms with mathematical applications and gathered field data are applied to the proposed models. The objective of this paper is analyzing PQ data with respect to monitoring, discriminating, and evaluating the waveform of power disturbances, to support preventive protection against system failures and estimation of complex system problems. The examined signal filtering techniques are used for field waveform de-noising and feature extraction. Using extraction and learning classification techniques, the efficiency was verified for the recognition of PQ disturbances, with a focus on interactive modeling methods. The waveforms of 8 selected disturbances are modeled with randomized parameters within the IEEE 1159 PQ ranges. The range, parameters, and weights are updated based on the field waveforms obtained. Currents undergo the same process as voltages to obtain waveform features, apart from some ratings and filters. Changing loads cause distortion in the voltage waveform due to the different patterns of current variation they draw. In conclusion, PQ disturbances in voltage and current waveforms exhibit different patterns of variation, and a modified technique based on symmetrical components in the time domain is proposed in this paper for PQ disturbance detection and subsequent classification. Our method is based on the fact that waveforms obtained from the suggested trigger conditions contain potential information for abnormality detection. The extracted features are sequentially applied to estimation and recognition learning modules for further studies.
Keywords: power quality recognition, PQ modeling, waveform feature extraction, disturbance trigger condition, PQ signal filtering
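For readers unfamiliar with the symmetrical-component idea the proposed technique builds on, here is the classical phasor (Fortescue) decomposition; note the paper proposes a modified time-domain variant, so this is only the standard background, and the input phasors are hypothetical.

```python
import numpy as np

a = np.exp(2j * np.pi / 3)                       # 120-degree rotation operator
A = np.array([[1, 1, 1],
              [1, a, a**2],
              [1, a**2, a]]) / 3                 # Fortescue analysis matrix

Vabc = np.array([1.0 + 0j,                            # phase A
                 0.8 * np.exp(-1j * 2 * np.pi / 3),   # sagged phase B
                 1.0 * np.exp( 1j * 2 * np.pi / 3)])  # phase C
V012 = A @ Vabc                                  # zero, positive, negative seq.
print(np.abs(V012))  # nonzero negative sequence flags the unbalance
```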
Procedia PDF Downloads 190
7823 A Survey of Novel Opportunistic Routing Protocols in Mobile Ad Hoc Networks
Authors: R. Poonkuzhali, M. Y. Sanavullah, M. R. Gurupriya
Abstract:
Opportunistic routing is used where the network has features like dynamic topology changes and intermittent network connectivity. In Delay Tolerant Networks (DTN) or Disruption Tolerant Networks, opportunistic forwarding techniques are widely used. The key idea of opportunistic routing is selecting forwarding nodes to forward data and coordinating among these nodes to avoid duplicate transmissions. This paper analyzes the pros and cons of various opportunistic routing techniques used in MANETs.
Keywords: ETX, opportunistic routing, PSR, throughput
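The ETX metric listed in the keywords is the expected transmission count of a link (De Couto et al.), a common building block for the routing metrics surveyed; a quick worked form:

```latex
\[
  \mathrm{ETX} = \frac{1}{d_f \times d_r}
\]
% $d_f$: forward delivery ratio, $d_r$: reverse (ACK) delivery ratio.
% E.g. $d_f = 0.9$, $d_r = 0.8$ gives
% $\mathrm{ETX} = 1/(0.9 \times 0.8) \approx 1.39$ expected transmissions.
```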
Procedia PDF Downloads 497
7822 Experimental Evaluation of UDP in Wireless LAN
Authors: Omar Imhemed Alramli
Abstract:
Like the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP) is a transfer protocol in the transport layer of the Open Systems Interconnection (OSI) model and of the TCP/IP model of networks. Unlike TCP, UDP behaviour could not be evaluated using the PCATTCP tool on the Windows operating system platform. A study was therefore carried out to find a tool which supports the evaluation of UDP behaviour. After collecting information about different tools, the iperf tool was chosen and run under Cygwin, installed both on a Windows XP platform and on Windows XP in a VirtualBox machine on a single computer. Iperf is used to carry out an experimental evaluation of UDP and to observe what happens while packets are sent between the host and guest in wired and wireless networks. Many test scenarios have been run, and the major UDP metrics, such as jitter, packet loss, and throughput, are evaluated.
Keywords: TCP, UDP, IPERF, wireless LAN
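To show what iperf measures under the hood, here is a minimal sketch of a UDP receiver that estimates packet loss and smoothed inter-arrival jitter (RFC 3550 style). The port number, packet layout, and expected count are arbitrary choices, and one-way transit times assume sender and receiver share a clock (true when both run on one machine, as in the host/guest setup above).

```python
import socket
import struct
import time

def receive(port=5001, expected=1000, payload=1200):
    """Count datagrams and estimate jitter; assumes each packet carries a
    4-byte sequence number plus an 8-byte send timestamp ("!Id")."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    sock.settimeout(5.0)                      # stop when the sender goes quiet
    seen, jitter, prev_transit = set(), 0.0, None
    try:
        while True:
            data, _ = sock.recvfrom(payload + 16)
            seq, sent = struct.unpack("!Id", data[:12])
            transit = time.time() - sent
            if prev_transit is not None:      # RFC 3550 smoothed jitter
                jitter += (abs(transit - prev_transit) - jitter) / 16.0
            prev_transit = transit
            seen.add(seq)
    except socket.timeout:
        pass
    loss = 1.0 - len(seen) / expected
    print(f"loss={loss:.1%} jitter={jitter * 1000:.2f} ms")
```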
Procedia PDF Downloads 358
7821 Optimizing Solids Control and Cuttings Dewatering for Water-Powered Percussive Drilling in Mineral Exploration
Authors: S. J. Addinell, A. F. Grabsch, P. D. Fawell, B. Evans
Abstract:
The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled-tubing-based greenfields mineral exploration drilling system utilising down-hole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. The system has shown superior rates of penetration in water-rich, hard rock formations at depths exceeding 500 metres. With fluid flow rates of up to 120 litres per minute at 200 bar operating pressure to energise the bottom-hole tooling, excessive quantities of high quality drilling fluid (water) would be required for a prolonged drilling campaign. As a result, drilling fluid recovery and recycling has been identified as a necessary option to minimise costs and logistical effort. While the majority of the cuttings report as coarse particles, a significant fines fraction will typically also be present. To maximise tool life longevity, the percussive bottom-hole assembly requires high quality fluid with minimal solids loading, and any recycled fluid needs to have a solids cut point below 40 microns and a concentration less than 400 ppm before it can be used to re-energise the system. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process shows a strong power law relationship for particle size distributions. This data is critical in optimising solids control strategies and cuttings dewatering techniques. Optimisation of deployable solids control equipment is discussed, along with how the required centrate clarity was achieved in the presence of pyrite-rich metasediment cuttings. Key results were the successful pre-aggregation of fines through the selection and use of high molecular weight anionic polyacrylamide flocculants, and the techniques developed for optimal dosing prior to scroll decanter centrifugation, thus keeping sub-40-micron solids loading within prescribed limits. Experiments on maximising fines capture in the presence of thixotropic drilling fluid additives (e.g. xanthan gum and other biopolymers) are also discussed. As no core is produced during the drilling process, it is intended that the particle-laden returned drilling fluid be used for top-of-hole geochemical and mineralogical assessment. A discussion is therefore presented on the biasing and latency of cuttings representativity introduced by dewatering techniques, as well as the resulting detrimental effects on depth fidelity and accuracy. Data pertaining to the biasing of geochemical signatures by particle size distributions is presented and shows that, depending on the solids control and dewatering techniques used, it can have an unwanted influence on top-of-hole analysis. Strategies are proposed to overcome these effects, improving sample quality. Successful solids control and cuttings dewatering for water-powered percussive drilling is presented, contributing towards the successful advancement of coiled-tubing-based greenfields mineral exploration.
Keywords: cuttings, dewatering, flocculation, percussive drilling, solids control
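Checking a power-law particle size relationship like the one reported above typically amounts to a straight-line fit in log-log space (as in Gates-Gaudin-Schuhmann style distributions). A minimal sketch follows; the sizes and cumulative passing fractions are made-up stand-ins, not the study's data.

```python
import numpy as np

# Hypothetical cumulative PSD: size in microns, fraction passing each size.
size = np.array([10, 20, 40, 80, 160, 320, 640], dtype=float)
passing = np.array([0.02, 0.05, 0.11, 0.24, 0.47, 0.71, 0.96])

# Power law P = (x/k)^n  =>  log P = n*log x - n*log k (linear in log-log).
slope, intercept = np.polyfit(np.log(size), np.log(passing), 1)
print(f"distribution modulus n ~ {slope:.2f}")
```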
Procedia PDF Downloads 252
7820 Economic Characteristics of Bitcoin: "An Analytical Study"
Authors: Abdelhalem Shahen
Abstract:
The world is now experiencing a digital revolution and greatly accelerated technological development, in addition to the transition from the economy in its traditional form to the digital economy, which has resulted in the emergence of new tools appropriate to those developments. Accordingly, this paper attempts to explore the economic characteristics of the Bitcoin currency that has circulated recently, owing to the many advantages that distinguish it from money in its traditional forms and that have a range of economic effects. The study found that Bitcoin is among the technological innovations with a set of characteristics that are worth studying and that make it the focus of attention, such as being a digital currency, the peer-to-peer property, lower and faster transaction costs, transparency, decentralized control, privacy, and double-spending prevention, as well as security and cryptography, and finally mining.
Keywords: digital economics, digital currencies, Bitcoin, features of Bitcoin
Procedia PDF Downloads 143
7819 A Cognitive Schema of Architectural Designing Activity
Authors: Abdelmalek Arrouf
Abstract:
This article sets up a cognitive schema of the architectural designing activity. It begins by outlining, theoretically, an a priori model of its general cognitive mechanisms. The resulting theoretical framework represents the designing activity as a complex system composed of three interrelated subsystems of cognitive actions: a subsystem of meaning production, one of morphology production, and finally a subsystem of navigation between the former two. A protocol analysis that uses statistical and informational tools is then used to measure the validity of the built schema. The model thus achieved shows that the designer begins by conceiving abstract meanings, which he then translates into shapes. That is why we call it a semio-morphic model of the designing activity.
Keywords: designing actions, model of the design process, morphosis, protocol analysis, semiosis
Procedia PDF Downloads 176
7818 Compost Bioremediation of Oil Refinery Sludge by Using Different Manures in a Laboratory Condition
Authors: O. Ubani, H. I. Atagana, M. S. Thantsha
Abstract:
This study was conducted to measure the reduction in polycyclic aromatic hydrocarbon (PAH) content in oil sludge by co-composting the sludge with pig, cow, horse and poultry manures under laboratory conditions. Four kilograms of soil spiked with 800 g of oil sludge was co-composted separately with each manure in a ratio of 2:1 (w/w) spiked soil:manure, and with wood-chips in a ratio of 2:1 (w/v) spiked soil:wood-chips. A control was set up as above but without manure. Mixtures were incubated for 10 months at room temperature. Compost piles were turned weekly, and the moisture level was maintained between 50% and 70%. Moisture level, pH, temperature, CO2 evolution and oxygen consumption were measured monthly, and the ash content at the end of experimentation. Bacteria capable of utilizing PAHs were isolated, purified and characterized by molecular techniques using polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE) and amplification of the 16S rDNA gene using specific primers (16S-P1 PCR and 16S-P2 PCR), and the amplicons were sequenced. The extent of reduction of PAHs was measured using an automated Soxhlet extractor with dichloromethane as the extraction solvent, coupled with gas chromatography/mass spectrometry (GC/MS). Temperature did not exceed 27.5 °C in any compost heap, pH ranged from 5.5 to 7.8, and CO2 evolution was highest in poultry manure at 18.78 µg/dwt/day. Microbial growth and activities were enhanced. The bacteria identified were Bacillus, Arthrobacter and Staphylococcus species. Results from PAH measurements showed reductions between 77 and 99%. The results from the control experiment may be due to invasion by fungi. Co-composting of spiked soils with animal manures enhanced the reduction in PAHs. Interestingly, all bacteria isolated and identified in this study were present in all treatments, including the control.
Keywords: bioremediation, co-composting, oil refinery sludge, PAHs, bacteria spp, animal manures, molecular techniques
Procedia PDF Downloads 480
7817 Leadership and Whether It Stems from Innate Abilities or from Situation
Authors: Salwa Abdelbaki
Abstract:
This research investigated how leaders develop, asking whether they become leaders due to their innate abilities or gain leadership characteristics through interactions based on the requirements of a situation. If the first is true, then a leader should be successful in any situation. Otherwise, a leader may succeed only in a specific situation. A series of experiments was carried out on three groups including males and females. First, a group of 148 students with different specializations had to select a leader. Another group of 51 students had to recall their previous experiences and their knowledge of each other to identify who were leaders in different situations. Then a series of analytic tools was applied to the identified leaders and to the whole groups to find out how leaders developed. A group of 40 young children was also included in the experiments to find young leaders among them and to analyze their characteristics.
Keywords: leadership, innate characteristics, situation, leadership theories
Procedia PDF Downloads 290
7816 Experimental Evaluation of Electrocoagulation for Hardness Removal of Bore Well Water
Authors: Pooja Kumbhare
Abstract:
Water is an important resource for the survival of life. The inadequate availability of surface water makes people depend on ground water to fulfil their needs. However, ground water is generally too hard to satisfy the requirements for domestic as well as industrial applications. Removal of hardness involves various techniques such as the lime soda process, ion exchange, reverse osmosis, nano-filtration, distillation, and evaporation. These techniques have individual problems such as high annual operating cost, sediment formation on membranes, sludge disposal, etc. Electrocoagulation (EC) is being explored as a modern and cost-effective technology to cope with the growing demand for high water quality at the consumer end. In general, earlier studies on electrocoagulation for hardness removal deployed batch processes. As batch processes are inappropriate for dealing with large volumes of water to be treated, it is essential to develop a continuous flow EC process. So, in the present study, an attempt is made to investigate a continuous flow EC process for decreasing the excessive hardness of bore-well water. The experimental study was conducted using 12 aluminum electrodes (25 cm × 10 cm, 1 cm thick) provided in an EC reactor with a volume of 8 L. A bore-well water sample, collected from a local bore-well (at Vishrambag, Sangli, Maharashtra) having an average initial hardness of 680 mg/l (range: 650–700 mg/l), was used for the study. Continuous flow electrocoagulation experiments were carried out by varying operating parameters, specifically reaction time (range: 10–60 min), voltage (range: 5–20 V), and current (range: 1–5 A). Based on the experimental study, it is found that hardness removal to the desired extent could be achieved even in a continuous flow EC reactor, so its use looks promising.
Keywords: hardness, continuous flow EC process, aluminum electrode, optimal operating parameters
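The coagulant dose delivered by aluminum electrodes follows the standard Faraday estimate; the abstract does not give this calculation, so the current and run time below are illustrative only.

```python
# Faraday estimate of aluminium dissolved from EC anodes: m = I*t*M / (z*F).
F = 96485.0        # Faraday constant (C/mol)
M_AL = 26.98       # molar mass of Al (g/mol)
Z = 3              # electrons transferred per Al3+ ion

def al_dose_g(current_a, time_s, efficiency=1.0):
    """Theoretical mass of aluminium dissolved from the anodes (grams)."""
    return efficiency * current_a * time_s * M_AL / (Z * F)

print(f"{al_dose_g(current_a=3.0, time_s=30 * 60):.2f} g")  # 3 A for 30 min
```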
Procedia PDF Downloads 182
7815 JREM: An Approach for Formalising Models in the Requirements Phase with JSON and NoSQL Databases
Authors: Aitana Alonso-Nogueira, Helia Estévez-Fernández, Isaías García
Abstract:
This paper presents an approach to reducing some current flaws in the requirements phase of the software development process. It takes the software requirements of an application, builds a conceptual model of them, and formalizes it within JSON documents. This formal model is lodged in a document-oriented NoSQL database, namely MongoDB, because of its advantages in flexibility and efficiency. In addition, this paper underlines the contributions of the detailed approach and shows some applications and benefits for future work in the field of automatic code generation using model-driven engineering tools.
Keywords: conceptual modelling, JSON, NoSQL databases, requirements engineering, software development
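A minimal sketch of the formalization step described: a requirement captured as a JSON document and lodged in MongoDB. The field names are illustrative, not the authors' JREM schema, and the snippet assumes a MongoDB instance on localhost.

```python
from pymongo import MongoClient

# One software requirement expressed as a JSON-style document.
requirement = {
    "id": "REQ-001",
    "type": "functional",
    "actor": "registered user",
    "action": "reset password via email link",
    "priority": "high",
    "depends_on": [],
}

client = MongoClient("mongodb://localhost:27017")
collection = client["jrem"]["requirements"]
collection.insert_one(requirement)                      # lodge the model
print(collection.count_documents({"type": "functional"}))
```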
Procedia PDF Downloads 381
7814 ESP: Peculiarities of Teaching Psychology in English to Russian Students
Authors: Ekaterina A. Redkina
Abstract:
The necessity and importance of teaching professionally oriented content in English need no proof nowadays. Consequently, the ability to share personal ESP teaching experience seems of great importance. This paper is based on eight years of ESP and EFL teaching experience at the Moscow State Linguistic University, Moscow, Russia, and presents a theoretical analysis of the specifics, possible problems, and perspectives of teaching psychology in English to Russian psychology students. The paper concerns different issues that are common to different ESP classrooms and familiar to different teachers. Among them are: designing an ESP curriculum (for psychologists in this case), finding the balance between content and language in the classroom, main teaching principles (the 4 C’s), and the choice of assessment techniques and teaching material. The main objective of teaching psychology in English to Russian psychology students is developing the knowledge and skills essential for professional psychologists. Belonging to the international professional community presupposes high-level content-specific knowledge and skills, a high level of linguistic skills and cross-cultural linguistic ability, and finally a high level of professional etiquette. Thus, teaching psychology in English pursues three main outcomes: content, language, and professional skills. The paper provides an explanation of each of the outcomes. Examples are also given. Particular attention is paid to the lesson structure, its objectives, and the difference between a typical EFL and ESP lesson. An attempt is also made to find commonalities between teaching ESP and CLIL. One approach states that CLIL is more common in schools, while ESP is more common in higher education. The paper argues that CLIL methodology can be successfully used in ESP teaching and that many CLIL activities are also well adapted for professional purposes. The paper provides insights into the process of teaching psychologists in Russia, real teaching experience, and teaching techniques that have proved efficient over time.
Keywords: ESP, CLIL, content, language, psychology in English, Russian students
Procedia PDF Downloads 617
7813 A Combined AHP-GP Model for Selecting Knowledge Management Tool
Authors: Ahmad Sarfaraz, Raiyad Herwies
Abstract:
In this paper, a multi-criteria decision-making analysis is used to help an organization select the KM tool that best fits and serves its needs. The AHP model is used, based on a previous study, to highlight and identify the main criteria and sub-criteria that are incorporated in the selection process. Different KM tool alternatives with different criteria are compared and weighted accurately to be incorporated in the GP model. The main goal is to combine the GP model with the AHP model to ensure that selecting the KM tool considers the resource constraints. Two important issues are discussed in this paper: how different factors can be taken into consideration in forming the AHP model, and how to incorporate the AHP results into the GP model for better results.
Keywords: knowledge management, analytical hierarchy process, goal programming, multi-criteria decision making
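As a minimal sketch of the AHP weighting step that feeds the GP model, the snippet below derives criteria weights from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio. The 3x3 comparison values are illustrative, not the paper's.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                    # principal eigenvalue index
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # normalized criteria weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # Saaty's random index
print(w, "CR =", ci / ri)                      # CR < 0.1 is acceptable
```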
Procedia PDF Downloads 390
7812 The Potential Impact of Big Data Analytics on Pharmaceutical Supply Chain Management
Authors: Maryam Ziaee, Himanshu Shee, Amrik Sohal
Abstract:
Big Data Analytics (BDA) in supply chain management has recently drawn the attention of academics and practitioners. Big data refers to a massive amount of data from different sources, in different formats, generated at high speed through transactions in business environments and supply chain networks. Traditional statistical tools and techniques find it difficult to analyse this massive data. BDA can assist organisations to capture, store, and analyse data, specifically in the field of supply chain. Currently, there is a paucity of research on BDA in the pharmaceutical supply chain context. In this research, the Australian pharmaceutical supply chain was selected as the case study. This industry is highly significant, since the right medicine must reach the right patients, at the right time, in the right quantity, in good condition, and at the right price to save lives. However, drug shortages remain a substantial problem for hospitals across Australia, with implications for patient care, staff resourcing, and expenditure. Furthermore, a massive volume and variety of data is generated at fast speed from multiple sources in the pharmaceutical supply chain, which needs to be captured and analysed to benefit operational decisions at every stage of supply chain processes. As the pharmaceutical industry lags behind other industries in using BDA, this raises the question of whether the use of BDA can improve transparency in the pharmaceutical supply chain by enabling the partners to make informed decisions across their operational activities. This presentation explores the impacts of BDA on supply chain management. An exploratory qualitative approach was adopted to analyse data collected through interviews. This study also explores the BDA potential in the whole pharmaceutical supply chain rather than focusing on a single entity. Twenty semi-structured interviews were undertaken with top managers in fifteen organisations (five pharmaceutical manufacturers, five wholesalers/distributors, and five public hospital pharmacies) to investigate their views on the use of BDA. The findings revealed that BDA can enable pharmaceutical entities to have improved visibility over the whole supply chain and also the market; it enables entities, especially manufacturers, to monitor consumption and the demand rate in real time and make accurate demand forecasts, which reduces drug shortages. Timely and precise decision-making can allow the entities to source and manage their stocks more effectively. This can likely address drug demand at hospitals and respond to unanticipated issues such as drug shortages. Earlier studies explore BDA in the context of clinical healthcare; however, this presentation investigates the benefits of BDA in the Australian pharmaceutical supply chain. Furthermore, this research enhances managers’ insight into the potential of BDA at every stage of supply chain processes and helps to improve decision-making in their supply chain operations. The findings will turn the rhetoric of data-driven decision-making into a reality, where managers may opt for analytics for improved decision-making in supply chain processes.
Keywords: big data analytics, data-driven decision, pharmaceutical industry, supply chain management
Procedia PDF Downloads 110
7811 Assessment of the Electrical, Mechanical, and Thermal Nociceptive Thresholds for Stimulation and Pain Measurements at the Bovine Hind Limb
Authors: Samaneh Yavari, Christiane Pferrer, Elisabeth Engelke, Alexander Starke, Juergen Rehage
Abstract:
Background: Thermal, electrical, and mechanical nociceptive thresholds are commonly used to evaluate local anesthesia in many species, for instance cows, horses, cats, dogs, and rabbits. Due to the lack of investigations evaluating and/or validating these nociceptive thresholds, our aim was to compare two foot local anesthesia methods: Intravenous Regional Anesthesia (IVRA) and our modified four-point Nerve Block Anesthesia (NBA). Materials and Methods: Eight healthy, non-pregnant, non-dairy Holstein Friesian cows in a cross-over study design were selected for this study. All cows were divided into two groups to receive the two local anesthesia techniques of IVRA and our modified four-point NBA. Three stimuli (thermal, electrical, and mechanical force and pinprick) were applied to evaluate the quality of the local anesthesia methods before and after application. Results: The statistical evaluation demonstrated that our four-point NBA qualifies for selection as a standard foot local anesthesia. However, the recorded results of our study revealed no significant difference between the two local anesthesia techniques of IVRA and modified four-point NBA in the quality and duration of anesthesia stimulated by electrical, mechanical, and thermal nociceptive stimuli. Conclusion and discussion: All three nociceptive stimuli, electrical, mechanical, and heat, can be applied to measure and evaluate the efficacy of foot local anesthesia in dairy cows. However, our study revealed no superiority of any of these three nociceptive methods for evaluating the duration and quality of bovine foot local anesthesia. Veterinarians can use any of the heat, mechanical, or electrical methods to investigate the duration and quality of their selected anesthesia method.
Keywords: mechanical, thermal, electrical threshold, IVRA, NBA, hind limb, dairy cow
Procedia PDF Downloads 249
7810 Investigation of the Morphology of SiO2 Nano-Particles Using Different Synthesis Techniques
Authors: E. Gandomkar, S. Sabbaghi
Abstract:
In this paper, the effects of different synthesis methods on the morphology and size of silica nanostructures, via modified sol-gel and precipitation methods, have been investigated. The resulting products have been characterized by particle size analysis, scanning electron microscopy (SEM), X-ray diffraction (XRD), and Fourier transform infrared (FT-IR) spectroscopy. As a result, the SiO2 obtained with the sol-gel and precipitation methods was spherical, but with the modified sol-gel method a nanolayer structure was obtained.
Keywords: modified sol-gel, precipitation, nanolayer, Na2SiO3, nanoparticle
Procedia PDF Downloads 296
7809 Opto-Mechanical Characterization of Aspheric Lenses from the Hybrid Method
Authors: Aliouane Toufik, Hamdi Amine, Bouzid Djamel
Abstract:
Aspheric optical components are an alternative to conventional lenses in the implementation of imaging systems for the visible range. Spherical lenses produce aberrations and are therefore not able to focus all the light into a single point. Aspherical lenses, by contrast, correct aberrations and provide better resolution, even in compact systems incorporating a small number of lenses. Metrology of these components is very difficult, especially when the resolution requirements increase, and the insufficiency or complexity of conventional tools requires the development of specific characterization approaches. This work addresses that problem: its objectives are the study and comparison of different methods used to measure the surface radii of hybrid aspherical lenses.
Keywords: manufacture of lenses, aspherical surface, precision molding, radius of curvature, roughness
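For reference, such metrology methods typically fit measured surface data to the standard even asphere sag equation; the abstract itself does not state it, so this is background only:

```latex
% Sag of an even aspheric surface with vertex radius $R$, conic constant
% $k$, and aspheric coefficients $a_{2i}$:
\[
  z(r) = \frac{r^2}{R\left(1 + \sqrt{1 - (1+k)\,r^2/R^2}\right)}
         + \sum_{i=2}^{n} a_{2i}\, r^{2i}
\]
```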
Procedia PDF Downloads 472
7808 GNSS-Aided Photogrammetry for Digital Mapping
Authors: Muhammad Usman Akram
Abstract:
This research work is based on GNSS-aided photogrammetry for digital mapping. It focuses on the topographic survey of an area or site which is to be used in future planning and development (P&D), or for further examination, exploration, research, and inspection. Surveying and mapping in hard-to-access and hazardous areas are very difficult using traditional techniques and methodologies; they are also time consuming and labor intensive, and offer less precision with limited data. In comparison, the advanced technique saves manpower and provides more precise output with a wide variety of multiple data sets. In this experiment, an aerial photogrammetry technique is used in which a UAV flies over an area, captures geocoded images, and a three-dimensional model (3-D model) is built from them. The UAV operates on a user-specified path or area with various parameters: flight altitude, ground sampling distance (GSD), image overlap, camera angle, etc. For ground control, a network of points on the ground is observed as ground control points (GCPs) using a Differential Global Positioning System (DGPS) in PPK or RTK mode. Furthermore, the raw data collected by the UAV and DGPS are processed in digital image processing programs and computer-aided design software, from which we obtain, as output, a dense point cloud, a digital elevation model (DEM), and an ortho-photo. The imagery is converted into geospatial data by digitizing over the ortho-photo, and the DEM is further converted into a digital terrain model (DTM) for contour generation or a digital surface. As a result, we get a digital map of the area surveyed. In conclusion, we compared the processed data with exact measurements taken on site. The error is accepted if it does not breach the survey accuracy limits set by the institutions concerned.
Keywords: photogrammetry, post processing kinematics, real time kinematics, manual data inquiry
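The GSD parameter in the flight plan follows directly from the camera geometry; a minimal sketch is below, with typical small-UAV camera numbers as assumptions rather than values from the study.

```python
# Ground sampling distance: pixel pitch scaled by altitude over focal length.
def gsd_cm(sensor_width_mm, image_width_px, focal_mm, altitude_m):
    """Ground size of one image pixel, in centimetres."""
    return (sensor_width_mm / image_width_px) * altitude_m / focal_mm * 100

# Hypothetical camera: 13.2 mm sensor, 5472 px wide, 8.8 mm lens, 100 m AGL.
print(f"{gsd_cm(13.2, 5472, 8.8, 100):.1f} cm/px")  # ~2.7 cm per pixel
```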
Procedia PDF Downloads 37
7807 Flood Vulnerability Zoning for Blue Nile Basin Using Geospatial Techniques
Authors: Melese Wondatir
Abstract:
Flooding ranks among the most destructive natural disasters, impacting millions of individuals globally and resulting in substantial economic, social, and environmental repercussions. This study's objective was to create a comprehensive model that assesses the Nile River basin's susceptibility to flood damage and improves existing flood risk management strategies. Authorities responsible for enacting policies and implementing measures may benefit from this research in acquiring essential information about flooding, including its scope and susceptible areas. The identification of severe flood damage locations and efficient mitigation techniques was made possible by the use of geospatial data. Slope, elevation, distance from the river, drainage density, topographic wetness index, rainfall intensity, distance from roads, NDVI, soil type, and land use type were all used throughout the study to determine vulnerability to flood damage. Ranking the factors according to their significance in predicting flood damage risk was done using the Analytic Hierarchy Process (AHP) and geospatial approaches. The analysis finds that the most important parameters determining the region's vulnerability are distance from the river, topographic wetness index, rainfall, and elevation, respectively. The consistency ratio (CR) value obtained in this case is 0.000866 (< 0.1), which signifies the acceptance of the derived weights. Furthermore, 10.84 m², 83331.14 m², 476987.15 m², 24247.29 m², and 15.83 m² of the region show varying degrees of vulnerability to flooding: very low, low, medium, high, and very high, respectively. Due to their close proximity to the river, the north-western regions of the Nile River basin, especially those close to Sudanese cities like Khartoum, are more vulnerable to flood damage, according to the research findings. Furthermore, the ROC curve demonstrates that the classified vulnerability map achieves an accuracy (AUC) of 91.0% based on 117 sample points. By putting into practice strategies that address the topographic wetness index, rainfall patterns, elevation fluctuations, and distance from the river, vulnerable settlements in the area can be protected, and the impact of future flood occurrences can be greatly reduced. The research findings also highlight the urgent requirement for infrastructure development and effective flood management strategies in the northern and western regions of the Nile River basin, particularly in proximity to major towns such as Khartoum. Overall, the study recommends prioritizing high-risk locations and developing a complete flood risk management plan based on the vulnerability map.
Keywords: analytic hierarchy process, Blue Nile Basin, geospatial techniques, flood vulnerability, multi-criteria decision making
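After AHP yields the factor weights, studies like this combine the factor rasters by weighted overlay. A minimal sketch follows; the tiny rasters and the weight values are illustrative only, not the study's derived weights.

```python
import numpy as np

# Hypothetical AHP weights for four factors (must sum to 1).
weights = {"dist_river": 0.35, "twi": 0.25, "rainfall": 0.22, "elevation": 0.18}

def normalize(layer, inverse=False):
    """Scale a raster to 0..1; inverse=True when low values mean high risk."""
    s = (layer - layer.min()) / (layer.max() - layer.min())
    return 1 - s if inverse else s

dist_river = normalize(np.array([[50., 400.], [900., 120.]]), inverse=True)
twi        = normalize(np.array([[12., 6.], [4., 9.]]))
rainfall   = normalize(np.array([[900., 700.], [650., 820.]]))
elevation  = normalize(np.array([[350., 420.], [600., 380.]]), inverse=True)

vulnerability = (weights["dist_river"] * dist_river + weights["twi"] * twi +
                 weights["rainfall"] * rainfall + weights["elevation"] * elevation)
print(vulnerability)   # higher = more flood-prone; classify into 5 bands
```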
Procedia PDF Downloads 76
7806 Impact of Research-Informed Teaching and Case-Based Teaching on Memory Retention and Recall in University Students
Authors: Durvi Yogesh Vagani
Abstract:
This research paper explores the effectiveness of research-informed teaching and case-based teaching in enhancing the retention and recall of memory during discussions among university students. Additionally, it investigates the impact of using Artificial Intelligence (AI) tools on the quality of research conducted by students and its correlation with better recollection. The study hypothesizes that case-based teaching will lead to greater recall and storage of information. The research gap in the use of AI in educational settings, particularly with actual participants, is addressed by leveraging a multi-method approach. A further hypothesis is that the use of AI tools, such as ChatGPT and Bard, would lead to better retention and recall of information. Before commencing the study, participants' attention levels and IQ were assessed using the Digit Span Test and the Wechsler Adult Intelligence Scale, respectively, to ensure comparability among participants. Subsequently, participants were divided into four conditions, each group receiving identical information presented in different formats based on their assigned condition. Following this, participants engaged in a group discussion on the given topic. Their responses were then evaluated against a checklist. Finally, participants completed a brief test to measure their recall ability after the discussion. Preliminary findings suggest that students who utilize AI tools for learning demonstrate an improved grasp of information and are more likely to integrate relevant information into discussions rather than providing extraneous details. Furthermore, case-based teaching fosters greater attention and recall during discussions, while research-informed teaching leads to greater knowledge for application. By addressing the research gap in AI application in education, this study contributes to a deeper understanding of effective teaching methodologies and the role of technology in student learning outcomes. The implication of the present research is to tailor teaching methods in psychology based on the subject matter: case-based teaching suits practical subjects, facilitating application, while research-informed teaching is beneficial for theory-heavy topics. Integrating AI into education, particularly in combination with research-informed teaching, may optimize instructional strategies, deepen learning experiences, and enhance learning outcomes by offering detailed information tailored to students' needs.
Keywords: artificial intelligence, attention, case-based teaching, memory recall, memory retention, research-informed teaching
Procedia PDF Downloads 36
7805 Comparison of Different Artificial Intelligence-Based Protein Secondary Structure Prediction Methods
Authors: Jamerson Felipe Pereira Lima, Jeane Cecília Bezerra de Melo
Abstract:
The difficulty and cost of obtaining protein tertiary structure information through experimental methods, such as X-ray crystallography or NMR spectroscopy, have driven the development of computational methods. One such approach is the prediction of the three-dimensional structure from the residue chain; however, this has been proven to be an NP-hard problem, due to the complexity of the process, as explained by the Levinthal paradox. An alternative is the prediction of intermediary structures, such as the secondary structure of the protein. Artificial intelligence methods, such as Bayesian statistics, artificial neural networks (ANN), and support vector machines (SVM), among others, have been used to predict protein secondary structure. Due to their good results, artificial neural networks have become a standard method for predicting protein secondary structure. Recently published methods that use this technique have, in general, achieved a Q3 accuracy between 75% and 83%, whereas the theoretical accuracy limit for secondary structure prediction is 88%. Alternatively, to achieve better results, prediction methods based on support vector machines have been developed. The statistical evaluation of methods that use different AI techniques, such as ANNs and SVMs, is not a trivial problem, since different training sets, validation techniques, and other variables can influence the behavior of a prediction method. In this study, we propose a prediction method based on artificial neural networks, which is then compared with a selected SVM method. The chosen SVM protein secondary structure prediction method is the one proposed by Huang in his work Extracting Physicochemical Features to Predict Protein Secondary Structure (2013). The developed ANN method follows the same training and testing process that Huang used to validate his method, which comprises the use of the CB513 protein data set and three-fold cross-validation, so that the comparative analysis can directly compare the statistical results of each method.
Keywords: artificial neural networks, protein secondary structure, protein structure prediction, support vector machines
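The Q3 score both methods are compared on is simply the fraction of residues whose predicted state (helix H, strand E, or coil C) matches the observed one; a minimal sketch:

```python
def q3(predicted: str, observed: str) -> float:
    """Fraction of residues with a correctly predicted H/E/C state."""
    assert len(predicted) == len(observed)
    hits = sum(p == o for p, o in zip(predicted, observed))
    return hits / len(observed)

print(q3("HHHECCCHH", "HHHCCCCHH"))  # 8/9 ~ 0.889
```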
Procedia PDF Downloads 623
7804 An AI-generated Semantic Communication Platform in HCI Course
Authors: Yi Yang, Jiasong Sun
Abstract:
Almost every aspect of our daily lives is now intertwined with some degree of human-computer interaction (HCI). HCI courses draw on knowledge from disciplines as diverse as computer science, psychology, design principles, anthropology, and more. Our HCI course, named the Media and Cognition course, is constantly updated to reflect state-of-the-art technological advancements such as virtual reality, augmented reality, and artificial intelligence-based interaction. For more than a decade, the course has used an interest-based approach to teaching, in which students proactively propose research-based questions and collaborate with teachers, using course knowledge to explore potential solutions. Semantic communication plays a key role in facilitating understanding and interaction between users and computer systems, ultimately enhancing system usability and user experience. Advances in AI-generated technology, which have gained significant attention from both academia and industry in recent years, are exemplified by language models like GPT-3 that generate human-like dialogues from given prompts. The latest version of our human-computer interaction course practices a semantic communication platform based on AI-generated techniques. The purpose of this semantic communication is twofold: to extract and transmit task-specific information, and to ensure efficient end-to-end communication with minimal latency. The AI-generated semantic communication platform evaluates the retainability of signal sources and converts low-retainability visual signals into textual prompts. These data are transmitted through AI-generated techniques and reconstructed at the receiving end; on the other hand, visual signals with a high retainability rate are compressed and transmitted according to their respective regions. The platform and associated research are a testament to our students' growing ability to independently investigate state-of-the-art technologies.
Keywords: human-computer interaction, media and cognition course, semantic communication, retainability, prompts
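The routing rule the abstract describes can be sketched as follows. This is not the students' platform: the retainability scorer, captioner, and codec are hypothetical stand-ins, and the threshold is an assumption.

```python
# A minimal sketch, assuming a per-region retainability score in [0, 1]:
# low-retainability regions become text prompts (reconstructed by a
# generative model at the receiver), high-retainability regions are
# compressed and sent as pixels.
def encode_region(region, retainability, threshold=0.5):
    if retainability < threshold:
        return ("prompt", caption(region))    # text via an image-to-text model
    return ("pixels", compress(region))       # conventional compression

def caption(region):        # placeholder for an image-to-text model
    return "a red car parked by a tree"

def compress(region):       # placeholder for e.g. JPEG bytes
    return b"..."

kind, payload = encode_region(region=None, retainability=0.3)
print(kind)  # "prompt": the receiver regenerates the region from text
```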
Procedia PDF Downloads 123
7803 Capture Zone of a Well Field in an Aquifer Bounded by Two Parallel Streams
Authors: S. Nagheli, N. Samani, D. A. Barry
Abstract:
In this paper, the velocity potential and stream function of the capture zone for a well field in an aquifer bounded by two parallel streams, with or without a uniform regional flow in any direction, are presented. The well field includes any number of extraction or injection wells, or a combination of both types, with any pumping rates. To delineate the capture envelope, the potential and streamline equations are derived by the conformal mapping method, which relaxes the constraints of other methods. The equations can be applied as useful tools to design in-situ groundwater remediation systems, to evaluate surface–subsurface water interaction, and to manage water resources.
Keywords: complex potential, conformal mapping, image well theory, Laplace’s equation, superposition principle
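A sketch of the kind of complex potential such solutions superpose (uniform regional flow plus N wells); the stream-boundary image wells produced by the conformal mapping are omitted here, and the sign convention is one common choice rather than the paper's:

```latex
% $Q_j > 0$ for extraction wells at $z_j$, $U$ the regional specific
% discharge at angle $\alpha$:
\[
  W(z) = -U e^{-i\alpha} z
         + \sum_{j=1}^{N} \frac{Q_j}{2\pi} \ln\!\left(z - z_j\right),
  \qquad W = \Phi + i\Psi ,
\]
% with $\Phi$ the discharge potential and $\Psi$ the stream function;
% streamlines (and the capture envelope) are level curves $\Psi = \text{const}$.
```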
Procedia PDF Downloads 435
7802 Physiological and Biochemical Based Analysis to Assess the Efficacy of Mulch under Partial Root Zone Drying in Wheat
Authors: Salman Ahmad, Muhammad Aown Sammar Raza, Muhammad Farrukh Saleem, Rashid Iqbal, Muhammad Saqlain Zaheer, Muhammad Usman Aslam, Imran Haider, Muhammad Adnan Nazar, Muhammad Ali
Abstract:
Among the various abiotic stresses, drought stress is one of the most challenging for field crops. Wheat is one of the major staple foods of the world and is highly affected by water deficit stress under the current scenario of climate change. In order to ensure food security despite depleting water resources, there is an urgent need to adopt technologies which result in sufficient crop yield with less water consumption. Mulching and partial root zone drying (PRD) are two important management techniques used for water conservation and for mitigating the negative impacts of drought. The experiment was conducted to screen out the mulch best suited for wheat under a PRD system. Two water application techniques (I1 = full irrigation, I2 = PRD irrigation) and four mulch treatments (M0 = un-mulched, M1 = black plastic mulch, M2 = wheat straw mulch, and M3 = cotton sticks mulch) were arranged in a completely randomized design with four replications. The black plastic mulch treatment performed better than the other mulch treatments. For irrigation levels, higher values of growth, physiological, and water-related parameters were recorded in the control treatment, while quality traits and enzymatic activities were higher under partial root zone drying. The current study concluded that the adverse effects of drought on wheat can be significantly mitigated by using mulches, and that black plastic mulch is best suited for the partial root zone drying irrigation system in wheat.
Keywords: antioxidants, leaf water relations, mulches, osmolytes, partial root zone drying, photosynthesis
Procedia PDF Downloads 267
7801 Investigating Reading Comprehension Proficiency and Self-Efficacy among Algerian EFL Students within Collaborative Strategic Reading Approach and Attributional Feedback Intervention
Authors: Nezha Badi
Abstract:
It has been shown in the literature that Algerian university students suffer from low levels of reading comprehension proficiency, which hinders their overall proficiency in English. This low level is mainly related to the methodology of teaching reading employed by the teacher in the classroom (a teacher-centered environment), as well as to students’ poor sense of self-efficacy in undertaking reading comprehension activities. Arguably, what is needed is an approach for enhancing students’ beliefs about their abilities to deal with different reading comprehension activities. This can be done by providing them with opportunities to take responsibility for their own learning (learner autonomy). As a result of learning autonomy, learners’ beliefs about their abilities to deal with certain language tasks may increase, and hence their language learning ability. Therefore, this experimental research study attempts to assess the extent to which an integrated approach combining one particular reading approach, known as ‘collaborative strategic reading’ (CSR), and the teacher’s attributional feedback (on students’ reading performance and strategy use) can improve the reading comprehension skills and the sense of self-efficacy of Algerian EFL university students. It also seeks to examine students’ main reasons for their successful or unsuccessful achievements in reading comprehension activities, and whether students’ attributions for their reading comprehension outcomes can be modified after exposure to the instruction. To obtain the data, different tools, including a reading comprehension test, questionnaires, an observation, an interview, and learning logs, were used with 105 second-year Algerian EFL university students. The sample of the study was divided into three groups: one control group (with no treatment), one experimental group (the CSR group) who received the CSR instruction, and a second intervention group (the CSR Plus group) who received the teacher’s attributional feedback in addition to the CSR intervention. Students in the CSR Plus group received the same experiment as the CSR group using the same tools, except that they were asked to keep learning logs, for which the teacher’s feedback on reading performance and strategy use was provided. The results of this study indicate that the CSR and attributional feedback intervention was effective in improving students’ reading comprehension proficiency and sense of self-efficacy. However, there was not a significant change in students’ adaptive and maladaptive attributions for their success and failure from the pre-test to the post-test phase. Analysis of the perception questionnaire, the interview, and the learning logs shows that students have positive perceptions of the CSR and attributional feedback instruction. Based on the findings, this study therefore seeks to provide EFL teachers in general, and Algerian EFL university teachers in particular, with pedagogical implications on how to teach reading comprehension to their students, to help them achieve well and feel more self-efficacious in reading comprehension activities and in English language learning more generally.
Keywords: attributions, attributional feedback, collaborative strategic reading, self-efficacy
Procedia PDF Downloads 120