Search results for: programming
123 Expression of Micro-RNA268 in Zinc Deficient Rice
Authors: Sobia Shafqat, Saeed Ahmad Qaisrani
Abstract:
MicroRNAs play an essential role in regulating growth and development in most eukaryotes, acting as mediators of cell growth and differentiation and shaping the RNA-level response of plants to biotic and abiotic factors and stressors. As a heavy metal, Zn is in some cases toxic to plants; other metals such as Cd, Hg, and Pb are far more toxic, yet Zn is required by rice, where genes are reprogrammed under abiotic stresses such as Zn stress, under which micro RNA268 is strongly induced. Transgenic plants overexpressing this microRNA accumulated large amounts of malondialdehyde, hydrogen peroxide, and an excessive quantity of Zn at the seedling stage. These results indicate that, for rice resilience under Zn stress, the microRNA acts as a negative regulator, while micro RNA268 acts as a modulator under different ecological conditions. A clear understanding of the role of micro RNA268 under stress conditions showed, in practice, outcomes that increase plant tolerance under Zn stress, because microRNA-mediated interference is a technique for regulating gene expression. The proposed study examined the genetic factors of Zn stress and its toxicity effects on rice plants at District Vehari, Pakistan. The trial was laid out in a randomized complete block design (RCBD) with three replications, with blocks receiving different treatment concentrations. In rice overexpressing micro RNA268, seedling growth was not arrested under Zn deficiency, with accumulation of large amounts of malondialdehyde, hydrogen peroxide, and excess Zn in the seedlings. The results showed that micro RNA268 acts as a negative regulator under Zn stress and, under stress conditions, performs a necessary function in the tolerance of rice plants. The resulting management scheme yielded strong agronomic applications and yield outcomes in rice with a specific amount of Zn application.
Keywords: micro RNA268, zinc, rice, agronomic approach
Procedia PDF Downloads 61
122 Testing and Validation of Stochastic Models in Epidemiology
Authors: Snigdha Sahai, Devaki Chikkavenkatappa Yellappa
Abstract:
This study outlines approaches for testing and validating stochastic models used in epidemiology, focusing on the integration and functional testing of simulation code. It details methods for combining simple functions into comprehensive simulations, distinguishing between deterministic and stochastic components, and applying tests to ensure robustness. Techniques include isolating stochastic elements, utilizing large sample sizes for validation, and handling special cases. Practical examples are provided using R code to demonstrate integration testing, handling of incorrect inputs, and special cases. The study emphasizes the importance of both functional and defensive programming to enhance code reliability and user-friendliness.
Keywords: computational epidemiology, epidemiology, public health, infectious disease modeling, statistical analysis, health data analysis, disease transmission dynamics, predictive modeling in health, population health modeling, quantitative public health, random sampling simulations, randomized numerical analysis, simulation-based analysis, variance-based simulations, algorithmic disease simulation, computational public health strategies, epidemiological surveillance, disease pattern analysis, epidemic risk assessment, population-based health strategies, preventive healthcare models, infection dynamics in populations, contagion spread prediction models, survival analysis techniques, epidemiological data mining, host-pathogen interaction models, risk assessment algorithms for disease spread, decision-support systems in epidemiology, macro-level health impact simulations, socioeconomic determinants in disease spread, data-driven decision making in public health, quantitative impact assessment of health policies, biostatistical methods in population health, probability-driven health outcome predictions
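The abstract's examples are in R and are not reproduced here; as a rough illustration of the same ideas (isolating the stochastic component, defensive input checking, and large-sample validation against an analytical expectation), a minimal Python sketch might look like the following. The function name, parameter values, and tolerance are illustrative assumptions, not details from the study.

```python
import numpy as np

def transmission_step(n_susceptible: int, infection_prob: float, rng: np.random.Generator) -> int:
    """Stochastic component: number of new infections in one time step."""
    if not (0.0 <= infection_prob <= 1.0):
        raise ValueError("infection_prob must lie in [0, 1]")  # defensive programming
    return rng.binomial(n_susceptible, infection_prob)

# Validation by large sample size: the empirical mean of the stochastic
# component should converge to its analytical expectation n * p.
rng = np.random.default_rng(seed=42)
n, p, trials = 1000, 0.03, 200_000
draws = [transmission_step(n, p, rng) for _ in range(trials)]
empirical_mean = np.mean(draws)
expected_mean = n * p
# Tolerance scales with the standard error of the mean.
standard_error = np.sqrt(n * p * (1 - p) / trials)
assert abs(empirical_mean - expected_mean) < 4 * standard_error
print(f"empirical mean {empirical_mean:.2f} vs expected {expected_mean:.2f}")
```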
Procedia PDF Downloads 6
121 Evaluation of the Effect of Lactose Derived Monosaccharide on Galactooligosaccharides Production by β-Galactosidase
Authors: Yenny Paola Morales Cortés, Fabián Rico Rodríguez, Juan Carlos Serrato Bermúdez, Carlos Arturo Martínez Riascos
Abstract:
Numerous benefits of galactooligosaccharides (GOS) as prebiotics have motivated the study of enzymatic processes for their production. These processes have special complexities due to several factors that make high productivity difficult, such as enzyme type, reaction medium pH, substrate concentrations and the presence of inhibitors, among others. In the present work, the production of galactooligosaccharides (with different degrees of polymerization: two, three and four) from lactose was studied. The study considers the formulation of a mathematical model that predicts the production of GOS from lactose using the enzyme β-galactosidase. The effect of pH on the reaction was studied using phosphate buffer at three pH values (6.0, 6.5 and 7.0). At pH 6.0 the enzymatic activity was insignificant, while at pH 7.0 it was approximately 27 times greater than at 6.5. The latter result differs from previously reported results. Therefore, pH 7.0 was chosen as the working pH. Additionally, the enzyme concentration was analyzed, which showed that its effect depends on the pH; the concentration was set at 0.272 mM for the subsequent studies. Afterwards, experiments were performed varying the lactose concentration to evaluate its effects on the process and to generate the data for fitting the mathematical model parameters. The mathematical model considers the reactions of lactose hydrolysis and transgalactosylation for the production of disaccharides and trisaccharides, with their inverse reactions. The production of tetrasaccharides was negligible and, because of that, it was not included in the model. The reaction was monitored by HPLC, and for the quantitative analysis of the experimental data the Matlab programming language was used, including solvers for the integration of differential equation systems (ode15s) and nonlinear optimization (fminunc). The results confirm that the transgalactosylation and hydrolysis reactions are reversible; additionally, inhibition by glucose and galactose is observed on the production of GOS. Regarding the production process of galactooligosaccharides, the results show that high initial lactose concentrations are necessary, since they favor the transgalactosylation reaction, while low concentrations favor the hydrolysis reactions.
Keywords: β-galactosidase, galactooligosaccharides, inhibition, lactose, Matlab, modeling
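The study's parameter fitting was done in MATLAB with ode15s and fminunc; purely as an illustration of the same workflow (integrating a kinetic ODE system inside a least-squares objective), here is a hedged Python sketch with a deliberately simplified two-species scheme. The rate expressions, initial conditions, and data points are invented placeholders, not the paper's model or measurements.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Hypothetical two-reaction scheme: lactose -> monosaccharides (hydrolysis, k1)
# and lactose <-> GOS (transgalactosylation k2, reverse rate k3).
def rates(t, y, k1, k2, k3):
    lactose, gos = y
    d_lactose = -k1 * lactose - k2 * lactose + k3 * gos
    d_gos = k2 * lactose - k3 * gos
    return [d_lactose, d_gos]

t_obs = np.array([0.0, 0.5, 1.0, 2.0, 4.0])          # h (illustrative)
gos_obs = np.array([0.0, 12.0, 20.0, 27.0, 30.0])    # g/L (illustrative)

def objective(k):
    # Integrate the model and compare predicted GOS with the observations.
    sol = solve_ivp(rates, (0.0, 4.0), [100.0, 0.0], args=tuple(k), t_eval=t_obs)
    return np.sum((sol.y[1] - gos_obs) ** 2)

fit = minimize(objective, x0=[0.1, 0.1, 0.01], method="Nelder-Mead")
print("estimated rate constants:", fit.x)
```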
Procedia PDF Downloads 358
120 Development of Multi-Leaf Collimator-Based Isocenter Verification Tool Using Electronic Portal Imaging Device for Stereotactic Radiosurgery
Authors: Panatda Intanin, Sangutid Thongsawad, Chirapha Tannanonta, Todsaporn Fuangrod
Abstract:
Stereotactic radiosurgery (SRS) is a high-precision delivery technique that requires comprehensive quality assurance (QA) tests prior to treatment delivery. The isocenter of the delivery beam plays a critical role in treatment accuracy. The uncertainty of the isocenter is traditionally assessed using circular cone equipment, a Winston-Lutz (WL) phantom and film, a technique considered time-consuming and highly dependent on the observer. In this work, the development of a multileaf collimator (MLC)-based isocenter verification tool using an electronic portal imaging device (EPID) was proposed and evaluated. In the conventional WL test, a 5 mm ball bearing provided the mechanical isocenter alignment and a 10 mm circular cone fixed to the gantry head defined the radiation field. This conventional setup was compared to the proposed setup, in which the MLC (10 x 10 mm) defined the radiation field instead of the cone; this represents a more realistic delivery field than circular cone equipment. Images were acquired with both the EPID and radiographic film in both experiments, at gantry angles of 0°, 90°, 180° and 270°. A software tool was developed in-house in MATLAB/Simulink to determine the centroid of the radiation field and the shadow of the WL phantom automatically, which offers higher accuracy than manual measurement. The deviation between the centroids of the cone-based and MLC-based WL tests was quantified. Comparing film with EPID images, the deviation across all gantry angles was 0.26 ± 0.19 mm for the cone-based test and 0.43 ± 0.30 mm for the MLC-based test. The absolute deviation between the cone-based and MLC-based WL tests was 0.59 ± 0.28 mm on EPID images and 0.14 ± 0.13 mm on film images. Therefore, MLC-based isocenter verification using the EPID presents a highly sensitive tool for SRS QA.
Keywords: isocenter verification, quality assurance, EPID, SRS
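The in-house tool was written in MATLAB/Simulink and is not reproduced here; the following Python/NumPy sketch shows one plausible way to compute the intensity-weighted centroid of the radiation field in an EPID image. The threshold fraction and the overall approach are assumptions for illustration only.

```python
import numpy as np

def field_centroid(image: np.ndarray, threshold_fraction: float = 0.5) -> tuple[float, float]:
    """Intensity-weighted centroid of the radiation field in an EPID image.

    Pixels below a fraction of the maximum signal are ignored so that
    background noise does not bias the centroid.
    """
    mask = image >= threshold_fraction * image.max()
    weights = np.where(mask, image, 0.0).astype(float)
    total = weights.sum()
    rows, cols = np.indices(image.shape)
    return (rows * weights).sum() / total, (cols * weights).sum() / total

# Illustrative use: the isocenter deviation would then be the distance between
# the field centroid and the centroid of the ball-bearing shadow, computed the
# same way on an inverted region of interest.
```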
Procedia PDF Downloads 152
119 Streamlining .NET Data Access: Leveraging JSON for Data Operations in .NET
Authors: Tyler T. Procko, Steve Collins
Abstract:
New features in .NET (6 and above) permit streamlined access to information residing in JSON-capable relational databases, such as SQL Server (2016 and above). Traditional methods of data access involve comparatively unnecessary steps which compromise system performance. This work posits that the established ORM (Object Relational Mapping) based methods of data access in applications and APIs result in common issues, e.g., object-relational impedance mismatch. Recent developments in C# and .NET Core, combined with a framework of modern SQL Server coding conventions, have allowed better technical solutions to the problem. As an amelioration, this work details the language features and coding conventions which enable this streamlined approach, resulting in an open-source .NET library implementation called Codeless Data Access (CODA). Canonical approaches rely on ad-hoc mapping code to perform type conversions between the client and back-end database; with CODA, no mapping code is needed, as JSON is freely mapped to SQL and vice versa. CODA streamlines API data access by improving on three aspects of immediate concern to web developers, database engineers and cybersecurity professionals: simplicity, speed and security. Simplicity is engendered by cutting out the "middleman" steps, effectively making API data access a whitebox, whereas traditional methods are blackbox. Speed is improved because fewer translational steps are taken, and security is improved as attack surfaces are minimized. An empirical evaluation of the speed of the CODA approach in comparison to ORM approaches is provided and demonstrates that the CODA approach is significantly faster. CODA presents substantial benefits for API developer workflows by simplifying data access, resulting in better speed and security and allowing developers to focus on productive development rather than being mired in data access code. Future considerations include a generalization of the CODA method and extension outside of the .NET ecosystem to other programming languages.
Keywords: API data access, database, JSON, .NET core, SQL server
Procedia PDF Downloads 66
118 Beyond Information Failure and Misleading Beliefs in Conditional Cash Transfer Programs: A Qualitative Account of Structural Barriers Explaining Why the Poor Do Not Invest in Human Capital in Northern Mexico
Authors: Francisco Fernandez de Castro
Abstract:
The Conditional Cash Transfer (CCT) model gives monetary transfers to beneficiary families on the condition that they take specific education and health actions. According to the economic rationale of CCTs, the poor need incentives to invest in their human capital because they are trapped by a lack of information and misleading beliefs; left to their own decisions, they will not choose what is in their best interests. The basic assumption of the CCT model is thus that the poor need incentives to take care of their own education and health-nutrition. Due to the incentives (income cash transfers and conditionalities), beneficiary families are supposed to attend doctor visits and health talks, and children are supposed to stay in school. These incentivized behaviors are expected to produce outcomes such as better health and a higher level of education, which in turn will reduce poverty. Based on a grounded theory approach and a two-year period of qualitative data collection in northern Mexico, this study shows that this explanation is incomplete. In addition to information failure and inadequate beliefs, there are structural barriers in the everyday life of households that make health-nutrition and education investments difficult. In-depth interviews and observation work showed that the program takes for granted the local conditions in which beneficiary families must fulfill their co-responsibilities. The data challenged the program's assumptions and unveiled local obstacles not contemplated in the program's design. These findings have policy and research implications for the CCT agenda. They bring elements for later programming, given the gap between the CCT strategy as envisioned by policy designers and the program that beneficiary families experience on the ground. As for research consequences, these findings suggest new avenues for scholarly work regarding the causal mechanisms and social processes explaining CCT outcomes.
Keywords: conditional cash transfers, incentives, poverty, structural barriers
Procedia PDF Downloads 113
117 Designing and Implementing a Tourist-Guide Web Service Based on Volunteer Geographic Information Using Open-Source Technologies
Authors: Javad Sadidi, Ehsan Babaei, Hani Rezayan
Abstract:
The advent of Web 2.0 makes it possible to scale down the costs of data collection and mapping, specifically if the process is done by volunteers. Every volunteer can be thought of as a free and ubiquitous sensor collecting spatial, descriptive, and multimedia data for tourist services. The lack of large-scale information, such as real-time climate and weather conditions, population density, and other related data, can be considered one of the important challenges in developing countries for tourists trying to make the best decision about the time and place of travel. The current research aims to design and implement a spatiotemporal web map service using volunteer-submitted data. The service acts as a tourist guide in which tourists can search places of interest based on their requested time of travel. The service was designed with a three-tier architecture comprising data, logical processing, and presentation tiers. For implementation, open-source software, client- and server-side programming languages and libraries (such as OpenLayers2, AJAX, and PHP), GeoServer as a map server, and the Web Feature Service (WFS) standard were used. The result is two distinct browser-based services: one for submitting spatial, descriptive, and multimedia volunteer data, and another for tourists and local officials, with local officials confirming the veracity of the volunteer-submitted information. In the tourist interface, a spatiotemporal search engine enables tourists to find a tourist place by province, city, and location at a specific time of interest. Implementing the tourist-guide service with this methodology means that current tourists participate in free data collection and sharing for future tourists, data are shared and accessed in real time by all, travel destinations are no longer selected blindly, and, significantly, the cost of providing such services decreases.
Keywords: VGI, tourism, spatiotemporal, browser-based, web mapping
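The implementation described uses OpenLayers2, AJAX, PHP, and GeoServer; as a language-neutral illustration of the WFS request pattern at the core of such a service, here is a small Python sketch. The endpoint URL, layer name, attribute names, and the values in the GeoServer-specific cql_filter parameter are all hypothetical.

```python
import requests

# Hypothetical GeoServer endpoint and layer; the paper's actual service
# details are not given in the abstract.
WFS_URL = "http://example.org/geoserver/wfs"

params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "tourism:places",          # assumed layer of volunteer-submitted places
    "outputFormat": "application/json",
    # Attribute filter, e.g. officially confirmed places for a requested month.
    "cql_filter": "confirmed = true AND month = 'July'",
}

response = requests.get(WFS_URL, params=params, timeout=30)
response.raise_for_status()
for feature in response.json()["features"]:
    print(feature["properties"].get("name"), feature["geometry"]["coordinates"])
```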
Procedia PDF Downloads 98
116 Informing, Enabling and Inspiring Social Innovation by Geographic Systems Mapping: A Case Study in Workforce Development
Authors: Cassandra A. Skinner, Linda R. Chamberlain
Abstract:
The nonprofit and public sectors are increasingly turning to Geographic Information Systems for data visualizations which can better inform programmatic and policy decisions. Additionally, the private and nonprofit sectors are turning to systems mapping to better understand the ecosystems within which they operate. This study explores the potential that combining these data visualization methods (an approach called geographic systems mapping) to create an exhaustive and comprehensive understanding of a social problem's ecosystem may have for social innovation efforts. Researchers with Grand Valley State University collaborated with Talent 2025 of West Michigan to conduct a mixed-methods research study to paint a comprehensive picture of the workforce development ecosystem in West Michigan. Using semi-structured interviews, observation, secondary research, and quantitative analysis, data were compiled on workforce development organizations' locations, programming, metrics for success, partnerships, funding sources, and service language. To best visualize and disseminate the data, a geographic systems map was created which identifies programmatic, operational, and geographic gaps in the workforce development services of West Michigan. By combining geographic and systems mapping methods, the geographic systems map provides insight into the cross-sector relationships, collaboration, and competition that exist among and between workforce development organizations. These insights identify opportunities for, and constraints around, cross-sectoral social innovation in the West Michigan workforce development ecosystem. This paper discusses the process used to prepare the geographic systems map, explains the results and outcomes, and demonstrates how geographic systems mapping illuminated the needs of the community and opportunities for social innovation. As complicated social problems like unemployment often require cross-sectoral and multi-stakeholder solutions, geographic systems mapping has the potential to be a tool which informs, enables, and inspires these solutions.
Keywords: cross-sector collaboration, data visualization, geographic systems mapping, social innovation, workforce development
Procedia PDF Downloads 295
115 Adolescent-Parent Relationship as the Most Important Factor in Preventing Mood Disorders in Adolescents: An Application of Artificial Intelligence to Social Studies
Authors: Elżbieta Turska
Abstract:
Introduction: One of the most difficult times in a person's life is adolescence. The experiences of this period may shape that person's future life to a large extent. This is why many young people experience sadness, dejection, hopelessness, a sense of worthlessness, and a loss of interest in various activities and social relationships, all of which are often classified as mood disorders. As many as 15-40% of adolescents experience depressed moods; for most, these resolve and are not carried into adulthood. However, 5-6% of those affected by mood disorders develop the depressive syndrome, and as many as 1-3% develop full-blown clinical depression. Materials: A large questionnaire was given to 2508 students aged 13-16 years, and one of its parts was the Burns checklist, i.e. the standard test for identifying depressed mood. The questionnaire asked about many aspects of the student's life and included a total of 53 questions, most of which had subquestions. It is important to note that the data suffered from several problems, the most important of which were missing data and collinearity. Aim: In order to identify the correlates of mood disorders, we built predictive models which were then trained and validated. Our aim was not to predict which students suffer from mood disorders but rather to explore the factors influencing mood disorders. Methods: The problems with the data described above practically excluded the use of classical statistical methods. For this reason, we used the following Artificial Intelligence (AI) methods: classification trees with surrogate variables, random forests and xgboost. All analyses were carried out with the mlr package for the R programming language. Results: The predictive model built by the classification trees algorithm outperformed the other algorithms by a large margin. As a result, we were able to rank the variables (questions and subquestions from the questionnaire) from most to least influential as far as protection against mood disorders is concerned. Thirteen of the twenty most important variables reflect relationships with parents. This is a significant result both from the cognitive point of view and from the practical point of view, i.e. as far as interventions to correct mood disorders are concerned.
Keywords: mood disorders, adolescents, family, artificial intelligence
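The analyses in the study were run in R with the mlr package; the sketch below shows the analogous idea, ranking questionnaire variables by importance with a tree ensemble, in Python with scikit-learn. The file name, column names, and the crude median imputation (the study instead handled missing data with surrogate variables in classification trees) are assumptions for illustration; questions are assumed to be numerically coded.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-in for the questionnaire data: rows are students,
# columns are questions/subquestions, 'depressed_mood' is the Burns-based label.
df = pd.read_csv("questionnaire.csv")  # assumed file, not from the study
X = df.drop(columns=["depressed_mood"])
y = df["depressed_mood"]

# Random forests tolerate collinearity; missing values are imputed crudely here.
X = X.fillna(X.median(numeric_only=True))

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X, y)

# Rank variables from most to least influential.
ranking = pd.Series(model.feature_importances_, index=X.columns)
print(ranking.sort_values(ascending=False).head(20))
```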
Procedia PDF Downloads 101
114 IoT Based Soil Moisture Monitoring System for Indoor Plants
Authors: Gul Rahim Rahimi
Abstract:
The IoT-based soil moisture monitoring system for indoor plants is designed to address the challenges of maintaining optimal soil moisture levels for plant growth and health. The system utilizes sensor technology to collect real-time data on soil moisture levels, which is then processed and analyzed using machine learning algorithms. This allows accurate and timely monitoring of soil moisture levels, ensuring plants receive the appropriate amount of water to thrive. The main objectives of the system are twofold: to keep plants fresh and healthy by preventing water deficiency, and to provide users with comprehensive insights into the water content of the soil on a daily and hourly basis. By monitoring soil moisture levels, users can identify patterns and trends in water consumption, allowing more informed decision-making about watering schedules and plant care. The scope of the system extends to the agriculture industry, where it can be used to minimize the effort required by farmers to monitor soil moisture levels manually. By automating soil moisture monitoring, farmers can optimize water usage, improve crop yields, and reduce the risk of plant diseases associated with over- or under-watering. Key technologies employed in the system include the Capacitive Soil Moisture Sensor V1.2 for accurate soil moisture measurement, the NodeMCU ESP8266-12E board for data transmission and communication, and the Arduino framework for programming and development. Additionally, machine learning algorithms are utilized to analyze the collected data and provide actionable insights, and cloud storage is used to store and manage the data collected from multiple sensors, allowing easy access and retrieval of information. Overall, the IoT-based soil moisture monitoring system offers a scalable and efficient solution for indoor plant care, with potential applications in agriculture and beyond. By harnessing the power of IoT and machine learning, the system empowers users to make informed decisions about plant watering, leading to healthier and more vibrant indoor environments.
Keywords: IoT-based, soil moisture monitoring, indoor plants, water management
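The device itself is programmed in the Arduino framework; for illustration only, the following Python sketch shows the kind of calibration and threshold logic that turns raw capacitive readings into a moisture percentage and a watering decision. The calibration counts and threshold are hypothetical values, not from the study.

```python
# Minimal sketch of the reading-to-decision step, assuming raw ADC counts from
# the capacitive probe have been calibrated in air (dry) and in water (wet).
RAW_DRY, RAW_WET = 620, 280   # hypothetical calibration counts

def moisture_percent(raw: int) -> float:
    """Map a raw capacitive reading to a 0-100% moisture estimate."""
    span = RAW_DRY - RAW_WET
    return max(0.0, min(100.0, (RAW_DRY - raw) * 100.0 / span))

def needs_water(raw: int, threshold: float = 35.0) -> bool:
    """Flag the plant for watering when estimated moisture drops below threshold."""
    return moisture_percent(raw) < threshold

print(moisture_percent(450), needs_water(450))
```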
Procedia PDF Downloads 51
113 Design of Woven Fabric with Increased Sound Transmission Loss Property
Authors: U. Gunal, H. I. Turgut, H. Gurler, S. Kaya
Abstract:
Rapid population growth has brought many ever-increasing and newly emerging problems. As quality of life rises, acoustic comfort has become an important feature in the textile industry. To meet these comfort expectations, and to survive in challenging competitive market conditions without compromising customers' product quality expectations, textile manufacturers must build functionality into their products; it has therefore become a necessity to research and develop materials and processes that bring such functionality to textile products. The noise we encounter almost everywhere in daily life, in the street, at home and at work, is one of the problems the textile industry is working on. It brings with it many health problems, both mental and physical, so noise control studies are becoming ever more important. Moreover, the materials currently used in noise control are not sufficient to reduce noise levels, and the fabrics used in acoustic studies in the textile industry do not perform well enough relative to their weight and high cost, so acoustic textile products have not found use in daily life. In this thesis study, the parameters used in noise control and building acoustics studies in the literature were analyzed, and the textile product with the highest attainable damping value was designed, manufactured, and tested. Optimum values were obtained by using different material samples that may affect the performance of the acoustic material. Acoustic measurement methods were applied to verify the acoustic performance shown by the parameters and by the designed three-dimensional structure at different values. Measurements were made both in an impedance tube according to the relevant standards and with a device designed in this study for different noise types. In addition, recordings of noise types encountered in daily life were taken and applied to the sound-absorbent fabric with the aid of this device, and the feasibility of the results and the commercial viability of the product were examined. The MATLAB numerical computing language and its libraries were used for the frequency and sound power analyses in the study.
Keywords: acoustic, egg crate, fabric, textile
Procedia PDF Downloads 108
112 Evaluating the Success of an Intervention Course in a South African Engineering Programme
Authors: Alessandra Chiara Maraschin, Estelle Trengove
Abstract:
In South Africa, only 23% of engineering students attain their degrees in the minimum time of 4 years. This begs the question: why is the 4-year throughput rate so low? Improving the throughput rate is crucial in guiding students along the shortest possible path to completion. The Electrical Engineering programme has a fixed curriculum, and students must pass all courses in order to graduate. In South Africa, as in several other countries, many students rely on external funding such as bursaries from companies in industry. If students fail a course, they often lose their bursaries, and most might not be able to fund their 'repeating year' fees. It is thus important to improve the throughput rate, since for many students, graduating from university is a way out of poverty for an entire family. In Electrical Engineering, the Software Development I course (an introduction to C++ programming) has been found to be a significant hurdle course for students, with a low pass rate. It is well documented that students struggle with this type of course, as it introduces a number of new threshold concepts that can be challenging to grasp in a short time frame. In an attempt to mitigate this situation, a part-time night school for Software Development I was introduced in 2015 as an intervention measure. The night-school course covers all the material of the Software Development I module and gives students who failed the course in the first semester a second chance by retaking it. The purpose of this study is to determine whether the introduction of this intervention course can be considered a success. The success of the intervention is assessed in two ways. The study first looks at whether the night-school course contributed to improving the pass rate of the Software Development I course. Secondly, it examines whether the intervention contributed to improving the overall throughput from the 2nd year to the 3rd year of study at a South African university. Second-year academic results for a sample of 1216 students were collected from 2010-2017. Preliminary results show that the lowest pass rate for Software Development I occurred in 2017, at 34.9%. Since the intervention course's inception, the pass rate for Software Development I has increased each year from 2015-2017, by 13.75%, 25.53% and 25.81% respectively. To conclude, the preliminary results show that the intervention course is a success in improving the pass rate of Software Development I.
Keywords: academic performance, electrical engineering, engineering education, intervention course, low pass rate, software development course, throughput
Procedia PDF Downloads 164
111 Advancements in Mathematical Modeling and Optimization for Control, Signal Processing, and Energy Systems
Authors: Zahid Ullah, Atlas Khan
Abstract:
This abstract focuses on advancements in mathematical modeling and optimization techniques that play a crucial role in enhancing the efficiency, reliability, and performance of control, signal processing, and energy systems. In this era of rapidly evolving technology, mathematical modeling and optimization offer powerful tools to tackle the complex challenges faced by these systems. This abstract presents the latest research and developments in mathematical methodologies, encompassing areas such as control theory, system identification, signal processing algorithms, and energy optimization. It highlights the interdisciplinary nature of mathematical modeling and optimization, showcasing their applications in a wide range of domains, including power systems, communication networks, industrial automation, and renewable energy. It explores key mathematical techniques, such as linear and nonlinear programming, convex optimization, stochastic modeling, and numerical algorithms, that enable the design, analysis, and optimization of complex control and signal processing systems. Furthermore, the abstract emphasizes the importance of addressing real-world challenges in control, signal processing, and energy systems through innovative mathematical approaches. It discusses the integration of mathematical models with data-driven approaches, machine learning, and artificial intelligence to enhance system performance, adaptability, and decision-making capabilities. The abstract also underscores the significance of bridging the gap between theoretical advancements and practical applications, recognizing the need for practical implementation of mathematical models and optimization algorithms in real-world systems, considering factors such as scalability, computational efficiency, and robustness. In summary, this abstract showcases the advancements in mathematical modeling and optimization techniques for control, signal processing, and energy systems; it highlights the interdisciplinary nature of these techniques, their applications across various domains, and their potential to address real-world challenges, and emphasizes the importance of practical implementation and integration with emerging technologies to drive innovation and improve the performance of control, signal processing, and energy systems.
Keywords: mathematical modeling, optimization, control systems, signal processing, energy systems, interdisciplinary applications, system identification, numerical algorithms
Procedia PDF Downloads 112
110 Critical Success Factors Influencing Construction Project Performance for Different Objectives: Procurement Phase
Authors: Samart Homthong, Wutthipong Moungnoi
Abstract:
Critical success factors (CSFs) and the criteria used to measure project success have received much attention over the decades and are among the most widely researched topics in the context of project management. However, although there have been extensive studies on the subject by different researchers, to date there has been little agreement on the CSFs. The aim of this study is to identify the CSFs that influence the performance of construction projects and to determine their relative importance for different objectives across five stages of the project life cycle. A considerable literature review was conducted, resulting in the identification of 179 individual factors, which were then grouped into nine major categories. A questionnaire survey was used to collect data from three groups of respondents: client representatives, consultants, and contractors. Of 164 questionnaires distributed, 93 were returned, yielding a response rate of 56.7%. Using the mean score, the relative importance index, and the weighted average method, the top 10 critical factors for each category were identified. The agreement of survey respondents on the categorised factors was analysed using Spearman's rank correlation, and a one-way analysis of variance was then performed to determine whether the mean scores among the various groups of respondents were statistically significantly different. The findings indicate that the most critical factors in each category in the procurement phase are: proper procurement programming of materials (time), stability in the price of materials (cost), and determining quality in the construction (quality). These are followed by safety equipment acquisition and maintenance (health and safety), budgeting allowed in a contractual arrangement for implementing environmental management activities (environment), completeness of drawing documents (productivity), accurate measurement and pricing of the bill of quantities (risk management), adequate communication among the project team (human resources), and adequate cost control measures (client satisfaction). An understanding of CSFs would help all interested parties in the construction industry to improve project performance. Furthermore, the results of this study would help construction professionals and practitioners take proactive measures for effective project management.
Keywords: critical success factors, procurement phase, project life cycle, project performance
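As a small illustration of the statistics named in the abstract (the relative importance index RII = ΣW/(A·N), Spearman's rank correlation, and one-way ANOVA), here is a hedged Python sketch with invented ratings; none of the numbers are from the survey.

```python
import numpy as np
from scipy.stats import spearmanr, f_oneway

# Hypothetical 5-point importance ratings for one factor from the three groups.
clients = np.array([5, 4, 5, 3, 4])
consultants = np.array([4, 4, 5, 4, 3])
contractors = np.array([5, 5, 4, 4, 4])

def rii(ratings: np.ndarray, highest: int = 5) -> float:
    """Relative importance index: sum of ratings / (highest rating * respondents)."""
    return ratings.sum() / (highest * len(ratings))

print("RII (clients):", rii(clients))

# Agreement between two groups' rankings of several factors (illustrative ranks).
rank_clients = [1, 2, 3, 4, 5, 6]
rank_contractors = [2, 1, 3, 5, 4, 6]
rho, p = spearmanr(rank_clients, rank_contractors)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")

# One-way ANOVA across the three respondent groups for a single factor.
f_stat, p_anova = f_oneway(clients, consultants, contractors)
print(f"ANOVA F={f_stat:.2f}, p={p_anova:.3f}")
```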
Procedia PDF Downloads 183
109 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status
Authors: Rosa Figueroa, Christopher Flores
Abstract:
Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data containing documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative to feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the Bag of Words approach, with the term frequency-inverse document frequency (TF-IDF) score representing the occurrence of tokens in each document. To extract features for classification, four experiments were conducted: the first experiment used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature sets, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset; the remaining 80% of the dataset, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than the N-gram based feature extraction. These results were confirmed on the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm
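For readers unfamiliar with the algorithm at the heart of the paper, the following is a minimal Python implementation of Smith-Waterman local alignment scoring. The scoring parameters are illustrative defaults; the study's exact scoring scheme and its token-level application to documents are not specified in the abstract.

```python
def smith_waterman(a: str, b: str, match: int = 2, mismatch: int = -1, gap: int = -1) -> int:
    """Local alignment score between two sequences (Smith-Waterman).

    Returns the best local alignment score; a higher score means the two
    sequences share a more similar region.
    """
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment floors every cell at zero.
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best

# Character-level example; in text categorization the "symbols" could instead
# be word tokens drawn from two documents.
print(smith_waterman("patient is obese", "the patient is not obese"))
```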
Procedia PDF Downloads 297
108 Comfort Sensor Using Fuzzy Logic and Arduino
Authors: Samuel John, S. Sharanya
Abstract:
Automation has become an important part of our lives. It has been used to control home entertainment systems, change the ambience of rooms for different events, etc. One of the main parameters to control in a smart home is atmospheric comfort, which mainly includes temperature and relative humidity. In homes, the desired temperature of different rooms varies from 20 °C to 25 °C and relative humidity is around 50%; however, these values vary widely. Hence, automated measurement of these parameters to ensure comfort assumes significance. To achieve this, a fuzzy logic controller using Arduino was developed in MATLAB. Arduino is open-source hardware built around a 28-pin ATmega328 microcontroller, with 14 digital input/output pins and an inbuilt ADC. It runs on 5 V and 3.3 V, supported by an on-board voltage regulator. Some of the Arduino digital pins provide PWM (pulse width modulation) signals, which can be used in different applications. The Arduino platform provides an integrated development environment which includes support for the C, C++ and Java programming languages. In the present work, a soft sensor that indirectly measures temperature and humidity was introduced into the system and used to process these measurements to ensure comfort. The Sugeno method (in which output variables are functions or singleton constants, making it more suitable for implementation on microcontrollers) was used in the soft sensor in MATLAB, which was then interfaced to the Arduino, in turn interfaced to the DHT11 temperature-humidity sensor. The DHT11 acts as the sensing element in this system: it combines a capacitive humidity sensor and a thermistor to measure the temperature and relative humidity of the surroundings and provides a digital signal on its data pin. The comfort sensor developed was able to measure temperature and relative humidity correctly. The comfort percentage was calculated, and the temperature in the room was controlled accordingly. The system was placed in different rooms of the house to ensure that it modifies the comfort values depending on the temperature and relative humidity of the environment. Compared to existing comfort control sensors, this system was found to provide an accurate comfort percentage. Depending on the comfort percentage, the air conditioners and coolers in the room were controlled. The main highlight of the project is its cost efficiency.
Keywords: arduino, DHT11, soft sensor, sugeno
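The controller in the paper was designed in MATLAB and deployed via Arduino; as a rough sketch of how a zero-order Sugeno inference from temperature and humidity to a comfort percentage can work, consider the following Python example. The membership ranges, rule base, and output constants are invented for illustration and do not reproduce the authors' design.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def comfort_percent(temp_c: float, rh: float) -> float:
    """Zero-order Sugeno inference: rule outputs are constants (singletons),
    and the crisp output is the firing-strength-weighted average."""
    # Hypothetical memberships: 'temperature comfortable' around 22.5 degC,
    # 'humidity comfortable' around 50% RH.
    temp_ok = tri(temp_c, 18.0, 22.5, 27.0)
    hum_ok = tri(rh, 30.0, 50.0, 70.0)
    rules = [
        (min(temp_ok, hum_ok), 100.0),             # both comfortable -> 100
        (min(temp_ok, 1.0 - hum_ok), 60.0),        # temperature ok only -> 60
        (min(1.0 - temp_ok, hum_ok), 60.0),        # humidity ok only -> 60
        (min(1.0 - temp_ok, 1.0 - hum_ok), 10.0),  # neither -> 10
    ]
    total = sum(w for w, _ in rules)
    return sum(w * z for w, z in rules) / total if total else 0.0

print(comfort_percent(23.0, 48.0))  # close to 100
print(comfort_percent(31.0, 80.0))  # low comfort
```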
Procedia PDF Downloads 312
107 Fostering Organizational Learning across the Canadian Sport System through Leadership and Mentorship Development of Sport Science Leaders
Authors: Jennifer Walinga, Samantha Heron
Abstract:
The goal of the study was to inform the design of effective leadership and mentorship development programming for sport science leaders within the network of Canadian sport institutes and centers. The LEAD (Learn, Engage, Accelerate, Develop) program was implemented to equip sport science leaders with the leadership knowledge, skills, and practice to foster a high-performance culture, enhance the daily training environment, and contribute to optimal performance in sport. After two years of delivery, this analysis of LEAD's effect on individual and organizational health and performance factors informs the quality of future deliveries and identifies best practices for leadership development across the Canadian sport system and beyond. A larger goal of this project was to inform the public sector more broadly and to position sport as a source of best practice for human and social health, development, and performance. The objectives of this study were to review and refine the LEAD program in collaboration with Canadian Sport Institute and Centre leaders, 40-50 participants from three cohorts, and the LEAD program advisory committee, and to trace the effects of the LEAD leadership development program on key leadership, mentorship, and organizational health indicators across the Canadian sport institutes and centers so as to capture best practice. The study followed a participatory action research (PAR) framework, using semi-structured interviews with sport scientist participants and with program and institute leaders to inquire into the impact on specific individual and organizational health and performance factors. Findings included a strong increase in self-reported leadership knowledge, skill, language and confidence; enhancement of human and organizational health factors; and the opportunity to explore more deeply issues of diversity and inclusion, psychological safety, team dynamics, and performance management. The study was significant in building sport leadership and mentorship development strategies for managing change efforts, addressing inequalities, and building personal and operational resilience amidst the challenges of uncertainty, pressure, and constraint in real time.
Keywords: sport leadership, sport science leader, leadership development, professional development, sport education, mentorship
Procedia PDF Downloads 23
106 Multilevel Modelling of Modern Contraceptive Use in Nigeria: Analysis of the 2013 NDHS
Authors: Akiode Ayobami, Akiode Akinsewa, Odeku Mojisola, Salako Busola, Odutolu Omobola, Nuhu Khadija
Abstract:
Purpose: Evidence exists that family planning use can contribute to reductions in infant and maternal mortality in any country. Despite these benefits, contraceptive use in Nigeria remains very low, at only 10% among married women. Understanding the factors that predict contraceptive use is very important in order to improve the situation. In this paper, we analysed data from the 2013 Nigerian Demographic and Health Survey (NDHS) to better understand the predictors of contraceptive use in Nigeria. The use of logistic regression and other traditional models in this type of situation is not appropriate, as they do not account for the social-structure influence on the response variable brought about by the hierarchical nature of the data. We therefore used multilevel modelling to explore the determinants of contraceptive use in order to account for the significant variation in modern contraceptive use by socio-demographic and other proximate variables across the different Nigerian states. Method: The data have a two-level hierarchical structure. We considered the data of 26,403 married women of reproductive age at level 1, nested within the 36 states and the Federal Capital Territory, Abuja at level 2. We modelled use of modern contraceptives against demographic variables, being told about FP at a health facility, having heard of FP on TV, in magazines or on the radio, and the husband's desire for more children, nested within state. Results: Our results showed that the independent variables in the model were significant predictors of modern contraceptive use. The estimated variance components for the null, random intercept, and random slope models were significant (p=0.00), indicating that the variation in contraceptive use across the Nigerian states is significant and needs to be accounted for in order to accurately determine the predictors of contraceptive use; hence the data are best fitted by the multilevel model. Only being told about family planning at the health facility and religion had significant random effects, implying that their predictive power for contraceptive use varies across the states. Conclusion and Recommendation: The results showed that providing FP information at the health facility and religion need to be considered when programming to improve contraceptive use at the state level.
Keywords: multilevel modelling, family planning, predictors, Nigeria
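As a hedged illustration of a random-intercept/random-slope specification of the kind described, the sketch below uses Python's statsmodels. Note that MixedLM fits a linear (probability) approximation, whereas the study fitted a multilevel model to a binary outcome; the file name and variable names are hypothetical, not the NDHS coding.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per woman, 'modern_fp' coded 0/1, with a
# 'state' column providing the level-2 grouping.
df = pd.read_csv("ndhs_2013.csv")  # assumed file, not the actual NDHS extract

model = smf.mixedlm(
    "modern_fp ~ age + religion + told_fp_at_facility + heard_fp_media + husband_wants_more",
    data=df,
    groups=df["state"],
    re_formula="~told_fp_at_facility",  # random slope where the effect varies by state
)
result = model.fit()
print(result.summary())  # variance components reflect between-state variation
```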
Procedia PDF Downloads 419
105 Smart Defect Detection in XLPE Cables Using Convolutional Neural Networks
Authors: Tesfaye Mengistu
Abstract:
Power cables play a crucial role in the transmission and distribution of electrical energy. As electricity generation, transmission, distribution, and storage systems become smarter, there is a growing emphasis on incorporating intelligent approaches to ensure the reliability of power cables. Various types of electrical cables are employed for transmitting and distributing electrical energy, with cross-linked polyethylene (XLPE) cables being widely utilized due to their exceptional electrical and mechanical properties. However, insulation defects can occur in XLPE cables due to subpar manufacturing techniques during production and cable joint installation. To address this issue, experts have proposed different methods for monitoring XLPE cables. Some suggest the use of interdigital capacitive (IDC) technology for online monitoring, while others propose employing continuous-wave (CW) terahertz (THz) imaging systems to detect internal defects in XLPE plates used for power cable insulation. In this study, we developed models that employ a custom, locally collected dataset to classify the physical safety status of individual power cables. Our models aim to replace physical inspections with computer vision and image processing techniques that distinguish defective power cables from non-defective ones. The implementation of our project utilized the Python programming language along with the TensorFlow package and a convolutional neural network (CNN). The CNN-based algorithm was specifically chosen for power cable defect classification. The results of our project demonstrate the effectiveness of CNNs in accurately classifying power cable defects. We recommend the utilization of similar or additional datasets to further enhance and refine our models. Additionally, we believe that our models could be used to develop methodologies for detecting power cable defects from live video feeds. We firmly believe that our work makes a significant contribution to the field of power cable inspection and maintenance. Our models offer a more efficient and cost-effective approach to detecting power cable defects, thereby improving the reliability and safety of power grids.
Keywords: artificial intelligence, computer vision, defect detection, convolutional neural net
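The abstract names Python, TensorFlow, and a CNN; the following is a minimal sketch of such a binary image classifier. The directory layout, image size, and network depth are assumptions, not the authors' actual architecture.

```python
import tensorflow as tf

# Minimal binary classifier sketch (defective vs. non-defective cable images).
# The folder structure "cable_images/train/<class_name>/" is an assumption.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "cable_images/train", image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of "defective"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```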
Procedia PDF Downloads 112
104 Family Planning and HIV Integration: A One-stop Shop Model at Spilhaus Clinic, Harare Zimbabwe
Authors: Mercy Marimirofa, Farai Machinga, Alfred Zvoushe, Tsitsidzaishe Musvosvi
Abstract:
The Government of Zimbabwe embarked on integrating family planning with Sexually Transmitted Infection (STI) and Human Immunodeficiency Virus (HIV) services in May 2020, with support from the World Health Organization (WHO). HIV prevalence, incidence rates and STI infections were high among women attending FP clinics. Spilhaus is a specialized centre-of-excellence clinic offering a range of sexual reproductive health (SRH) services; previously, its HIV services were limited to testing only, and clients were referred to other facilities for further management. Integration of services requires that all services be available at one point, so that clients can access them during a single visit to the facility. Objectives: The study was conducted to assess the impact the one-stop-shop model has made on access to integrated family planning and sexual reproductive health services, compared with the supermarket approach. It also assessed the relationship between family planning services and other sexual reproductive health services. Methods: A secondary data analysis was conducted at Spilhaus clinic in Harare using family planning registers and HIV services registers, comparing the years 2019 and 2021. A two-sample t-test was used to determine the difference in clients accessing the services under the two models, and Spearman's rank correlation was used to determine whether accessing family planning services was related to accessing other sexual reproductive health services. Results: In 2019, 7,548 clients visited the Spilhaus clinic, compared with 8,265 during January to December 2021. The median age of all clients accessing services was 32 years. An increase of 69% in the number of services accessed was recorded from 2019 to 2021, with more services accessed in 2021. There was no difference in the number of clients accessing family planning, cervical cancer and HIV services, but a difference was found in the number of clients offered STI screening services. There was also a relationship between accessing family planning services and STI screening services (ρ = 0.729, p-value = 0.006). Conclusion: Integrated programming for SRH services was a great achievement, and the integrated approach proved cost-effective, as it minimised the resources required for separate programs. Clients accessed important health services at once, and the integration provided an opportunity to offer comprehensive information addressing an individual's sexual reproductive health needs.
Keywords: integration, one stop shop, family planning, reproductive health
Procedia PDF Downloads 68
103 A Web-Based Systems Immunology Toolkit Allowing the Visualization and Comparative Analysis of Publically Available Collective Data to Decipher Immune Regulation in Early Life
Authors: Mahbuba Rahman, Sabri Boughorbel, Scott Presnell, Charlie Quinn, Darawan Rinchai, Damien Chaussabel, Nico Marr
Abstract:
Collections of large-scale datasets made available in public repositories can be used to identify and fill gaps in biomedical knowledge. But first, these data need to be made readily accessible to researchers for analysis and interpretation. Here, a collection of transcriptome datasets was made available to investigate the functional programming of human hematopoietic cells in early life. Thirty-two datasets were retrieved from the NCBI Gene Expression Omnibus (GEO) and loaded into a custom, interactive web application called the Gene Expression Browser (GXB), designed for visualization and query of integrated large-scale data. Multiple sample groupings and gene rank lists were created based on the study design and variables in each dataset. Web links to customized graphical views can be generated by users and subsequently used to graphically present data in manuscripts for publication. The GXB tool also enables browsing of a single gene across datasets, which can provide information on the role of a given molecule across biological systems. The dataset collection is available online. As a proof of principle, one of the datasets (GSE25087) was re-analyzed to identify genes that are differentially expressed by regulatory T cells in early life. Re-analysis of this dataset and a cross-study comparison using multiple other datasets in the above-mentioned collection revealed that PMCH, a gene encoding a precursor of melanin-concentrating hormone (MCH), a cyclic neuropeptide, is highly expressed in a variety of other hematopoietic cell types, including neonatal erythroid cells as well as plasmacytoid dendritic cells upon viral infection. Our findings suggest an as yet unrecognized role of MCH in immune regulation, thereby highlighting the unique potential of the curated dataset collection and systems biology approach to generate new hypotheses which can be tested in future mechanistic studies.
Keywords: early-life, GEO datasets, PMCH, interactive query, systems biology
Procedia PDF Downloads 296
102 Improving the Constructability of Highway Design Plans
Authors: R. Edward Minchin Jr.
Abstract:
The U.S. Federal Highway Administration (FHWA) Every Day Counts (EDC) program has resulted in state DOTs putting ever more emphasis on speeding up the delivery of highway and bridge construction projects for use by the driving public. This has resulted in an increase in the use of alternative construction delivery systems such as design-build (D-B), construction manager at-risk (CMR) or construction manager/general contractor (CM/GC), and in adding alternative technical concepts (ATCs) to traditional design-bid-build (DBB) contracts. ATCs have exhibited great potential for delivering substantial benefits like cost savings, increased constructability, and quicker project delivery. Previous research has found that knowledge of project constructability was lacking in state Department of Transportation (DOT) planning, programming, and environmental staffs. Many agencies have therefore relied on a set of 'acceptable' design solutions over years of working with their local resource agencies. As a result, the permitting process for several government agencies has become increasingly restrictive, so DOTs and their industry partners lose the ability to innovate after a permit is approved. The intent of this paper is to report on the research team's progress in an ongoing effort to furnish the United States government with a uniform set of guidelines for the application of constructability reviews during all phases of project development and delivery. The research uses surveys and interviews to determine which states have implemented formal programs to ensure that the constructor is furnished with a set of contract documents affording the best possible opportunity to successfully construct the project to the highest quality standards, within the contract duration, and without exceeding the construction budget. Once these states are identified, workshops are held across the nation, allowing the team to learn the best current practices and to recommend new practices that will improve the process. The plan is for the FHWA to encourage or require state DOTs to use these practices on all federally funded highway and bridge construction projects. The project deliverable is a guidebook for FHWA to use in disseminating the recommended practices to the states.
Keywords: alternative construction delivery, alternative technical concepts, constructability, construction design plans
Procedia PDF Downloads 216
101 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)-Graphics Processing Unit (GPU) Heterogeneous Computing
Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou
Abstract:
The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, embedded systems-on-a-chip (SoCs) have come to contain coarse-granularity multi-core CPUs (central processing units) and mobile GPUs (graphics processing units) that can be used as general-purpose accelerators. The motivation is that algorithms with various parallel characteristics can be efficiently mapped to a heterogeneous architecture coupling these three processors: the CPU and GPU offload part of the computationally intensive tasks from the FPGA, reducing resource consumption and lowering the overall cost of the system. However, in common present-day scenarios, applications utilize only one type of accelerator, because the development approaches supporting collaboration among heterogeneous processors face challenges. A systematic approach is therefore needed that takes advantage of write-once-run-anywhere portability and the high execution performance of modules mapped to various architectures, and that facilitates the exploration of the design space. In this paper, a servant-execution-flow model is proposed as an abstraction of the cooperation of the heterogeneous processors, supporting task partition, communication and synchronization. At its first run, the intermediate language, represented by a data flow diagram, can generate the executable code of the target processor or be converted into high-level programming languages. Instantiation parameters efficiently control the relationship between modules and computational units, including the mapping across two hierarchical levels of processing units and the adjustment of data-level parallelism. An embedded system for a three-dimensional waveform oscilloscope is selected as a case study, and the performance of algorithms such as contrast stretching is analyzed with implementations on various combinations of these processors. The experimental results show that the heterogeneous computing system achieves performance similar to the pure-FPGA implementation, with comparable energy efficiency, while using less than 35% of the resources.
Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation
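Contrast stretching, the case-study kernel mentioned above, is a simple data-parallel operation; the reference Python/NumPy definition below shows the algorithm itself, not the heterogeneous FPGA/CPU/GPU mapping that is the paper's contribution. The percentile bounds are illustrative.

```python
import numpy as np

def contrast_stretch(frame: np.ndarray, low_pct: float = 2.0, high_pct: float = 98.0) -> np.ndarray:
    """Percentile-based contrast stretching of an 8-bit image.

    This is the kind of per-pixel, data-parallel kernel that could be mapped
    to the FPGA fabric, the GPU, or the CPU in such a system.
    """
    lo, hi = np.percentile(frame, [low_pct, high_pct])
    stretched = (frame.astype(np.float32) - lo) * (255.0 / max(hi - lo, 1e-6))
    return np.clip(stretched, 0, 255).astype(np.uint8)

frame = np.random.randint(60, 180, size=(480, 640), dtype=np.uint8)  # synthetic frame
out = contrast_stretch(frame)
print(out.min(), out.max())  # dynamic range expanded toward 0..255
```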
Procedia PDF Downloads 118100 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm
Authors: Zachary Huffman, Joana Rocha
Abstract:
Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) that develops over an aircraft are a significant source of cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, developing accurate empirical models that predict the PSD has been an important ongoing research topic. The emitted sound can be related to the pressure fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Early TBL empirical models (including those of Lowson, Robertson, Chase, and Howe) were therefore derived primarily by simplifying and solving the RANS equations for the pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol'yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy; in general, they are most accurate at the specific Reynolds and Mach numbers for which they were developed and less accurate under other flow conditions. Despite this, research into alternative methods for deriving such models has been rather limited. Recent studies have demonstrated that an artificial neural network model can be more accurate than traditional models and can be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning. The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparison with independently sourced datasets.Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations
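The study performs stepwise regression in R. A minimal Python sketch of the underlying mechanism, forward selection that greedily adds whichever predictor lowers the Akaike information criterion (AIC) the most (as R's step() does), is given below; the synthetic predictors and data are illustrative assumptions, not the wind tunnel dataset or the variables actually used.

```python
# Hedged sketch of forward stepwise selection by AIC, mirroring what R's
# step() does. Predictors, coefficients, and data are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = pd.DataFrame({
    "Mach": rng.uniform(0.1, 0.9, n),          # stand-ins for flow parameters
    "Re": rng.uniform(1e5, 1e7, n),
    "freq": rng.uniform(0.1, 10.0, n),         # uncorrelated with the response
    "noise_var": rng.normal(size=n),           # uncorrelated with the response
})
y = 3.0 * X["Mach"] + 1e-6 * X["Re"] + rng.normal(scale=0.3, size=n)

def forward_stepwise(y, X):
    """Greedily add the predictor that lowers AIC the most; stop when none does."""
    def fit(cols):
        Z = sm.add_constant(X[cols]) if cols else np.ones((len(y), 1))
        return sm.OLS(y, Z).fit()
    selected, remaining, best = [], list(X.columns), fit([])
    while remaining:
        trials = {c: fit(selected + [c]).aic for c in remaining}
        cand = min(trials, key=trials.get)
        if trials[cand] >= best.aic:
            break                              # no candidate improves AIC: stop
        selected.append(cand)
        remaining.remove(cand)
        best = fit(selected)
    return selected, best

cols, model = forward_stepwise(y, X)
print("selected predictors:", cols)            # 'freq', 'noise_var' filtered out
```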
Procedia PDF Downloads 13599 The Effect of Green Power Trading Mechanism on Interregional Power Generation and Transmission in China
Authors: Yan-Shen Yang, Bai-Chen Xie
Abstract:
Background and significance of the study: Both green power trading schemes and interregional power transmission are effective ways to increase green power absorption and achieve renewable power development goals. China is accelerating the construction of interregional power transmission lines and of the green power market. A critical issue arises from the close interaction between these two approaches, which can heavily affect green power quota allocation and renewable power development. Existing studies have not discussed this issue adequately, so it is urgent to figure out the relationship between the two approaches in order to achieve a suitable power market design and more reasonable power grid construction. Basic methodologies: We develop an equilibrium model of the power market in China to analyze the coupling of these two approaches as well as their influence on power generation and interregional transmission. Our model considers both the tradable green certificate (TGC) market and the green power market, and comprises producers, consumers, and an independent system operator (ISO) minimizing the total system cost. The equilibrium model includes the decision optimization problem of each participant. To reformulate these models as a single-level one, we replace the producer, consumer, ISO, and market equilibrium problems with their Karush-Kuhn-Tucker (KKT) conditions; the result is further reformulated as a mixed-integer linear program (MILP) and solved with the Gurobi solver. Major findings: The results show that: (1) the green power market can significantly promote renewable power absorption, while the TGC market provides a more flexible way to trade green power. (2) Inefficient occupation of some transmission lines and a lack of available lines elsewhere appear simultaneously: the existing interregional transmission lines cannot fully meet the demand for wind and solar PV power trading in some areas, while the situation is the reverse in others. (3) Synchronous implementation of the green power and TGC trading mechanisms can benefit the development of green power as well as interregional power transmission. (4) Green power transactions exacerbate the unfair distribution of carbon emissions: the carbon Gini coefficient reaches 0.323 under the green power market, indicating high carbon inequality. The eastern coastal region benefits the most due to its huge demand for external power.Keywords: green power market, tradable green certificate, interregional power transmission, power market equilibrium model
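The key reformulation step, replacing an equilibrium condition with its KKT conditions and linearizing the resulting complementarity constraint so the model becomes a MILP, can be illustrated with a deliberately tiny gurobipy sketch. The producer cost function, capacity value, and big-M constant below are invented placeholders, not the paper's model.

```python
# Toy gurobipy sketch of the KKT-to-MILP step: the complementarity condition
# 0 <= lam  _|_  (cap - q) >= 0 is linearized with one binary and big-M bounds.
# Cost function, price, capacity, and big-M are invented placeholders.
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("kkt_milp_toy")
cap, price, M = 100.0, 50.0, 1e4

q = m.addVar(lb=0.0, name="generation")        # primal: producer output
lam = m.addVar(lb=0.0, name="shadow_price")    # dual of the capacity constraint
z = m.addVar(vtype=GRB.BINARY, name="binding") # 1 if capacity binds

m.addConstr(q <= cap, name="capacity")
m.addConstr(lam <= M * z)                      # lam > 0 only when binding
m.addConstr(cap - q <= M * (1 - z))            # slack > 0 only when not binding
# Stationarity for a producer with cost 0.5*q^2 facing the market price:
# dC/dq - price + lam = 0  ->  q - price + lam = 0
m.addConstr(q - price + lam == 0, name="stationarity")

m.setObjective(0, GRB.MINIMIZE)                # pure feasibility: an equilibrium
m.optimize()
print(f"q = {q.X}, lam = {lam.X}")
```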
Procedia PDF Downloads 14798 Inverterless Grid Compatible Micro Turbine Generator
Authors: S. Ozeri, D. Shmilovitz
Abstract:
Micro-turbine generators (MTGs) are small power plants that consist of a high-speed gas turbine driving an electrical generator. MTGs may be fueled by natural gas or kerosene and may also use sustainable and recycled green fuels such as biomass, landfill gas, or digester gas. Typical MTG ratings range from 20 kW up to 200 kW. The primary use of MTGs is as backup for sensitive load sites such as hospitals, and they are also considered a feasible power source for distributed generation (DG), providing on-site generation in proximity to remote loads. MTGs have the compressor, the turbine, and the electrical generator mounted on a single shaft. For this reason, the electrical energy is generated at high frequency and is incompatible with the power grid, so MTGs must additionally contain a power conditioning unit that generates an AC voltage at the grid frequency. Presently, this power conditioning unit consists of a rectifier followed by a DC/AC inverter, both rated at the MTG's full power. The losses of the power conditioning unit account for some 3-5%. Moreover, the full-power processing stage is a bulky and costly piece of equipment that also lowers overall system reliability. In this study, we propose a new type of power conditioning stage in which only a small fraction of the power is processed. A low-power converter is used only to program the rotor current (i.e., the excitation current, which is substantially lower). Thus, the MTG's output voltage is shaped to the desired amplitude and frequency by proper programming of the excitation current. The control is realized by causing the rotor current to track the electrical frequency (which is related to the shaft frequency) with a difference exactly equal to the line frequency. Since the phasor of the rotation speed and the phasor of the rotor magnetic field are multiplied, the spectrum of the MTG generator voltage contains sum and difference components. The desired difference component is at the line frequency (50/60 Hz), whereas the unwanted sum component is at about twice the electrical frequency of the stator. The unwanted high-frequency component can be filtered out by a low-pass filter, leaving only the low-frequency output. This approach allows elimination of the large power conditioning unit incorporated in conventional MTGs; instead, a much smaller and cheaper fractional-power stage can be used. The proposed technology is also applicable to other high-rotation generator sets such as aircraft power units.Keywords: gas turbine, inverter, power multiplier, distributed generation
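The sum and difference components follow from the product-to-sum identity sin(a)sin(b) = 0.5[cos(a-b) - cos(a+b)]. The short numpy sketch below checks this numerically with illustrative frequencies (assumed values, not the authors' hardware parameters): multiplying components at f_e and f_e - 50 Hz yields spectral peaks at 50 Hz and at 2f_e - 50 Hz, and the former is what the low-pass filter retains.

```python
# Numerical check of the sum/difference frequency claim: multiplying two
# sinusoids at f_e and f_e - 50 Hz yields components at 50 Hz (wanted) and
# 2*f_e - 50 Hz (removed by the low-pass filter). Values are illustrative.
import numpy as np

fs = 100_000.0                      # sampling rate, Hz
t = np.arange(0, 0.2, 1 / fs)
f_e = 1_000.0                       # stator electrical frequency (assumed)
f_r = f_e - 50.0                    # rotor current tracks f_e minus 50 Hz

v = np.sin(2 * np.pi * f_e * t) * np.sin(2 * np.pi * f_r * t)

spectrum = np.abs(np.fft.rfft(v))
freqs = np.fft.rfftfreq(len(v), 1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))                # ~[50.0, 1950.0]: difference and sum terms
```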
Procedia PDF Downloads 23897 Pathway to Sustainable Shipping: Electric Ships
Authors: Wei Wang, Yannick Liu, Lu Zhen, H. Wang
Abstract:
Maritime transport plays an important role in global economic development but inevitably faces increasing pressures from all sides, such as ship operating cost reduction and environmental protection. An ideal innovation to address these pressures is the electric ship, a technology that is still at an early stage. Given the special characteristics of electric ships, i.e., their limited travel range, the service network must be re-designed carefully to guarantee efficient operation. This research designs a cost-efficient and environmentally friendly service network for electric ships, covering the location of charging stations, the charging plan, route planning, ship scheduling, and ship deployment. The problem is formulated as a mixed-integer linear programming model with the objective of minimizing total cost, comprising the charging cost, the construction cost of charging stations, and the fixed cost of ships. A case study using data from the shipping network along the Yangtze River is conducted to evaluate the performance of the model. Two operating scenarios are used: an electric ship scenario in which all transportation tasks are fulfilled by electric ships, and a conventional ship scenario in which all tasks are fulfilled by fuel oil ships. Results reveal that the total cost of using electric ships is only 42.8% of that of conventional ships. Using electric ships can reduce SOx by 80%, NOx by 93.47%, PM by 89.47%, and CO2 by 42.62%, but requires 2.78% more time to fulfill all transportation tasks. Extensive sensitivity analyses are also conducted for key operating factors, including battery capacity, charging speed, volume capacity, and the service time limit of transportation tasks. Implications from the results are as follows: 1) it is necessary to equip ships with large-capacity batteries when the number of charging stations is low; 2) battery capacity influences the number of ships deployed on each route; 3) increasing battery capacity makes electric ships more cost-effective; 4) charging speed does not affect the charging amount or the location of charging stations but does influence the schedule of ships on each route; 5) there exists an optimal volume capacity at which all costs and total delivery time are lowest; 6) the service time limit influences ship schedules and ship cost.Keywords: cost reduction, electric ship, environmental protection, sustainable shipping
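The abstract does not give the formulation. As a hedged sketch of the flavor of such a model, the toy PuLP program below jointly chooses charging-station locations and ship deployments under a battery-range restriction; all data, variable names, and the simplified range rule are illustrative assumptions, not the paper's MILP.

```python
# Hedged PuLP sketch of a station-location plus ship-deployment MILP: open
# charging stations and deploy ships so every route is served within battery
# range. All data, names, and constraints are illustrative assumptions.
import pulp

ports = ["A", "B", "C"]
routes = {"A-B": ("A", "B", 120.0), "B-C": ("B", "C", 90.0)}  # (from, to, km)
battery_range = 100.0          # km per full charge (assumed)
station_cost = 500.0           # construction cost per charging station
ship_cost = 300.0              # fixed cost per deployed ship

prob = pulp.LpProblem("electric_ship_network", pulp.LpMinimize)
open_station = pulp.LpVariable.dicts("open", ports, cat="Binary")
ships = pulp.LpVariable.dicts("ships", routes, lowBound=0, cat="Integer")

prob += (station_cost * pulp.lpSum(open_station[p] for p in ports)
         + ship_cost * pulp.lpSum(ships[r] for r in routes))

for r, (o, d, dist) in routes.items():
    prob += ships[r] >= 1                      # each route must be served
    if dist > battery_range:
        # leg exceeds range: both endpoints must host charging stations
        prob += open_station[o] + open_station[d] >= 2
    else:
        prob += open_station[o] + open_station[d] >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print({p: open_station[p].value() for p in ports})
```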
Procedia PDF Downloads 7896 Logistics and Supply Chain Management Using Smart Contracts on Blockchain
Authors: Armen Grigoryan, Milena Arakelyan
Abstract:
The idea of smart logistics is still quite a complicated one. It can be used to market products to a large number of customers or to acquire raw materials of the highest quality at the lowest cost in geographically dispersed areas. The use of smart contracts in logistics and supply chain management has the potential to revolutionize the way goods are tracked, transported, and managed. Smart contracts are simply computer programs, written in one of the blockchain programming languages (Solidity, Rust, Vyper), that are capable of self-execution once predetermined conditions are met. They can be used to automate and streamline many of the traditionally manual processes in logistics and supply chain management, including the tracking and movement of goods, inventory management, and the facilitation of payments and settlements between different parties in the supply chain. Logistics, which is concerned with transporting products between parties, is currently a core area for companies. Still, the sector's problem is that its scale may lead to delays and defaults in the delivery of goods, as well as other issues. Moreover, large distributors require a large number of workers to meet all the needs of their stores. All this may contribute to significant delays in order processing and increases the likelihood of losing orders. In an attempt to solve this problem, companies have automated their procedures, contributing to significant growth in the number of businesses and distributors in the logistics sector. Hence, blockchain technology and smart-contract-based legal agreements appear to be suitable concepts for redesigning and optimizing collaborative business processes and supply chains. The main purpose of this paper is to examine the scope of blockchain technology and smart contracts in the field of logistics and supply chain management. This study addresses the research question of how, and to what extent, smart contracts and blockchain technology can facilitate and improve the implementation of collaborative business structures for sustainable entrepreneurial activities in smart supply chains. The intention is to provide a comprehensive overview of the existing research on the use of smart contracts in logistics and supply chain management and to identify gaps or limitations in the current knowledge on this topic. This review aims to summarize and evaluate the key findings and themes that emerge from the research, as well as to suggest potential directions for future research.Keywords: smart contracts, smart logistics, smart supply chain management, blockchain and smart contracts in logistics, smart contracts for controlling supply chain management
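On-chain, such logic would be written in Solidity, Vyper, or Rust; the Python sketch below only illustrates, in plain code, the self-executing escrow state machine that a shipment-tracking smart contract typically encodes. All names and rules are illustrative assumptions.

```python
# Python sketch of the state machine a logistics smart contract might encode
# (on-chain this would be Solidity/Vyper/Rust; this only shows the logic).
# All names, states, and rules are illustrative assumptions.
class ShipmentContract:
    STATES = ("CREATED", "IN_TRANSIT", "DELIVERED", "SETTLED")

    def __init__(self, buyer, carrier, payment):
        self.buyer, self.carrier, self.payment = buyer, carrier, payment
        self.escrow = 0.0
        self.state = "CREATED"

    def fund(self, sender, amount):
        # Buyer locks the payment in escrow; the contract settles it later.
        assert sender == self.buyer and amount >= self.payment
        self.escrow = amount
        self.state = "IN_TRANSIT"

    def confirm_delivery(self, sender):
        # Predetermined condition met: release the escrow to the carrier.
        assert sender == self.buyer and self.state == "IN_TRANSIT"
        self.state = "DELIVERED"
        self._settle()

    def _settle(self):
        payout, self.escrow = self.escrow, 0.0
        self.state = "SETTLED"
        print(f"released {payout} to {self.carrier}")

c = ShipmentContract(buyer="retailer", carrier="shipper", payment=100.0)
c.fund("retailer", 100.0)
c.confirm_delivery("retailer")
```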
Procedia PDF Downloads 9695 A Geo DataBase to Investigate the Maximum Distance Error in Quality of Life Studies
Authors: Paolino Di Felice
Abstract:
The background and significance of this study come from papers already in the literature that measured the impact of public services (e.g., hospitals, schools, ...) on citizens' needs satisfaction (one of the dimensions of QOL studies) by calculating the distance between the place where citizens live and the location of the services on the territory. Those studies assume that a citizen's dwelling coincides with the centroid of the polygon expressing the boundary of the administrative district of the city they belong to. Such an assumption "introduces a maximum measurement error equal to the greatest distance between the centroid and the border of the administrative district." The case study reported in this abstract investigates the implications of adopting such an approach at geographical scales greater than the urban one, namely at the three levels of nesting of the Italian administrative units: the (20) regions, the (110) provinces, and the 8,094 municipalities. Carrying out this study requires deciding: a) how to store the huge amount of (spatial and descriptive) input data, and b) how to process it. The latter aspect involves: b.1) designing algorithms to investigate the geometry of the boundaries of the Italian administrative units; b.2) coding them in a programming language; b.3) executing them; and, eventually, b.4) archiving the results on permanent storage. The IT solution we implemented is centered around a (PostgreSQL/PostGIS) geo database structured in terms of three tables that fit the hierarchy of nesting of the Italian administrative units: municipality(id, name, provinceId, istatCode, regionId, geometry); province(id, name, regionId, geometry); region(id, name, geometry). The adoption of DBMS technology allows us to implement steps "a)" and "b)" easily. In particular, step "b)" is simplified dramatically by calling spatial operators and spatial built-in user-defined functions within SQL queries against the geo database. The major findings from our experiments can be summarized as follows. The approximation that, on average, results from assimilating the residence of citizens to the centroid of the administrative unit of reference is of a few kilometers (4.9) at the municipality level, while it becomes conspicuous at the other two levels (28.9 and 36.1 kilometers, respectively). Therefore, studies such as those mentioned above can be extended up to the municipal level without affecting the correctness of the interpretation of the results, but not further. The IT framework implemented to carry out the experiments can be replicated for studies referring to the territory of other countries all over the world.Keywords: quality of life, distance measurement error, Italian administrative units, spatial database
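Given the schema above, the per-unit maximum error can be computed directly in the database. The hedged Python/psycopg2 sketch below queries the municipality table using the PostGIS functions ST_Centroid, ST_Boundary, and ST_MaxDistance; it assumes geometries are stored in a projected CRS with metric units, and the connection details are placeholders.

```python
# Hedged sketch of step "b)" against the schema in the abstract: compute each
# municipality's worst-case error as the greatest centroid-to-border distance.
# Assumes geometries are stored in a projected CRS with metric units.
import psycopg2

conn = psycopg2.connect(dbname="italy_admin")   # connection details assumed
cur = conn.cursor()
cur.execute("""
    SELECT name,
           ST_MaxDistance(ST_Centroid(geometry), ST_Boundary(geometry)) / 1000.0
             AS max_error_km
    FROM municipality
    ORDER BY max_error_km DESC;
""")
for name, max_error_km in cur.fetchmany(5):     # five worst municipalities
    print(f"{name}: {max_error_km:.1f} km")
cur.close()
conn.close()
```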
Procedia PDF Downloads 37194 Mural Exhibition as a Promotive Strategy to Proper Hygiene and Sanitation Practices among Children: A Case Study from Urban Slum Schools in Nairobi, Kenya
Authors: Abdulaziz Kikanga, Kellen Muchira, Styvers Kathuni, Paul Saitoti
Abstract:
Background: Provision of adequate levels of water, sanitation, and hygiene in schools is a strategic objective in achieving universal primary education among children in low- and middle-income countries. However, the lack of proper sanitation and hygiene practices in schools, especially those in informal settlements, has resulted in an increased rate of school absenteeism, thereby affecting the education and health outcomes of children in those settings. Intervention or Response: Catholic Relief Services in Kenya supports five schools in informal settlements of Nairobi by painting key hygiene messages on school walls to promote proper hygiene and sanitation practices among the school children. The mural exhibitions depict the essence of proper hygiene practices, proper latrine use, and hand washing after visiting the latrine. The artwork is context-specific and is aimed at improving the uptake of proper hygiene and sanitation practices among the school children. A review of project-related documents was conducted, along with interviews with the school children. Thematic analysis was used to interpret the qualitative information generated. Results and Lessons Learnt: Twelve school children were interviewed on proper hygiene and sanitation practices, and the exercise revealed that painted murals were the best communication platform for creating awareness of water, sanitation, and hygiene issues in schools. The painted murals provided a strong knowledge base for the formation of healthy habits in both the school and the informal settlement. In addition, these sanitation messages on the school walls empower the children to share these practices with their siblings, parents, and other family members, thereby acting as agents of change for proper hygiene and sanitation in those informal settlements. The findings revealed that, with the adoption of proper sanitation and hygiene practices, school absenteeism has fallen owing to a decrease in diseases related to inadequate sanitation and hygiene in schools. Conclusion: The adoption of proper sanitation in schools entails more than just a painted mural wall. Insights revealed that, for a lasting sanitation and hygiene intervention, there is a need to invest in effective hygiene educational programming that encourages the formation of proper hygiene habits and promotes changes in behavior.Keywords: education outcomes, informal settlement, mural exhibition, school hygiene and sanitation
Procedia PDF Downloads 254