Search results for: quantum programming
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1461

171 Family Planning and HIV Integration: A One-stop Shop Model at Spilhaus Clinic, Harare Zimbabwe

Authors: Mercy Marimirofa, Farai Machinga, Alfred Zvoushe, Tsitsidzaishe Musvosvi

Abstract:

The Government of Zimbabwe embarked on integrating family planning (FP) with Sexually Transmitted Infection (STI) and Human Immunodeficiency Virus (HIV) services in May 2020 with support from the World Health Organization (WHO). There were high HIV prevalence and incidence rates and STI infections among women attending FP clinics. Spilhaus is a specialized center-of-excellence clinic which offers a range of sexual reproductive health (SRH) services. HIV services were limited to testing only, and clients were referred to other facilities for further management. Integration of services requires that all the services be available at one point so that clients can access them during their visit to the facility. Objectives: The study was conducted to assess the impact the one-stop-shop model has made on access to integrated family planning and sexual reproductive health services compared to the supermarket approach. It also assessed the relationship family planning services have with other sexual reproductive health services. Methods: A secondary data analysis was conducted at Spilhaus clinic in Harare using family planning registers and HIV services registers, comparing the years 2019 and 2021. A two-sample t-test was used to determine the difference in clients accessing the services under the two models. A Spearman’s rank correlation was used to determine whether accessing family planning services has a relationship with other sexual reproductive health services. Results: In 2019, 7,548 clients visited the Spilhaus clinic compared to 8,265 during the period January to December 2021. The median age of all clients accessing services was 32 years. An increase of 69% in the number of services accessed was recorded from 2019 to 2021, with more services accessed in 2021. There was no difference in the number of clients accessing family planning, cervical cancer, and HIV services. A difference was found in the number of clients who were offered STI screening services. There was also a relationship between accessing family planning services and STI screening services (ρ = 0.729, p-value = 0.006). Conclusion: Programming towards SRH services was a great achievement; the use of an integrated approach proved to be cost-effective as it minimised the resources required for separate programs. Clients could address several health needs in a single visit. The integration of these services provided an opportunity to offer comprehensive information which addressed an individual’s sexual reproductive health needs.
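As an illustration of the statistical comparison described above, a two-sample t-test and a Spearman rank correlation can each be run in a few lines. The sketch below is a generic Python example with made-up monthly client counts; it is not the authors' data or analysis scripts.

```python
# Illustrative sketch only: comparing client volumes under the two service models
# and checking the rank correlation between FP visits and STI screenings.
# All numbers are hypothetical.
from scipy import stats

# hypothetical monthly client counts under each model
supermarket_2019 = [610, 595, 640, 620, 605, 630, 615, 625, 600, 635, 618, 605]
one_stop_2021 = [680, 700, 690, 710, 685, 695, 705, 688, 692, 700, 710, 710]

t_stat, p_value = stats.ttest_ind(supermarket_2019, one_stop_2021)
print(f"two-sample t-test: t = {t_stat:.2f}, p = {p_value:.4f}")

# hypothetical monthly counts of family planning visits and STI screenings
fp_visits = [520, 540, 510, 560, 530, 550, 545, 525, 535, 555, 548, 538]
sti_screens = [130, 140, 125, 150, 138, 145, 142, 132, 136, 148, 144, 139]

rho, p_rho = stats.spearmanr(fp_visits, sti_screens)
print(f"Spearman rank correlation: rho = {rho:.3f}, p = {p_rho:.4f}")
```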

Keywords: integration, one-stop shop, family planning, reproductive health

Procedia PDF Downloads 35
170 Unveiling the Detailed Turn Off-On Mechanism of Carbon Dots to Different Sized MnO₂ Nanosensor for Selective Detection of Glutathione

Authors: Neeraj Neeraj, Soumen Basu, Banibrata Maity

Abstract:

Glutathione (GSH) is one of the most important biomolecules of small molecular weight, which helps in various cellular functions like gene regulation, xenobiotic metabolism, preservation of intracellular redox activities, signal transduction, etc. Therefore, the detection of GSH requires huge attention using extremely selective and sensitive techniques. Herein, a rapid fluorometric nanosensor is designed by combining carbon dots (Cdots) and MnO₂ nanoparticles of different sizes for the detection of GSH. A bottom-up approach, i.e., the microwave method, was used for the preparation of the water-soluble and highly fluorescent Cdots using ascorbic acid as a precursor. MnO₂ nanospheres of different sizes (large, medium, and small) were prepared by varying the ratio of the concentrations of methionine and KMnO₄ at room temperature, which was confirmed by HRTEM analysis. The successive addition of MnO₂ nanospheres to Cdots results in fluorescence quenching. From the fluorescence intensity data, Stern-Volmer quenching constant values (K_SV) were evaluated. From the fluorescence intensity and lifetime analysis, it was found that the degree of fluorescence quenching of Cdots followed the order: large > medium > small. Moreover, fluorescence recovery studies were also performed in the presence of GSH. These restoration studies show that the turn-on follows the same order, i.e., large > medium > small, which was also confirmed by quantum yield and lifetime studies. The limits of detection (LOD) of GSH in the presence of Cdots@different-sized MnO₂ nanospheres were also evaluated. It was observed that LOD values were in the μM region and lowest in the case of large MnO₂ nanospheres. The separation distance (d) between Cdots and the surface of the different MnO₂ nanospheres was determined. The d values increase with increasing size of the MnO₂ nanospheres. In summary, the synthesized Cdots@MnO₂ nanocomposites acted as a rapid, simple, economical as well as environmentally friendly nanosensor for the detection of GSH.
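For reference, the quenching constant K_SV mentioned above is defined by the standard Stern-Volmer relation (a textbook identity, not a result specific to this work), where F₀ and F are the fluorescence intensities without and with quencher at concentration [Q]:

```latex
\frac{F_0}{F} = 1 + K_{SV}\,[Q]
```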

Keywords: carbon dots, fluorescence, glutathione, MnO₂ nanospheres, turn off-on

Procedia PDF Downloads 122
169 Photophysical Study of Pyrene Butyric Acid in Aqueous Ionic Liquid

Authors: Pratap K. Chhotaray, Jitendriya Swain, Ashok Mishra, Ramesh L. Gardas

Abstract:

Ionic liquids (ILs) are molten salts, consisting predominantly of ions, that are liquid below 100°C. The unparalleled growing interest in ILs is based upon their never-ending design flexibility. The use of ILs as a co-solvent in binary as well as ternary mixtures with molecular solvents multiplies their utility. Since polarity is one of the most widely applied solvent concepts, representing a simple and straightforward means of characterizing and ranking solvent media, its study for binary mixtures containing ILs is crucial for their widespread application and development. The primary approach to the assessment of solution-phase intermolecular interactions, which generally occur on the picosecond to nanosecond time scales, is to exploit the optical response of a photophysical probe. Pyrene butyric acid (PBA) is used as the fluorescence probe due to its high quantum yield, long lifetime and the high solvent-polarity dependence of its fluorescence spectra. Propylammonium formate (PAF) is the IL used for this study. Both the UV-absorbance spectra and the steady-state fluorescence intensity of PBA in different concentrations of aqueous PAF reveal that with an increase in PAF concentration, both the absorbance and fluorescence intensity increase, which indicates the progressive solubilisation of PBA. At around 50% IL concentration, all of the PBA molecules are solubilised, as there are no further changes in the absorbance and fluorescence intensity. Furthermore, the ratio II/IV, where band II corresponds to the transition from S1 (ν = 0) to S0 (ν = 0) and band IV corresponds to the transition from S1 (ν = 0) to S0 (ν = 2) of PBA, indicates that the addition of water to PAF increases the polarity of the medium. The time-domain lifetime study shows an increase in the lifetime of PBA towards higher concentrations of PAF. It can be attributed to the decrease in the non-radiative rate constant at higher PAF concentration, where the viscosity is higher. The monoexponential decay suggests homogeneity of the solvation environment, whereas the uneven full width at half maximum (FWHM) indicates that there might exist some heterogeneity around the fluorophores even in the water-IL mixed solvents.
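The link drawn above between lifetime, viscosity and the non-radiative rate constant follows from the standard photophysical relations for the excited-state lifetime τ and fluorescence quantum yield Φ_f in terms of the radiative and non-radiative rate constants k_r and k_nr (textbook relations, not results of this study):

```latex
\tau = \frac{1}{k_r + k_{nr}}, \qquad \Phi_f = \frac{k_r}{k_r + k_{nr}}
```

A drop in k_nr at higher viscosity therefore lengthens τ, consistent with the trend reported above.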

Keywords: fluorescence, ionic liquid, lifetime, polarity, pyrene butyric acid

Procedia PDF Downloads 430
168 A Web-Based Systems Immunology Toolkit Allowing the Visualization and Comparative Analysis of Publicly Available Collective Data to Decipher Immune Regulation in Early Life

Authors: Mahbuba Rahman, Sabri Boughorbel, Scott Presnell, Charlie Quinn, Darawan Rinchai, Damien Chaussabel, Nico Marr

Abstract:

Collections of large-scale datasets made available in public repositories can be used to identify and fill gaps in biomedical knowledge. But first, these data need to be made readily accessible to researchers for analysis and interpretation. Here, a collection of transcriptome datasets was made available to investigate the functional programming of human hematopoietic cells in early life. Thirty-two datasets were retrieved from the NCBI Gene Expression Omnibus (GEO) and loaded in a custom, interactive web application called the Gene Expression Browser (GXB), designed for visualization and query of integrated large-scale data. Multiple sample groupings and gene rank lists were created based on the study design and variables in each dataset. Web links to customized graphical views can be generated by users and subsequently used to graphically present data in manuscripts for publication. The GXB tool also enables browsing of a single gene across datasets, which can provide information on the role of a given molecule across biological systems. The dataset collection is available online. As a proof of principle, one of the datasets (GSE25087) was re-analyzed to identify genes that are differentially expressed by regulatory T cells in early life. Re-analysis of this dataset and a cross-study comparison using multiple other datasets in the above-mentioned collection revealed that PMCH, a gene encoding a precursor of melanin-concentrating hormone (MCH), a cyclic neuropeptide, is highly expressed in a variety of other hematopoietic cell types, including neonatal erythroid cells as well as plasmacytoid dendritic cells upon viral infection. Our findings suggest an as yet unrecognized role of MCH in immune regulation, thereby highlighting the unique potential of the curated dataset collection and systems biology approach to generate new hypotheses which can be tested in future mechanistic studies.
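A re-analysis of the kind mentioned above boils down to a per-gene two-group comparison. The sketch below is a minimal, generic Python illustration of that idea, not the GXB pipeline; the file name, column naming convention, and group labels are hypothetical.

```python
# Minimal sketch: per-gene two-group comparison to flag differentially expressed
# genes (e.g. PMCH). Expression matrix layout and column names are assumptions.
import pandas as pd
from scipy import stats

expr = pd.read_csv("gse25087_expression.csv", index_col=0)  # genes x samples (hypothetical file)
treg_cols = [c for c in expr.columns if c.startswith("Treg")]      # assumed column naming
control_cols = [c for c in expr.columns if c.startswith("Tconv")]  # assumed column naming

results = []
for gene, row in expr.iterrows():
    t, p = stats.ttest_ind(row[treg_cols], row[control_cols])
    results.append((gene, row[treg_cols].mean() - row[control_cols].mean(), p))

ranked = pd.DataFrame(results, columns=["gene", "mean_diff", "p_value"]).sort_values("p_value")
print(ranked.head(20))  # top candidate differentially expressed genes
```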

Keywords: early-life, GEO datasets, PMCH, interactive query, systems biology

Procedia PDF Downloads 269
167 Improving the Constructability of Highway Design Plans

Authors: R. Edward Minchin Jr.

Abstract:

The U.S. Federal Highway Administration (FHWA) Every Day Counts (EDC) program has resulted in state DOTs putting ever more emphasis on speeding up the delivery of highway and bridge construction projects for use by the driving public. This has resulted in an increase in the use of alternative construction delivery systems such as design-build (D-B), construction manager at-risk (CMR) or construction manager/general contractor (CM/GC), and the addition of alternative technical concepts (ATCs) to traditional design-bid-build (DBB) contracts. ATCs have exhibited great potential for delivering substantial benefits like cost savings, increased constructability, and quicker project delivery. Previous research has found that knowledge of project constructability was lacking in state Department of Transportation (DOT) planning, programming, and environmental staffs. Many agencies have therefore relied on a set of ‘acceptable’ design solutions over the years of working with their local resource agencies. The result is that the permitting process for several government agencies has become increasingly restrictive, with the result that the DOTs and their industry partners lose the ability to innovate after a permit is approved. The intent of this paper is to report on the research team’s progress in this ongoing effort to furnish the United States government with a uniform set of guidelines for the application of constructability reviews during all phases of project development and delivery. The research uses surveys and interviews to determine which states have implemented formal programs to ensure that the constructor is furnished with a set of contract documents that affords said constructor the best possible opportunity to successfully construct the project to the highest quality standards, within the contract duration and without exceeding the construction budget. Once these states are identified, workshops are held all over the nation, resulting in the team learning the best current practices and giving the team the ability to recommend new practices that will improve the process. The plan is for the FHWA to encourage or require state DOTs to use these practices on all federally funded highway and bridge construction projects. The project deliverable is a Guidebook for FHWA to use in disseminating the recommended practices to the states.

Keywords: alternative construction delivery, alternative technical concepts, constructability, construction design plans

Procedia PDF Downloads 178
164 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)-Graphics Processing Unit (GPU) Heterogeneous Computing

Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou

Abstract:

The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, embedded system-on-a-chip (SoC) devices contain coarse-granularity multi-core CPUs (central processing units) and mobile GPUs (graphics processing units) that can be used as general-purpose accelerators. The motivation is that algorithms with various parallel characteristics can be efficiently mapped to a heterogeneous architecture coupling these three processors. The CPU and GPU offload some of the computationally intensive tasks from the FPGA to reduce resource consumption and lower the overall cost of the system. However, in common present scenarios, applications utilize only one type of accelerator because the development approaches supporting collaboration of the heterogeneous processors face challenges. Therefore, a systematic approach is needed that takes advantage of write-once-run-anywhere portability and the high execution performance of modules mapped to various architectures, and that facilitates the exploration of the design space. In this paper, a servant-execution-flow model is proposed for the abstraction of the cooperation of the heterogeneous processors, which supports task partition, communication and synchronization. At its first run, the intermediate language represented by the data flow diagram can generate the executable code of the target processor or can be converted into high-level programming languages. The instantiation parameters efficiently control the relationship between the modules and computational units, including the mapping of two hierarchical processing units and the adjustment of data-level parallelism. An embedded system of a three-dimensional waveform oscilloscope is selected as a case study. The performance of algorithms such as contrast stretching, etc., is analyzed with implementations on various combinations of these processors. The experimental results show that the heterogeneous computing system achieves, with less than 35% of the resources, performance similar to the pure FPGA implementation and approximately the same energy efficiency.
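The servant-execution-flow model itself is not detailed in this abstract; purely as a toy illustration of the underlying idea (partitioning dataflow tasks across heterogeneous processing units while respecting data dependencies), a scheduler might look like the following Python sketch. The device assignments and task names are hypothetical.

```python
# Toy illustration only -- not the paper's servant-execution-flow model. It shows
# mapping dataflow nodes to heterogeneous units (FPGA / CPU / GPU) and executing
# them in dependency order.
from collections import deque

tasks = {
    "acquire": {"device": "FPGA", "deps": []},
    "stretch": {"device": "GPU",  "deps": ["acquire"]},   # e.g. contrast stretching
    "measure": {"device": "CPU",  "deps": ["stretch"]},
    "display": {"device": "CPU",  "deps": ["measure"]},
}

def run(task_name):
    device = tasks[task_name]["device"]
    print(f"executing {task_name} on {device}")  # stand-in for a real offload call

# simple topological execution of the dataflow graph
done, queue = set(), deque(tasks)
while queue:
    name = queue.popleft()
    if all(d in done for d in tasks[name]["deps"]):
        run(name)
        done.add(name)
    else:
        queue.append(name)
```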

Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation

Procedia PDF Downloads 84
165 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm

Authors: Zachary Huffman, Joana Rocha

Abstract:

Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The sound emitted can be represented by the pressure fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those from Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for the pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol’yakov, and Rackl and Weston models) were derived by making modifications to these early models or from physical principles. Overall, these models have had varying levels of accuracy, but, in general, they are most accurate under the specific Reynolds and Mach numbers they were developed for, while being less accurate under other flow conditions. Despite this, recent research into the possibility of using alternative methods for deriving the models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R, and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning. The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
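For readers unfamiliar with the technique, forward stepwise selection can be sketched in a few lines. The example below is a generic Python illustration of the idea (the authors' implementation is in R and is not reproduced here); the dataset file and the candidate boundary-layer predictors are hypothetical.

```python
# Minimal sketch of forward stepwise feature selection for a PSD regression model.
# Predictor names and the CSV file are assumptions, not the authors' variables.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

data = pd.read_csv("tbl_psd_data.csv")          # hypothetical wind-tunnel dataset
candidates = ["u_tau", "delta", "dpdx", "Re_theta", "mach"]  # hypothetical predictors
y = data["log_psd"]

selected, remaining = [], list(candidates)
best_score = -np.inf
while remaining:
    scores = {f: cross_val_score(LinearRegression(), data[selected + [f]], y, cv=5).mean()
              for f in remaining}
    f_best, s_best = max(scores.items(), key=lambda kv: kv[1])
    if s_best <= best_score:        # stop when no candidate improves the fit
        break
    selected.append(f_best)
    remaining.remove(f_best)
    best_score = s_best

print("selected predictors:", selected)
```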

Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations

Procedia PDF Downloads 115
164 Prediction of the Dark Matter Distribution and Fraction in Individual Galaxies Based Solely on Their Rotation Curves

Authors: Ramzi Suleiman

Abstract:

Recently, the author proposed an observationally based relativity theory termed information relativity theory (IRT). The theory is simple and is based only on basic principles, with no prior axioms and no free parameters. For the case of a body of mass in uniform rectilinear motion relative to an observer, the theory transformations uncovered a matter-dark matter duality, which prescribes that the sum of the densities of the body's baryonic matter and dark matter, as measured by the observer, is equal to the body's matter density at rest. It was shown that the theory transformations were successful in predicting several important phenomena in small-particle physics, quantum physics, and cosmology. This paper extends the theory transformations to the cases of rotating disks and spheres. The resulting transformations for a rotating disk are utilized to derive predictions of the radial distributions of matter and dark matter densities in rotationally supported galaxies based solely on their observed rotation curves. It is also shown that for galaxies with flattening curves, good approximations of the radial distributions of matter and dark matter and of the dark matter fraction could be obtained from one measurable scale radius. A test of the model on five galaxies, chosen randomly from the SPARC database, yielded impressive predictions. The rotation curves of all the investigated galaxies emerged as accurate traces of the predicted radial density distributions of their dark matter. This striking result suggests an intriguing physical explanation of gravity in galaxies, according to which it is the proximal drag of the stars and gas in the galaxy by its rotating dark matter web. We conclude by alluding briefly to the application of the proposed model to stellar systems and black holes. This study also hints at the potential of the discovered matter-dark matter duality in fixing the standard model of elementary particles in a natural manner without the need for hypothesizing about supersymmetric particles.
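In symbols (notation ours, simply restating the duality described above), with ρ_b and ρ_d the baryonic and dark matter densities measured by the observer and ρ_0 the body's matter density at rest:

```latex
\rho_b + \rho_d = \rho_0
```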

Keywords: dark matter, galaxies rotation curves, SPARC, rotating disk

Procedia PDF Downloads 50
163 The Effect of Green Power Trading Mechanism on Interregional Power Generation and Transmission in China

Authors: Yan-Shen Yang, Bai-Chen Xie

Abstract:

Background and significance of the study: Both green power trading schemes and interregional power transmission are effective ways to increase green power absorption and achieve renewable power development goals. China is accelerating the construction of interregional power transmission lines and the green power market. A critical issue arises concerning the close interaction between these two approaches, which can heavily affect green power quota allocation and renewable power development. Existing studies have not discussed this issue adequately, so it is urgent to figure out their relationship in order to achieve a suitable power market design and a more reasonable power grid construction. Basic methodologies: We develop an equilibrium model of the power market in China to analyze the coupling effect of these two approaches as well as their influence on power generation and interregional transmission in China. Our model considers both the tradable green certificate (TGC) and green power markets, and consists of producers, consumers in the market, and an independent system operator (ISO) minimizing the total system cost. Our equilibrium model includes the decision optimization process of each participant. To reformulate the models presented as a single-level one, we replace the producer, consumer, ISO, and market equilibrium problems with their Karush-Kuhn-Tucker (KKT) conditions, which are further reformulated as a mixed-integer linear program (MILP) and solved with the Gurobi solver. Major findings: The results show that: (1) the green power market can significantly promote renewable power absorption, while the TGC market provides a more flexible way for green power trading. (2) The phenomena of inefficient occupation and of unavailable transmission lines appear simultaneously: the existing interregional transmission lines cannot fully meet the demand for wind and solar PV power trading in some areas, while the situation is the reverse in other areas. (3) Synchronous implementation of the green power and TGC trading mechanisms can benefit the development of green power as well as interregional power transmission. (4) Green power transactions exacerbate the unfair distribution of carbon emissions. The carbon Gini coefficient is up to 0.323 under the green power market, which indicates high carbon inequality. The eastern coastal region will benefit the most due to its huge demand for external power.
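To make the solution step concrete, the sketch below shows a toy MILP of the same general shape (minimize system cost subject to a power balance and a transmission-related constraint) built and solved with the Gurobi Python API. It is emphatically not the authors' equilibrium model; all variables, coefficients, and constraint names are hypothetical.

```python
# Toy MILP sketch only -- not the paper's KKT-reformulated equilibrium model.
# It illustrates the general form: minimize total cost subject to demand balance
# and a capacity-expansion decision. All numbers are made up.
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("toy_dispatch")
g_green = m.addVar(lb=0, ub=120, name="green_generation")     # renewable output cap
g_thermal = m.addVar(lb=0, ub=300, name="thermal_generation")
flow = m.addVar(lb=0, ub=80, name="interregional_flow")       # existing line capacity
build_line = m.addVar(vtype=GRB.BINARY, name="build_extra_line")

demand_east = 150
m.addConstr(flow <= 50 + 50 * build_line, "line_capacity")         # expansion relaxes the limit
m.addConstr(g_green + g_thermal == demand_east + 60, "power_balance")  # total system demand
m.setObjective(20 * g_green + 45 * g_thermal + 500 * build_line, GRB.MINIMIZE)

m.optimize()
if m.status == GRB.OPTIMAL:
    for v in m.getVars():
        print(v.VarName, v.X)
```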

Keywords: green power market, tradable green certificate, interregional power transmission, power market equilibrium model

Procedia PDF Downloads 103
162 Correlation Studies in Nutritional Intake, Health Status and Clinical Examination of Young Adult Girls

Authors: Sonal Tuljaram Kame

Abstract:

Growth and development are based on a proper diet. A balanced diet contains all the nutrients in the required quantum. Although physical growth is completed by young adulthood, the body tissues remain in a dynamic state, with catabolism slightly exceeding anabolism, resulting in a net decrease in the number of cells. After the years of adolescence, which cause upheavals in the life of a person, the individual struggles to emerge as an adult who knows who he or she is and what his or her goals are. During this period, nutrients are needed for maintaining health, and energy is required for physical functions and physical activities. The nutritional requirement in young adulthood differs from other periods of life. Iron is needed for haemoglobin synthesis, necessitated by the considerable expansion of blood volume. Young adult girls need to ensure adequate intake of iron as they lose 0.5 mg/day by way of menstruation. While there is complete awareness about nutrition and health on one side, on the other side there is widespread ignorance about nutrition and health among young adult girls. The young adult girls who are aware of nutrition and health seem to be very conscious about nutritional intake and health. Figure consciousness and fear of obesity lead to self-imposed restriction of nutrient intake, which may result in various health problems. The study was planned to investigate nutrient intake and to find the relation between nutritional intake, clinical examination score and health status of young adult girls. The present study is based on data collected from 120 young adult girls studying in four different competitive-exam coaching academies in Akola city of Maharashtra. It was found that the nutritional intake of these young adult girls was below the recommended level, that nutritional knowledge level and nutritional intake are associated attributes, and that calorie, calcium and protein intakes are positively correlated with clinical examination and health status. It was concluded that well-planned nutritional counseling for young adult girls can help prevent nutritional deficiency diseases and disorders which may lead to anaemic conditions in young adult girls. Girls need to be educated on the intake of iron and vitamin B12.

Keywords: nutritional intake, health status, young adult girls, correlation studies

Procedia PDF Downloads 344
161 Inverterless Grid Compatible Micro Turbine Generator

Authors: S. Ozeri, D. Shmilovitz

Abstract:

Micro-turbine generators (MTGs) are small-size power plants that consist of a high-speed gas turbine driving an electrical generator. MTGs may be fueled by either natural gas or kerosene and may also use sustainable and recycled green fuels such as biomass, landfill or digester gas. Typical ratings of MTGs range from 20 kW up to 200 kW. The primary use of MTGs is as backup for sensitive load sites such as hospitals, and they are also considered a feasible power source for distributed generation (DG), providing on-site generation in proximity to remote loads. MTGs have the compressor, the turbine, and the electrical generator mounted on a single shaft. For this reason, the electrical energy is generated at high frequency and is incompatible with the power grid. Therefore, MTGs must also contain a power conditioning unit to generate an AC voltage at the grid frequency. Presently, this power conditioning unit consists of a rectifier followed by a DC/AC inverter, both rated at the full MTG power. The losses of the power conditioning unit account for some 3-5%. Moreover, the full-power processing stage is a bulky and costly piece of equipment that also lowers the overall system reliability. In this study, we propose a new type of power conditioning stage in which only a small fraction of the power is processed. A low-power converter is used only to program the rotor current (i.e. the excitation current, which is substantially lower). Thus, the MTG's output voltage is shaped to the desired amplitude and frequency by proper programming of the excitation current. The control is realized by causing the rotor current to track the electrical frequency (which is related to the shaft frequency) with a difference that is exactly equal to the line frequency. Since the phasor of the rotation speed and the phasor of the rotor magnetic field are multiplied, the spectrum of the MTG generator voltage contains the sum and the difference components. The desired difference component is at the line frequency (50/60 Hz), whereas the unwanted sum component is at about twice the electrical frequency of the stator. The unwanted high-frequency component can be filtered out by a low-pass filter, leaving only the low-frequency output. This approach allows elimination of the large power conditioning unit incorporated in conventional MTGs. Instead, a much smaller and cheaper fractional-power stage can be used. The proposed technology is also applicable to other high-rotation generator sets such as aircraft power units.
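The sum and difference components mentioned above follow from the standard product-to-sum trigonometric identity (notation ours): with the shaft electrical frequency ω_e and the excitation frequency ω_x,

```latex
\cos(\omega_e t)\,\cos(\omega_x t) \;=\; \tfrac{1}{2}\Big[\cos\big((\omega_e-\omega_x)t\big) + \cos\big((\omega_e+\omega_x)t\big)\Big]
```

so when the excitation is offset from the electrical frequency by exactly the line frequency, the difference term lands at 50/60 Hz while the sum term sits near twice the stator electrical frequency, where the low-pass filter removes it.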

Keywords: gas turbine, inverter, power multiplier, distributed generation

Procedia PDF Downloads 212
160 Pathway to Sustainable Shipping: Electric Ships

Authors: Wei Wang, Yannick Liu, Lu Zhen, H. Wang

Abstract:

Maritime transport plays an important role in global economic development but also inevitably faces increasing pressures from all sides, such as ship operating cost reduction and environmental protection. An ideal innovation to address these pressures is the electric ship. Electric ships are still at an early stage. Considering their special characteristics, i.e., a limited travel range, the service network needs to be re-designed carefully to guarantee the efficient operation of electric ships. This research designs a cost-efficient and environmentally friendly service network for electric ships, including the location of charging stations, the charging plan, route planning, ship scheduling, and ship deployment. The problem is formulated as a mixed-integer linear programming model with the objective of minimizing the total cost comprised of the charging cost, the construction cost of charging stations, and the fixed cost of ships. A case study using data from the shipping network along the Yangtze River is conducted to evaluate the performance of the model. Two operating scenarios are used: an electric ship scenario where all the transportation tasks are fulfilled by electric ships and a conventional ship scenario where all the transportation tasks are fulfilled by fuel oil ships. Results unveil that the total cost of using electric ships is only 42.8% of that of using conventional ships. Using electric ships can reduce SOx by 80%, NOx by 93.47%, PM by 89.47%, and CO2 by 42.62%, but will consume 2.78% more time to fulfill all the transportation tasks. Extensive sensitivity analyses are also conducted for key operating factors, including battery capacity, charging speed, volume capacity, and the service time limit of transportation tasks. Implications from the results are as follows: 1) it is necessary to equip the ship with a large-capacity battery when the number of charging stations is low; 2) battery capacity will influence the number of ships deployed on each route; 3) increasing battery capacity will make the electric ship more cost-effective; 4) charging speed does not affect the charging amount and location of charging stations, but will influence the schedule of ships on each route; 5) there exists an optimal volume capacity, at which all costs and total delivery time are lowest; 6) the service time limit will influence ship schedules and ship cost.
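In generic notation (ours, not the authors'), the three-part objective described above can be written as follows, where e_{vt} is the energy charged by ship v in period t at unit cost c_{vt}, y_s indicates whether charging station s is built at cost f_s, and z_v indicates whether ship v is deployed at fixed cost F_v:

```latex
\min \;\; \sum_{v}\sum_{t} c_{vt}\, e_{vt} \;+\; \sum_{s} f_s\, y_s \;+\; \sum_{v} F_v\, z_v
```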

Keywords: cost reduction, electric ship, environmental protection, sustainable shipping

Procedia PDF Downloads 57
159 Logistics and Supply Chain Management Using Smart Contracts on Blockchain

Authors: Armen Grigoryan, Milena Arakelyan

Abstract:

The idea of smart logistics is still quite a complicated one. It can be used to market products to a large number of customers or to acquire raw materials of the highest quality at the lowest cost in geographically dispersed areas. The use of smart contracts in logistics and supply chain management has the potential to revolutionize the way that goods are tracked, transported, and managed. Smart contracts are simply computer programs written in one of the blockchain programming languages (Solidity, Rust, Vyper), which are capable of self-execution once predetermined conditions are met. They can be used to automate and streamline many of the traditional manual processes that are currently used in logistics and supply chain management, including the tracking and movement of goods, the management of inventory, and the facilitation of payments and settlements between different parties in the supply chain. Currently, logistics is a core area for companies, concerned with transporting products between parties. Still, the problem in this sector is that its scale may lead to delays and defaults in the delivery of goods, as well as other issues. Moreover, large distributors require a large number of workers to meet all the needs of their stores. All this may contribute to significant delays in order processing and increase the likelihood of losing orders. In an attempt to address this problem, companies have automated their procedures, contributing to a significant increase in the number of businesses and distributors in the logistics sector. Hence, blockchain technology and smart-contract-based legal agreements seem to be suitable concepts for redesigning and optimizing collaborative business processes and supply chains. The main purpose of this paper is to examine the scope of blockchain technology and smart contracts in the field of logistics and supply chain management. This study discusses the research question of how, and to what extent, smart contracts and blockchain technology can facilitate and improve the implementation of collaborative business structures for sustainable entrepreneurial activities in smart supply chains. The intention is to provide a comprehensive overview of the existing research on the use of smart contracts in logistics and supply chain management and to identify any gaps or limitations in the current knowledge on this topic. This review aims to provide a summary and evaluation of the key findings and themes that emerge from the research, as well as to suggest potential directions for future research on the use of smart contracts in logistics and supply chain management.
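To ground the "self-execution once the predetermined conditions are met" idea, the sketch below shows the kind of conditional escrow-and-tracking logic a shipment smart contract encodes. It is written in Python purely as a language-neutral illustration (not on-chain Solidity/Rust/Vyper code), and all names and amounts are hypothetical.

```python
# Language-neutral sketch of smart-contract-style logic: payment is released
# automatically once delivery is confirmed. Not deployable blockchain code.
from dataclasses import dataclass

@dataclass
class ShipmentContract:
    buyer: str
    seller: str
    price: float
    escrow_balance: float = 0.0
    status: str = "created"   # created -> funded -> in_transit -> delivered -> settled

    def fund(self, amount: float) -> None:
        assert amount >= self.price, "escrow must cover the agreed price"
        self.escrow_balance = amount
        self.status = "funded"

    def record_event(self, event: str) -> None:
        # in a real deployment this would come from an oracle or IoT-signed transaction
        if event == "picked_up" and self.status == "funded":
            self.status = "in_transit"
        elif event == "delivered" and self.status == "in_transit":
            self.status = "delivered"
            self._settle()        # predetermined condition met -> self-execute

    def _settle(self) -> None:
        print(f"releasing {self.price} to {self.seller}")
        self.escrow_balance -= self.price
        self.status = "settled"

contract = ShipmentContract(buyer="distributor", seller="manufacturer", price=1000.0)
contract.fund(1000.0)
contract.record_event("picked_up")
contract.record_event("delivered")
```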

Keywords: smart contracts, smart logistics, smart supply chain management, blockchain and smart contracts in logistics, smart contracts for controlling supply chain management

Procedia PDF Downloads 70
158 A Geo DataBase to Investigate the Maximum Distance Error in Quality of Life Studies

Authors: Paolino Di Felice

Abstract:

The background and significance of this study come from papers that have already appeared in the literature, which measured the impact of public services (e.g., hospitals, schools, ...) on citizens’ needs satisfaction (one of the dimensions of QOL studies) by calculating the distance between the place where the citizens live and the location of the services on the territory. Those studies assume that a citizen's dwelling coincides with the centroid of the polygon that expresses the boundary of the administrative district, within the city, they belong to. Such an assumption “introduces a maximum measurement error equal to the greatest distance between the centroid and the border of the administrative district”. The case study this abstract reports on investigates the implications of adopting such an approach at geographical scales greater than the urban one, namely at the three levels of nesting of the Italian administrative units: the (20) regions, the (110) provinces, and the 8,094 municipalities. To carry out this study, it had to be decided: a) how to store the huge amount of (spatial and descriptive) input data and b) how to process them. The latter aspect involves: b.1) the design of algorithms to investigate the geometry of the boundary of the Italian administrative units; b.2) their coding in a programming language; b.3) their execution and, eventually, b.4) archiving the results in permanent storage. The IT solution we implemented is centered around a (PostgreSQL/PostGIS) Geo DataBase structured in terms of three tables that fit the hierarchy of nesting of the Italian administrative units: municipality(id, name, provinceId, istatCode, regionId, geometry); province(id, name, regionId, geometry); region(id, name, geometry). The adoption of DBMS technology allows us to implement steps "a)" and "b)" easily. In particular, step "b)" is simplified dramatically by calling spatial operators and spatial built-in User Defined Functions within SQL queries against the Geo DB. The major findings coming from our experiments can be summarized as follows. The approximation that, on average, descends from assimilating the residence of the citizens with the centroid of the administrative unit of reference is of a few kilometers (4.9) at the municipality level, while it becomes conspicuous at the other two levels (28.9 and 36.1 kilometers, respectively). Therefore, studies such as those mentioned above can be extended up to the municipal level without affecting the correctness of the interpretation of the results, but not further. The IT framework implemented to carry out the experiments can be replicated for studies referring to the territory of other countries all over the world.
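The kind of spatial query described in step b) can be expressed in one SQL statement using PostGIS built-ins, here issued from Python. The sketch below uses the municipality table schema given above; the connection parameters are placeholders, and the distances are assumed to be in the units of the stored (projected) coordinate reference system.

```python
# Sketch: for each municipality, the maximum distance between its centroid and
# its boundary -- the "maximum measurement error" discussed above. Connection
# details are placeholders; a projected CRS in metres is assumed.
import psycopg2

conn = psycopg2.connect(dbname="italy_geodb", user="postgres", password="***")
cur = conn.cursor()

cur.execute("""
    SELECT name,
           ST_MaxDistance(ST_Centroid(geometry), ST_Boundary(geometry)) AS max_err
    FROM municipality
    ORDER BY max_err DESC;
""")

for name, max_err in cur.fetchall():
    print(f"{name}: maximum centroid-to-border distance = {max_err:.0f}")

cur.close()
conn.close()
```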

Keywords: quality of life, distance measurement error, Italian administrative units, spatial database

Procedia PDF Downloads 348
157 Mural Exhibition as a Promotive Strategy to Proper Hygiene and Sanitation Practices among Children: A Case Study from Urban Slum Schools in Nairobi, Kenya

Authors: Abdulaziz Kikanga, Kellen Muchira, Styvers Kathuni, Paul Saitoti

Abstract:

Background: Provision of adequate levels of water, sanitation, and hygiene in schools is a strategic objective in achieving universal primary education among children in low- and middle-income countries. However, the lack of proper sanitation and hygiene practices in schools, especially those in informal settlements, has resulted in an increased rate of school absenteeism, thereby affecting the education and health outcomes of the children in those settings. Intervention or Response: Catholic Relief Services in Kenya supports five schools in informal settlements of Nairobi by painting key hygiene messages on school walls to promote proper hygiene and sanitation practices among the school children. The mural exhibitions depict the essence of proper hygiene practices, proper latrine use, and hand washing after visiting the latrine. The artwork is context-specific and is aimed at improving the uptake of proper hygiene and sanitation practices among the school children. A review of project-related documents was conducted, including interviews with the school children. Thematic analysis was used to interpret the qualitative information generated. Results and Lessons Learnt: Twelve school children were interviewed on proper hygiene and sanitation practices, and the exercise revealed that painted murals were the best communication platforms for creating awareness of proper sanitation on issues relating to water, sanitation, and hygiene in schools. The painted murals provided a strong knowledge base for the formation of healthy habits in both the school and the informal settlement. In addition, these sanitation messages on the school walls empower the children to share these practices with their siblings, parents, and other family members, thereby acting as agents of change for proper hygiene and sanitation in those informal settlements. The findings revealed that by adopting proper sanitation and hygiene practices, there has been a reduction in school absenteeism due to a decrease in diseases related to inadequate sanitation and hygiene in schools. Conclusion: The adoption of proper sanitation in schools entails more than just a painted mural wall. Insights revealed that to have a lasting sanitation and hygiene intervention, there is a need to invest in effective hygiene education programming that encourages the formation of proper hygiene habits and promotes changes in behavior.

Keywords: education outcomes, informal settlement, mural exhibition, school hygiene and sanitation

Procedia PDF Downloads 218
156 Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour

Authors: Libor Zachoval, Daire O Broin, Oisin Cawley

Abstract:

E-learning platforms, such as Blackboard, have two major shortcomings: limited data capture as a result of the limitations of SCORM (Shareable Content Object Reference Model), and a lack of incorporation of artificial intelligence (AI) and machine learning algorithms which could lead to better course adaptations. With the recent development of the Experience Application Programming Interface (xAPI), a large amount of additional types of data can be captured, and that opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions in the learning and development of their employees, some learner behaviours can be troublesome, for they can hinder the knowledge development of a learner. Behaviours that hinder knowledge development also raise ambiguity about a learner’s knowledge mastery, specifically those related to gaming the system. Furthermore, a company receives little benefit from its investment if employees are passing courses without possessing the required knowledge, and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course. The identified behaviours are: trying each option for a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing as opposed to going over the learning material. These behaviours were detected in learners who repeated the test at least 4 times before passing the course. These findings suggest that gauging the mastery of a learner from multiple-choice test scores alone is a naive approach. Thus, next steps will consider the incorporation of additional data points, knowledge estimation models to model the knowledge mastery of a learner more accurately, and analysis of the data for correlations between knowledge development and the identified learner behaviours. Additional work could explore how learner behaviours could be utilised to make changes to a course. For example, course content may require modifications (certain sections of learning material may be shown not to be helpful to many learners in mastering the targeted learning outcomes) or course design (such as the type and duration of feedback).
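As an illustration of the rule-based detection described above, the sketch below counts failed test attempts per learner from a stream of xAPI statements and flags the "repeat until pass" behaviour. The statement list, the choice of ADL verb IDs, and the threshold are illustrative assumptions, not the authors' rule set.

```python
# Illustrative sketch: flagging learners who repeat a test many times before
# passing, from xAPI statements. Verb IDs and threshold are assumptions.
from collections import defaultdict

PASSED = "http://adlnet.gov/expapi/verbs/passed"
FAILED = "http://adlnet.gov/expapi/verbs/failed"

def flag_repeat_guessers(statements, threshold=4):
    attempts = defaultdict(int)
    flagged = set()
    for s in statements:                      # statements assumed ordered by timestamp
        learner = s["actor"]["mbox"]
        verb = s["verb"]["id"]
        if verb == FAILED:
            attempts[learner] += 1
        elif verb == PASSED:
            if attempts[learner] + 1 >= threshold:
                flagged.add(learner)
            attempts[learner] = 0             # reset after the course is passed
    return flagged

# minimal usage example with hypothetical statements
sample = [
    {"actor": {"mbox": "mailto:a@corp.com"}, "verb": {"id": FAILED}},
    {"actor": {"mbox": "mailto:a@corp.com"}, "verb": {"id": PASSED}},
]
print(flag_repeat_guessers(sample))
```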

Keywords: artificial intelligence, corporate e-learning environment, knowledge maintenance, xAPI

Procedia PDF Downloads 91
155 Comparison of Shell-Facemask Responses in American Football Helmets during NOCSAE Drop Tests

Authors: G. Alston Rush, Gus A. Rush III, M. F. Horstemeyer

Abstract:

This study compares the shell-facemask responses of four commonly used American football helmets under the National Operating Committee on Standards for Athletic Equipment (NOCSAE) drop impact test method, to show that the test standard would more accurately simulate in-use conditions if modified to include the facemask. In our study, the need for a more rigorous, systematic approach to football helmet testing procedures is emphasized by comparing the Head Injury Criterion (HIC), the Gadd Severity Index (SI), and peak acceleration values for different helmets at different locations on the helmet under modified NOCSAE standard drop tower tests. Drop tests were performed on the Rawlings Quantum Plus, Riddell 360, Schutt Ion 4D, and Xenith X2 helmets at eight impact locations, impact velocities of 5.46 and 4.88 meters per second, and helmet configurations with and without facemasks. Analysis of the NOCSAE drop test results reveals significant differences (p < 0.05) when the facemasks were attached to helmets, as compared to the NOCSAE standard configuration without facemasks. The boundary conditions of the facemask attachment can produce up to a 50% decrease (p < 0.001) in helmet performance with respect to peak acceleration. While, generally, all helmets with the facemasks gave greater HIC, SI, and acceleration values than helmets without the facemasks, significant helmet-dependent variations were observed across impact locations and impact velocities. The variations between helmet responses could be attributed to the unique design features of each helmet tested, which include different liners, chin strap attachments, and faceguard attachment systems. In summary, these comparative drop test results revealed that the current NOCSAE standard test methods need improvement by attaching the facemasks to helmets during testing. The modified NOCSAE football helmet standard test gives a more accurate representation of a helmet’s performance and its ability to mitigate on-field impacts.
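For reference, the two injury metrics compared above have standard definitions in terms of the resultant head acceleration a(t), expressed in g, over the impact duration (textbook definitions, not specific to this study):

```latex
\mathrm{SI} = \int_{0}^{T} a(t)^{2.5}\, dt,
\qquad
\mathrm{HIC} = \max_{t_1,\,t_2}\left\{ (t_2 - t_1)\left[\frac{1}{t_2 - t_1}\int_{t_1}^{t_2} a(t)\, dt\right]^{2.5} \right\}
```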

Keywords: football helmet testing, gadd severity index, head injury criterion, mild traumatic brain injury

Procedia PDF Downloads 430
154 Urban Spatial Metamorphoses: The Case of Kazan City Using GIS Technologies

Authors: Irna Malganova

Abstract:

The paper assessed the effectiveness of urban functional zoning using the method of M.A. Kreimer, taking the example of Kazan city (Republic of Tatarstan, Russian Federation) and using geoinformation technologies. On the basis of the data obtained, calculations were carried out to obtain data on population density, overcoming geographic determinism, as well as the effectiveness of the formation of urban frameworks. The authors proposed recommendations for the effectiveness of municipal frameworks in the period from 2018 to 2021: economic, social and environmental. The study of effective territorial planning in a given period allows displaying the dynamics of planning changes, as well as assessing changes in the formation of urban frameworks. Based on the data obtained from the master plan of the municipal formation of Kazan, in the period from 2018 to 2021 there was an increase in population of 13,841 people, or 1.1% of the 2018 value. In addition, the area of Kazan increased by 2,419.6 hectares. In the structure of the distribution of the areas of functional zones, there was an increase in such zones of the municipality as residential and public-purpose zones. Changes in functional zoning, as well as territories requiring reorganization, are presented using geoinformation technologies in the open-source software Quantum Geographic Information System (QGIS 3.32). According to the calculations based on the method of functional zoning efficiency by M.A. Kreimer, the territorial-planning structure of Kazan city is quite effective. However, in the development of spatial planning concepts, it is possible to note the weak interest of the population in the development of territorial planning documents. Thus, the approach to spatial planning in Kazan differs from foreign methods and approaches, which are based on the joint development of planning directions and of the territories of municipalities by the developers of the planning structure, business representatives and the population. The population plays the role of the target audience towards which territorial planning is oriented. It follows that there is a need to take the opinions and demands of the population into account.

Keywords: spatial development, metamorphosis, Kazan city, spatial planning, efficiency, geographic determinism, GIS, QGIS

Procedia PDF Downloads 49
153 Specification and Unification of All Fundamental Forces Existing in the Universe from a Theoretical Perspective – The Universal Mechanics

Authors: Surendra Mund

Abstract:

At the beginning, the physical entity force was defined mathematically by Sir Isaac Newton in his Principia Mathematica as F = dp/dt, in the form of his second law of motion. Newton also defined his universal law of gravitation in the same outstanding book, but at the end of the 20th century and the beginning of the 21st century, we have tried a lot to specify and unify the four or five fundamental forces or interactions existing in the universe, and we have failed every time. Usually, gravity creates problems in this unification every single time, but in my previous papers and presentations, I defined and derived field and force equations for gravitation-like interactions for each and every kind of central system. This force is named Variational Force by me, and this force is generated by variation in the scalar field density around the body. In this particular paper, I first specify which types of interactions are fundamental in the universal sense (or in all types of central systems or bodies predicted by my N-time Inflationary Model of the Universe) and then unify them in the universal framework (defined and derived by me as Universal Mechanics in a separate paper) as well. This will also be valid in the universal dynamical sense, which includes inflations and deflations of the universe, central system relativity, universal relativity, ϕ-ψ transformation and transformation of spin, the physical perception principle, the Generalized Fundamental Dynamical Law, and many other important generalized principles of Generalized Quantum Mechanics (GQM) and Central System Theory (CST). So, in this article, I first generalize some fundamental principles, then unify Variational Forces (the general form of gravitation-like interactions) and Flow Generated Forces (the general form of EM-like interactions), and then unify all fundamental forces by specifying the weak and strong interactions in terms of more basic concepts - variational, flow generated and transformational interactions.

Keywords: Central System Force, Disturbance Force, Flow Generated Forces, Generalized Nuclear Force, Generalized Weak Interactions, Generalized EM-Like Interactions, Imbalance Force, Spin Generated Forces, Transformation Generated Force, Unified Force, Universal Mechanics, Uniform And Non-Uniform Variational Interactions, Variational Interactions

Procedia PDF Downloads 27
152 Development of a Multi-User Country Specific Food Composition Table for Malawi

Authors: Averalda van Graan, Joelaine Chetty, Malory Links, Agness Mwangwela, Sitilitha Masangwi, Dalitso Chimwala, Shiban Ghosh, Elizabeth Marino-Costello

Abstract:

Food composition data is becoming increasingly important as dealing with food insecurity and malnutrition in its persistent form of under-nutrition is now coupled with increasing over-nutrition and its related ailments in the developing world, of which Malawi is not spared. In the absence of a food composition database (FCDB) inherent to our dietary patterns, efforts were made to develop a country-specific FCDB for nutrition practice, research, and programming. The main objective was to develop a multi-user, country-specific food composition database and table from existing published and unpublished scientific literature. A multi-phased approach guided by the project framework was employed. Phase 1 comprised a scoping mission to assess the nutrition landscape for compilation activities. Phase 2 involved training of a compiler and data collection from various sources, primarily institutional libraries, online databases, and food industry nutrient data. Phase 3 covered evaluation and compilation of data using FAO and INFOODS standards and guidelines. Phase 4 concluded the process with quality assurance. 316 Malawian food items, categorized into eight food groups, were captured for 42 components. The majority were from the baby food group (27%), followed by the staple (22%) and animal (22%) food groups. Fats and oils made up the smallest number of food items (2%), followed by fruits (6%). Proximate values are well represented; however, the percentage of missing data is huge for some components, including Se (68%), I (75%), vitamin A (42%), and the lipid profile: saturated fat (53%), monounsaturated fat (59%), polyunsaturated fat (59%) and cholesterol (56%). A multi-phased approach following the project framework led to the development of the first Malawian FCDB and table. The table reflects inherent Malawian dietary patterns and nutritional concerns. The FCDB can be used by various professionals in nutrition and health. Rising over-nutrition, NCDs, and changing diets challenge us to provide nutrient profiles of processed foods and complete lipid profiles.

Keywords: analytical data, dietary pattern, food composition data, multi-phased approach

Procedia PDF Downloads 64
151 Mathematics as the Foundation for the STEM Disciplines: Different Pedagogical Strategies Addressed

Authors: Marion G. Ben-Jacob, David Wang

Abstract:

There is a mathematics requirement for entry-level college and university students, especially those who plan to study STEM (Science, Technology, Engineering and Mathematics). Most of them take College Algebra, and to continue their studies, they need to succeed in this course. Different pedagogical strategies are employed to promote the success of our students. There is, of course, the traditional method of teaching: lecture, examples, and problems for students to solve. The Emporium Model, another pedagogical approach, replaces traditional lectures with a learning resource center model featuring interactive software and on-demand personalized assistance. This presentation will compare these two methods of pedagogy and present the study, with its results, on this comparison. Math is the foundation for science, technology, and engineering. Its methods are generally used in STEM to find patterns in data. These patterns can be used to test relationships, draw general conclusions about data, and model the real world. In STEM, solutions to problems are analyzed, reasoned about, and interpreted using math abilities in an assortment of real-world scenarios. This presentation will examine specific examples of how math is used in the different STEM disciplines. Math becomes practical in science when it is used to model natural and artificial experiments to identify a problem and develop a solution for it. As we analyze data, we are using math to find the statistical correlation between a cause and an effect. Scientists who use math include data scientists, biologists and geologists. Without math, most technology would not be possible. Math is the basis of binary, and without programming, you just have the hardware. Addition, subtraction, multiplication, and division are also used in almost every program written. Mathematical algorithms are inherent in software as well. Mechanical engineers analyze scientific data to design robots by applying math and using software. Electrical engineers use math to help design and test electrical equipment. They also use math when creating computer simulations and designing new products. Chemical engineers often use mathematics in the lab. Advanced computer software is used to aid their research and production processes, to model theoretical synthesis techniques and the properties of chemical compounds. Mathematics mastery is crucial for success in the STEM disciplines. Pedagogical research on formative strategies and the necessary topics to be covered is essential.

Keywords: emporium model, mathematics, pedagogy, STEM

Procedia PDF Downloads 51
150 KPI and Tool for the Evaluation of Competency in Warehouse Management for Furniture Business

Authors: Kritchakhris Na-Wattanaprasert

Abstract:

The objective of this research is to design and develop a prototype of a key performance indicator (KPI) system that is suitable for warehouse management, based on a case study and user requirements. In this study, we design a prototype KPI system for the warehouse of a furniture business case study, following these steps: identifying the scope of the research and studying related papers; gathering the necessary data and user requirements; developing key performance indicators based on the balanced scorecard; designing the program and database for the key performance indicators; coding the program and setting up the database relationships; and finally testing and debugging each module. This study uses the Balanced Scorecard (BSC) for selecting and grouping key performance indicators. The system database was created using Microsoft SQL Server 2010. In regard to the visual programming language, Microsoft Visual C# 2010 was chosen as the graphical user interface development tool. The system consists of six main menus: menu login, menu main data, menu financial perspective, menu customer perspective, menu internal process perspective, and menu learning and growth perspective. Each menu consists of key performance indicator forms. Each form contains a data import section, a data input section, a data search-and-edit section, and a report section. The system generates outputs in five main reports: the KPI detail report, KPI summary report, KPI graph report, benchmarking summary report and benchmarking graph report. The user selects the report conditions and time period. As the system has been developed and tested, it was found to be one way of judging the extent to which warehouse objectives have been achieved. Moreover, it encourages the warehouse functions to proceed with more efficiency. In order to be useful for other industries, this system can be adjusted appropriately. To increase the usefulness of the key performance indicator system, the recommendations for further development are as follows: - The warehouse should review the target values and set better, more suitable targets periodically as the situation fluctuates in the future. - The warehouse should review the key performance indicators and set better, more suitable key performance indicators periodically as the situation fluctuates in the future, in order to increase competitiveness and take advantage of new opportunities.
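As a small illustration of the core comparison the reports perform (actual KPI value against its target, grouped by balanced scorecard perspective), a sketch is given below. It is written in Python rather than the case study's C#/SQL Server stack, and the KPI names, numbers, and flag logic are hypothetical.

```python
# Illustrative sketch only: judging whether a warehouse KPI meets its target,
# grouped by balanced scorecard perspective. Values are made up.
kpis = [
    {"perspective": "internal process", "name": "order picking accuracy",
     "target": 0.98, "actual": 0.96},
    {"perspective": "financial", "name": "inventory carrying cost",
     "target": 120000, "actual": 115000, "lower_is_better": True},
]

for kpi in kpis:
    if kpi.get("lower_is_better"):
        achieved = kpi["actual"] <= kpi["target"]
    else:
        achieved = kpi["actual"] >= kpi["target"]
    status = "achieved" if achieved else "below target"
    print(f'[{kpi["perspective"]}] {kpi["name"]}: {kpi["actual"]} vs target {kpi["target"]} -> {status}')
```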

Keywords: key performance indicator, warehouse management, warehouse operation, logistics management

Procedia PDF Downloads 413
149 Fraud in the Higher Educational Institutions in Assam, India: Issues and Challenges

Authors: Kalidas Sarma

Abstract:

Fraud is a social problem that changes with social change, and it has both a regional and a global impact. The introduction of the private domain in higher education alongside public institutions has led to the commercialization of higher education, which encourages the unprecedented mushrooming of private institutions and results in fraudulent activities in the higher educational institutions of Assam, India. Presently, fraud has been noticed in in-service promotion and in fake entry qualifications, with teachers at different levels of the workplace using fake master's, Master of Philosophy, and Doctor of Philosophy degree certificates. The aim and objective of the study are to identify grey areas in the maintenance of quality in higher educational institutions in Assam and to draw the contours for planning and implementation. The study is based on both primary and secondary data, collected through a questionnaire and by seeking information through the Right to Information Act 2005. In Assam, there are 301 undergraduate and graduate colleges distributed across 27 (twenty-seven) administrative districts with 11,000 (eleven thousand) college teachers. A total of 421 (four hundred twenty-one) college teachers from the 14 respondent colleges have been taken for analysis. The collected data have been analyzed using the PHP (Hypertext Preprocessor) scripting language with a MySQL database and the Google Maps Application Programming Interface (API). Graphs have been generated using the open-source tool Chart.js, and spatial distribution maps have been generated with the help of the geo-references of the colleges. The results show that: (i) violation of the University Grants Commission's (UGC's) regulations for the award of M.Phil./Ph.D. degrees is clearly exhibited; (ii) there is a gap between the apex regulatory bodies of higher education at the national as well as the state level in checking fraud; (iii) mala fide 'No Objection Certificates' (NOCs) issued by the Government of Assam have played a pivotal role in the occurrence of fraudulent practices in the higher educational institutions of Assam; and (iv) violation of the verdict of the Hon'ble Supreme Court of India regarding the territorial jurisdiction of universities for the award of Ph.D. and M.Phil. degrees in distance mode/study centres is also a responsible factor in the spread of these academic frauds in Assam and other states. The challenges and the mitigation of these issues have been discussed.
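
The reporting pipeline outlined above (questionnaire records aggregated into district-wise counts that feed charts and maps) can be illustrated with a short Python snippet; the records below are hypothetical placeholders, and the authors' actual implementation uses PHP, MySQL, Chart.js, and the Google Maps API.

```python
from collections import Counter

# Hypothetical questionnaire records: (district, reported irregularity)
responses = [
    ("Kamrup", "fake M.Phil. certificate"),
    ("Kamrup", "fake Ph.D. certificate"),
    ("Nagaon", "fake Ph.D. certificate"),
    ("Barpeta", "in-service promotion irregularity"),
]

# District-wise tally, analogous to the database aggregation that feeds the charts
by_district = Counter(district for district, _ in responses)
for district, count in sorted(by_district.items()):
    print(f"{district}: {count} reported case(s)")
```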

Keywords: Assam, fraud, higher education, mitigation

Procedia PDF Downloads 136
148 A Multi-Criteria Decision Making Approach for Disassembly-To-Order Systems under Uncertainty

Authors: Ammar Y. Alqahtani

Abstract:

In order to minimize the negative impact on the environment, it is essential to properly manage the waste generated from the premature disposal of end-of-life (EOL) products. Consequently, governments and international organizations have introduced new policies and regulations to minimize the amount of waste being sent to landfills. Moreover, consumers' awareness regarding the environment has forced original equipment manufacturers to consider being more environmentally conscious. Therefore, manufacturers have devised different ways to deal with the waste generated from EOL products, viz., remanufacturing, reusing, recycling, or disposing of them. By remanufacturing, reusing, or recycling EOL products, manufacturers can reduce the rate of depletion of virgin natural resources and their dependency on them, and also cut the amount of harmful waste sent to landfills; disposal contributes to the problem and is therefore used only as a last option. The number of EOL products to acquire needs to be estimated in order to fulfill the component demand, and a disassembly process then needs to be performed to extract individual components and subassemblies. Smart products are built with embedded sensors and network connectivity that enable the collection and exchange of data; the sensors are implanted into the products during production. These sensors allow remanufacturers to predict the optimal warranty policy and time period that should be offered to customers who purchase remanufactured components and products. Sensor-provided data can also help to evaluate the overall condition of a product, as well as the remaining lives of its components, prior to performing the disassembly process. In this paper, a multi-period disassembly-to-order (DTO) model is developed that takes the different system uncertainties into consideration. The DTO model is solved using Nonlinear Programming (NLP) over multiple periods. A DTO system is considered in which a variety of EOL products are purchased for disassembly. The model's main objective is to determine the best combination of EOL products to be purchased from every supplier in each period so as to maximize the total profit of the system while satisfying the demand. This paper also addresses the impact of sensor-embedded products on the cost of warranties. Lastly, it presents and analyzes a case study involving various simulation conditions to illustrate the applicability of the model.
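
As a toy, single-period illustration of the purchasing decision described above (a simplification of the authors' multi-period nonlinear model; supplier yields, prices, availabilities, and demands are all hypothetical), the profit-maximizing purchase plan can be sketched as a small linear program.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 2 suppliers of EOL products, 3 recovered component types.
# yields_[s, c] = expected usable units of component c per EOL unit from supplier s
yields_ = np.array([[2.0, 1.0, 0.5],
                    [1.0, 2.0, 1.0]])
purchase_cost = np.array([8.0, 10.0])        # cost per EOL unit from each supplier
component_price = np.array([4.0, 5.0, 6.0])  # revenue per recovered component
demand = np.array([120.0, 150.0, 60.0])      # component demand in the period
availability = [100, 100]                    # EOL units available at each supplier

# Net profit per EOL unit = revenue from its component yield - purchase cost
profit_per_unit = yields_ @ component_price - purchase_cost

# linprog minimizes, so negate the profit; demand constraint: yields_.T @ x >= demand
res = linprog(c=-profit_per_unit,
              A_ub=-yields_.T, b_ub=-demand,
              bounds=[(0, a) for a in availability])

print("EOL units to purchase from each supplier:", res.x.round(1))
print("Total profit:", round(-res.fun, 2))
```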

Keywords: closed-loop supply chains, environmentally conscious manufacturing, product recovery, reverse logistics

Procedia PDF Downloads 115
147 Review of the Model-Based Supply Chain Management Research in the Construction Industry

Authors: Aspasia Koutsokosta, Stefanos Katsavounis

Abstract:

This paper reviews the model-based qualitative and quantitative Operations Management research in the context of Construction Supply Chain Management (CSCM). The construction industry has traditionally been blamed for low productivity, cost and time overruns, waste, high fragmentation and adversarial relationships, and it has been slower than other industries to employ the Supply Chain Management (SCM) concept and to develop models that support decision-making and planning. Over the last decade, however, there has been a distinct shift from a project-based to a supply-based approach to construction management. CSCM has emerged as a promising new management tool for construction operations that improves the performance of construction projects in terms of cost, time and quality. Modeling the Construction Supply Chain (CSC) offers the means to reap the benefits of SCM, make informed decisions and gain competitive advantage. Different modeling approaches and methodologies have been applied in the multi-disciplinary and heterogeneous research field of CSCM. The literature review reveals that a considerable percentage of CSC modeling consists of conceptual or process models which discuss general management frameworks and do not relate to acknowledged soft OR methods. We particularly focus on the model-based quantitative research and categorize the CSCM models according to their scope, mathematical formulation, structure, objectives, solution approach, software used and decision level. Although over the last few years there has clearly been an increase in research papers on quantitative CSC models, we find that the relevant literature is very fragmented, with limited applications of simulation, mathematical programming and simulation-based optimization. Most applications are project-specific or study only parts of the supply system; thus, some complex interdependencies within construction are neglected and the implementation of integrated supply chain management is hindered. We conclude this paper by giving future research directions and emphasizing the need to develop robust mathematical optimization models for the CSC. We stress that CSC modeling needs a multi-dimensional, system-wide and long-term perspective. Finally, prior applications of SCM to other industries have to be taken into account when modeling CSCs, but not without the consequential reform of generic concepts to match the unique characteristics of the construction industry.

Keywords: construction supply chain management, modeling, operations research, optimization, simulation

Procedia PDF Downloads 489
146 Degradation Study of Food Colorants by Singlet Oxygen

Authors: A. T. Toci, M. V. B. Zanoni

Abstract:

Advanced oxidation processes have been defined as destructive technologies for the treatment of wastewater. They involve the formation of powerful oxidizing agents (usually the hydroxyl radical, •OH) capable of reacting with the organic compounds present in wastewater, transforming damaging substances into CO2 and H2O (mineralization) or other innocuous products. However, photochemical degradation with singlet oxygen has been little explored as an oxidative pathway for the treatment of effluents containing food colorants. Molecular oxygen is an effective quencher of organic molecules in the triplet excited state, and one of the possible outcomes of this physical quenching is the formation of singlet oxygen. Studies with singlet oxygen (1O2) show a high reactivity of this excited state of the molecule toward olefins, aromatic hydrocarbons, and a number of other organic and inorganic compounds; its reactivity is about 2500 times greater than that of ground-state oxygen. Thus, in this work, the degradation of some dyes used in the food industry (tartrazine, sunset yellow, erythrosine and carmoisine) by singlet oxygen was studied. The sensitizer used to generate 1O2 was methylene blue, which has a singlet-oxygen generation quantum yield of 0.50. Samples were prepared in water at a concentration of 5 ppm and irradiated with a sunlight simulator (Newport, model no. 67005) for 8 consecutive hours. UV-Vis absorption spectra were recorded after each hour of irradiation, and the degradation kinetics for each dye were determined at its maximum absorption wavelength. The UV-Vis analysis revealed that the process was very efficient for the colorants sunset yellow and carmoisine: both presented zero-order degradation kinetics, with degradation constants of 0.416 and 0.104, respectively. In the case of sunset yellow, degradation reached 53% after 7 h of irradiation, demonstrating the efficiency of the process. Erythrosine presented oscillating degradation kinetics during the irradiation period, which requires further study, while tartrazine was stable in the presence of 1O2. The investigation of the degradation products formed by 1O2 is underway, using MS and NMR techniques. The results of this study will enable the application of cleaner methods for the treatment of industrial effluents, since other non-toxic, non-polluting molecules can be used to generate 1O2.
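
A zero-order fit of the kind reported for sunset yellow and carmoisine can be reproduced in a few lines of Python; the absorbance values below are hypothetical placeholders rather than the measured data.

```python
import numpy as np

# Hypothetical absorbance readings at the dye's absorption maximum, one per hour
t = np.arange(0, 8)                       # irradiation time, h
A = np.array([1.00, 0.93, 0.85, 0.79, 0.71, 0.64, 0.57, 0.50])

# Zero-order kinetics: A(t) = A0 - k*t, so k is the negated slope of a linear fit
slope, intercept = np.polyfit(t, A, 1)
k = -slope
print(f"zero-order rate constant k = {k:.3f} per hour (in absorbance units)")
print(f"degradation after 7 h = {(A[0] - A[7]) / A[0]:.0%}")
```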

Keywords: food colourants, singlet oxygen, degradation, wastewater, oxidative

Procedia PDF Downloads 374
145 Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics as well as in various areas of science and engineering. The inequalities collected by Hardy, Littlewood and Polya formed the first significant work of this kind; it presented fundamental ideas, results and techniques and has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated in terms of operators: in 1989, weighted Hardy inequalities were obtained for integration operators, and weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These results were improved upon in 2011 to include the boundedness of integral operators from a weighted Sobolev space to a weighted Lebesgue space. Several inequalities have been demonstrated and improved using the Hardy–Steklov operator, and recently many integral inequalities have been improved by means of differential operators. The Hardy inequality has been one of the tools used to study solutions of differential equations. Dynamic inequalities of Hardy and Copson type have since been extended and improved by various integral operators; such inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some inequalities involving the Copson and Hardy inequalities on time scales have appeared, yielding new special versions of them. A time scale is defined as a closed subset of the real numbers. Inequalities in their time-scale versions have received a great deal of attention and have become a major field in both pure and applied mathematics; there are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on double integrals in order to obtain new time-scale inequalities of Copson type driven by the Steklov operator; they will be applied in the solution of the Cauchy problem for the wave equation. The proofs are carried out by introducing restrictions on the operator in several cases. In addition, the inequalities are obtained by using concepts from the time-scale calculus, such as Fubini's theorem and Hölder's inequality.
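
For reference, the classical continuous Hardy inequality, which the time-scale and Steklov-operator results discussed above generalize, can be stated as follows (for p > 1 and a non-negative measurable function f on (0, ∞)):

```latex
\[
  \int_0^{\infty} \left( \frac{1}{x} \int_0^{x} f(t)\, dt \right)^{\!p} dx
  \;\le\; \left( \frac{p}{p-1} \right)^{\!p} \int_0^{\infty} f(x)^{p}\, dx ,
\]
% the constant (p/(p-1))^p being the best possible.
```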

Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator

Procedia PDF Downloads 47
144 6-Degree-Of-Freedom Spacecraft Motion Planning via Model Predictive Control and Dual Quaternions

Authors: Omer Burak Iskender, Keck Voon Ling, Vincent Dubanchet, Luca Simonini

Abstract:

This paper presents a Guidance and Control (G&C) strategy to approach and synchronize with potentially rotating targets. The proposed strategy generates and tracks a safe trajectory for space servicing missions, including tasks such as approaching, inspecting, and capturing. The main objective of this paper is to validate the G&C laws using a Hardware-In-the-Loop (HIL) setup with realistic rendezvous and docking equipment. Throughout this work, the assumption of full relative state feedback is relaxed by using onboard sensors that introduce realistic errors and delays, and the proposed closed-loop approach demonstrates robustness to these challenges. Moreover, the G&C blocks are unified via the Model Predictive Control (MPC) paradigm, and the coupling between translational and rotational motion is addressed via a dual-quaternion-based kinematic description. In this work, G&C is formulated as a convex optimization problem in which constraints such as thruster limits and output constraints are handled explicitly. Furthermore, the Monte Carlo method is used to evaluate the robustness of the proposed method to initial condition errors, uncertainty in the target's motion and attitude, and actuator errors. A capture scenario is tested on a robotic test bench whose onboard sensors estimate the position and orientation of a drifting satellite through camera imagery. Finally, the approach is compared with currently used robust H-infinity controllers and with the guidance profile provided by the industrial partner. The HIL experiments demonstrate that the proposed strategy is a potential candidate for future space servicing missions because 1) the algorithm is real-time implementable, as convex programming offers deterministic convergence properties and guarantees a finite-time solution, 2) critical physical and output constraints are respected, 3) robustness to sensor errors and uncertainties in the system is proven, and 4) it couples translational motion with rotational motion.
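
The convex MPC formulation described above can be sketched, in heavily simplified form, for double-integrator translational dynamics with a thrust limit; the dual-quaternion attitude coupling, sensor models, and the authors' actual weights and constraints are omitted, and all numbers are illustrative.

```python
import numpy as np
import cvxpy as cp

dt, N = 1.0, 20                       # step [s] and prediction horizon length
# Double-integrator relative translational dynamics: state = [position; velocity]
A = np.block([[np.eye(3), dt * np.eye(3)], [np.zeros((3, 3)), np.eye(3)]])
B = np.vstack([0.5 * dt**2 * np.eye(3), dt * np.eye(3)])

x0 = np.array([50.0, -20.0, 10.0, 0.0, 0.0, 0.0])   # initial relative state
u_max = 0.5                                          # thrust acceleration limit

x = cp.Variable((6, N + 1))
u = cp.Variable((3, N))

cost, constraints = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:, k + 1]) + 10 * cp.sum_squares(u[:, k])
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.norm(u[:, k], "inf") <= u_max]   # thruster limit

cp.Problem(cp.Minimize(cost), constraints).solve()
print("first commanded acceleration:", u.value[:, 0].round(3))
```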

Keywords: dual quaternion, model predictive control, real-time experimental test, rendezvous and docking, spacecraft autonomy, space servicing

Procedia PDF Downloads 119
143 Mental Health Impacts of COVID-19 on Diverse Youth and Families in Canada

Authors: Lucksini Raveendran

Abstract:

Introduction: This mixed-methods study focuses on the experiences of ethnocultural youth and families in Canada, identifying key barriers and opportunities to inform service programming and policies that can better meet their mental health needs during the COVID-19 pandemic and beyond. Methods: The Mental Health Commission of Canada's Headstrong initiative administered a youth survey (April – June 2020) and a family survey (June – August 2020), with total sample sizes of 137 and 481 respondents, respectively. Thematic analysis was conducted to identify the key challenges faced, the coping strategies used, and help-seeking behaviours. A similar approach was applied to the family survey data, for which a representative sample was collated to analyze geographically variable and ethnically diverse subgroups. Results and analysis: Multiple challenges have impacted families, including increased feelings of loneliness and distress from border travel restrictions, especially among those navigating pregnancy alone or managing children with developmental needs, a group that is often understudied. Marginalized groups were disproportionately affected by inequitable access to communication technologies, further deepening the digital divide. Some respondents reported living in congregate homes with regular conflicts, leading to increased anxiety and exposure to violence. For many families, urbanicity and ethnicity played a key role in how they reported coping with feelings of uncertainty while managing work commitments, navigating community resources, fulfilling care responsibilities, and homeschooling children of all ages. Despite these challenges, there was evidence of post-traumatic growth and of building community resiliency. Conclusions and implications for policy, practice, or additional research: There is a need to foster opportunities to promote and sustain mental health, wellness, and resilience for families through social connections. Intersectionality must be embedded in the collection, analysis, and application of data to improve equitable access to evidence-based and recovery-oriented mental health supports among diverse families in Canada. Lastly, future research should address the long-term impacts of COVID-19 travel border restrictions on family wellness.

Keywords: mental health, youth mental health, family wellness, health equity

Procedia PDF Downloads 64
142 Q-Efficient Solutions of Vector Optimization via Algebraic Concepts

Authors: Elham Kiyani

Abstract:

In this paper, we first introduce the concept of Q-efficient solutions in a real linear space not necessarily endowed with a topology, where Q is some nonempty (not necessarily convex) set. We also use the scalarization technique, based on the Gerstewitz function generated by a nonconvex set, to characterize these Q-efficient solutions. The algebraic concepts of interior and closure are useful for studying optimization problems without topology, and studying nonconvex vector optimization in this setting is worthwhile since, for a convex cone, the topological interior coincides with the algebraic interior. We therefore use the algebraic concepts of interior and closure to define Q-weak efficient solutions and Q-Henig proper efficient solutions of set-valued optimization problems, where Q is not a convex cone. Optimization problems with set-valued maps have a wide range of applications, so a useful analytical tool for set-valued maps is to be expected in optimization theory; such problems are closely related to stochastic programming, control theory, and economic theory. The paper focuses on nonconvex problems, and the results are obtained under generalized nonconvexity assumptions on the data of the problem. In convex problems, the main mathematical tools are convex separation theorems, alternative theorems, and algebraic counterparts of some usual topological concepts, while in nonconvex problems a nonconvex separation function is needed. Thus, we consider the Gerstewitz function generated by a general set in a real linear space and re-examine its properties in this more general setting. A useful approach for solving a vector problem is to reduce it to a scalar problem; in general, scalarization means the replacement of a vector optimization problem by a suitable scalar problem, which tends to be an optimization problem with a real-valued objective function. The Gerstewitz function is well known and widely used in optimization as the basis of scalarization. Its essential properties, which are well known in the topological framework, are studied here by using algebraic counterparts rather than the topological concepts of interior and closure. Accordingly, the properties of the Gerstewitz function when it takes values just in a real linear space are studied, and we use it to characterize Q-efficient solutions of vector problems whose image space is not endowed with any particular topology. We therefore deal with a constrained vector optimization problem in a real linear space without assuming any topology, and Q-weak efficient and Q-proper efficient solutions in the sense of Henig are defined. Moreover, by means of the Gerstewitz function, we provide some necessary and sufficient optimality conditions for set-valued vector optimization problems.
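
In its classical topological, convex-cone form, the Gerstewitz (Tammer) scalarizing function mentioned above is commonly written as follows; the paper's contribution concerns the analogous construction in which C is replaced by a general, not necessarily convex, set and the topological interior by the algebraic interior.

```latex
% C \subset Y a proper closed convex cone, e \in \operatorname{int} C:
\[
  \varphi_{C,e}(y) \;=\; \inf\{\, t \in \mathbb{R} \;:\; y \in t\,e - C \,\},
  \qquad y \in Y .
\]
```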

Keywords: algebraic interior, Gerstewitz function, vector closure, vector optimization

Procedia PDF Downloads 195