Search results for: safety performance functions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17243


8003 Pharmaceutical Equivalence of Some Injectable Gentamicin Generics Used in Veterinary Practice in Nigeria

Authors: F. A. Gberindyer, M. O. Abatan, A. B. Saba

Abstract:

Background: Gentamicin is an aminoglycoside antibiotic used in the treatment of infections caused by Gram-negative aerobic bacteria in humans and animals. In Nigeria, there is an array of multisource generic versions of injectable gentamicin sulphate on the drug market. There is a high prevalence of counterfeit and substandard drugs in third-world countries, with consequent effects on therapeutic efficacy and safety. Aim: The aim of this study was to investigate the pharmaceutical equivalence of some of these generics used in veterinary practice in Nigeria. Methodology: About 20 generics of injectable gentamicin sulphate were sampled randomly across Nigeria, of which 15 were analyzed for identity and potency. The identity test was done using Fourier transform infrared spectroscopy, and the spectrum of each product was compared with that of the USP reference standard for similarity. A microbiological assay using the agar diffusion method, with E. coli as the test organism on nutrient agar, was employed, and the respective diameters of the bacterial inhibition zones were obtained after 24-hour incubation at 37°C. The percent potency of each product was then calculated and compared with the official specification. Results and Discussion: None of the generics is produced in any African country. About 75% of the products are imported from China, whereas 60% of the veterinary generics are manufactured in Holland. Absorption spectra of the reference and test samples were similar. Percent potencies of all test products were within the official specification of 95-115%. Nigeria relies solely on imported injectable gentamicin sulphate products. All sampled generic versions passed both the identity and potency tests. Clinicians should ensure that drugs are used rationally, since irrational use could be contributing to the therapeutic failures reported for most of these generics.
A bioequivalence study is recommended to ascertain their interchangeability when parenteral extravascular routes are indicated.
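The percent-potency calculation from an agar-diffusion assay can be sketched as follows, assuming the usual linear relation between log dose and inhibition-zone diameter; all doses and zone diameters below are invented for illustration and are not the study's data.

```python
import numpy as np

def percent_potency(std_doses, std_zones, test_zone, labeled_dose):
    """Estimate percent potency from an agar-diffusion assay.

    Fits the conventional linear relation between log10(dose) and
    inhibition-zone diameter for the reference standard, then interpolates
    the test product's zone diameter to an estimated dose.
    """
    slope, intercept = np.polyfit(np.log10(std_doses), std_zones, 1)
    est_log_dose = (test_zone - intercept) / slope
    return 100.0 * (10.0 ** est_log_dose) / labeled_dose

# Hypothetical reference standard: doses (ug/ml) and zone diameters (mm)
doses = [2.0, 4.0, 8.0, 16.0]
zones = [14.0, 17.0, 20.0, 23.0]   # linear in log10(dose) by construction

# A test sample labeled 8 ug/ml producing a 20 mm zone -> full potency
potency = percent_potency(doses, zones, test_zone=20.0, labeled_dose=8.0)
print(round(potency, 1))  # -> 100.0
```

A product would pass the specification quoted above if the result falls within 95-115%.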

Keywords: generics, gentamicin, identity, multisource, potency

Procedia PDF Downloads 417
8002 The European Pharmacy Market: The Density and its Influencing Factors

Authors: Selina Schwaabe

Abstract:

Community pharmacies deliver high-quality health care and are responsible for medication safety. During the pandemic, accessibility to the nearest pharmacy became even more essential for getting vaccinated against Covid-19 and obtaining medical aid. Governments aim to ensure nationwide, reachable, and affordable medical health care services through pharmacies. Therefore, the density of community pharmacies matters. Overall, the density of community pharmacies is fluctuating, with slightly decreasing tendencies in some countries. So far, the literature has shown that changes in the system affect prices and density. However, a European overview of the development of community pharmacy density and its triggers is still missing. This research is essential to counteract decreasing density, which results in a lack of professional health care through pharmacies. The analysis focuses on liberal versus regulated market structures, mail-order prescription drug regulation, and the consequences of third-party ownership. In a panel analysis, the relative influence of these measures is examined across 27 European countries over the last 21 years. In addition, the paper examines seven countries in depth, selected for the substantial variance in their pharmacy systems: Germany, Austria, Portugal, Denmark, Sweden, Finland, and Poland. Overall, the results show that regulated pharmacy markets have 10.75 pharmacies per 100,000 inhabitants more than liberal markets. Further, allowing mail-order prescription drugs decreases density by 17.98 pharmacies per 100,000 inhabitants. Countries allowing third-party ownership have 7.67 pharmacies per 100,000 inhabitants more. The results are statistically significant at the 0.001 level. Based on this analysis, the paper recommends regulated pharmacy markets with a ban on mail-order prescription drugs and with third-party ownership allowed, to support nationwide medical health care through community pharmacies.
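A fixed-effects panel regression of the kind described can be illustrated with a small simulation: synthetic country-year data are generated with the paper's reported effect sizes, and an OLS fit with country dummies recovers them. The design and all numbers are illustrative, not the paper's actual specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_years = 27, 21

# Hypothetical binary policy indicators per country-year
regulated = rng.integers(0, 2, size=(n_countries, n_years))
mail_order = rng.integers(0, 2, size=(n_countries, n_years))
third_party = rng.integers(0, 2, size=(n_countries, n_years))

# Simulated density (pharmacies per 100,000 inhabitants) with the reported
# effect sizes plus a country fixed effect and noise
country_effect = rng.normal(30.0, 5.0, size=(n_countries, 1))
density = (country_effect + 10.75 * regulated - 17.98 * mail_order
           + 7.67 * third_party + rng.normal(0, 1.0, (n_countries, n_years)))

# Fixed-effects OLS: policy regressors plus one dummy per country
dummies = np.repeat(np.eye(n_countries), n_years, axis=0)
X = np.column_stack([regulated.ravel(), mail_order.ravel(),
                     third_party.ravel(), dummies])
coef, *_ = np.linalg.lstsq(X, density.ravel(), rcond=None)
print(np.round(coef[:3], 2))   # close to 10.75, -17.98, 7.67
```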

Keywords: community pharmacy, market conditions, pharmacy, pharmacy market, pharmacy lobby, prescription, e-prescription, ownership structures

Procedia PDF Downloads 114
8001 Comparing the Apparent Error Rate of Gender Specifying from Human Skeletal Remains by Using Classification and Cluster Methods

Authors: Jularat Chumnaul

Abstract:

In forensic science, corpses from homicides vary widely; remains may be complete or incomplete, depending on the cause of death or the form of homicide. For example, some corpses are cut into pieces, some are concealed by dumping into a river, some are buried, and some are burned to destroy the evidence. If the corpses are incomplete, personal identification becomes difficult because some tissues and bones are destroyed. To specify the gender of corpses from skeletal remains, the most precise method is DNA identification. However, this method is costly and time-consuming, so other identification techniques are used instead. The first widely used technique is examination of the features of the bones. In general, evidence from the corpses, such as pieces of bone, especially the skull and pelvis, can be used to identify gender. To use this technique, forensic scientists require observation skills in order to classify the differences between male and female bones. Although this technique is uncomplicated, saves time and cost, and allows gender to be determined fairly accurately (an apparent accuracy rate of 90% or more), its crucial disadvantage is that only certain skeletal positions can be used to specify gender, such as the supraorbital ridge, nuchal crest, temporal bone, mandible, and chin. Therefore, the skeletal remains used have to be complete. The other technique widely used for gender specification in forensic science and archeology is skeletal measurement. The advantage of this method is that it can be applied at several positions on one piece of bone, and it can be used even if the bones are not complete. In this study, classification and cluster analysis are applied to this technique, including K-th nearest neighbor classification, classification trees, Ward linkage clustering, K-means clustering, and two-step clustering.
The data contain 507 individuals and 9 skeletal (diameter) measurements, and the performance of the five methods is investigated by considering the apparent error rate (APER). The results indicate that the two-step cluster and K-th nearest neighbor methods seem suitable for specifying gender from human skeletal remains, since they yield small apparent error rates of 0.20% and 4.14%, respectively. On the other hand, the classification tree, Ward linkage cluster, and K-means cluster methods are not appropriate, since they yield large apparent error rates of 10.65%, 10.65%, and 16.37%, respectively. However, there are other ways to evaluate classification performance, such as estimating the error rate using the holdout procedure or misclassification costs, and different methods can lead to different conclusions.
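The apparent error rate is simply the resubstitution error of a fitted rule: the share of training observations the rule misclassifies. A minimal sketch with a hand-rolled K-nearest-neighbour classifier on synthetic two-group "measurements" (invented for illustration, not the study's 507-individual data set):

```python
import numpy as np

def knn_predict(X_train, y_train, X, k=3):
    """Classify each row of X by majority vote of its k nearest
    training points (Euclidean distance)."""
    preds = []
    for x in X:
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(d)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

def apparent_error_rate(y_true, y_pred):
    """APER: fraction of training observations misclassified."""
    return np.mean(y_true != y_pred)

# Hypothetical skeletal diameters (mm): two overlapping groups standing in
# for female (0) and male (1) measurement distributions
rng = np.random.default_rng(1)
f = rng.normal([42.0, 31.0], 1.5, size=(50, 2))
m = rng.normal([47.0, 35.0], 1.5, size=(50, 2))
X = np.vstack([f, m])
y = np.array([0] * 50 + [1] * 50)

# Classify the training data with the fitted rule itself
aper = apparent_error_rate(y, knn_predict(X, y, X, k=3))
print(f"APER = {aper:.2%}")
```

As the abstract notes, APER is optimistic; a holdout estimate would typically be higher.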

Keywords: skeletal measurements, classification, cluster, apparent error rate

Procedia PDF Downloads 238
8000 Concept-Based Assessment in Curriculum

Authors: Nandu C. Nair, Kamal Bijlani

Abstract:

This paper proposes a concept-based assessment to track the performance of students. The idea behind this approach is to map exam questions onto the concepts learned in the course, so that at the end of the course each student knows how well he or she has learned each concept. The system provides self-assessment for students as well as for the instructor: by analyzing the scores of all students, the instructor can decide whether particular concepts need to be taught again. The system's efficiency is demonstrated using three courses from an M.Tech program in e-learning technologies, and the results show that concept-wise assessment improved the final exam scores of the majority of students across the courses.
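The question-to-concept mapping described above can be sketched in a few lines; the question IDs, concept names, and marks are hypothetical:

```python
# Each exam question is mapped to the concept it examines; per-concept
# mastery is the fraction of available marks the student earned.
question_concept = {"Q1": "SCORM", "Q2": "SCORM", "Q3": "LMS", "Q4": "Authoring"}
max_marks = {"Q1": 5, "Q2": 5, "Q3": 10, "Q4": 10}
student_marks = {"Q1": 4, "Q2": 5, "Q3": 3, "Q4": 9}

def concept_scores(mapping, maxm, got):
    earned, total = {}, {}
    for q, concept in mapping.items():
        earned[concept] = earned.get(concept, 0) + got[q]
        total[concept] = total.get(concept, 0) + maxm[q]
    return {c: earned[c] / total[c] for c in earned}

scores = concept_scores(question_concept, max_marks, student_marks)
print(scores)            # per-concept mastery for one student

# Concepts the instructor might need to teach again (threshold illustrative)
weak = [c for c, s in scores.items() if s < 0.5]
print(weak)              # -> ['LMS']
```

Aggregating `scores` over all students gives the instructor the class-level view the abstract describes.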

Keywords: assessment, concept, examination, question, score

Procedia PDF Downloads 451
7999 Polysaccharides as Pour Point Depressants

Authors: Ali M. EL-Soll

Abstract:

The physical properties of Sarir waxy crude oil were investigated. The pour point was determined using the ASTM D-97 procedure, and the paraffin content and carbon number distribution of the paraffins were determined using gas-liquid chromatography (GLC). Polymeric additives were prepared, and their structures were confirmed using IR spectrophotometry. The molecular weight and molecular weight distribution of these additives were determined by gel permeation chromatography (GPC). The performance of the synthesized additives as pour-point depressants was then evaluated for the mentioned crude oil.

Keywords: sarir, waxy, crude, pour point, depressants

Procedia PDF Downloads 443
7998 Linguistic Competencies of Students with Hearing Impairment

Authors: Munawar Malik, Muntaha Ahmad, Khalil Ullah Khan

Abstract:

Linguistic abilities in students with hearing impairment remain a concern for educationists. Emerging technological support and provisions in recent years promise to address the situation and claim a significant contribution to learners' linguistic repertoire. This descriptive, quantitative study set out to assess the linguistic competencies in English of students with hearing impairment. The goals were further broken down to identify the reading levels of the target population. The population comprised students with hearing impairment studying at the higher secondary level in Lahore. Simple random sampling was used to choose a sample of fifty students. A purposive curriculum-based assessment, designed in line with the accelerated learning program of the Punjab Government, was used to assess linguistic competence in the sample. In addition, an Informal Reading Inventory (IRI) corresponding to reading levels was developed by the researchers and duly validated and piloted before final use. Descriptive and inferential statistics were used to reach the findings. Spearman's correlation was used to examine the relationship among degree of hearing loss, grade level, gender, and type of amplification device, and independent-samples t-tests were used to compare group means. The major findings revealed that students with hearing impairment deviate significantly from the mean scores when compared in terms of grade, severity, and amplification device. The study showed that these students have not yet attained an independent reading level for their grades, with the majority falling at the frustration level of word recognition and passage comprehension. The poorer performance can be attributed to lower linguistic competence, as shown in the frustration-level results for reading, writing, and comprehension.
The correlation analysis did reflect improved performance by grade; however, scores corresponded only to the frustration level, and the independent level was never achieved. The reported achievement at the instructional level suggests that the population's linguistic skills may improve further if practiced purposively.
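The IRI reading levels referred to above are conventionally assigned from word-recognition accuracy. The cut-offs in this sketch are commonly used illustrative values, not the study's own criteria (which the abstract does not report):

```python
def reading_level(word_recognition_pct):
    """Classify a reader on an Informal Reading Inventory scale using
    commonly cited word-recognition cut-offs (illustrative only):
    independent >= 95 %, instructional 90-94 %, frustration < 90 %."""
    if word_recognition_pct >= 95:
        return "independent"
    if word_recognition_pct >= 90:
        return "instructional"
    return "frustration"

print(reading_level(97))  # -> independent
print(reading_level(92))  # -> instructional
print(reading_level(85))  # -> frustration
```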

Keywords: linguistic competence, hearing impairment, reading levels, educationist

Procedia PDF Downloads 52
7997 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, including support vector machine regression (SVMR), partial least squares regression, extra tree regression (ETR), random forest regression, extreme gradient boosting, and principal component analysis-neural network (PCA-NN), are employed to predict glucose concentration. The NIR spectral data are randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and coefficients of determination (R²) > 0.985. The deep learning model achieves high macro-averaged scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy.
Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding references for glucose concentration are measured in increments of 20 mg/dl. The data is randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 75
7996 Cable De-Commissioning of Legacy Accelerators at CERN

Authors: Adya Uluwita, Fernando Pedrosa, Georgi Georgiev, Christian Bernard, Raoul Masterson

Abstract:

CERN is an international organisation, with 23 member states, that provides the particle physics community with excellence in particle accelerators and other related facilities. Founded in 1954, CERN has a wide range of accelerators that allow groundbreaking science to be conducted. Accelerators bring particles to high levels of energy and make them collide with each other or with fixed targets, creating specific conditions that are of high interest to physicists. A chain of accelerators is used to ramp up the energy of particles and eventually inject them into the largest and most recent one: the Large Hadron Collider (LHC). Among this chain of machines is, for instance, the Proton Synchrotron, which was started in 1959 and is still in operation. These machines, called "injectors", keep evolving over time, as does the related infrastructure. Massive decommissioning of obsolete cables started in 2015 at CERN within the so-called "injectors de-cabling project, phase 1". Its goal was to replace aging cables and remove unused ones, freeing space for new cables necessary for upgrades and consolidation campaigns. To proceed with the de-cabling, a project coordination team was assembled. The start of this project led to the investigation of legacy cables throughout the organisation, and the identification of cables stacked over half a century proved to be arduous. Phase 1 of the injectors de-cabling ran for 3 years and was completed successfully after some difficulties were overcome. Phase 2, started 3 years later, focused on improving safety and structure with the introduction of a quality assurance procedure. This paper discusses the implementation of this quality assurance procedure throughout phase 2 of the project and the transition between the two phases. Hundreds of kilometres of cable were removed in the injectors complex at CERN from 2015 to 2023.

Keywords: CERN, de-cabling, injectors, quality assurance procedure

Procedia PDF Downloads 23
7995 The Competitiveness of Small and Medium Sized Enterprises: Digital Transformation of Business Models

Authors: Chante Van Tonder, Bart Bossink, Chris Schachtebeck, Cecile Nieuwenhuizen

Abstract:

Small and Medium-Sized Enterprises (SMEs) play a key role in national economies around the world, being contributors to economic and social well-being. Due to this, the success, growth, and competitiveness of SMEs are critical. However, many factors undermine this, such as resource constraints, poor information and communication technology (ICT) infrastructure, skills shortages, and poor management. The Fourth Industrial Revolution offers new tools and opportunities, such as digital transformation and business model innovation (BMI), for the SME sector to enhance its competitiveness. Adopting and leveraging digital technologies such as cloud, mobile technologies, big data, and analytics can significantly improve business efficiencies, value propositions, and customer experiences. Digital transformation can contribute to the growth and competitiveness of SMEs. However, SMEs are lagging behind in participation in digital transformation. Extant research lacks conceptual and empirical work on how digital transformation drives BMI and the impact this has on the growth and competitiveness of SMEs. The purpose of the study is, therefore, to close this gap by developing and empirically validating a conceptual model to determine whether SMEs are achieving BMI through digital transformation and how this impacts growth, competitiveness, and overall business performance. An empirical study is being conducted on 300 SMEs, consisting of 150 South African and 150 Dutch SMEs, to achieve this purpose. Structural equation modeling is used, since this multivariate statistical technique analyses structural relationships and is well suited to testing the hypotheses in the model. Empirical research is needed to gather more insight into how and whether SMEs are digitally transformed and how BMI can be driven through digital transformation. The findings of this study can be used by SME business owners, managers, and employees at all levels.
The findings will indicate whether digital transformation can indeed impact the growth, competitiveness, and overall performance of an SME, reiterating the importance and potential benefits of adopting digital technologies. In addition, the findings will also show how BMI can be achieved in light of digital transformation. This study contributes to the body of knowledge on a highly relevant topic in management studies by analysing the impact of digital transformation on BMI in a large number of SMEs that differ distinctly in economic and cultural factors.

Keywords: business models, business model innovation, digital transformation, SMEs

Procedia PDF Downloads 226
7994 Effects of Spectrotemporal Modulation of Music Profiles on Coherence of Cardiovascular Rhythms

Authors: I-Hui Hsieh, Yu-Hsuan Hu

Abstract:

The powerful effect of music is often associated with changes in physiological responses such as heart rate and respiration. Previous studies demonstrate that the Mayer wave of blood pressure, a spontaneous rhythm occurring at 0.1 Hz, corresponds to a progressive crescendo of the musical phrase. However, music contains dynamic changes in temporal and spectral features. As such, it remains unclear which aspects of musical structure optimally affect the synchronization of cardiovascular rhythms. This study investigates the independent contributions of spectral pattern, temporal pattern, and dissonance level to the synchronization of cardiovascular rhythms. The regularity of acoustical patterns occurring at a periodic rhythm of 0.1 Hz is hypothesized to elicit the strongest coherence of cardiovascular rhythms. Music excerpts taken from twelve pieces of the Western classical repertoire were modulated to contain varying degrees of pattern regularity in the acoustic envelope structure. Three levels of dissonance were manipulated by varying the harmonic structure of the accompanying chords. Electrocardiogram and photoplethysmography signals were recorded for 5 minutes at baseline and then while participants, in a sitting position, listened to music excerpts presented in random order over headphones. Participants were asked to indicate the pleasantness of each music excerpt via a slider presented on screen. Analysis of the Fourier spectral power of blood pressure around 0.1 Hz showed a significant difference between music excerpts characterized by spectral and temporal pattern regularity and the same content presented in a random pattern. Phase coherence between heart rate and blood pressure increased significantly while participants listened to spectrally regular phrases compared with matched control phrases. The degree of dissonance of the accompanying chord sequence correlated with the level of coherence between heart rate and blood pressure.
The results suggest that low-level auditory features of music can entrain the coherence of autonomic physiological variables. These findings have potential implications for using music as a clinical and therapeutic intervention for regulating cardiovascular functions.
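The 0.1 Hz spectral-power analysis mentioned above can be sketched as follows. The signal here is synthetic (a Mayer-wave-like 0.1 Hz component plus broadband noise), standing in for the recorded blood-pressure series; sampling rate and amplitudes are invented for illustration:

```python
import numpy as np

fs, dur = 4.0, 300.0                 # 4 Hz resampled series, 5-minute record
t = np.arange(0, dur, 1 / fs)

# Synthetic blood-pressure-like signal with a 0.1 Hz Mayer-wave component
rng = np.random.default_rng(3)
bp = 2.0 * np.sin(2 * np.pi * 0.1 * t) + rng.normal(0, 1.0, t.size)

# Fourier spectral power around 0.1 Hz, as analysed in the study
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(bp)) ** 2 / t.size
band = (freqs >= 0.08) & (freqs <= 0.12)
mayer_power = power[band].sum()              # power in the Mayer band
rest_power = power[freqs > 0.5].mean()       # average off-band power per bin
print(f"Mayer-band power: {mayer_power:.1f} (off-band mean/bin: {rest_power:.2f})")
```

A condition that entrains the 0.1 Hz rhythm would show a larger Mayer-band power than its randomized control; phase coherence between the heart-rate and blood-pressure series is computed analogously from their cross-spectrum.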

Keywords: cardiovascular rhythms, coherence, dissonance, pattern regularity

Procedia PDF Downloads 141
7993 Proinflammatory Response of Agglomerated TiO2 Nanoparticles in Human-Immune Cells

Authors: Vaiyapuri Subbarayn Periasamy, Jegan Athinarayanan, Ali A. Alshatwi

Abstract:

Titanium dioxide nanoparticles (TiO2-NPs), now found with different physico-chemical properties (size, shape, chemical properties, agglomeration state, etc.), are in widespread use in many processed foods, agricultural chemicals, biomedical products, food packaging and food contact materials, personal care products, and other consumer products used in daily life. Growing evidence highlights the risk of toxicity dependent on these physico-chemical properties, with special attention to TiO2-NPs and the human immune system. Unfortunately, agglomeration and aggregation have frequently been ignored in immunotoxicological studies, even though they would be expected to affect nanotoxicity, since they change the size, shape, surface area, and other properties of the TiO2-NPs. In the present investigation, we assessed the immunotoxic effect of TiO2-NPs on human immune cells (total WBC), including lymphocytes (T cells (CD3+), T helper cells (CD3+/CD4+), suppressor/cytotoxic T cells (CD3+/CD8+), and NK cells (CD3-/CD16+ and CD56+)), monocytes (CD14+/CD3-), and B lymphocytes (CD19+/CD3-), in order to characterize the immunological response (IL-1A, IL-1B, IL-2, IL-4, IL-5, IL-6, IL-10, IL-12, IL-13, IFN-γ, TGF-β, and TNF-α) and redox gene regulation (TNF, p53, BCL-2, CAT, GSTA4, CYP1A, POR, SOD1, GSTM3, GPX1, and GSR1), linking physico-chemical properties, with special reference to agglomeration, of TiO2-NPs. Our findings suggest that TiO2-NPs altered cytokine production, enhanced the phagocytic index, and induced metabolic stress through specific immune-regulatory gene expression in different WBC subsets, and may contribute to a pro-inflammatory response. Although TiO2-NPs have great advantages in personal care, biomedical, food, and agricultural products, their chronic and acute immunotoxicity still needs to be assessed carefully, with special reference to food and environmental safety.

Keywords: TiO2 nanoparticles, oxidative stress, cytokine, human immune cells

Procedia PDF Downloads 390
7992 A User Interface for Easiest Way Image Encryption with Chaos

Authors: D. López-Mancilla, J. M. Roblero-Villa

Abstract:

Since 1990, research on chaotic dynamics has received considerable attention, particularly in light of potential applications of this phenomenon in secure communications. Data encryption using chaotic systems was reported in the 1990s as a new approach for signal encoding that differs from conventional methods, which use numerical algorithms as the encryption key. Algorithms for image encryption have received a lot of attention because of the need for secure image transmission in real time over the internet and wireless networks. Known algorithms for image encryption, like the Data Encryption Standard (DES), have the drawback of low efficiency when the image is large. Encryption based on chaos offers a new and efficient way to obtain fast and highly secure image encryption. In this work, a user interface for image encryption and a simple way to encrypt images using chaos are presented. The main idea is to reshape any image into an n-dimensional vector and combine it with a vector extracted from a chaotic system, in such a way that the image vector can be hidden within the chaotic vector. Once this is done, an array with the original dimensions of the image is formed again. A statistical analysis of the encryption security is carried out, and an optimization stage is used to improve the security of the image encryption while, at the same time, allowing the image to be accurately recovered. The user interface uses the algorithms designed for image encryption, allowing the user to read an image from the hard drive or another external device. The interface encrypts the image in one of three encryption modes, given by three different chaotic systems from which the user can choose. Once the image is encrypted, the user can inspect the security analysis and save the result to the hard disk.
The main results of this study show that this simple encryption method, using the optimization stage, achieves encryption security competitive with the more complicated encryption methods used in other works. In addition, the user interface allows the user to encrypt an image with chaos and to transmit it through any public communication channel, including the internet.
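The abstract does not fully specify how the image vector is hidden within the chaotic vector, so the sketch below uses a common logistic-map XOR variant purely to illustrate the general idea; the map's initial condition and parameter act as the key.

```python
import numpy as np

def logistic_sequence(n, x0=0.6, r=3.99):
    """Chaotic keystream from the logistic map x <- r*x*(1-x)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return (xs * 256).astype(np.uint8)   # quantise to byte values

def encrypt(img, key=(0.6, 3.99)):
    """XOR the flattened image with the chaotic keystream; the same
    function decrypts, since XOR is its own inverse."""
    flat = img.ravel()
    ks = logistic_sequence(flat.size, *key)
    return (flat ^ ks).reshape(img.shape)

# Toy 'image' (pixel values 0-255); a real use would load actual pixel data
rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)

cipher = encrypt(img)
restored = encrypt(cipher)               # decrypt with the same key
print(np.array_equal(restored, img))     # -> True
```

Because the logistic map is extremely sensitive to `x0` and `r`, decrypting with even a slightly different key yields noise, which is the property chaos-based schemes exploit.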

Keywords: image encryption, chaos, secure communications, user interface

Procedia PDF Downloads 474
7991 Implementation of ADETRAN Language Using Message Passing Interface

Authors: Akiyoshi Wakatani

Abstract:

This paper describes the Message Passing Interface (MPI) implementation of the ADETRAN language and its evaluation on SX-ACE supercomputers. The ADETRAN language includes the pdo statement, which specifies the data distribution and parallel computations, and the pass statement, which specifies the redistribution of arrays. Two methods for implementing the pass statement are discussed, and a performance evaluation using the Splitting-Up CG method is presented. The effectiveness of the parallelization is evaluated, and the advantage of one-dimensional distribution is empirically confirmed by the experimental results.
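To make the pass statement's job concrete, here is a serial sketch of the bookkeeping an array redistribution implies: moving a matrix from a row-block to a column-block distribution and computing how many elements each rank must send to each other rank (the counts an MPI_Alltoallv-style implementation would need). This is an illustration of block redistribution in general, not ADETRAN's actual implementation.

```python
import numpy as np

def block_ranges(n, p):
    """Contiguous index ranges for block-distributing n items over p ranks."""
    base, extra = divmod(n, p)
    ranges, start = [], 0
    for rank in range(p):
        size = base + (1 if rank < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

# A 6x6 array moves from a row-block to a column-block distribution
# over 3 processes, as a pass statement might request.
n, p = 6, 3
A = np.arange(n * n).reshape(n, n)
rows, cols = block_ranges(n, p), block_ranges(n, p)
row_parts = [A[r0:r1, :] for (r0, r1) in rows]     # current layout per rank

# Elements sender s ships to receiver r: the intersection of s's rows
# with r's columns.
send_counts = [[row_parts[s][:, cols[r][0]:cols[r][1]].size
                for r in range(p)] for s in range(p)]
print(send_counts)  # -> [[4, 4, 4], [4, 4, 4], [4, 4, 4]]
```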

Keywords: iterative methods, array redistribution, translator, distributed memory

Procedia PDF Downloads 258
7990 Systematic and Simple Guidance for Feed Forward Design in Model Predictive Control

Authors: Shukri Dughman, Anthony Rossiter

Abstract:

This paper builds on earlier work which demonstrated that Model Predictive Control (MPC) may give a poor default choice of feed-forward compensator. By first demonstrating the impact of future information about target changes on performance, this paper proposes a pragmatic method for identifying the amount of future target information that can be utilised effectively in both finite- and infinite-horizon algorithms. Numerical illustrations in MATLAB give evidence of the efficacy of the proposal.
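The value of advance knowledge of target changes can be shown with a toy simulation. A simple deadbeat feed-forward on a first-order plant stands in for the MPC law here (the plant, gains, and horizon are invented for illustration): with one step of target preview the step change is tracked exactly, while without preview tracking lags by one sample.

```python
import numpy as np

# First-order plant x[k+1] = a*x[k] + b*u[k]; the output y = x tracks r.
a, b = 0.8, 0.5
r = np.concatenate([np.zeros(10), np.ones(10)])   # step change in the target

def simulate(preview):
    """Deadbeat control with or without one step of target preview.
    A stand-in for MPC feed-forward, showing why future target
    information matters."""
    x, err = 0.0, 0.0
    for k in range(len(r) - 1):
        target = r[k + 1] if preview else r[k]
        u = (target - a * x) / b        # place the next state on the target
        x = a * x + b * u
        err += (r[k + 1] - x) ** 2      # accumulated squared tracking error
    return err

print(simulate(preview=True))    # -> 0.0 (anticipates the step)
print(simulate(preview=False))   # > 0   (reacts one step late)
```

The paper's point is subtler: in MPC, *too much* raw future information in the default feed-forward can also hurt, so the useful amount of preview must be identified.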

Keywords: model predictive control, tracking control, advance knowledge, feed forward

Procedia PDF Downloads 523
7989 Zinc Oxide Varistor Performance: A 3D Network Model

Authors: Benjamin Kaufmann, Michael Hofstätter, Nadine Raidl, Peter Supancic

Abstract:

ZnO varistors are the leading overvoltage protection elements in today's electronic industry. Their highly non-linear current-voltage characteristics, very fast response times, good reliability, and attractive cost of production are unique in this field. Yet there are unsolved challenges and questions. In particular, the urge to create ever smaller, versatile, and reliable parts that fit industry's demands brings manufacturers to the limits of their abilities. Although the varistor effect of sintered ZnO has been known since the 1960s, and a lot of work has been done in this field to explain the sudden exponential increase of conductivity, the strict dependency on sintering parameters, as well as the influence of the complex microstructure, is not sufficiently understood. For further enhancement and down-scaling of varistors, a better understanding of the microscopic processes is needed. This work attempts a microscopic approach to investigating ZnO varistor performance. In order to cope with the polycrystalline varistor ceramic and to account for all possible current paths through the material, a preferably realistic model of the microstructure was set up in the form of three-dimensional networks, where every grain has a constant electric potential and voltage drops occur only at the grain boundaries. The electro-thermal workload, depending on different grain size distributions, was investigated, as well as the influence of the metal-semiconductor contact between the electrodes and the ZnO grains. A number of experimental methods are used, firstly, to feed the simulations with realistic parameters and, secondly, to verify the obtained results.
These methods are: a micro four-point probe system (M4PPS) to investigate the current-voltage characteristics between single ZnO grains and between ZnO grains and the metal electrode inside the varistor; micro lock-in infrared thermography (MLIRT) to detect current paths; electron backscatter diffraction and piezoresponse force microscopy to determine grain orientations; atom probe tomography to determine atomic substituents; and Kelvin probe force microscopy to investigate grain surface potentials. The simulations showed that, within a critical voltage range, the current flow is localized along paths which represent only a tiny part of the available volume. This effect could be observed via MLIRT. Furthermore, the simulations show that the electric power density depends on the grain size distribution: it is inversely proportional to the number of active current paths, since this number determines the electrically active volume. M4PPS measurements showed that the electrode-grain contacts behave like Schottky diodes and are crucial for asymmetric current path development. Furthermore, evaluation of the data suggests that current flow is influenced by grain orientations. The present results deepen the knowledge of the microscopic factors influencing ZnO varistor performance and allow some recommendations on fabrication for obtaining more reliable ZnO varistors.
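The network formulation described above can be sketched in a linearised form. The real model needs the highly non-linear grain-boundary I-V law; ohmic boundary conductances are substituted here purely to show the nodal (Kirchhoff) bookkeeping on a small grain grid, with all sizes and conductance values invented.

```python
import numpy as np

rng = np.random.default_rng(5)
nx, ny = 4, 4                               # 4x4 grid of grains (nodes)
n = nx * ny

def idx(i, j):
    return i * ny + j

# Each grain is one node at a single potential; conduction occurs only
# across grain boundaries, encoded as a network Laplacian.
G = np.zeros((n, n))
for i in range(nx):
    for j in range(ny):
        for di, dj in ((1, 0), (0, 1)):     # right and down neighbours
            ii, jj = i + di, j + dj
            if ii < nx and jj < ny:
                g = rng.uniform(0.5, 2.0)   # random boundary conductance
                a, b = idx(i, j), idx(ii, jj)
                G[a, a] += g; G[b, b] += g
                G[a, b] -= g; G[b, a] -= g

# Top row of grains touches the 1 V electrode, bottom row is grounded
top = [idx(0, j) for j in range(ny)]
bottom = [idx(nx - 1, j) for j in range(ny)]
fixed = top + bottom
free = [k for k in range(n) if k not in fixed]

V = np.zeros(n)
V[top] = 1.0
# Kirchhoff's current law at the interior grains: G_ff V_f = -G_fc V_c
V[free] = np.linalg.solve(G[np.ix_(free, free)],
                          -G[np.ix_(free, fixed)] @ V[fixed])

I = G @ V                                   # net current injected per grain
current_in = I[top].sum()
print(f"total current through the network: {current_in:.3f}")
```

Replacing each constant `g` with a voltage-dependent boundary law and iterating the solve is what turns this linear sketch into a varistor model with localized current paths.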

Keywords: metal-semiconductor contact, Schottky diode, varistor, zinc oxide

Procedia PDF Downloads 269
7988 Managing Inter-Organizational Innovation Project: Systematic Review of Literature

Authors: Lamin B Ceesay, Cecilia Rossignoli

Abstract:

Inter-organizational collaboration is a growing phenomenon in both research and practice. Partnerships between organizations enable firms to leverage external resources, experience, and technology that lie with other firms. This collaborative practice is a source of improved business model performance, technological advancement, and increased competitive advantage for firms. However, the competitive intents, and even diverse institutional logics, of firms make inter-firm innovation-based partnerships even more complex and their governance more challenging. The purpose of this paper is to present a systematic review of research linking the inter-organizational relationships of firms with their innovation practice and to specify the different project management issues and gaps addressed in previous research. To do this, we employed a systematic review of the literature on inter-organizational innovation using two complementary scholarly databases, ScienceDirect and Web of Science (WoS). Article scoping relied on a combination of keywords based on similar terms used in the literature: (1) inter-organizational relationship, (2) business network, (3) inter-firm project, and (4) innovation network. These searches were conducted in the title, abstract, and keywords of conceptual and empirical research papers written in English. Our search covers 2010 to 2019. We applied several exclusion criteria: papers published outside the years under review, papers in a language other than English, papers listed in neither WoS nor ScienceDirect, and papers not closely related to inter-organizational innovation-based partnership were removed. After all relevant search criteria were applied, a final list of 84 papers constitutes the data for this review. Our review revealed a steady growth of inter-organizational relationship research during the period under review.
The descriptive analysis of papers by journal outlet finds that the International Journal of Project Management (IJPM), the Journal of Industrial Marketing, the Journal of Business Research (JBR), and others are the leading outlets for research on inter-organizational innovation projects. The review also finds that qualitative and quantitative methods are, respectively, the leading research approaches adopted by scholars in the field, whereas literature reviews and conceptual papers are the least common. During the content analysis, we read each selected paper and found that each addresses one of three phenomena in inter-organizational innovation research: (1) project antecedents, (2) project management, and (3) project performance outcomes. We found that these categories are not mutually exclusive but rather interdependent; this categorization also helped us organize the fragmented literature in the field. While a significant percentage of the literature discusses project management issues, we found less extant literature on project antecedents and performance. As a result, we organized the future research agendas proposed in several papers by linking them with the under-researched themes in the field, thus providing great potential to advance future research, especially on those themes. Finally, our paper reveals that research on inter-organizational innovation projects is generally fragmented, which hinders a better understanding of the field. This paper therefore contributes by organizing and discussing the extant literature to advance the theory and application of inter-organizational relationships.

Keywords: inter-organizational relationship, inter-firm collaboration, innovation projects, project management, systematic review

Procedia PDF Downloads 100
7987 Electrochemical Activity of NiCo-GDC Cermet Anode for Solid Oxide Fuel Cells Operated in Methane

Authors: Kamolvara Sirisuksakulchai, Soamwadee Chaianansutcharit, Kazunori Sato

Abstract:

Solid Oxide Fuel Cells (SOFCs) are considered among the most efficient large-unit power generators for household and industrial applications. The efficiency of a cell depends mainly on the electrochemical reactions in the anode. The development of anode materials has been intensely studied to achieve higher kinetic rates of redox reactions and lower internal resistance. Recent studies have introduced efficient cermet (ceramic-metallic) materials for their ability in fuel oxidation and oxide conduction; this expands the reactive site, also known as the triple-phase boundary (TPB), and thus increases overall performance. In this study, a bimetallic catalyst Ni₀.₇₅Co₀.₂₅Oₓ was combined with Gd₀.₁Ce₀.₉O₁.₉₅ (GDC) to be used as a cermet anode (NiCo-GDC) for an anode-supported SOFC. The Ni₀.₇₅Co₀.₂₅Oₓ was synthesized by ball milling NiO and Co₃O₄ powders in ethanol and calcining at 1000 °C. The Gd₀.₁Ce₀.₉O₁.₉₅ was prepared by a urea co-precipitation method: precursors of Gd(NO₃)₃·6H₂O and Ce(NO₃)₃·6H₂O were dissolved in distilled water with the addition of urea and subsequently heated; the heated mixture was filtered, rinsed thoroughly, then dried and calcined at 800 °C and 1500 °C, respectively. The two powders were combined, followed by pelletization and sintering at 1100 °C, to form an anode support layer, and an electrolyte layer and cathode layer were then fabricated. The electrochemical performance was measured from 800 °C to 600 °C in H₂ and from 750 °C to 600 °C in CH₄. The maximum power density at 750 °C in H₂ was 13% higher than in CH₄. The difference in performance was due to higher polarization resistances, as confirmed by the impedance spectra. According to the standard enthalpies, the dissociation energy of the C-H bonds in CH₄ is slightly higher than that of the H-H bond in H₂, so the dissociation of CH₄ could be the cause of the resistance within the anode material.
The results at lower temperatures showed a descending trend of power density consistent with the increased polarization resistance, owing to the lower conductivity as temperature decreases. Long-term stability was measured at 750 °C in CH₄, monitored at 12-hour intervals. The maximum power density tended to increase gradually with time while the resistances were maintained, suggesting enhanced stability from charge-transfer activity in doped ceria due to the Ce⁴⁺ ↔ Ce³⁺ transition at low oxygen partial pressure and high temperature. However, the power density started to drop after 60 h, and the cell potential also dropped from 0.3249 V to 0.2850 V. These phenomena were confirmed by shifted impedance spectra indicating a higher ohmic resistance. Observation by FESEM and EDX mapping suggests degradation due to mass transport of ions in the electrolyte, while the anode microstructure was still maintained. In summary, the electrochemical and 60 h stability tests were achieved with the NiCo-GDC cermet anode. Coke deposition was not detected after operation in CH₄, confirming the superior properties of the bimetallic cermet anode over typical Ni-GDC.

Keywords: bimetallic catalyst, ceria-based SOFCs, methane oxidation, solid oxide fuel cell

Procedia PDF Downloads 140
7986 Evaluation of a Method for the Virtual Design of a Software-based Approach for Electronic Fuse Protection in Automotive Applications

Authors: Dominic Huschke, Rudolf Keil

Abstract:

New driving functionalities such as highly automated driving have a major impact on the electrics/electronics architecture of future vehicles and inevitably lead to higher safety requirements. Partly due to these increased requirements, the vehicle industry is increasingly looking at semiconductor switches as an alternative to conventional melting fuses. The protective functionality of semiconductor switches can be implemented in hardware as well as in software. A current approach discussed in science and industry is to implement a model of the protected low-voltage power cable on a microcontroller in order to calculate the cable's temperature. The information about the current is provided by the continuous current measurement of the semiconductor switch; the microcontroller issues the signal to open the switch when a previously defined temperature limit for the cable is exceeded. A setup for testing the described principle of electronic fuse protection of a low-voltage power cable was built and subsequently validated experimentally. The evaluation criterion is the deviation of the measured cable temperature from the specified limit temperature at the moment the semiconductor switch opens. The analysis is carried out with an assumed ambient temperature as well as with a measured ambient temperature. Subsequently, the experiments are reproduced in a virtual environment, with an explicit focus on simulating the behavior of the microcontroller running the cable model in a real-time environment. The generated results are compared with those of the experiments and, based on this comparison, the completely virtual design of the described approach is assumed to be valid.
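The software fuse principle described above lends itself to a compact sketch. The following toy simulation (all parameter values, and the first-order thermal model itself, are illustrative assumptions rather than the paper's cable model) integrates the cable temperature from the measured current each control cycle and opens the switch when the computed temperature exceeds the limit:

```python
# Toy software fuse: first-order thermal model of a protected cable
# (assumed parameters: conductor resistance r_ohm, thermal resistance r_th
# to ambient, thermal capacitance c_th). Trips when the computed conductor
# temperature exceeds t_limit.

def run_fuse(current_a, dt=0.01, t_amb=25.0, t_limit=105.0,
             r_ohm=0.016, r_th=8.0, c_th=12.0):
    """Simulate cable temperature for a current profile (one sample per step).
    Returns (step index at which the switch opened, final temperature);
    the index is None if the fuse never tripped."""
    temp = t_amb
    for i, i_load in enumerate(current_a):
        p_loss = i_load ** 2 * r_ohm                  # Joule heating in the cable
        temp += dt * (p_loss - (temp - t_amb) / r_th) / c_th
        if temp > t_limit:
            return i, temp                            # open the semiconductor switch
    return None, temp

# A sustained 30 A overload eventually trips the fuse; a nominal 5 A load
# settles a few kelvin above ambient and never does.
print(run_fuse([30.0] * 200000)[0] is not None)
print(run_fuse([5.0] * 200000)[0])
```

The same structure, current measurement in, trip signal out, is what the abstract's microcontroller implementation evaluates in the real-time environment; only the fidelity of the cable model differs.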

Keywords: automotive wire harness, electronic fuse protection, low voltage power cable, semiconductor-based fuses, software-based validation

Procedia PDF Downloads 95
7985 Theoretical Discussion on the Classification of Risks in Supply Chain Management

Authors: Liane Marcia Freitas Silva, Fernando Augusto Silva Marins, Maria Silene Alexandre Leite

Abstract:

The adoption of a network structure, as in supply chains, increases the dependence between companies and, consequently, their vulnerability. Environmental disasters, sociopolitical and economic events, and the dynamics of supply chains raise the uncertainty of their operation, favoring the occurrence of events that can disrupt operations and cause other undesired consequences. Supply chains are thus exposed to various risks that can influence the profitability of the companies involved, and several previous studies have proposed risk classification models to categorize risks and manage them. The objective of this paper is to analyze and discuss thirty of these risk classification models by means of a theoretical survey. The research method includes three phases: identifying the types of risks proposed in each of the thirty models, grouping them by equivalent concepts associated with their definitions, and analyzing the resulting risk groups, evaluating their similarities and differences. After these analyses, it was possible to conclude that more than thirty risk types are in fact identified in the supply chain literature, but some of them are identical despite being described with distinct terms, because researchers adopt different criteria for risk classification. In short, some types of risks are identified as risk sources for supply chains, such as demand risk, environmental risk, and safety risk, while other types are identified by the consequences they can generate for supply chains, such as reputation risk, asset depreciation risk, and competitive risk. These results stem from disagreements among researchers over risk classification, mainly over what constitutes a risk event and what constitutes the consequence of a risk occurrence.
An additional study is in development to clarify how risks can be generated and which characteristics of the components of a supply chain lead to the occurrence of risk.

Keywords: risk classification, survey, supply chain management, theoretical discussion

Procedia PDF Downloads 620
7984 Design and Modeling of Human Middle Ear for Harmonic Response Analysis

Authors: Shende Suraj Balu, A. B. Deoghare, K. M. Pandey

Abstract:

The human middle ear (ME) is a delicate and vital organ. It has a complex structure that performs various functions, such as receiving sound pressure, producing vibrations of the eardrum, and propagating them to the inner ear. It consists of the tympanic membrane (TM), three auditory ossicles, and various ligament structures and muscles. Incidents such as trauma, infections, ossification of the ossicular structures, and other pathologies may damage the ME organs. These conditions can be surgically treated by employing a prosthesis; however, the suitability of the prosthesis needs to be examined prior to surgery. A few decades ago, this issue was addressed by developing an equivalent representation, either in the form of a spring-mass system, an electrical R-L-C circuit, or an approximated CAD model. Nowadays, however, a three-dimensional ME model can be constructed from micro X-ray computed tomography (μCT) scan data, so patient-specific concerns pertaining to the disease can be examined well in advance. The current research work develops the ME model from stacks of μCT images used as input to the MIMICS Research 19.0 (Materialise Interactive Medical Image Control System) software. A stack of CT images is converted into a geometrical surface model to build an accurate morphology of the ME. The work is further extended to understand the harmonic response of the stapes footplate and umbo for different sound pressure levels applied at the lateral side of the eardrum using a finite element approach. The pathological condition cholesteatoma of the ME is investigated to obtain the peak-to-peak displacement of the stapes footplate and umbo. Apart from this condition, other pathologies, mainly changes in the stiffness of the stapedial ligament, TM thickness, and ossicular chain separation and fixation, are also explored.
The developed model of the ME with pathologies is validated by comparing the results with those available in the literature and with the results of a normal ME, in order to calculate the percentage loss in hearing capability.

Keywords: computed tomography (μCT), human middle ear (ME), harmonic response, pathologies, tympanic membrane (TM)

Procedia PDF Downloads 159
7983 Detecting Impact of Allowance Trading Behaviors on Distribution of NOx Emission Reductions under the Clean Air Interstate Rule

Authors: Yuanxiaoyue Yang

Abstract:

Emissions trading, or ‘cap-and-trade’, has long been promoted by economists as a more cost-effective pollution control approach than traditional performance standard approaches. While there is a large body of empirical evidence for the overall effectiveness of emissions trading, relatively little attention has been paid to its unintended consequences. One important consequence is that cap-and-trade could create high local emission concentrations in areas where emitting facilities purchase a large number of emission allowances, which may cause an unequal distribution of environmental benefits. This study contributes to the environmental policy literature by linking trading activity with environmental injustice concerns and, for the first time, empirically analyzing the causal relationship between trading activity and emissions reduction under a cap-and-trade program. To investigate the potential environmental injustice concern, this paper uses a difference-in-differences (DID) design with an instrumental variable to identify the causal effect of allowance trading behaviors on emission reduction levels under the Clean Air Interstate Rule (CAIR), a cap-and-trade program targeting the power sector in the eastern US. The major data source is facility-year level emissions and allowance transaction data collected from the US EPA air market databases. Polluting facilities under CAIR form the treatment group in our DID identification, while non-CAIR facilities from the Acid Rain Program, another NOx control program without a trading scheme, serve as the control group. To isolate the causal effect of trading behaviors on emissions reduction, we use eligibility for CAIR participation as the instrumental variable.
The DID results indicate that the CAIR program reduced NOx emissions from affected facilities by about 10% more than facilities that did not participate, so CAIR achieved excellent overall performance in emissions reduction. The IV regression results also indicate that, compared with non-CAIR facilities, purchasing emission permits still significantly decreases a CAIR participating facility’s emissions level. This implies that even allowance buyers under the cap-and-trade program achieved substantial emissions reductions. We therefore find little evidence of environmental injustice arising from the CAIR program.
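The DID logic used above can be shown with a toy example. The numbers below are synthetic, not the paper's EPA data, and the full study additionally instruments treatment with CAIR eligibility; this sketch shows only the basic two-period, two-group estimator: the change in mean emissions for treated facilities minus the change for control facilities.

```python
# Toy difference-in-differences estimator on synthetic facility emissions.

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DID effect = (treated post - pre change) - (control post - pre change)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical NOx emissions (tons): both groups drift down by 5 tons for
# common reasons; treated (CAIR) facilities drop an extra 10 tons.
treat_pre  = [100.0, 110.0, 90.0]
treat_post = [85.0, 95.0, 75.0]      # -5 common trend, -10 treatment effect
ctrl_pre   = [120.0, 80.0, 100.0]
ctrl_post  = [115.0, 75.0, 95.0]     # -5 common trend only

print(did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post))  # -10.0
```

Subtracting the control group's change removes the common trend, which is why the estimator recovers the 10-ton treatment effect rather than the raw 15-ton drop observed among treated facilities.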

Keywords: air pollution, cap-and-trade, emissions trading, environmental justice

Procedia PDF Downloads 132
7982 Improved Technology Portfolio Management via Sustainability Analysis

Authors: Ali Al-Shehri, Abdulaziz Al-Qasim, Abdulkarim Sofi, Ali Yousef

Abstract:

The oil and gas industry has played a major role in improving the prosperity of mankind and driving the world economy. According to International Energy Agency (IEA) and US Energy Information Administration (EIA) estimates, the world will continue to rely heavily on hydrocarbons for decades to come. This growing energy demand mandates taking sustainability measures to prolong the availability of reliable and affordable energy sources and to lower their environmental impact. Unlike those of most other industries, oil and gas upstream operations are energy-intensive and scattered over large zonal areas; these challenging conditions require unique sustainability solutions. In recent years there has been a concerted effort by the oil and gas industry to develop and deploy innovative technologies to maximize efficiency, reduce the carbon footprint and CO2 emissions, and optimize resource and material consumption. In the past, research and development (R&D) in the exploration and production sector was driven primarily by maximizing profit through higher hydrocarbon recovery and new discoveries; environmentally friendly and sustainable technologies are now increasingly deployed to balance sustainability and profitability. Analyzing a technology and its sustainability impact is increasingly used in corporate decision-making for improved portfolio management and for allocating valuable resources toward technology R&D. This paper articulates and discusses a novel workflow to identify strategic sustainable technologies for improved portfolio management by addressing existing and future upstream challenges. It uses a systematic approach that relies on sustainability key performance indicators (KPIs), including an energy efficiency quotient, carbon footprint, and CO2 emissions. The paper provides examples of various technologies, including CCS, water-cut reduction, automation, the use of renewables, and energy efficiency.
The use of 4IR technologies such as Artificial Intelligence, Machine Learning, and Data Analytics are also discussed. Overlapping technologies, areas of collaboration and synergistic relationships are identified. The unique sustainability analyses provide improved decision-making on technology portfolio management.

Keywords: sustainability, oil and gas, technology portfolio, key performance indicator

Procedia PDF Downloads 171
7981 A Randomised Simulation Study to Assess the Impact of a Focussed Crew Resource Management Course on UK Medical Students

Authors: S. MacDougall-Davis, S. Wysling, R. Willmore

Abstract:

Background: The application of good non-technical skills, also known as crew resource management (CRM), is central to the delivery of safe, effective healthcare. The authors have been running remote trauma courses for over 10 years, primarily focusing on developing participants’ CRM in time-critical, high-stress clinical situations; the course has undergone an iterative process over that period. We employ a number of experiential learning techniques for improving CRM, including small group workshops, military command tasks, high-fidelity simulations with reflective debriefs, and a ‘flipped classroom’ in which participants create their own simulations and assess and debrief their colleagues’ CRM. We designed a randomised simulation study to assess the impact of our course on UK medical students’ CRM at both the individual and team level. Methods: Sixteen students took part. Four clinical scenarios were devised, designed to be of similar urgency and complexity. Professional moulage effects and experienced clinical actors were used to increase fidelity and to further simulate high-stress environments. Participants were block randomised into teams of four; each team was randomly assigned to one pre-course simulation. They then undertook our 5-day remote trauma CRM course. Post-course, students were re-randomised into four new teams, each randomly assigned to a post-course simulation. All simulations were videoed. The footage was reviewed by two independent CRM-trained assessors, who were blinded to the before/after status of the simulations. Assessors used the internationally validated Team Emergency Assessment Measure (TEAM) to evaluate key areas of team performance, as well as a global outcome rating. Prior to the study, the assessors had scored two unrelated scenarios using the same assessment tool, demonstrating 89% concordance. Participants also completed pre- and post-course questionnaires.
Likert scales were used to rate individuals’ perceived non-technical skills and their confidence to work in a team in time-critical, high-stress situations. Results: Following the course, a significant improvement in CRM was observed in all areas of team performance, and the global outcome rating for team performance was markedly improved (40-70%; mean 55%), demonstrating an impact at Level 4 of Kirkpatrick’s hierarchy. At the individual level, participants’ self-perceived CRM improved markedly after the course (35-70% absolute improvement; mean 55%), as did their confidence to work in a team in high-stress situations. Conclusion: Our study demonstrates that a short, cost-effective course using easily reproducible teaching sessions can significantly improve participants’ CRM skills, both at the individual and, perhaps more importantly, at the team level. The successful functioning of multi-disciplinary teams is vital in a healthcare setting, particularly in high-stress, time-critical situations, where good CRM is of paramount importance. The authors believe that these concepts should be introduced from the earliest stages of medical education, thus promoting a culture of effective CRM and embedding an early appreciation of the importance of these skills in enabling safe and effective healthcare.

Keywords: crew resource management, non-technical skills, training, simulation

Procedia PDF Downloads 121
7980 The Study of ZigBee Protocol Application in Wireless Networks

Authors: Ardavan Zamanpour, Somaieh Yassari

Abstract:

The ZigBee protocol was developed in industry and in an MIT laboratory in 1997. ZigBee is a wireless networking technology from the ZigBee Alliance, designed for low-power, low-data-rate applications: it is a protocol that connects electrical devices at very low energy and cost. The first version of IEEE 802.15.4, on which ZigBee was based, used the 2.4 GHz, 915 MHz, and 868 MHz frequency bands. The name recalls the seemingly random zig-zag paths that bees traverse while pollinating, analogous to the paths information packets follow within a mesh network. This paper aims to study the performance and effectiveness of this protocol in wireless networks.

Keywords: ZigBee, protocol, wireless, networks

Procedia PDF Downloads 354
7979 Cryptocurrency-Based Mobile Payments with Near-Field Communication-Enabled Devices

Authors: Marko Niinimaki

Abstract:

Cryptocurrencies are getting increasingly popular, but very few of them can be conveniently used in daily mobile phone purchases. To address this, we demonstrate how to build a functional prototype of a mobile cryptocurrency-based e-commerce application that communicates with Near-Field Communication (NFC) tags. Using the system, users can purchase physical items via an NFC tag that contains an e-commerce URL; payment is done simply by touching the tag with a mobile device and accepting the payment. Our method is constructive: we describe the design and technologies used in the implementation and evaluate the security and performance of the solution. Our analysis and measurements show that the solution is feasible for e-commerce.

Keywords: cryptocurrency, e-commerce, NFC, mobile devices

Procedia PDF Downloads 168
7978 Rubric in Vocational Education

Authors: Azmanirah Ab Rahman, Jamil Ahmad, Ruhizan Muhammad Yasin

Abstract:

A rubric is a very important tool for teachers and students for a variety of purposes: teachers use rubrics for evaluating student work, while students use rubrics for self-assessment. This paper therefore emphasizes the scoring rubric as a scoring tool for teachers in an environment of Competency Based Education and Training (CBET) in Malaysian vocational colleges. A total of three teachers in the fields of electrical and electronics engineering were interviewed to identify how rubrics have been used in practice since the vocational transformation was implemented in 2012. Overall, a holistic rubric is used to determine the performance of students in the skills area.

Keywords: rubric, vocational education, teachers, CBET

Procedia PDF Downloads 489
7977 Carl Wernicke and the Origin of Neurolinguistics in Breslau: A Case Study in the Domain of the History of Linguistics

Authors: Aneta Daniel

Abstract:

The subject of the study is the exploration of the origins and dynamics of the development of the language studies that have come to be labelled neurolinguistics. The origins of neurolinguistics are to be found in research conducted by German scientists before the Second World War at the Universität Breslau (in present-day Wroclaw). The dominant figure in these studies was Professor Carl Wernicke, whose students continued and creatively developed their master's projects in this area. Carl Wernicke, a German physician, anatomist, psychiatrist, and neuropathologist, is known primarily for his influential research on aphasia. His research, together with that of Professor Paul Broca, led to breakthroughs in the localization of brain functions, particularly speech; years later, the theses of these pioneers of cognitive neurology were developed further by other neurolinguists. The main objective of the investigation is the reconstruction of the group of scientists, the students of Carl Wernicke, who contributed to the development of neurolinguistics. These scholars were mainly neurologists and psychiatrists and dealt with a branch of science that had not yet been named neurolinguistics. Their profiles will be analysed and presented as members of the group of researchers who contributed to breakthroughs in psychology and neuroscience. The research material consists of archival records documenting the research of Carl Wernicke and the researchers from Breslau (present-day Wroclaw), today one of the fastest-growing cities in Europe. In 1870, when Carl Wernicke qualified as a medical doctor, Breslau was full of cultural events: festivals and circus shows were held in the city centre. Today we can revisit these events through the Breslauer Zeitung (1870), which precisely describes what took place on particular days.
It is worth noting that those were also the beginnings of antisemitism in Breslau. Many theses and articles that have survived in libraries in Wroclaw and around the world contribute to the development of neuroscience. The history of research on the brain and speech analysis, including the history of psychology and neuroscience, the areas from which neurolinguistics is derived, will be presented.

Keywords: aphasia, brain injury, Carl Wernicke, language, neurolinguistics

Procedia PDF Downloads 373
7976 Anomalies of Visual Perceptual Skills Amongst School Children in Foundation Phase in Olievenhoutbosch, Gauteng Province, South Africa

Authors: Maria Bonolo Mathevula

Abstract:

Background: Children are important members of communities, playing a major role in the future of any given country (Pera, Fails, Gelsomini, & Garzotto, 2018). Visual perceptual skills (VPSs) in children are an important health aspect of early childhood development through the Foundation Phase in school. Consequently, children should undergo visual screening before commencing school for early diagnosis of VPS anomalies, because the primary role of VPSs is to equip children for academic performance in general. Aim: The aim of this study was to determine the anomalies of VPSs amongst school children in the Foundation Phase. The study's objectives were to determine the prevalence of VPS anomalies amongst school children in the Foundation Phase, to determine the relationship between children's academic performance and VPS anomalies, and to investigate the relationship between VPS anomalies and refractive error. Methodology: This study used a mixed method, triangulating qualitative (interview) and quantitative (questionnaire and clinical) data, and was therefore descriptive in nature. The study's target population was school children in the Foundation Phase, who were purposively sampled to form part of this study provided their parents had given signed consent. Data were collected by means of standardized interviews, a questionnaire, a clinical data card, and the TVPS standard data card. Results: Although the study is still ongoing, preliminary outcomes based on data collected from one Foundation Phase suggest the following: while VPS anomalies are not prevalent, they nevertheless have an indirect relationship with children's academic performance in the Foundation Phase; notably, VPS anomalies and refractive error are directly related, since the majority of children with refractive error, specifically compound hyperopic astigmatism, failed most subtests of the TVPS standard tests.
Conclusion: Based on the study's preliminary findings, it is clear that optometrists still have much to do as far as research on VPSs is concerned. Furthermore, the researcher recommends that optometrists, as primary healthcare professionals, should also conduct school-readiness pre-assessments on children before they commence their grades in the Foundation Phase.

Keywords: foundation phase, visual perceptual skills, school children, refractive error

Procedia PDF Downloads 93
7975 Development of a Decision Model to Optimize Total Cost in Food Supply Chain

Authors: Henry Lau, Dilupa Nakandala, Li Zhao

Abstract:

All along the length of the supply chain, fresh food firms face the challenge of managing both product quality, due to the perishable nature of the products, and product cost. This paper develops a method to assist logistics managers upstream in the fresh food supply chain in making cost-optimized transportation decisions, with the objective of minimizing the total cost while maintaining the quality of food products above acceptable levels. Considering the case of multiple fresh food products collected from multiple farms and transported to a warehouse or a retailer, this study develops a total cost model that includes the various costs incurred during transportation. The practical application of the model is illustrated using several computational intelligence approaches, including Genetic Algorithms (GA), Fuzzy Genetic Algorithms (FGA), and an improved Simulated Annealing (SA) procedure applied with a repair mechanism, for efficiency benchmarking. We demonstrate the practical viability of these approaches with a simulation study based on pertinent data and evaluate the simulation outcomes. All three approaches are adoptable; however, the performance evaluation made it evident that the FGA is more likely to produce better performance than GA or SA. This study provides a pragmatic approach for supporting logistics and supply chain practitioners in the fresh food industry in making important decisions on the arrangements and procedures for transporting multiple fresh food products from multiple farms to a warehouse in a cost-effective way without compromising product quality.
This study extends the literature on cold supply chain management by investigating cost and quality optimization in a multi-product scenario from farms to a retailer and, minimizing cost by managing the quality above expected quality levels at delivery. The scalability of the proposed generic function enables the application to alternative situations in practice such as different storage environments and transportation conditions.
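The genetic-algorithm approach described above can be sketched in miniature. The snippet below is an illustrative toy, not the authors' model: the route options, quality-decay rate, and penalty weight are invented for the example. It encodes a transport plan as one route choice per farm, models quality as decaying linearly with transit hours, and penalizes any plan that lets quality at delivery fall below a floor, so the GA minimizes transport cost subject to the quality constraint.

```python
import random

# Hypothetical data: for each of 4 farms, two route options given as
# (transit_hours, cost_per_trip). All figures are illustrative only.
ROUTES = [
    [(6, 120.0), (10, 80.0)],
    [(5, 150.0), (9, 95.0)],
    [(7, 110.0), (12, 70.0)],
    [(4, 160.0), (7, 100.0)],
]
Q0 = 1.0        # assumed quality at harvest (normalized)
DECAY = 0.04    # assumed quality loss per transit hour
Q_MIN = 0.70    # minimum acceptable quality at delivery
PENALTY = 1e4   # cost penalty per farm violating the quality floor

def total_cost(plan):
    """Transport cost plus a penalty for any farm whose product
    would arrive below the minimum quality threshold."""
    cost = 0.0
    for farm, choice in enumerate(plan):
        hours, trip_cost = ROUTES[farm][choice]
        cost += trip_cost
        if Q0 - DECAY * hours < Q_MIN:
            cost += PENALTY
    return cost

def genetic_search(pop_size=30, generations=100, mut_rate=0.2, seed=1):
    rng = random.Random(seed)
    n = len(ROUTES)
    # Random initial population of route-choice vectors.
    pop = [[rng.randrange(len(ROUTES[i])) for i in range(n)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_cost)
        survivors = pop[:pop_size // 2]      # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut_rate:      # point mutation
                i = rng.randrange(n)
                child[i] = rng.randrange(len(ROUTES[i]))
            children.append(child)
        pop = survivors + children
    return min(pop, key=total_cost)

best = genetic_search()
print("best plan:", best, "cost:", total_cost(best))
```

The paper's fuzzy GA variant would additionally soften the hard quality floor into graded membership degrees, and its SA variant would explore the same encoding with a repair step for infeasible plans; both reuse the same total-cost objective.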

Keywords: cost optimization, food supply chain, fuzzy sets, genetic algorithms, product quality, transportation

Procedia PDF Downloads 206
7974 Canada's "Flattened Curve": A Geospatial-Temporal Analysis of Canada's Amelioration of the SARS-CoV-2 Pandemic through Coordinated Government Intervention

Authors: John Ahluwalia

Abstract:

As an affluent first-world nation, Canada took swift and comprehensive action during the outbreak of the SARS-CoV-2 (COVID-19) pandemic compared to other countries in the same socio-economic cohort. The United States, by contrast, has stumbled over obstacles that most developed nations have overcome, which has led to significantly more per capita cases and deaths. The initial outbreaks of COVID-19 occurred in the US and Canada within days of each other and posed similarly catastrophic threats to public health, the economy, and governmental stability. On a macro level, events that take place in the US have a direct impact on Canada. For example, both countries tend to enter and exit economic recessions at approximately the same time, they are each other's largest trading partners, and their currencies are inexorably linked. Variables intrinsic to Canada's national infrastructure have been instrumental in the country's efforts to flatten the curve of COVID-19 cases and deaths. Canada's coordinated multi-level governmental effort has allowed it to create and enforce COVID-19 policies at both the national and provincial levels. Canada's policy of universal health care is another such variable. Health care and public health measures are administered provincially, and it is within each province's jurisdiction to set standards for public safety based on scientific evidence. Rather than introducing confusion and competition for resources such as PPE and vaccines, Canada's multi-level chain of government authority has provided consistent policies supporting national public health and local delivery of medical care. This paper will demonstrate that coordinated efforts at the provincial and federal levels have been the linchpin of Canada's relative success in containing the deadly spread of the COVID-19 virus.

Keywords: COVID-19, Canada, GIS, geospatial analysis

Procedia PDF Downloads 60