Search results for: processing parameters
11746 Preparation of Melt Electrospun Polylactic Acid Nanofibers with Optimum Conditions
Authors: Amir Doustgani
Abstract:
Melt electrospinning is a safe and simple technique for the production of micro- and nanofibers and can be an alternative to conventional solvent electrospinning. The effects of various melt-electrospinning parameters, including molecular weight, electric field strength, flow rate, and temperature, on the morphology and fiber diameter of polylactic acid (PLA) were studied. An orthogonal experimental design was used to examine the process parameters. The results showed that molecular weight is the most influential parameter on the average fiber diameter of melt-electrospun PLA nanofibers, while flow rate has the least influence. Mean fiber diameter (MFD) increased with increasing molecular weight and flow rate, but decreased with increasing electric field strength and temperature. The MFD of the optimized fibers was below 100 nm, and the software prediction was in good agreement with the experimental results.
Keywords: fiber formation, processing, spinning, melt blowing
Procedia PDF Downloads 437
11745 Optimization of End Milling Process Parameters for Minimization of Surface Roughness of AISI D2 Steel
Authors: Pankaj Chandna, Dinesh Kumar
Abstract:
The present work analyses different end milling parameters to minimize the surface roughness of AISI D2 steel. D2 steel is generally used for stamping or forming dies, punches, forming rolls, knives, slitters, shear blades, tools, scrap choppers, tyre shredders, etc. Surface roughness is one of the main indices that determine the quality of machined products and is influenced by various cutting parameters. In machining operations, achieving the desired surface quality by optimizing the machining parameters is a challenging job. For mating components, surface roughness becomes even more critical, because such quality characteristics are highly correlated with the process parameters, directly or through their interactive effects on the process environment. In this work, the effects of the selected process parameters on surface roughness, and the subsequent setting of the parameter levels, have been accomplished by Taguchi's parameter design approach. The experiments were performed according to the combinations of process parameter levels suggested by an L9 orthogonal array. End milling of AISI D2 steel with a carbide tool was investigated experimentally by varying feed, speed, and depth of cut, and the surface roughness was measured using a surface roughness tester. Analyses of variance were performed on the means and signal-to-noise ratios to estimate the contribution of each process parameter to the process.
Keywords: D2 steel, orthogonal array, optimization, surface roughness, Taguchi methodology
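As an illustration of the Taguchi analysis described above, the sketch below computes smaller-the-better signal-to-noise ratios, S/N = -10*log10(mean(y^2)), and factor main effects for an L9 experiment. The roughness values, factor levels, and column assignments are hypothetical stand-ins, not the paper's data.

```python
import numpy as np

# Standard L9(3^3) array: columns = speed, feed, depth-of-cut levels (0..2)
L9 = np.array([[0,0,0],[0,1,1],[0,2,2],
               [1,0,1],[1,1,2],[1,2,0],
               [2,0,2],[2,1,0],[2,2,1]])

# Hypothetical surface roughness Ra (um), two replicates per run
ra = np.array([[1.82,1.90],[1.55,1.60],[1.41,1.38],
               [1.70,1.75],[1.35,1.30],[1.62,1.58],
               [1.25,1.28],[1.48,1.52],[1.33,1.36]])

# Smaller-the-better S/N ratio: -10*log10(mean(y^2)) per run
sn = -10 * np.log10((ra**2).mean(axis=1))

# Main effect of each factor = mean S/N at each of its three levels
for col, name in enumerate(["speed", "feed", "depth of cut"]):
    effects = [sn[L9[:, col] == lvl].mean() for lvl in (0, 1, 2)]
    best = int(np.argmax(effects))  # higher S/N means lower roughness
    print(f"{name}: level means = {np.round(effects, 2)}, best level = {best}")
```

The level with the highest mean S/N for each factor is the Taguchi-optimal setting for minimizing roughness.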
Procedia PDF Downloads 543
11744 Online Monitoring Rheological Property of Polymer Melt during Injection Molding
Authors: Chung-Chih Lin, Chien-Liang Wu
Abstract:
Detecting the state of the polymer melt during the manufacturing process is regarded as an efficient way to control the quality of the molded part in advance. Online monitoring of the rheological properties of the polymer melt during processing provides an approach to understand the melt state immediately. The rheological properties reflect the state of the polymer melt at different processing parameters and are particularly important in injection molding. An approach that demonstrates how to calculate the rheological properties of a polymer melt through in-process measurement, using injection molding as an example, is proposed in this study. The system consists of two sensors and a data acquisition module that processes the measured data, from which the rheological properties of the polymer melt are calculated. The rheological properties discussed in this study are shear rate and viscosity, which are investigated with respect to injection speed and melt temperature. The results show that the effect of injection speed on the rheological properties is apparent, especially at high melt temperatures, and should be considered in precision molding processes.
Keywords: injection molding, melt viscosity, shear rate, monitoring
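A hedged sketch of how shear rate and viscosity can be computed from in-process measurements, assuming a slit-die arrangement with two pressure transducers a distance L apart; the abstract mentions two sensors but does not specify the geometry, so the standard slit-die relations are used here and the sample numbers are invented.

```python
import numpy as np

def slit_die_rheology(dp_pa, q_m3s, w, h, L):
    """Apparent wall shear rate, shear stress, and viscosity for a slit die.

    dp_pa : pressure drop between the two transducers (Pa)
    q_m3s : volumetric flow rate (m^3/s)
    w, h  : slit width and gap (m), with w >> h
    L     : distance between the transducers (m)
    """
    shear_rate = 6.0 * q_m3s / (w * h**2)    # apparent (Newtonian) wall shear rate, 1/s
    shear_stress = dp_pa * h / (2.0 * L)     # wall shear stress, Pa
    viscosity = shear_stress / shear_rate    # apparent viscosity, Pa.s
    return shear_rate, shear_stress, viscosity

# Hypothetical in-process sample: 8 MPa drop over 50 mm, 20 cm^3/s flow
rate, stress, eta = slit_die_rheology(8e6, 20e-6, w=20e-3, h=2e-3, L=50e-3)
print(f"shear rate = {rate:.0f} 1/s, viscosity = {eta:.1f} Pa.s")
```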
Procedia PDF Downloads 379
11743 Resume Ranking Using Custom Word2vec and Rule-Based Natural Language Processing Techniques
Authors: Subodh Chandra Shakya, Rajendra Sapkota, Aakash Tamang, Shushant Pudasaini, Sujan Adhikari, Sajjan Adhikari
Abstract:
Many efforts have been made to measure the semantic similarity between text corpora and documents. One state-of-the-art technique in the field of Natural Language Processing (NLP) is the word-to-vector (word2vec) model, which converts words into word embeddings and measures the similarity between the resulting vectors. We found this to be quite useful for the task of resume ranking. This paper therefore implements a word2vec model, along with other NLP techniques, to rank resumes against a particular job description and thereby automate part of the hiring process. The paper describes the proposed system and the findings made while building it.
Keywords: chunking, document similarity, information extraction, natural language processing, word2vec, word embedding
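A minimal sketch of the embedding-based ranking idea, assuming a gensim word2vec model trained on a toy corpus. The documents, whitespace tokenizer, and scoring are illustrative placeholders; the authors' pipeline additionally uses chunking and rule-based information extraction.

```python
import numpy as np
from gensim.models import Word2Vec

resumes = [
    "python developer with machine learning and nlp experience",
    "accountant skilled in auditing and financial reporting",
    "data scientist experienced in nlp word embeddings and python",
]
job = "looking for nlp engineer with python and word embedding skills"

corpus = [doc.split() for doc in resumes + [job]]
model = Word2Vec(sentences=corpus, vector_size=50, window=3,
                 min_count=1, workers=1, seed=1)

def doc_vector(tokens, model):
    # Average the embeddings of in-vocabulary tokens
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

jv = doc_vector(job.split(), model)
ranked = sorted(((cosine(doc_vector(r.split(), model), jv), r) for r in resumes),
                reverse=True)
for score, resume in ranked:
    print(f"{score:.3f}  {resume}")
```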
Procedia PDF Downloads 157
11742 Medical Imaging Fusion: A Teaching-Learning Simulation Environment
Authors: Cristina Maria Ribeiro Martins Pereira Caridade, Ana Rita Ferreira Morais
Abstract:
The use of computational tools has become essential in the context of interactive learning, especially in engineering education. In the medical industry, teaching medical image processing techniques is a crucial part of training biomedical engineers, as these techniques have applications integrated with healthcare facilities and hospitals. The aim of this article is to present a teaching-learning simulation tool for medical image fusion, developed in MATLAB with a graphical user interface, that explores different image fusion methodologies and processes in combination with image pre-processing techniques. The application applies different algorithms and medical fusion techniques in real time, allowing users to view the original and fused images, compare processed and original images, adjust parameters, and save images. The proposed tool is an innovative teaching-learning environment consisting of a dynamic and motivating simulation that helps biomedical engineering students acquire knowledge about medical image fusion techniques and the skills required in the training of biomedical engineers. In conclusion, the developed simulation tool provides real-time visualization of the original and fused images and the possibility to test, evaluate, and advance the student's knowledge about the fusion of medical images. It also facilitates the exploration of medical imaging applications, specifically image fusion, which is critical in the medical industry. Teachers and students can make adjustments and/or create new functions, making the simulation environment adaptable to new techniques and methodologies.
Keywords: image fusion, image processing, teaching-learning simulation tool, biomedical engineering education
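For readers outside the MATLAB tool, here is a minimal Python/OpenCV sketch of two of the simplest pixel-level fusion rules such a simulator might demonstrate (weighted averaging and maximum selection); the file names are placeholders and the images are assumed co-registered and equally sized.

```python
import cv2
import numpy as np

# Load two co-registered, same-size modality images (paths are placeholders)
ct = cv2.imread("ct_slice.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
mri = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Pixel-level weighted-average fusion (alpha adjustable, like a GUI slider)
alpha = 0.5
fused_avg = cv2.addWeighted(ct, alpha, mri, 1.0 - alpha, 0.0)

# Maximum-selection fusion: keep the brighter pixel from either modality
fused_max = np.maximum(ct, mri)

cv2.imwrite("fused_average.png", fused_avg.astype(np.uint8))
cv2.imwrite("fused_maximum.png", fused_max.astype(np.uint8))
```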
Procedia PDF Downloads 129
11741 Degumming of Eri Silk Fabric with Ionic Liquid
Authors: Shweta K. Vyas, Rakesh Musale, Sanjeev R. Shukla
Abstract:
Eri silk is a non-mulberry silk that is obtained without killing the silkworms, and hence it is also known as Ahimsa silk. In the present study, the results of degumming eri silk with alkaline peroxide have been compared with those obtained using the ionic liquid (IL) 1-butyl-3-methylimidazolium chloride, [BMIM]Cl. Experiments were designed by response surface methodology to find the optimum processing parameters for degumming eri silk. The statistical software Design-Expert 6.0 was used for regression analysis and graphical analysis of the responses obtained by running the set of designed experiments. Analysis of variance (ANOVA) was used to estimate the statistical parameters. A polynomial equation of quadratic order was employed to fit the experimental data, and the quality of the model terms was evaluated by the F-test. Three-dimensional surface plots were prepared to study the effect of the variables on the different responses. The optimum conditions for the IL treatment were selected from the predicted combinations, and the experiments were repeated under these conditions to determine reproducibility.
Keywords: silk degumming, ionic liquid, response surface methodology, ANOVA
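A minimal sketch of the response-surface step, assuming two coded factors and a full quadratic model fitted by ordinary least squares; the degumming responses below are invented for illustration (the authors used Design-Expert 6.0 rather than code like this).

```python
import numpy as np

# Hypothetical coded factors (e.g., temperature, time) and degumming loss (%)
x1 = np.array([-1, -1, -1, 0, 0, 0, 1, 1, 1], dtype=float)
x2 = np.array([-1, 0, 1, -1, 0, 1, -1, 0, 1], dtype=float)
y  = np.array([18.2, 20.1, 19.0, 21.5, 24.0, 22.8, 20.3, 22.9, 21.1])

# Full quadratic model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("coefficients:", np.round(beta, 3))

# Locate the predicted optimum on a fine grid inside the design region
g1, g2 = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
G = np.column_stack([np.ones(g1.size), g1.ravel(), g2.ravel(),
                     g1.ravel()**2, g2.ravel()**2, (g1 * g2).ravel()])
pred = G @ beta
i = int(np.argmax(pred))
print(f"predicted optimum: x1={g1.ravel()[i]:+.2f}, x2={g2.ravel()[i]:+.2f}, "
      f"response={pred[i]:.2f}")
```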
Procedia PDF Downloads 591
11740 Studying the Influence of Stir Cast Parameters on Properties of Al6061/Al2O3 Composite
Authors: Anuj Suhag, Rahul Dayal
Abstract:
Aluminum matrix composites (AMCs) refer to the class of metal matrix composites that are lightweight, high-performance, aluminum-centric material systems. The reinforcement in AMCs can take the form of continuous/discontinuous fibers, whiskers, or particulates, in various volume fractions. The properties of AMCs can be tailored to the requirements of different industrial applications by suitable combinations of matrix, reinforcement, and processing route. This work focuses on the fabrication of aluminum alloy (Al6061) matrix composites reinforced with 5 and 3 wt% Al2O3 particulates of 45 µm size using the stir casting route. The aim of the present work is to investigate the effects of the process parameters, determined by design of experiments, on the microhardness, microstructure, Charpy impact strength, surface roughness, and tensile properties of the AMC.
Keywords: aluminium matrix composite, Charpy impact strength test, composite materials, matrix, metal matrix composite, surface roughness, reinforcement
Procedia PDF Downloads 654
11739 Decision Making, Reward Processing and Response Selection
Authors: Benmansour Nassima, Benmansour Souheyla
Abstract:
The appropriate integration of reward processing and decision making provided by the environment is vital for behavioural success and individuals' well-being in everyday life. Functional neurological investigation has already provided a comprehensive picture of affective and emotional (motivational) processing in the healthy human brain and has recently also focused on the assessment of brain function in anxious and depressed individuals. This article offers an overview of the theoretical approaches that relate emotion and decision-making, and spotlights investigations with anxious or depressed individuals to reveal how emotions can interfere with decision-making. This research aims at incorporating the emotional structure, based on response and stimulation, with a Bayesian approach to decision-making in terms of probability and value processing. It seeks to show how studies of individuals with emotional dysfunctions bear out that alterations of decision-making can be considered in terms of altered probability and value representation. The ultimate objective is to critically determine whether the probabilistic representation of belief could afford a suitable approach to scrutinize alterations in probability and value representation in subjects with anxiety and depression, and to outline the general implications of this approach.
Keywords: decision-making, motivation, alteration, reward processing, response selection
Procedia PDF Downloads 476
11738 The Use of Image Processing Responses Tools Applied to Analysing Bouguer Gravity Anomaly Map (Tangier-Tetuan's Area-Morocco)
Authors: Saad Bakkali
Abstract:
Image processing is a powerful tool for the enhancement of edges in images used in the interpretation of geophysical potential field data. Aerial and terrestrial gravimetric surveys were carried out in the region of Tangier-Tetuan. From the observed and measured gravity data, a Bouguer gravity anomaly map was prepared. This paper reports the results and interpretations of the transformed Bouguer gravity anomaly maps of the Tangier-Tetuan area using image processing. Filtering analysis based on classical image processing was applied, using point operators such as logarithmic enhancement and gamma correction. The paper also presents the results obtained from this image processing analysis for enhancing the edges of the Bouguer gravity anomaly map of the Tangier-Tetuan zone.
Keywords: Bouguer, Tangier, filtering, gamma correction, logarithmic enhancement edges
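A hedged sketch of the two point operators named above, applied to a normalized anomaly grid; the grid here is synthetic, standing in for the Bouguer map derived from survey data.

```python
import numpy as np

# Hypothetical Bouguer anomaly grid (mGal); a real map would come from survey data
rng = np.random.default_rng(0)
anomaly = rng.normal(0.0, 10.0, size=(256, 256)).cumsum(axis=0).cumsum(axis=1)

# Normalize to [0, 1] so the point operators behave like image intensities
r = (anomaly - anomaly.min()) / (anomaly.max() - anomaly.min())

# Logarithmic enhancement: s = log(1 + c*r) / log(1 + c); expands low values
c = 50.0
s_log = np.log1p(c * r) / np.log1p(c)

# Gamma correction: s = r**gamma; gamma < 1 brightens, gamma > 1 darkens
gamma = 0.5
s_gamma = r ** gamma

print("log-enhanced range:", s_log.min(), s_log.max())
print("gamma-corrected range:", s_gamma.min(), s_gamma.max())
```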
Procedia PDF Downloads 420
11737 Efficient Layout-Aware Pretraining for Multimodal Form Understanding
Authors: Armineh Nourbakhsh, Sameena Shah, Carolyn Rose
Abstract:
Layout-aware language models have been used to create multimodal representations for documents that are in image form, achieving relatively high accuracy in document understanding tasks. However, the large number of parameters in the resulting models makes building and using them prohibitive without access to high-performing processing units with large memory capacity. We propose an alternative approach that can create efficient representations without the need for a neural visual backbone. This leads to an 80% reduction in the number of parameters compared to the smallest SOTA model, widely expanding applicability. In addition, our layout embeddings are pre-trained on spatial and visual cues alone and only fused with text embeddings in downstream tasks, which can facilitate applicability to low-resource or multilingual domains. Despite using 2.5% of the training data, we show competitive performance on two form understanding tasks: semantic labeling and link prediction.
Keywords: layout understanding, form understanding, multimodal document understanding, bias-augmented attention
Procedia PDF Downloads 147
11736 Influence of Processing Parameters in Selective Laser Melting on the Microstructure and Mechanical Properties of Ti/TiN Composites With in-situ and ex-situ Reinforcement
Authors: C. Sánchez de Rojas Candela, A. Riquelme, P. Rodrigo, M. D. Escalera-Rodríguez, B. Torres, J. Rams
Abstract:
Selective laser melting (SLM) is one of the most commonly used additive manufacturing (AM) techniques. In it, a thin layer of metallic powder is deposited, and a laser is used to melt selected zones. The accumulation of layers, each molten in the preselected zones, gives rise to a 3D part with a nearly arbitrary design. To ensure that the properties of the final parts match those of the powder, the whole process is carried out in an inert atmosphere, preferentially Ar, although this gas can be substituted. The Ti6Al4V alloy is widely used in multiple industrial applications, such as aerospace, maritime transport, and biomedicine, due to its properties. However, the demanding requirements of these applications call for greater hardness and wear resistance, together with better machinability, which currently limits its commercialization. To improve these properties, in this study, selective laser melting is used to manufacture Ti/TiN metal matrix composites with in-situ and ex-situ titanium nitride reinforcement, where the scanning speed is varied (from 28.5 up to 65 mm/s) to study the influence of the processing parameters in SLM. A one-step method of nitriding the Ti6Al4V alloy in a reactive atmosphere is carried out to create the in-situ TiN reinforcement, and it is compared with ex-situ composites manufactured by premixing the titanium alloy powder and the ceramic reinforcement particles. The microstructure and mechanical properties of the different Ti/TiN composite materials have been analyzed. As a result, a similar matrix has been confirmed in the in-situ and ex-situ fabrications, and the growth mechanisms of the nitrides have been studied. An increase in the mechanical properties with respect to the initial alloy has been observed in both cases and related to changes in the microstructure. Specifically, a greater improvement (around 30.65%) has been identified in the composites manufactured by the in-situ method at low speeds, although other properties, such as porosity, must be improved for future industrial applicability.
Keywords: in-situ reinforcement, nitriding reaction, selective laser melting, titanium nitride
Procedia PDF Downloads 78
11735 Dissolved Oxygen Prediction Using Support Vector Machine
Authors: Sorayya Malek, Mogeeb Mosleh, Sharifah M. Syed
Abstract:
In this study, the Support Vector Machine (SVM) technique was applied to predict the dichotomized value of dissolved oxygen (DO) in two freshwater lakes, namely Chini and Bera Lake (Malaysia). The data sample contained 11 water quality parameters recorded from 2005 until 2009. All data parameters were used to predict the dissolved oxygen concentration, which was dichotomized into three levels (high, medium, and low). The input parameters were ranked, and a forward selection method was applied to determine the optimum parameters that yield the lowest errors and highest accuracy. Initial results showed that pH, water temperature, and conductivity are the most important parameters that significantly affect the prediction of DO. An SVM model using the ANOVA kernel with those parameters yielded a 74% accuracy rate. We conclude that using SVM models to predict DO is feasible, and that using the dichotomized value of DO yields higher prediction accuracy than using the precise DO value.
Keywords: dissolved oxygen, water quality, prediction of DO, support vector machine
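A hedged sketch of the classification step, assuming scikit-learn with a hand-rolled ANOVA-style kernel passed as a callable (scikit-learn has no built-in ANOVA kernel); the three features mirror the pH, temperature, and conductivity finding, but the data are randomly generated stand-ins for the lake measurements.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def anova_kernel(X, Y, sigma=1.0, d=2):
    # ANOVA RBF kernel: K(x, y) = (sum_k exp(-sigma*(x_k - y_k)^2))^d
    diff = X[:, None, :] - Y[None, :, :]
    return np.exp(-sigma * diff**2).sum(axis=2) ** d

# Hypothetical scaled features: pH, water temperature, conductivity
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 3))
# Hypothetical DO class (0=low, 1=medium, 2=high) loosely tied to the features
y = np.digitize(X @ np.array([0.8, -0.6, 0.4]) + rng.normal(0, 0.3, 300),
                [-0.5, 0.5])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel=lambda A, B: anova_kernel(A, B, sigma=0.5, d=2), C=1.0)
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```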
Procedia PDF Downloads 288
11734 Probing Syntax Information in Word Representations with Deep Metric Learning
Authors: Bowen Ding, Yihao Kuang
Abstract:
In recent years, with the development of large-scale pre-trained language models, building vector representations of text with deep neural network models has become standard practice for natural language processing tasks. Performance on downstream tasks shows that the text representations constructed by these models contain linguistic information, but the mode and extent of its encoding are unclear. In this work, a structural probe is proposed to detect whether the vector representations produced by a deep neural network embed a syntax tree. The probe is trained with a deep metric learning method, so that the distance between word vectors in the metric space it defines encodes the distance between words in the syntax tree, and the norm of a word vector encodes the depth of the word in the syntax tree. Experimental results on ELMo and BERT show that the syntax tree is encoded in their parameters and in the word representations they produce.
Keywords: deep metric learning, syntax tree probing, natural language processing, word representations
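A minimal PyTorch sketch of a distance-style structural probe of the kind described above (in the spirit of Hewitt and Manning's formulation): a linear map B is trained so that squared distances between projected word vectors match syntax-tree distances. The toy tensors stand in for real contextual embeddings and parse-tree distances.

```python
import torch

torch.manual_seed(0)
dim, rank, n_words = 768, 64, 12

# Stand-ins for one sentence: contextual word vectors and the syntax-tree
# distance matrix (number of edges between each pair of words in the parse)
h = torch.randn(n_words, dim)
tree_dist = torch.randint(1, 6, (n_words, n_words)).float()
tree_dist = (tree_dist + tree_dist.T) / 2
tree_dist.fill_diagonal_(0)

B = torch.nn.Parameter(torch.randn(rank, dim) * 0.01)
opt = torch.optim.Adam([B], lr=1e-3)

for step in range(500):
    proj = h @ B.T                              # (n_words, rank)
    diff = proj[:, None, :] - proj[None, :, :]  # pairwise differences
    pred = (diff ** 2).sum(-1)                  # squared L2 distances
    loss = (pred - tree_dist).abs().mean()      # L1 probe loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final probe loss:", loss.item())
```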
Procedia PDF Downloads 64
11733 Value Chain Analysis and Enhancement Added Value in Palm Oil Supply Chain
Authors: Juliza Hidayati, Sawarni Hasibuan
Abstract:
PT. XYZ is a manufacturing company that produces crude palm oil (CPO). Fierce competition in global markets occurs not only between companies but also between supply chains. This research aims to analyze the supply chain and value chain of crude palm oil (CPO) in the company. The data analysis methods used are qualitative and quantitative analysis. The qualitative analysis describes the supply chain and value chain, while the quantitative analysis is used to determine the value added and the establishment of the value chain. Based on the analysis, the supply chain of crude palm oil (CPO) in the company consists of four main actors: suppliers of raw materials, processing, distributors, and customers. The value chain analysis covers two actors: the palm oil plantation and the palm oil processing plant. The palm oil plantation activities include nurseries, planting, plant maintenance, harvesting, and shipping. The palm oil processing plant activities include reception, sterilizing, threshing, pressing, and oil clarification. The value added of the palm oil plantation was 72.42%, and that of the palm oil processing plant was 10.13%.
Keywords: palm oil, value chain, value added, supply chain
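A small illustrative calculation of the value-added ratio used in this kind of analysis (value added = output value minus intermediate input costs, expressed as a share of output value); the revenues and costs below are hypothetical, not the study's figures.

```python
def value_added_ratio(output_value, intermediate_costs):
    """Value added as a percentage of output value."""
    value_added = output_value - sum(intermediate_costs)
    return 100.0 * value_added / output_value

# Hypothetical plantation stage: fruit-bunch revenue vs. seedling,
# fertilizer, and labor costs (arbitrary currency units)
plantation = value_added_ratio(1_500.0, [180.0, 140.0, 94.0])

# Hypothetical processing stage: CPO revenue vs. fruit purchase and milling costs
mill = value_added_ratio(2_000.0, [1_500.0, 297.0])

print(f"plantation value added: {plantation:.2f}%")
print(f"mill value added: {mill:.2f}%")
```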
Procedia PDF Downloads 369
11732 One Step Further: Pull-Process-Push Data Processing
Authors: Romeo Botes, Imelda Smit
Abstract:
In today’s modern age of technology, vast amounts of data need to be processed in real time to keep users satisfied. This data comes from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices, which make use of different protocols, including TCP, UDP, and HTTP/S, for data communication to web servers and eventually to users. The data obtained from these devices may provide valuable information to users but is mostly in an unreadable format that needs to be processed to provide information and business intelligence. This data is not always current; it is mostly historical, and it is not subject to the consistency and redundancy measures that most other data usually is. Most important to the users is that the data be pre-processed into a readable format when it is entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers make use of various techniques in such programs, but sometimes neglect the effect some of these techniques may have on database performance. One technique generally used is to pull data from the database server, process it, and push it back to the database server in one single step. Since the processing of the data usually takes some time, this keeps the database busy and locked for the period that the processing takes place, which decreases the overall performance of the database server and therefore of the system. This paper follows on a paper discussing the performance increase that may be achieved by utilizing array lists along with a pull-process-push data processing technique split into three steps. The purpose of this paper is to expand the number of clients when comparing the two techniques, to establish the impact this may have on CPU performance, storage, and processing time.
Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list
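A hedged sketch of the three-step pull-process-push split using SQLite and in-memory lists; the table, columns, and decoding step are invented placeholders. The point of the split is that the database is touched only briefly at the start and end, instead of being held locked while each record is processed.

```python
import sqlite3

conn = sqlite3.connect("telemetry.db")  # placeholder database

# Step 1: PULL - read the raw records into an in-memory list, then release the DB
with conn:
    raw_rows = conn.execute(
        "SELECT id, payload FROM raw_messages WHERE processed = 0"
    ).fetchall()

# Step 2: PROCESS - decode entirely in memory; the database stays unlocked
def decode(payload):
    # Placeholder for protocol-specific decoding of a device message
    return payload.strip().upper()

processed = [(decode(payload), row_id) for row_id, payload in raw_rows]

# Step 3: PUSH - write everything back in one short transaction
with conn:
    conn.executemany(
        "UPDATE raw_messages SET payload = ?, processed = 1 WHERE id = ?",
        processed,
    )
conn.close()
```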
Procedia PDF Downloads 243
11731 Artificial Neural Network to Predict the Optimum Performance of Air Conditioners under Environmental Conditions in Saudi Arabia
Authors: Amr Sadek, Abdelrahaman Al-Qahtany, Turkey Salem Al-Qahtany
Abstract:
In this study, a backpropagation artificial neural network (ANN) model has been used to predict the cooling and heating capacities of air conditioners (AC) under different conditions. Sufficiently large measurement datasets were obtained from the national energy-efficiency laboratories in Saudi Arabia and were used for the learning process of the ANN model. The parameters affecting the performance of the AC, including temperature, humidity level, specific enthalpy indoors and outdoors, and the air volume flow rate of the indoor units, have been considered. These parameters were used as inputs for the ANN model, while the cooling and heating capacity values were set as the targets. A backpropagation ANN model with two hidden layers and one output layer could successfully correlate the input parameters with the targets. The characteristics of the ANN model, including the input-processing, transfer, neuron-distance, topology, and training functions, are discussed. The performance of the ANN model was monitored over the training epochs and assessed using the mean squared error function. The model was then used to predict the performance of the AC under conditions not included in the measurement results. The optimum performance of the AC was also predicted under the different environmental conditions in Saudi Arabia. The uncertainty of the ANN model predictions has been evaluated, taking into account the randomness of the data and lack of learning.
Keywords: artificial neural network, uncertainty of model predictions, efficiency of air conditioners, cooling and heating capacities
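A minimal sketch of this kind of regression ANN using scikit-learn; the two-hidden-layer topology follows the abstract, but the layer sizes, activation, and the synthetic capacity data are assumptions, not the laboratory measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Hypothetical inputs: [outdoor T, indoor T, humidity, indoor enthalpy,
# outdoor enthalpy, airflow] -> target cooling capacity (kW)
rng = np.random.default_rng(7)
X = rng.uniform([25, 18, 0.2, 40, 60, 500], [50, 27, 0.8, 60, 95, 1500], (400, 6))
y = 12 - 0.12 * (X[:, 0] - 35) + 2.0 * X[:, 5] / 1000 + rng.normal(0, 0.3, 400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_tr)

# Two hidden layers as in the abstract; the sizes (16, 8) are an assumption
ann = MLPRegressor(hidden_layer_sizes=(16, 8), activation="tanh",
                   solver="adam", max_iter=3000, random_state=0)
ann.fit(scaler.transform(X_tr), y_tr)

pred = ann.predict(scaler.transform(X_te))
print("test MSE:", mean_squared_error(y_te, pred))
```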
Procedia PDF Downloads 72
11730 Multiobjective Optimization of Wastwater Treatment by Electrochemical Process
Authors: Malek Bendjaballah, Hacina Saidi, Sarra Hamidoud
Abstract:
The aim of this study is to model and optimize the performance of a new electrocoagulation (EC) process for the treatment of wastewater, as well as its energy consumption, in order to extrapolate it to the industrial scale. Through judicious application of a design of experiments (DOE), it was possible to evaluate the individual effects and interactions that have a significant influence on both objective functions (maximizing efficiency and minimizing energy consumption), using aluminum electrodes as the sacrificial anode. Preliminary experiments showed that the pH of the medium, the applied potential, and the EC treatment time are the main parameters. A 3³ factorial design was adopted to model performance and energy consumption. Under optimal conditions, the pollution reduction efficiency is 93%, combined with a minimum energy consumption of 2.60×10⁻³ kWh/mg-COD. The applied potential or current and the treatment time, together with their interaction, were the most influential parameters in the mathematical models obtained. The modeling results were also correlated with the experimental ones. The results offer promising opportunities to develop a clean process and inexpensive technology to eliminate or reduce wastewater.
Keywords: electrocoagulation, green process, experimental design, optimization
Procedia PDF Downloads 94
11729 Optimum Er: YAG Laser Parameters for Orthodontic Composite Debonding: An in vitro Study
Authors: Mohammad Zamzam, Wesam Bachir, Imad Asaad
Abstract:
Several studies have produced estimates of Er:YAG laser parameters and specifications, but there are still insufficient data for a reliable selection of laser parameters. As a consequence, there is a heightened need for ideal specifications of the Er:YAG laser to reduce the amount of enamel ablation. The objective of this paper is to investigate the influence of the Er:YAG laser parameters, energy level and pulse duration, on orthodontic composite removal after bracket debonding. The sample consisted of 45 cuboids of orthodontic composite made with plastic moulds. The samples were divided into three groups, each irradiated with the Er:YAG laser set at different energy levels and three pulse durations (50 µs, 100 µs, and 300 µs). The geometrical parameters (depth and area) of the cavities formed by laser irradiation were determined. An ANCOVA test showed a statistically significant difference (p < 0.05) between the groups, indicating a potential effect of laser pulse duration on the geometrical parameters after controlling for laser energy level. A post-hoc Bonferroni test ranked the 50 µs Er:YAG laser pulse as the most influential factor for all geometrical parameters in removing remnant composite from the enamel surface. Also, 300 mJ laser pulses caused the largest removal of the composite. The results of the present study demonstrated the efficacy of 50 µs and 300 mJ Er:YAG laser pulses for the removal of remnant orthodontic composite.
Keywords: enamel, Er:YAG, geometrical parameters, orthodontic composite, remnant composite
Procedia PDF Downloads 550
11728 Quantitative Analysis of Multiprocessor Architectures for Radar Signal Processing
Authors: Deepak Kumar, Debasish Deb, Reena Mamgain
Abstract:
Radar signal processing requires high number-crunching capability, which is most often achieved using a multiprocessor platform. Though a multiprocessor platform provides the capability of meeting real-time computational challenges, its architecture, along with the mapping of the algorithm onto the architecture, plays a vital role in using the platform efficiently. Towards this, along with standard performance metrics, a few additional metrics are defined that help in evaluating the multiprocessor platform together with the algorithm mapping. A generic multiprocessor architecture cannot suit all processing requirements; depending on the system requirements and the type of algorithms used, the most suitable architecture for the given problem is decided. In this paper, we study different architectures and quantify the different performance metrics, which enables comparison of the architectures on their merits. We also carried out case studies of different architectures and their efficiency depending on whether parallelism is exploited over the algorithm, the data, or both.
Keywords: radar signal processing, multiprocessor architecture, efficiency, load imbalance, buffer requirement, pipeline, parallel, hybrid, cluster of processors (COPs)
Procedia PDF Downloads 409
11727 The Influence of Concreteness on English Compound Noun Processing: Modulation of Constituent Transparency
Authors: Turgut Coskun
Abstract:
'Concreteness effect' refers to the faster processing of concrete words, and 'compound facilitation' refers to a faster response to compound words. In this study, our main goal was to investigate the interaction between compound facilitation and the concreteness effect; the latter might modulate compound processing depending on the constituents' transparency patterns. To evaluate this, we created lists of compound and monomorphemic words, sub-categorized them into concrete and abstract words, and further sub-categorized them based on their transparency. The transparency conditions were opaque-opaque (OO), transparent-opaque (TO), and transparent-transparent (TT). We used reaction time (RT) data from the English Lexicon Project (ELP) for our comparisons. The results showed the importance of the concreteness factor (facilitation) in both compound and monomorphemic word processing. Important for our present concern, separate analyses of concrete and abstract compounds revealed different patterns for the OO, TO, and TT conditions. Concrete TT and TO compounds were processed faster than concrete OO, abstract OO, and abstract TT compounds; however, they were not processed faster than abstract TO compounds. These results may reflect different representation patterns for concrete and abstract compounds.
Keywords: abstract word, compound representation, concrete word, constituent transparency, processing speed
Procedia PDF Downloads 196
11726 Arousal, Encoding, And Intrusive Memories
Authors: Hannah Gutmann, Rick Richardson, Richard Bryant
Abstract:
Intrusive memories following a traumatic event are not uncommon. However, in some individuals, these memories become maladaptive and lead to prolonged stress reactions. A seminal model of PTSD holds that aberrant processing during trauma may lead to prolonged stress reactions and intrusive memories: elevated arousal at the time of the trauma promotes data-driven processing, leading to fragmented and intrusive memories. This study investigated the role of elevated arousal in the development of intrusive memories. We measured salivary markers of arousal and investigated what impact this had on data-driven processing, memory fragmentation, and subsequently the development of intrusive memories. We assessed 100 healthy participants to understand their processing style, arousal, and experience of intrusive memories. Participants were randomised to a control or an experimental condition, the latter of which was designed to increase their arousal. Based on current theory, participants in the experimental condition were expected to engage in more data-driven processing and experience more intrusive memories than participants in the control condition. This research aims to shed light on the mechanisms underlying the development of intrusive memories, to illustrate ways in which therapeutic approaches for PTSD may be augmented for greater efficacy.
Keywords: stress, cortisol, sAA, PTSD, intrusive memories
Procedia PDF Downloads 196
11725 Reactive Learning about Food Waste Reduction in a Food Processing Plant in Gauteng Province, South Africa
Authors: Nesengani Elelwani Clinton
Abstract:
This paper presents reflective learning as an opportunity commonly available and used for learning about food waste in a food processing company, in the transition to sustainable and just food systems. In addressing how employees learn about food waste during food processing, the opportunities available for food waste learning were investigated. Reflective learning appeared to be the most used approach: it occurred as a response after employees had wasted a substantial amount of food, whereby process controllers and team leaders would highlight the issue to the employees concerned and explain how food waste could be reduced. This showed that learning about food waste is not proactive, and there continues to be a lack of structured learning around food waste. Several challenges around reflective learning about food waste were highlighted, including understanding the language, lack of interest from employees, set times to reach production targets, and working pressures. These challenges were reported to hinder food waste learning, which remains unstructured. A need was identified for proactive learning through structured methods, because in the plant, where the food processing activities happen, the existing signage and posters relate only to other sustainability issues such as food safety and health, indicating low levels of awareness about food waste. Therefore, this paper argues that food waste learning should be proactive. The proactive learning approach should include structured learning materials around food waste during food processing; the trainers should be multilingual, so that those who do not understand English can learn in their own language; and there should be signage and posters about food waste in the food processing plant. This will raise awareness of food waste, and employees' behaviour can be influenced by the posters and signage in the plant, enabling a transition to a just and sustainable food system.
Keywords: sustainable and just food systems, food waste, food waste learning, reflective learning approach
Procedia PDF Downloads 127
11724 Parametric Dependence of the Advection-Diffusion Equation in Two Dimensions
Authors: Matheus Fernando Pereira, Varese Salvador Timoteo
Abstract:
In this work, we have solved the two-dimensional advection-diffusion equation numerically for a spatially dependent solute dispersion along a non-uniform flow with a pulse-type source, in order to make a systematic study of the influence of the medium heterogeneity, initial flow velocity, and initial dispersion coefficient parameters on the solutions of the equation. The behavior of the solutions is then investigated as we change the three parameters independently. Our results show that even though the parameters represent different physical features of the system, the effects of their variation are very similar. We also observe that the effects caused by the parameters on the concentration depend on the distance from the source. Finally, our numerical results are in good agreement with the exact solutions for all values of the parameters used in our analysis.
Keywords: advection-diffusion equation, dispersion, numerical methods, pulse-type source
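A minimal numerical sketch of the 2D advection-diffusion equation, ∂C/∂t + u ∂C/∂x + v ∂C/∂y = D (∂²C/∂x² + ∂²C/∂y²), using explicit time stepping with upwind advection and central diffusion for a pulse released at the domain center. Constant u, v, and D are assumed here; the paper treats spatially dependent dispersion along a non-uniform flow.

```python
import numpy as np

nx = ny = 101
dx = dy = 0.01               # grid spacing (m)
u, v = 0.05, 0.02            # advection velocities (m/s), assumed constant
D = 1e-4                     # dispersion coefficient (m^2/s), assumed constant
dt = 0.2 * min(dx**2 / (4 * D), dx / max(u, v))   # conservative stable step

C = np.zeros((ny, nx))
C[ny // 2, nx // 2] = 1.0    # pulse-type source at the center

for _ in range(500):
    Cn = C.copy()
    # First-order upwind advection (valid for u, v > 0) + central diffusion
    adv_x = u * (Cn[1:-1, 1:-1] - Cn[1:-1, :-2]) / dx
    adv_y = v * (Cn[1:-1, 1:-1] - Cn[:-2, 1:-1]) / dy
    dif_x = D * (Cn[1:-1, 2:] - 2 * Cn[1:-1, 1:-1] + Cn[1:-1, :-2]) / dx**2
    dif_y = D * (Cn[2:, 1:-1] - 2 * Cn[1:-1, 1:-1] + Cn[:-2, 1:-1]) / dy**2
    C[1:-1, 1:-1] = Cn[1:-1, 1:-1] + dt * (dif_x + dif_y - adv_x - adv_y)

print("peak concentration after 500 steps:", C.max())
```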
Procedia PDF Downloads 238
11723 Validation of Escherichia coli O157:H7 Inactivation on Apple-Carrot Juice Treated with Manothermosonication by Kinetic Models
Authors: Ozan Kahraman, Hao Feng
Abstract:
Several models, such as the Weibull, modified Gompertz, biphasic linear, and log-logistic models, have been proposed to describe non-linear inactivation kinetics and used to fit non-linear inactivation data of several microorganisms inactivated by heat, high-pressure processing, or pulsed electric fields. First-order kinetic parameters (D-values and z-values) have often been used to characterize microbial inactivation by non-thermal processing methods such as ultrasound, and most ultrasonic inactivation studies employed them to describe the reduction in microbial survival counts. This study was conducted to analyze E. coli O157:H7 inactivation data using five microbial survival models (first-order, Weibull, modified Gompertz, biphasic linear, and log-logistic). The residual sum of squares and the total sum of squares criteria were used to evaluate the models, and their statistical indices were used to fit the inactivation data for E. coli O157:H7 treated by manothermosonication (MTS) at three temperatures (40, 50, and 60 °C) and three pressures (100, 200, and 300 kPa). Based on the statistical indices and visual observations, the Weibull and biphasic models fitted the MTS data best, as shown by high R² values; the modified Gompertz, first-order, and log-logistic models did not provide a better fit. The data in this study did not follow first-order kinetics, possibly because the cells sensitive to ultrasound were inactivated first, resulting in a fast initial inactivation period, while those resistant to ultrasound were killed more slowly. The Weibull and biphasic models were found to be more flexible for describing the survival curves of E. coli O157:H7 treated by MTS in apple-carrot juice.
Keywords: Weibull, biphasic, MTS, kinetic models, E. coli O157:H7
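A hedged sketch of fitting the Weibull survival model, log10(N/N0) = -(t/δ)^p, to inactivation data with SciPy; the survival values below are invented for illustration. A shape parameter p < 1 produces the upward-concave tailing attributed above to ultrasound-resistant cells.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical survival data: treatment time (min) vs. log10(N/N0)
t = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
log_s = np.array([0.0, -1.2, -1.9, -2.8, -3.4, -3.8, -4.1])

def weibull(t, delta, p):
    # Mafart-style Weibull survival model: log10(N/N0) = -(t/delta)**p
    return -(t / delta) ** p

popt, _ = curve_fit(weibull, t, log_s, p0=[1.0, 1.0])
delta, p = popt
pred = weibull(t, *popt)
ss_res = np.sum((log_s - pred) ** 2)
ss_tot = np.sum((log_s - log_s.mean()) ** 2)
print(f"delta = {delta:.3f} min, p = {p:.3f}, R^2 = {1 - ss_res/ss_tot:.4f}")
```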
Procedia PDF Downloads 361
11722 Effect of Soil and Material Characteristics on Safety of Concrete Structures Including SSI
Authors: A. E. Kurtoglu, A. Cevik, M. Bilgehan
Abstract:
In this parametric study, the effect of soil and material characteristics on the safety of structures is investigated. Soil parameters such as shear strength and unit weight, geometrical parameters of the structure such as foundation depth and building height, and material properties such as the weight of concrete were selected as input parameters. A real accelerogram of the 1989 El-Centro earthquake recorded by the USGS in Imperial Valley is used for this study. It is contained in the standard Strong Motion CD-ROM (SMC) format, which can be recognized and interpreted by the FEM software used. The soil-structure interaction model subjected to the above-mentioned earthquake was analyzed for 729 cases. The effect of the input parameters on the safety factor of the soil-structure system was then investigated, and the interaction between the input and output parameters is presented in graphical form. The findings showed that all input parameters have significant effects on the factor of safety results.
Keywords: factor of safety, finite element method, safety of structures, soil structure interaction
Procedia PDF Downloads 504
11721 Modelling a Hospital as a Queueing Network: Analysis for Improving Performance
Authors: Emad Alenany, M. Adel El-Baz
Abstract:
In this paper, the flow of different classes of patients into a hospital is modelled and analyzed using the queueing network analyzer (QNA) algorithm and discrete-event simulation. The input data for QNA are the rate and variability parameters of the arrival and service times, in addition to the number of servers in each facility. The patient flows largely match the real flows for a hospital in Egypt. Based on the analysis of the waiting times, two approaches are suggested for improving performance: separating patients into service groups, and adopting different service policies for sequencing patients through hospital units. Separating a specific group of patients with a higher performance target, to be served apart from the rest of the patients with a lower performance target, requires the same capacity while improving performance for the selected group. Besides, it is shown that adopting the shortest processing time and shortest remaining processing time service policies, among the other tested policies, would result in 11.47% and 13.75% reductions in average waiting time, respectively, relative to the first-come-first-served policy.
Keywords: queueing network, discrete-event simulation, health applications, SPT
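QNA-style analysis rests on two-moment approximations of each station; as a hedged illustration, the sketch below implements the well-known Allen-Cunneen approximation for the mean queue wait of a G/G/c station (the Erlang-C M/M/c wait scaled by the squared coefficients of variation). The station parameters are hypothetical, not the hospital's data.

```python
import math

def erlang_c(c, a):
    """Erlang-C probability of waiting; a = lambda/mu (offered load), c servers."""
    rho = a / c
    summ = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / (math.factorial(c) * (1 - rho))
    return top / (summ + top)

def gg_c_wait(lam, mu, c, ca2, cs2):
    """Allen-Cunneen approximation of mean waiting time in a G/G/c queue.

    lam, mu : arrival and service rates; c : number of servers
    ca2, cs2: squared coefficients of variation of inter-arrival/service times
    """
    a = lam / mu
    assert a / c < 1, "queue must be stable"
    wq_mmc = erlang_c(c, a) / (c * mu - lam)   # M/M/c mean wait
    return wq_mmc * (ca2 + cs2) / 2.0

# Hypothetical clinic station: 10 patients/h, 15-min mean service, 3 servers,
# variable arrivals (ca2 = 1.2) and low-variability service (cs2 = 0.5)
wq = gg_c_wait(lam=10.0, mu=4.0, c=3, ca2=1.2, cs2=0.5)
print(f"approximate mean wait: {60 * wq:.1f} minutes")
```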
Procedia PDF Downloads 185
11720 Detecting and Disabling Digital Cameras Using D3CIP Algorithm Based on Image Processing
Authors: S. Vignesh, K. S. Rangasamy
Abstract:
The paper deals with a device capable of detecting and disabling digital cameras: the system locates a camera and then neutralizes it. Every digital camera has an image sensor, known as a CCD, which is retro-reflective and sends light back directly to its original source at the same angle. The device shines infrared LED light, which is invisible to the human eye, at a distance of about 20 feet, and collects video of these reflections with a camcorder. The video of the reflections is then transferred to a computer connected to the device, where it is processed by image processing algorithms that pick out the infrared light bouncing back. Once a camera is detected, the device projects an invisible infrared laser into the camera's lens, thereby overexposing the photo and rendering it useless. Low levels of infrared laser neutralize digital cameras and are neither a health danger to humans nor a cause of physical damage to cameras. We also discuss a simplified design of the above device that can be used in theatres to prevent piracy. The domains covered here are optics and image processing.
Keywords: CCD, optics, image processing, D3CIP
Procedia PDF Downloads 355
11719 Security in Resource Constraints: Network Energy Efficient Encryption
Authors: Mona Almansoori, Ahmed Mustafa, Ahmad Elshamy
Abstract:
Wireless nodes in a sensor network gather and process critical information designed to be processed and communicated. Information flowing through such a network is critical for decision making and data processing, and the integrity of such data is one of the most critical factors in wireless security, which must be ensured without compromising the processing and transmission capability of the network. This paper presents a mechanism to securely transmit data over a chain of sensor nodes without compromising the throughput of the network, utilizing the battery resources available at each sensor node.
Keywords: hybrid protocol, data integrity, lightweight encryption, neighbor based key sharing, sensor node data processing, Z-MAC
Procedia PDF Downloads 143
11718 Multi-Spectral Medical Images Enhancement Using a Weber’s law
Authors: Muna F. Al-Sammaraie
Abstract:
The aim of this research is to present multi-spectral image enhancement methods for the common case in which a digital image populates only a small portion of the available range of digital values. A quantitative measure of image enhancement is also presented; this measure is related to concepts of the Weber law of the human visual system. For decades, several image enhancement techniques have been proposed; although most techniques require numerous advanced and critical steps, the resulting perceived images are often not satisfactory. This study involves changing the original values so that more of the available range is used, thereby increasing the contrast between features and their backgrounds. It consists of reading the binary image pixel by pixel, byte-wise, and displaying it; calculating the statistics of the image; automatically enhancing the color of the image based on the calculated statistics using algorithms; and working with the RGB color bands. Finally, the enhanced image is displayed along with its histogram. A number of experimental results illustrate the performance of these algorithms. In particular, the quantitative measure has helped to select the optimal processing parameters: the best parameters and transform.
Keywords: image enhancement, multi-spectral, RGB, histogram
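A hedged sketch of the two ingredients described above: a full-range linear stretch of each RGB band, and a Weber-law-inspired block-contrast measure (in the style of the published EME measure of enhancement) to quantify the improvement; the low-contrast image is synthetic.

```python
import numpy as np

def stretch(channel):
    """Full-range linear contrast stretch of one band to [0, 255]."""
    lo, hi = channel.min(), channel.max()
    return ((channel - lo) * 255.0 / max(hi - lo, 1)).astype(np.uint8)

def eme(gray, block=8, eps=1e-4):
    """Weber-law-inspired measure of enhancement: mean over blocks of
    20*log10(local max / local min) contrast."""
    h, w = gray.shape
    vals = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = gray[i:i + block, j:j + block].astype(np.float64)
            vals.append(20.0 * np.log10((b.max() + eps) / (b.min() + eps)))
    return float(np.mean(vals))

# Hypothetical low-contrast RGB image occupying a narrow band of values
rng = np.random.default_rng(3)
img = rng.integers(90, 140, size=(128, 128, 3), dtype=np.uint8)

enhanced = np.dstack([stretch(img[..., k]) for k in range(3)])
print(f"EME before: {eme(img.mean(axis=2)):.2f}, "
      f"after: {eme(enhanced.mean(axis=2)):.2f}")
```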
Procedia PDF Downloads 326
11717 Image Processing and Calculation of NGRDI Embedded System in Raspberry
Authors: Efren Lopez Jimenez, Maria Isabel Cajero, J. Irving-Vasqueza
Abstract:
The use and processing of digital images have opened up new opportunities for solving problems of various kinds, such as the calculation of different vegetation indexes, among other things differentiating healthy vegetation from humid vegetation. However, obtaining the images from which these indexes are calculated is still the subject of active research. In the present work, we propose to obtain these images using a low-cost embedded system (Raspberry Pi) and to process them using a set of open-source libraries called OpenCV, in order to obtain the Normalized Red-Green Difference Index (NGRDI).
Keywords: Raspberry Pi, vegetation index, Normalized Red-Green Difference Index (NGRDI), OpenCV
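A minimal sketch of the NGRDI computation with OpenCV, NGRDI = (G - R) / (G + R); the input path is a placeholder for a frame captured on the Raspberry Pi.

```python
import cv2
import numpy as np

# Path is a placeholder for a frame captured with the Raspberry Pi camera
img = cv2.imread("field_photo.png")          # OpenCV loads images as BGR
b, g, r = cv2.split(img.astype(np.float32))

# NGRDI = (G - R) / (G + R); a small epsilon avoids division by zero
ngrdi = (g - r) / (g + r + 1e-6)             # values in [-1, 1]

# Rescale to 8-bit for saving and visualization
out = ((ngrdi + 1.0) * 127.5).astype(np.uint8)
cv2.imwrite("ngrdi_map.png", out)
print("mean NGRDI:", float(ngrdi.mean()))
```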
Procedia PDF Downloads 289