Search results for: supply chain delivery models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11604

9264 Understanding the Role of Gas Hydrate Morphology on the Producibility of a Hydrate-Bearing Reservoir

Authors: David Lall, Vikram Vishal, P. G. Ranjith

Abstract:

Numerical modeling of gas production from hydrate-bearing reservoirs requires the solution of various thermal, hydrological, chemical, and mechanical phenomena in a coupled manner. Among the various reservoir properties that influence gas production estimates, the distribution of permeability across the domain is one of the most crucial parameters since it determines both heat transfer and mass transfer. The aspect of permeability in hydrate-bearing reservoirs is particularly complex compared to conventional reservoirs since it depends on the saturation of gas hydrates and hence, is dynamic during production. The dependence of permeability on hydrate saturation is mathematically represented using permeability-reduction models, which are specific to the expected morphology of hydrate accumulations (such as grain-coating or pore-filling hydrates). In this study, we demonstrate the impact of various permeability-reduction models, and consequently, different morphologies of hydrate deposits on the estimates of gas production using depressurization at the reservoir scale. We observe significant differences in produced water volumes and cumulative mass of produced gas between the models, thereby highlighting the uncertainty in production behavior arising from the ambiguity in the prevalent gas hydrate morphology.
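The saturation dependence just described is usually written as a closed-form function. As a minimal sketch (assuming a Masuda-type power law, not necessarily the specific reduction models compared in this study, and with hypothetical exponents), the morphology-dependent exponent N controls how sharply permeability falls with hydrate saturation:

```python
def masuda_permeability(k0, s_h, n):
    """Masuda-type power-law permeability reduction: k = k0 * (1 - S_h)**n.

    The exponent n is morphology-dependent: grain-coating hydrates are
    typically modelled with a smaller n than pore-filling hydrates, so the
    same saturation implies very different permeabilities.
    """
    if not 0.0 <= s_h < 1.0:
        raise ValueError("hydrate saturation must lie in [0, 1)")
    return k0 * (1.0 - s_h) ** n

# Illustrative comparison at 50% hydrate saturation (k0 normalised to 1);
# the exponents are hypothetical placeholders for the two morphologies.
k_grain = masuda_permeability(1.0, 0.5, 3)
k_pore = masuda_permeability(1.0, 0.5, 10)
```

Even this toy comparison gives an order-of-magnitude gap between the two morphologies at the same saturation, which is exactly the source of the production uncertainty highlighted in the abstract.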

Keywords: gas hydrate morphology, multi-scale modeling, THMC, fluid flow in porous media

Procedia PDF Downloads 213
9263 Hybrid Direct Numerical Simulation and Large Eddy Simulating Wall Models Approach for the Analysis of Turbulence Entropy

Authors: Samuel Ahamefula

Abstract:

Turbulent motion is a highly nonlinear and complex phenomenon, and its modelling is still very challenging. In this study, we developed a hybrid computational approach to accurately simulate fluid turbulence. The focus is on coupling and transitioning between Direct Numerical Simulation (DNS) and Large Eddy Simulation with Wall Models (LES-WM) regions. In the framework, high-fidelity, high-order fluid dynamical methods are utilized to simulate the unsteady compressible Navier-Stokes equations in Eulerian form on unstructured moving grids. The coupling and transitioning of DNS and LES-WM are conducted through a linearly staggered Dirichlet-Neumann coupling scheme. The high-fidelity framework is verified and validated on two counts: the ability of DNS to capture the full range of turbulent scales and give accurate results, and the efficiency of LES-WM in simulating the near-wall turbulent boundary layer using wall models.

Keywords: computational methods, turbulence modelling, turbulence entropy, Navier-Stokes equations

Procedia PDF Downloads 96
9262 Uplifting Citizens Participation: A Gov 2.0 Framework

Authors: Mohammed Aladalah

Abstract:

The emergence of digital citizens is no longer mere speculation; therefore, governments’ use of Web 2.0 tools (hereafter Gov 2.0) should be a part of all current and future e-government plans. The potential of Gov 2.0 to facilitate greater communication, participation, and collaboration with citizens has been highlighted and discussed extensively in recent literature. However, the current levels of citizens’ participation in Gov 2.0 have not lived up to the hype. Therefore, governments need to rethink the way in which they implement Gov 2.0 and take advantage of the digitally engaged population. We propose a two-dimensional framework to tackle this issue. The first dimension is the supply side: governments tend to use Gov 2.0 mainly for the dissemination of information and for self-promotion, without any desire to encourage interaction with citizens; this is due to many reasons, including a lack of time and the possibility of losing control. The second dimension of the framework is the demand side: citizens are unwilling to participate in Gov 2.0 activities because they do not perceive its value or do not trust the government. We attempt to consider the elements of both supply and demand in order to provide a comprehensive solution whereby the potential of Gov 2.0 can be fully utilized. Our framework is based on the theoretical foundation of service science and value co-creation theory. This paper makes two significant contributions: (a) it provides an initial framework intended to increase citizens’ participation in Gov 2.0; and (b) it enhances the understanding of governments’ Gov 2.0 applications, particularly in terms of the factors that ensure their attractiveness for citizens. This work is the first step in a comprehensive research undertaking, the purpose of which is to study the public’s engagement with the Gov 2.0 concept. It contributes to a better understanding of e-government and its future.

Keywords: e-government, Gov 2.0, citizens participation, digital citizen

Procedia PDF Downloads 330
9261 Comparison of Spiking Neuron Models in Terms of Biological Neuron Behaviours

Authors: Fikret Yalcinkaya, Hamza Unsal

Abstract:

To understand how neurons work, experimental studies in neural science must be combined with numerical simulations of neuron models in a computer environment. In this regard, the simplicity and applicability of spiking neuron modelling functions have been of great interest in computational and numerical neuroscience in recent years. Spiking neuron models can be classified by the neuronal behaviours they exhibit, such as spiking and bursting, and these classifications are important for researchers working in theoretical neuroscience. In this paper, three different spiking neuron models based on systems of first-order differential equations, Izhikevich, Adaptive Exponential Integrate-and-Fire (AEIF) and Hindmarsh-Rose (HR), are discussed and compared. First, the physical meaning, derivation, and differential equations of each model are provided and simulated in the Matlab environment. Then, by selecting appropriate parameters, the models were examined visually in Matlab, with the aim of demonstrating which model can simulate well-known biological neuron behaviours such as tonic spiking, tonic bursting, mixed-mode firing, spike frequency adaptation, resonator and integrator. As a result, the Izhikevich model was shown to produce regular spiking, chattering (continuous bursting), intrinsically bursting, thalamo-cortical, low-threshold spiking and resonator behaviours. The Adaptive Exponential Integrate-and-Fire model was able to produce firing patterns such as regular firing, adaptive firing, initial bursting, regular bursting, delayed firing, delayed regular bursting, transient firing and irregular firing. The Hindmarsh-Rose model showed three different dynamic neuron behaviours: spiking, bursting and chaotic firing.
From these results, the Izhikevich model may be preferred for its ability to reflect the true behaviour of the nerve cell, its ability to produce different types of spikes, and its suitability for use in larger-scale brain models. The most important reason for choosing the Adaptive Exponential Integrate-and-Fire model is that it can create rich firing patterns with few parameters. The chaotic behaviours of the Hindmarsh-Rose neuron model, like those of other chaotic systems, are thought to be applicable in many scientific and engineering fields such as physics, secure communication and signal processing.
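For readers who want to reproduce one of these behaviours, the Izhikevich model can be integrated with a simple forward-Euler loop, sketched here in Python rather than the Matlab used in the paper; the parameters are the published regular-spiking set:

```python
def izhikevich(a, b, c, d, I, T=200.0, dt=0.25):
    """Forward-Euler simulation of the Izhikevich (2003) model:
    v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u),
    with reset v <- c, u <- u + d whenever v reaches 30 mV."""
    v, u = -65.0, b * -65.0
    spike_times, trace = [], []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike detected: record and reset
            spike_times.append(step * dt)
            v, u = c, u + d
        trace.append(v)
    return spike_times, trace

# Published regular-spiking (RS) parameter set with a constant input current.
spikes, _ = izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0)
```

Other published parameter sets (for example, the intrinsically bursting or chattering values) produce the other firing patterns with the identical loop.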

Keywords: Izhikevich, adaptive exponential integrate fire, Hindmarsh Rose, biological neuron behaviours, spiking neuron models

Procedia PDF Downloads 175
9260 Reverse Logistics Network Optimization for E-Commerce

Authors: Albert W. K. Tan

Abstract:

This research consolidates a comprehensive array of publications from peer-reviewed journals, case studies, and seminar reports focused on reverse logistics and network design. By synthesizing this secondary knowledge, our objective is to identify and articulate key decision factors crucial to reverse logistics network design for e-commerce. Through this exploration, we aim to present a refined mathematical model that offers valuable insights for companies seeking to optimize their reverse logistics operations. The primary goal of this research endeavor is to develop a comprehensive framework tailored to advising organizations and companies on crafting effective networks for their reverse logistics operations, thereby facilitating the achievement of their organizational goals. This involves a thorough examination of various network configurations, weighing their advantages and disadvantages to ensure alignment with specific business objectives. The key objectives of this research include: (i) Identifying pivotal factors pertinent to network design decisions within the realm of reverse logistics across diverse supply chains. (ii) Formulating a structured framework designed to offer informed recommendations for sound network design decisions applicable to relevant industries and scenarios. (iii) Proposing a mathematical model to optimize the reverse logistics network. A conceptual framework for designing a reverse logistics network has been developed through a combination of insights from the literature review and information gathered from company websites. This framework encompasses four key stages in the selection of reverse logistics operations modes: (1) Collection, (2) Sorting and testing, (3) Processing, and (4) Storage. Key factors to consider in reverse logistics network design: I) Centralized vs. decentralized processing: Centralized processing, a long-standing practice in reverse logistics, has recently gained greater attention from manufacturing companies.
In this system, all products within the reverse logistics pipeline are brought to a central facility for sorting, processing, and subsequent shipment to their next destinations. Centralization offers the advantage of efficiently managing the reverse logistics flow, potentially leading to increased revenues from returned items. Moreover, it aids in determining the most appropriate reverse channel for handling returns. On the contrary, a decentralized system is more suitable when products are returned directly from consumers to retailers. In this scenario, individual sales outlets serve as gatekeepers for processing returns. Considerations encompass the product lifecycle, product value and cost, return volume, and the geographic distribution of returns. II) In-house vs. third-party logistics providers: The decision between insourcing and outsourcing in reverse logistics network design is pivotal. In insourcing, a company handles the entire reverse logistics process, including material reuse. In contrast, outsourcing involves third-party providers taking on various aspects of reverse logistics. Companies may choose outsourcing due to resource constraints or lack of expertise, with the extent of outsourcing varying based on factors such as personnel skills and cost considerations. Based on the conceptual framework, the authors have constructed a mathematical model that optimizes reverse logistics network design decisions. The model considers key factors identified in the framework, such as transportation costs, facility capacities, and lead times. The authors employed mixed-integer linear programming (MILP) to find optimal solutions that minimize costs while meeting organizational objectives.
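The flavour of this centralized-vs-decentralized network decision can be shown with a toy uncapacitated facility-location model, solved here by brute-force enumeration over candidate return-processing centres (all data hypothetical; the paper's actual model is a larger mixed-integer program that needs a proper solver):

```python
from itertools import combinations

def best_network(fixed_cost, ship_cost, demand):
    """Brute-force a toy uncapacitated facility-location model: choose which
    return-processing centres to open so that fixed opening costs plus the
    cost of shipping each zone's returns to its cheapest open centre are
    minimised. A stand-in for the paper's MILP, not its actual formulation."""
    n_fac = len(fixed_cost)
    best = (float("inf"), None)
    for k in range(1, n_fac + 1):
        for opened in combinations(range(n_fac), k):
            cost = sum(fixed_cost[j] for j in opened)
            for i, d in enumerate(demand):
                cost += d * min(ship_cost[i][j] for j in opened)
            if cost < best[0]:
                best = (cost, opened)
    return best

# Hypothetical data: 2 candidate centres, 3 customer zones returning goods.
fixed = [100.0, 120.0]
ship = [[2.0, 9.0], [8.0, 3.0], [5.0, 4.0]]  # per-unit cost, zone -> centre
dem = [10.0, 10.0, 10.0]
cost, opened = best_network(fixed, ship, dem)
```

For these numbers, opening only the first centre minimises the total cost: the fixed cost of a second site outweighs the shipping it would save.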

Keywords: reverse logistics, supply chain management, optimization, e-commerce

Procedia PDF Downloads 34
9259 Water Footprint for the Palm Oil Industry in Malaysia

Authors: Vijaya Subramaniam, Loh Soh Kheang, Astimar Abdul Aziz

Abstract:

Water footprint (WFP) has gained importance due to the increase in water scarcity in the world. This study analyses the WFP for an agriculture sector, i.e., the oil palm supply chain, which produces oil palm fresh fruit bunch (FFB), crude palm oil, palm kernel, and crude palm kernel oil. The water accounting and vulnerability evaluation (WAVE) method was used. This method analyses the water depletion index (WDI) based on the local blue water scarcity. The main contribution towards the WFP at the plantation was the production of FFB from the crop itself, at 0.23 m³/tonne FFB. At the mill, the burden shifts to the water added during the process, consisting of boiler and process water, which accounted for 6.91 m³/tonne crude palm oil. There was a 33% reduction in the WFP when there was no dilution or water addition after the screw press at the mill. When allocation was performed, the WFP reduced by 42%, as the burden was shared with the palm kernel and palm kernel shell. At the kernel crushing plant (KCP), the main contributor to the WFP, at 4.96 m³/tonne crude palm kernel oil, was the palm kernel, which carried the burden from upstream, followed by electricity for the process (0.33 m³/tonne crude palm kernel oil) and transportation of the palm kernel (0.08 m³/tonne crude palm kernel oil). A comparison was carried out for mills with biogas capture versus no biogas capture, and the WFP showed no difference between the two scenarios. Comparing KCPs operating in the proximity of mills with those operating in the proximity of ports gave a reduction of only 6% in the WFP. Both comparisons showed no or insignificant differences, in contrast to previous life cycle assessment studies on the carbon footprint, which showed significant differences. This shows that findings change when only certain impact categories are focused on.
It can be concluded that the impact from the water used by the oil palm tree is low due to the practice of no irrigation at the plantations and the high availability of water from rainfall in Malaysia. This reiterates the importance of planting oil palm trees in regions with high rainfall all year long, like the tropics. The milling stage had the most significant impact on the WFP. Mills should avoid dilution to reduce this impact.
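The allocation step mentioned above, partitioning one process-level footprint across co-products, can be sketched generically (the allocation keys below are hypothetical illustrations, not the study's data):

```python
def allocate_footprint(total_wfp, outputs):
    """Share one process-level water footprint across co-products in
    proportion to a chosen allocation key (mass, energy, or economic value),
    as is done when the milling burden is split between oil, kernel and
    shell. `outputs` maps product name -> allocation-key amount."""
    key_total = sum(outputs.values())
    return {product: total_wfp * share / key_total
            for product, share in outputs.items()}

# Hypothetical allocation keys per tonne of FFB (not the study's figures):
shares = allocate_footprint(6.91, {"crude palm oil": 0.20,
                                   "palm kernel": 0.05,
                                   "palm kernel shell": 0.06})
```

Whatever key is chosen, the allocated shares sum back to the original process footprint, so allocation redistributes the burden rather than reducing it in total.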

Keywords: life cycle assessment, water footprint, crude palm oil, crude palm kernel oil, WAVE method

Procedia PDF Downloads 167
9258 Aggregate Production Planning Framework in a Multi-Product Factory: A Case Study

Authors: Ignatio Madanhire, Charles Mbohwa

Abstract:

This study looks at the best model of the aggregate planning activity in an industrial entity and uses the trial-and-error method on spreadsheets to solve aggregate production planning problems. A linear programming model is also introduced to optimize the aggregate production planning problem. Application of the models in a furniture production firm is evaluated to demonstrate that practical and beneficial solutions can be obtained from them. Finally, some benchmarking against other furniture manufacturing firms was undertaken to assess the models' relevance and level of use elsewhere in the industry.
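The trial-and-error spreadsheet approach amounts to evaluating the total cost of candidate plans and keeping the cheapest. A minimal sketch (all costs, capacities, and demands hypothetical) comparing a chase plan against a level plan might look like:

```python
def plan_cost(demand, production, reg_cap, reg_cost, ot_cost, hold_cost):
    """Evaluate one aggregate plan: regular-time production up to capacity,
    overtime beyond it, and inventory holding cost over the horizon, i.e.
    a spreadsheet-style trial-and-error cost model."""
    inv, total = 0.0, 0.0
    for d, p in zip(demand, production):
        reg = min(p, reg_cap)
        ot = p - reg
        total += reg * reg_cost + ot * ot_cost
        inv += p - d
        if inv < 0:
            raise ValueError("plan is infeasible: demand not met")
        total += inv * hold_cost
    return total

# Hypothetical 3-period example: chase demand vs. level production.
demand = [100, 150, 120]
chase = plan_cost(demand, [100, 150, 120],
                  reg_cap=120, reg_cost=10, ot_cost=15, hold_cost=2)
level = plan_cost(demand, [130, 130, 130],
                  reg_cap=120, reg_cost=10, ot_cost=15, hold_cost=2)
```

A linear programme replaces this enumeration by treating the production quantities as decision variables and minimising the same cost expression subject to the capacity and demand constraints.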

Keywords: aggregate production planning, trial and error, linear programming, furniture industry

Procedia PDF Downloads 551
9257 Optimization of Energy Harvesting Systems for RFID Applications

Authors: P. Chambe, B. Canova, A. Balabanian, M. Pele, N. Coeur

Abstract:

To avoid battery-assisted tags with limited battery lifetimes, it is proposed here to replace them with energy harvesting systems able to draw power from the local environment. This would give RFID systems total independence, which is very interesting for applications where removing the tag from its location is not possible. An example is described here for luggage safety in airports and is easily extendable to similar situations in terms of operational constraints. The idea is to fit an RFID tag with an energy harvesting system not only to identify the luggage but also to supply an embedded microcontroller with a sensor delivering the luggage weight, making it impossible to add or remove anything from the luggage during transit phases. The aim is to optimize the harvested energy for such RFID applications and to study within which limits these applications are theoretically possible. The proposed energy harvester is based on two energy sources, piezoelectricity and electromagnetic waves, so that when the luggage is moving on ground transportation to airline counters, the piezo module supplies the tag and its microcontroller, while the RF module operates during luggage transit thanks to readers located along the way. Tag location on the luggage is analyzed to obtain the best vibrations, as well as the best harvester choice for optimizing the energy supply, depending on the application and the amount of energy harvested over a period of time. The effects of system parameters (RFID UHF frequencies, the limit distance between the tag and the antenna necessary to harvest energy, produced voltage, and voltage threshold) are discussed, and working conditions for such a system are delimited.

Keywords: RFID tag, energy harvesting, piezoelectric, EM waves

Procedia PDF Downloads 447
9256 Machine Learning Techniques for Estimating Ground Motion Parameters

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site condition. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing them as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitudes 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. This database was chosen because of the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for them.
Accuracy of the models in predicting intensity measures, generalization capability of the models for future data, and usability of the models are discussed in the evaluation process. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data is available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with Random Forest, in particular, outperforming the other algorithms. However, the conventional method is a better tool when limited data is available.
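The conventional regression baseline the study compares against can be sketched in pure Python: ordinary least squares for a simplified ground-motion form ln(PGA) = c0 + c1·M + c2·ln(R), where the functional form is an illustrative assumption rather than the study's exact model, fitted via the normal equations:

```python
import math

def fit_gmpe(records):
    """Ordinary least squares for the simplified ground-motion model
    ln(PGA) = c0 + c1*M + c2*ln(R), solved via the normal equations with
    Gaussian elimination. `records` is a list of (M, R_km, ln_PGA)."""
    A = [[0.0] * 3 for _ in range(3)]  # X^T X
    b = [0.0] * 3                      # X^T y
    for m, r, ln_pga in records:
        x = [1.0, m, math.log(r)]
        for i in range(3):
            b[i] += x[i] * ln_pga
            for j in range(3):
                A[i][j] += x[i] * x[j]
    for col in range(3):               # forward elimination with pivoting
        piv = max(range(col, 3), key=lambda row: abs(A[row][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, 3):
            f = A[row][col] / A[col][col]
            for j in range(col, 3):
                A[row][j] -= f * A[col][j]
            b[row] -= f * b[col]
    c = [0.0] * 3                      # back substitution
    for i in (2, 1, 0):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c

# Noise-free synthetic records generated from known coefficients should be
# recovered exactly, a basic sanity check on the regression machinery.
true = [-2.0, 1.5, -1.2]
data = [(m, r, true[0] + true[1] * m + true[2] * math.log(r))
        for m in (3.0, 4.0, 5.0) for r in (10.0, 100.0, 300.0)]
coeffs = fit_gmpe(data)
```

The ML alternatives discussed above replace this fixed functional form with flexible learners, which is what lets them capture nonlinear magnitude and distance effects when enough data is available.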

Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine

Procedia PDF Downloads 120
9255 Analysis of Consumer Preferences for Housing in Saudi Arabia

Authors: Mohammad Abdulaziz Algrnas, Emma Mulliner

Abstract:

Housing projects have been established in Saudi Arabia, by both government and private construction companies, to meet the increasing demand from Saudi inhabitants across the country. However, the real estate market supply does not meet consumer preference requirements. Preferences normally differ depending on the consumer’s situation, such as the household’s sociological characteristics (age, household size and composition), resources (income, wealth, information and experience), tastes and priorities. Collecting information about consumer attitudes, preferences and perceptions is important for the real estate market in order to better understand housing demand and to ensure that this is met by appropriate supply. The aim of this paper is to identify consumer preferences for housing in Saudi Arabia. A quantitative closed-ended questionnaire was conducted with housing consumers in Saudi Arabia in order to gain insight into consumer needs, current household situation, preferences for a number of investigated housing attributes and consumers’ perceptions around the current housing problem. 752 survey responses were obtained and analysed in order to describe preferences for housing attributes and make comparisons between groups. Factor analysis was also conducted to identify and reduce the attributes. The results indicate a difference in preference according to the gender of the respondents and depending on their region of residence.

Keywords: housing attributes, Saudi Arabia, consumer preferences, housing preferences

Procedia PDF Downloads 535
9254 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

Data arising in many applied settings frequently have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of order 1 and order 2 (MQL1, MQL2) and penalized quasi-likelihood of order 1 and order 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset; therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary-response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes, and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. The power of the test was adequate for most of the combinations under all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset, and the performance of the test was compared for each model.
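The simulation logic behind checking that a test "maintains the desirable Type-I error" can be sketched with a much simpler stand-in test, a one-degree-of-freedom Pearson chi-square on binary data rather than the multilevel GOF test itself: simulate under the null many times and count false rejections.

```python
import random

def type1_error_rate(n_sims=2000, n=200, p=0.5, crit=3.841):
    """Monte-Carlo check of a test's Type-I error: simulate data under the
    null hypothesis many times and count how often the test (wrongly)
    rejects. A one-df Pearson chi-square on binary outcomes stands in for
    the multilevel GOF test studied in the paper."""
    rejections = 0
    for _ in range(n_sims):
        successes = sum(random.random() < p for _ in range(n))
        e1, e0 = n * p, n * (1 - p)
        chi2 = (successes - e1) ** 2 / e1 + ((n - successes) - e0) ** 2 / e0
        if chi2 > crit:  # 5% critical value for one degree of freedom
            rejections += 1
    return rejections / n_sims

random.seed(42)
rate = type1_error_rate()  # should sit near the nominal 0.05 level
```

A test "maintains" its Type-I error when the estimated rejection rate stays close to the nominal level; rates well above it are the failure mode the paper reports for the MQL-estimated models.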

Keywords: goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, penalized quasi-likelihood, power, quasi-likelihood, type-I error

Procedia PDF Downloads 137
9253 Formulation and In Vivo Evaluation of Salmeterol Xinafoate Loaded MDI for Asthma Using Response Surface Methodology

Authors: Paresh Patel, Priya Patel, Vaidehi Sorathiya, Navin Sheth

Abstract:

The aim of the present work was to fabricate a Salmeterol Xinafoate (SX) metered dose inhaler (MDI) for asthma and to evaluate the SX-loaded solid lipid nanoparticles (SLNs) for pulmonary delivery. Solid lipid nanoparticles can be used to deliver particles to the lungs via MDI. A modified solvent emulsification diffusion technique was used to prepare the SX-loaded solid lipid nanoparticles, using Compritol 888 ATO as lipid, Tween 80 as surfactant, D-mannitol as cryoprotecting agent, and L-leucine to improve aerosolization behaviour. A Box-Behnken design was applied with 17 runs. 3-D response surface plots and contour plots were drawn, and the optimized formulation was selected on the basis of minimum particle size and maximum % entrapment efficiency (% EE). The formulation was also characterized by % yield, in vitro diffusion study, scanning electron microscopy, X-ray diffraction, DSC, and FTIR. Particle size and zeta potential were analyzed with a Zetatrac particle size analyzer, and aerodynamic properties were measured with a cascade impactor. Preconvulsion time was examined for the control and treatment groups and compared with the marketed product group. The MDI was evaluated by leakage test, flammability test, spray test, and content per puff. Within the experimental design, particle size and % EE were found to range between 119-337 nm and 62.04-76.77%, respectively, for the solvent emulsification diffusion technique. Morphologically, the particles had a spherical shape and uniform distribution. The DSC and FTIR studies showed no interaction between the drug and excipients. The zeta potential indicated good stability of the SLNs. The respirable fraction was found to be 52.78%, indicating delivery to the deep parts of the lung such as the alveoli. The animal study showed that the fabricated MDI protected the lungs against histamine-induced bronchospasm in guinea pigs. The MDI showed sphericity of particles in the spray pattern and 96.34% content per puff, and was non-flammable. SLNs prepared by the solvent emulsification diffusion technique provide a desirable size for deposition into the alveoli.
This delivery platform opens up a wide range of treatment application of pulmonary disease like asthma via solid lipid nanoparticles.
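A three-factor Box-Behnken design with five centre points does indeed yield the 17 runs mentioned above (the split 12 + 5 is an assumption consistent with the standard design). In coded (-1/0/+1) units it can be generated as:

```python
from itertools import combinations

def box_behnken(n_factors=3, n_center=5):
    """Generate a coded Box-Behnken design: all +/-1 combinations taken two
    factors at a time (remaining factors held at 0), plus centre points.
    For 3 factors with 5 centre points this yields 12 + 5 = 17 runs."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * n_factors
                run[i], run[j] = a, b
                runs.append(run)
    runs += [[0] * n_factors for _ in range(n_center)]
    return runs

design = box_behnken()
```

Each coded run is then mapped to actual factor levels (e.g. lipid, surfactant, and leucine amounts) before the responses, particle size and % EE here, are measured and fitted to a quadratic response surface.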

Keywords: salmeterol xinafoate, solid lipid nanoparticles, box-behnken design, solvent emulsification diffusion technique, pulmonary delivery

Procedia PDF Downloads 450
9252 Comparison of Acetylcholinesterase Reactivators Cytotoxicity with Their Structure

Authors: Lubica Muckova, Petr Jost, Jaroslav Pejchal, Daniel Jun

Abstract:

The development of acetylcholinesterase reactivators, i.e. antidotes against organophosphorus poisoning, is an important goal of defence research. The aim of this study was to compare cytotoxicity and chemical structure of 5 currently available (pralidoxime, trimedoxime, obidoxime, methoxime, and asoxime) and 4 newly developed compounds (K027, K074, K075, and K203). In oximes, there could be at least four important structural factors affecting their toxicity, including the number of oxime groups in the molecule, the position of oxime group(s) on pyridinium ring, the length of carbon linker, and the substitution by oxygen or insertion of the double bond into the connection chain. The cytotoxicity of tested substances was measured using colorimetric 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide assay (MTT assay) in SH-SY5Y cell line. Toxicity was expressed as toxicological index IC₅₀. The tested compounds showed different cytotoxicity ranging from 1.5 to 27 mM. K027 was the least, and methoxime was the most toxic reactivator. The lowest toxicity was found in a monopyridinium reactivator and bispyridinium reactivators with simple 3C carbon linker. Shortening of connection chain length to 1C, incorporation of oxygen moiety into 3C compounds, elongation of carbon linker to 4C and insertion of a double bond into 4C substances increase AChE reactivators' cytotoxicity. Acknowledgements: This work was supported by a long-term organization development plan Medical Aspects of Weapons of Mass Destruction of the Faculty of Military Health Sciences, University of Defence.

Keywords: acetylcholinesterase, cytotoxicity, organophosphorus poisoning, reactivators of acetylcholinesterase

Procedia PDF Downloads 303
9251 Changing Skills with the Transformation of Procurement Function

Authors: Ömer Faruk Ada, Türker Baş, M. Yaman Öztek

Abstract:

In this study, we aim to investigate the skills that procurement professionals must possess in order to fulfil their developing and changing role completely. Market conditions, competitive pressure, and high financial costs make it more important than ever for organizations to be able to use resources efficiently. Research shows that procurement expenses account for more than 50% of operating expenses. With the increasing profit impact of procurement, reviewing the position of the procurement function within the organization has become inevitable. This study is significant as it indicates the skills that procurement professionals must have to keep in step with the transformation of procurement units from transaction-oriented to value-chain-oriented. The transformation of procurement is investigated from the perspective of procurement professionals, and we aim to answer the following research questions: • How do procurement professionals perceive their role within the organization? • How has their role changed, and what challenges have they had to face? • What portfolio of skills do they believe will enable them to fulfil their role effectively? The first part of the study consists of a literature review investigating the changing role of procurement from different perspectives. In the second part, we present the results of in-depth interviews with 15 procurement professionals, analysed using descriptive analysis. In the light of these results, we classified procurement skills into operational, tactical, and strategic levels, and a Procurement Skills Framework was developed. This study shows the differences between how professionals and organizations perceive procurement; these differences are considered an important barrier to the procurement transformation.
Although having the necessary skills has a significant effect on procurement professionals' ability to fulfil their role completely and keep in step with the transformation of the procurement function, it is not the only factor; the degree of senior management and organizational support also has a direct impact during this transformation.

Keywords: procurement skills, procurement transformation, strategic procurement, value chain

Procedia PDF Downloads 412
9250 Structural Evidence of the Conversion of Nitric Oxide (NO) to Nitrite Ion (NO2‾) by Lactoperoxidase (LPO): Structure of the Complex of LPO with NO2‾ at 1.89 Å Resolution

Authors: V. Viswanathan, Md. Irshad Ahmad, Prashant K. Singh, Nayeem Ahmad, Pradeep Sharma, Sujata Sharma, Tej P Singh

Abstract:

Lactoperoxidase (LPO) is a heme-containing mammalian enzyme which uses hydrogen peroxide (H2O2) to catalyze the conversion of substrates into oxidized products. LPO is found in body fluids and tissues such as milk, saliva, tears, mucosa, and other body secretions. Previous structural studies have shown that LPO converts the substrates thiocyanate (SCN‾) and iodide (I‾) into the oxidized products hypothiocyanite (OSCN‾) and hypoiodite (IO‾), respectively. We report here a new structure of the complex of LPO with an oxidized product, nitrite (NO2‾). This product was generated from NO using a two-step reaction of LPO: in the first step, hydrogen peroxide (H2O2) was added to a solution of LPO in 0.1M phosphate buffer at pH 6.8; in the second step, NO gas was added to the mixture. This was crystallized using 20% (w/v) PEG-3350 and 0.2M ammonium iodide at pH 6.8. The structure determination showed the presence of an NO2‾ ion in the distal heme cavity of the substrate-binding site of LPO. The structure also showed that the propionate group linked to pyrrole ring D of the heme moiety was disordered. Similarly, the side chain of Asp108, which is covalently linked to the heme moiety, was split into two components. As a result of these changes, the conformation of the side chain of Arg255 was altered, allowing it to form new interactions with the disordered carboxylic group of the propionate moiety. These structural changes are indicative of an intermediate state in the catalytic reaction pathway of LPO.

Keywords: lactoperoxidase, structure, nitric oxide, nitrite ion, intermediate, complex

Procedia PDF Downloads 100
9249 HLB Disease Detection in Omani Lime Trees using Hyperspectral Imaging Based Techniques

Authors: Jacintha Menezes, Ramalingam Dharmalingam, Palaiahnakote Shivakumara

Abstract:

In recent years, Omani acid lime cultivation and production have been affected by citrus greening, or Huanglongbing (HLB), disease. HLB is one of the most destructive citrus diseases, with no remedies or countermeasures to stop it. Currently used polymerase chain reaction (PCR) and enzyme-linked immunosorbent assay (ELISA) HLB detection tests require lengthy and labor-intensive laboratory procedures. Furthermore, the equipment and staff needed to carry out these procedures are frequently specialized, making them a less-than-optimal solution for detecting the disease. The current research uses hyperspectral imaging technology for the automatic detection of citrus trees with HLB disease. Omani citrus tree leaf images were captured with a portable Specim IQ hyperspectral camera. The research considered healthy, nutrient-deficient, and HLB-infected leaf samples, classified on the basis of the PCR test. The high-resolution image samples were sliced into sub-cubes, which were further processed to obtain RGB images with spatial features. Similarly, RGB spectral slices were obtained through a moving window along the wavelength axis. The resized spectral-spatial RGB images were given to convolutional neural networks for deep feature extraction. The current research was able to classify a given sample into the appropriate class with 92.86% accuracy, indicating the effectiveness of the proposed techniques. The significant bands that differ across the three types of leaves were found to be 560 nm, 678 nm, 726 nm, and 750 nm.
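The moving-window spectral slicing described above can be sketched as follows; the cube dimensions, window size, and stride are illustrative assumptions, not the study's actual settings:

```python
import numpy as np

# Toy hyperspectral cube: height x width x bands. The shape is an assumption
# for illustration only (the Specim IQ records on the order of 200 bands).
cube = np.random.rand(64, 64, 204)

def spectral_rgb_slices(cube, window=3, stride=3):
    """Slide a window along the wavelength axis, emitting 3-band 'RGB' slices."""
    bands = cube.shape[2]
    slices = []
    for start in range(0, bands - window + 1, stride):
        slices.append(cube[:, :, start:start + window])
    return slices

slices = spectral_rgb_slices(cube)
print(len(slices), slices[0].shape)  # 68 slices, each of shape (64, 64, 3)
```

Each slice can then be resized and fed to a CNN alongside the spatial RGB views, as the abstract describes.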

Keywords: huanglongbing (HLB), hyperspectral imaging (HSI), Omani citrus, CNN

Procedia PDF Downloads 72
9248 Using Machine Learning to Classify Different Body Parts and Determine Healthiness

Authors: Zachary Pan

Abstract:

Our general mission is to solve the problem of classifying images into different body part types and deciding whether each is healthy. For now, however, we determine healthiness for only one of the six body part types: the chest, by detecting pneumonia in X-ray scans of chest images. With this type of AI, doctors gain a second opinion when taking CT or X-ray scans of their patients. Another advantage of using this machine learning classifier is that it has no human weaknesses such as fatigue. The overall approach is to split the problem into two parts: first classify the image, then determine whether it is healthy. To classify the image into a specific body part class, the body parts dataset is split into training and test sets. Many models, such as neural networks or logistic regression models, can then be fitted using the training set. The test set then yields a realistic estimate of the accuracy the models will have on images in the real world, since these testing images have never been seen by the models. To increase this testing accuracy, complex algorithms such as multiplicative weight update can also be applied to the models. For the second part of the problem, determining whether the body part is healthy, another dataset consisting of healthy and non-healthy images of the specific body part is again split into training and test sets; a further neural network is trained on the training images and evaluated on the test set. We carry out this process only for the chest images. A major conclusion is that convolutional neural networks are the most reliable and accurate at image classification.
In classifying the images, the logistic regression model, the neural network, the neural network with multiplicative weight update, the neural network with the black box algorithm, and the convolutional neural network achieved 96.83%, 97.33%, 97.83%, 96.67%, and 98.83% accuracy, respectively. The overall accuracy of the model that determines whether the images are healthy, on the other hand, is around 78.37%.

Keywords: body part, healthcare, machine learning, neural networks

Procedia PDF Downloads 95
9247 Review of Hydrologic Applications of Conceptual Models for Precipitation-Runoff Process

Authors: Oluwatosin Olofintoye, Josiah Adeyemo, Gbemileke Shomade

Abstract:

The relationship between rainfall and runoff is an important issue in surface water hydrology; therefore, the understanding and development of accurate rainfall-runoff models, and their applications in water resources planning, management, and operation, are of paramount importance in hydrological studies. This paper reviews previous work on modeling the rainfall-runoff process. The hydrologic applications of conceptual models and artificial neural networks (ANNs) for precipitation-runoff modeling were studied. Gradient-based training methods such as error back-propagation (BP), as well as evolutionary algorithms (EAs), are discussed in relation to the training of artificial neural networks, and it is shown that applying EAs to artificial neural network training could be an alternative to gradient-based methods. Further research is therefore needed to exploit the abundant expert knowledge in the area of artificial intelligence for solving hydrologic and water resources planning and management problems.
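Using an evolutionary algorithm instead of back-propagation to train a small network can be sketched as follows; the rainfall-runoff relation, network size, and (1+1) evolution strategy settings are illustrative assumptions, not drawn from the reviewed studies:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical rainfall-runoff data: runoff as a smooth nonlinear function of
# rainfall (the functional form is an assumption for illustration, not field data).
rain = np.linspace(0.0, 10.0, 50)
runoff = 0.6 * rain + 2.0 * np.tanh(rain - 5.0) + 2.0

def ann(params, x):
    """One-hidden-layer network with 4 tanh units; params is a flat vector of 13."""
    w1, b1, w2, b2 = params[:4], params[4:8], params[8:12], params[12]
    hidden = np.tanh(np.outer(x, w1) + b1)
    return hidden @ w2 + b2

def mse(params):
    return np.mean((ann(params, rain) - runoff) ** 2)

# (1+1) evolution strategy: mutate the weights, keep the child only if the
# fit improves -- no gradients (and hence no back-propagation) required.
best = rng.normal(size=13)
best_err = mse(best)
for _ in range(3000):
    child = best + 0.1 * rng.normal(size=13)
    if (err := mse(child)) < best_err:
        best, best_err = child, err

print("MSE after evolutionary training:", best_err)
```

Real EA-based training would use a population and recombination; the single-parent loop above only illustrates the gradient-free principle.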

Keywords: artificial intelligence, artificial neural networks, evolutionary algorithms, gradient training method, rainfall-runoff model

Procedia PDF Downloads 447
9246 The Effect of Symmetry on the Perception of Happiness and Boredom in Design Products

Authors: Michele Sinico

Abstract:

The present research investigates the effect of symmetry on the perception of happiness and boredom in design products. Three experiments were carried out to verify the degree of visual expressive value across different models of bookcases, wall clocks, and chairs. Sixty participants directly indicated the degree of happiness and boredom using 7-point rating scales. The findings show that the participants acknowledged a different expressive quality in the different product models. The results also show that symmetry is not a significant constraint for an emotional design project.

Keywords: product experience, emotional design, symmetry, expressive qualities

Procedia PDF Downloads 146
9245 Airliner-UAV Flight Formation in Climb Regime

Authors: Pavel Zikmund, Robert Popela

Abstract:

Extreme formation is a theoretical concept of self-sustained flight in which a large airliner is followed by a small UAV glider flying in the airliner's wake vortex. The paper presents the results of a climb analysis whose goal is to lift the gliding UAV to the airliner's cruise altitude. Wake vortex models, the UAV's drag polar and basic parameters, and the airliner's climb profile are introduced first. Then the flight performance of the UAV in the wake vortex is evaluated by analytical methods, and the time history of the optimal distance between the airliner and the UAV during the climb is determined. The results are encouraging; the UAV drag margin available for electricity generation is therefore computed for different vortex models.

Keywords: flight in formation, self-sustained flight, UAV, wake vortex

Procedia PDF Downloads 435
9244 Towards Sustainable Evolution of Bioeconomy: The Role of Technology and Innovation Management

Authors: Ronald Orth, Johanna Haunschild, Sara Tsog

Abstract:

The bioeconomy is an inter- and cross-disciplinary field covering a large number and wide scope of existing and emerging technologies. It has great potential to contribute to the transformation of the industrial landscape and ultimately drive the economy towards sustainability. However, the bioeconomy per se is not necessarily sustainable, and technology should be seen as an enabler rather than a panacea for all our ecological, social, and economic issues. Therefore, to draw and maximize sustainability benefits from the bioeconomy, we propose that innovative activities should encompass not only novel technologies and new bio-based materials but also multifocal innovations. For multifocal innovation endeavors, innovation management plays a substantial role, as any innovation emerges in a complex iterative process in which communication and knowledge exchange among relevant stakeholders is pivotal. Although knowledge generation and innovation are at the core of the transition towards a more sustainable bio-based economy, to date there is a significant lack of concepts and models that approach the bioeconomy from an innovation management perspective. The aim of this paper is therefore two-fold. First, it inspects the role of a transformative approach in the adoption of a bioeconomy that contributes to environmental, ecological, social, and economic sustainability. Second, it elaborates the importance of technology and innovation management as a tool for a smooth, prompt, and effective transition of firms to the bioeconomy. We conduct a qualitative literature study of the sustainability challenges that the bioeconomy has entailed thus far, using the Science Citation Index and grey literature, as major economies (e.g., the EU, USA, China, and Brazil) have pledged to adopt the bioeconomy and have released extensive publications on the topic.
We draw an example from the forest-based business sector, which is transforming towards the new green economy more rapidly than expected, although it has a long-established conventional business culture with a consolidated and fully fledged industry. Based on our analysis, we found that a successful transition to a sustainable bioeconomy is conditioned on heterogeneous and contested factors in terms of stakeholders, activities, and modes of innovation. In addition, multifocal innovations occur when actors from interdisciplinary fields engage in intensive and continuous interaction in which the focus of innovation is allocated to a field of mutually evolving socio-technical practices corresponding to the aims of the novel paradigm of transformative innovation policy. By adopting an integrated, systems approach, as well as tapping into various innovation networks and joining global innovation clusters, firms have a better chance of creating entirely new chains of value-added products and services. This requires professionals with certain capabilities and skills, such as foresight for future markets, the ability to deal with complex issues, the ability to guide responsible R&D, strategic decision-making, and the ability to manage in-depth innovation-system analyses, including value chain analysis. Policy makers, on the other hand, need to acknowledge the essential role of firms in the transformative innovation policy paradigm.

Keywords: bioeconomy, innovation and technology management, multifocal innovation, sustainability, transformative innovation policy

Procedia PDF Downloads 121
9243 Sequential Release of Dual Drugs Using Thermo-Sensitive Hydrogel for Tumor Vascular Inhibition and to Enhance the Efficacy of Chemotherapy

Authors: Haile F. Darge, Hsieh C. Tsai

Abstract:

The tumor microenvironment affects the therapeutic outcomes of cancer. In a malignant tumor, overexpression of vascular endothelial growth factor (VEGF) provokes the production of pathologic vascular networks. This results in a hostile tumor environment that hinders anti-cancer drug activity and profoundly fuels tumor progression. In this study, we develop a strategy of sequential, sustained release of the anti-angiogenic drug bevacizumab (BVZ) and the anti-cancer drug doxorubicin (DOX), which had a synergistic effect on cancer treatment. A poly(D,L-lactide)-poly(ethylene glycol)-poly(D,L-lactide) (PDLLA-PEG-PDLLA) thermo-sensitive hydrogel was used as a vehicle for local delivery of both drugs in a single platform. The in vitro release profiles of the drugs were investigated and confirmed a relatively rapid release of BVZ (73.56 ± 1.39%) followed by DOX (61.21 ± 0.62%) over a prolonged period. The cytotoxicity test revealed that the copolymer exhibited negligible cytotoxicity up to a concentration of 2.5 mg ml⁻¹ on HaCaT and HeLa cells. The in vivo study on HeLa xenograft nude mice verified that the hydrogel co-loaded with BVZ and DOX displayed the highest tumor suppression efficacy for up to 36 days, with a pronounced anti-angiogenic effect of BVZ and no noticeable damage to vital organs. Therefore, localized co-delivery of anti-angiogenic and anti-cancer drugs by the hydrogel system may be a promising approach for enhanced chemotherapeutic efficacy in cancer treatment.

Keywords: anti-angiogenesis, chemotherapy, controlled release, thermo-sensitive hydrogel

Procedia PDF Downloads 128
9242 Optimization of Waste Plastic to Fuel Oil Plants' Deployment Using Mixed Integer Programming

Authors: David Muyise

Abstract:

Mixed integer programming (MIP) is an approach that optimizes a range of decision variables in order to minimize or maximize a particular objective function. The main objective of this study was to apply the MIP approach to optimize the deployment of waste-plastic-to-fuel-oil processing plants in Uganda. The processing plants are meant to reduce plastic pollution by pyrolyzing waste plastic into a cleaner fuel that can power diesel/paraffin engines, so as to (1) reduce the negative environmental impacts associated with plastic pollution and (2) narrow the energy gap by utilizing the fuel oil. A programming model was established and tested in two case study applications: small-scale applications in rural towns and large-scale deployment across major cities in the country. In order to design the supply chain, optimal decisions on the types of waste plastic to be processed; the size, location, and number of plants; and downstream fuel applications were made concurrently, based on the payback period, investor requirements for capital cost, and the production cost of fuel and electricity. The model comprises qualitative data gathered from waste plastic pickers at landfills and from potential investors, and quantitative data obtained from primary research. The study found that a distributed system is suitable for small rural towns, whereas a decentralized system is only suitable for big cities. The small towns of Kalagi, Mukono, Ishaka, and Jinja were found to be the ideal locations for deploying distributed processing systems, whereas the cities of Kampala, Mbarara, and Gulu were found to be the ideal locations in which to initially deploy the decentralized pyrolysis technology system. We conclude that the model findings will be most useful to investors, engineers, plant developers, and municipalities interested in waste-plastic-to-fuel processing in Uganda and elsewhere in developing economies.
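The deployment decision above is, at its core, a facility-location program: which plants to open, and which town each plant serves. A miniature instance can be sketched as follows; the sites, towns, and costs are made up purely for illustration (a real instance would be handed to a MIP solver rather than enumerated by brute force):

```python
from itertools import combinations

# Hypothetical candidate plant sites and towns; all costs are illustrative, not study data.
open_cost = {"Kampala": 90, "Jinja": 40, "Mbarara": 60}          # fixed cost to build
ship_cost = {                                                    # town -> site -> cost
    "Mukono":  {"Kampala": 5,  "Jinja": 8,  "Mbarara": 30},
    "Kalagi":  {"Kampala": 7,  "Jinja": 6,  "Mbarara": 32},
    "Ishaka":  {"Kampala": 25, "Jinja": 30, "Mbarara": 4},
}

def total_cost(open_sites):
    """Fixed costs of the open plants + each town served by its cheapest open plant."""
    fixed = sum(open_cost[s] for s in open_sites)
    transport = sum(min(ship_cost[t][s] for s in open_sites) for t in ship_cost)
    return fixed + transport

sites = list(open_cost)
best = min(
    (subset for r in range(1, len(sites) + 1) for subset in combinations(sites, r)),
    key=total_cost,
)
print(best, total_cost(best))  # → ('Jinja',) 84
```

In a genuine MIP formulation, the "which subset is open" choice becomes binary variables and the town-to-plant assignments become constrained continuous or binary variables, so that a solver can handle instances far too large to enumerate.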

Keywords: mixed integer programming, fuel oil plants, optimisation of waste plastics, plastic pollution, pyrolyzing

Procedia PDF Downloads 124
9241 Meeting India's Energy Demand: U.S.-India Energy Cooperation under Trump

Authors: Merieleen Engtipi

Abstract:

India accounts for nearly 18% of the global population; however, its per capita energy consumption is only one-third of the global average. The demand and supply of electricity are uneven across the country, and around 240 million people have no access to electricity. With India's trajectory of modernisation and economic growth, the demand for energy is only expected to increase. India is at a crossroads: on the one hand, it faces increasing demand for energy; on the other, it must meet its Paris climate policy commitments while struggling to provide efficient energy. This paper analyses the policies needed to meet India's energy needs, as per capita energy consumption is likely to double within a 6-7 year period. Simultaneously, India's Paris commitment requires curbing carbon emissions from fossil fuels. There is an increasing need for renewables to be cheaply and efficiently available in the market, and for clean technology for extracting fossil fuels, to meet climate policy goals. Fossil fuels are the largest source of energy in India; with the Paris Agreement, the demand for clean energy technology is increasing. Finally, although the U.S. decided to withdraw from the Paris Agreement, the two countries plan to continue engaging bilaterally on energy issues. U.S. energy cooperation under the Trump administration is significantly important for greater energy security, technology transfer, and efficiency in energy supply and demand.

Keywords: energy demand, energy cooperation, fossil fuels, technology transfer

Procedia PDF Downloads 249
9240 Renewable Energy Integration in Cities of Developing Countries: The Case Study of Tema City, Ghana

Authors: Marriette Sakah, Christoph Kuhn, Samuel Gyamfi

Abstract:

Global household electricity demand is estimated to double between 2005 and 2025 and to nearly double again by 2030. The residential sector promises considerable demand growth through infrastructure and equipment investments, the majority of which are projected to occur in developing countries. This lays bare the urgency of enhanced efficiency in all energy systems, combined with exploitation of the local potential for renewable energy systems. This study explores options for reducing energy consumption, particularly in residential buildings, and for providing a robust, decentralized, and renewable energy supply for African cities. The potential of energy efficiency measures and of harnessing local resources for renewable energy supply is quantitatively assessed. The research specifically addresses the city level, which is regulated by local authorities. Local authorities can actively promote the transition to a renewable-based energy supply system by promoting energy efficiency and the use of alternative renewable fuels in existing buildings, and particularly in the planning and development of new settlement areas, through incentives, regulations, and demonstration projects. They can also support more sustainable development by shaping local land use and development patterns in ways that reduce per capita energy consumption and are benign to the environment. The subject of the current case study, Tema, is Ghana's main industrial hub, a port city, and home to 77,000 families. Residential buildings in Tema consumed 112 GWh of electricity in 2013, or 1.45 MWh per household. If average household electricity demand were to decline at an annual rate of just 2%, by 2035 Tema would consume only 134 GWh of electricity, despite an expected 84% increase in the number of households. The work is based on a ground survey of the city's residential sector.
The results show that efficient technologies and decentralized renewable energy systems have great potential for meeting the rapidly growing energy demand of cities in developing countries.
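The 2035 projection above can be checked with back-of-the-envelope arithmetic, assuming a 2% annual decline over the 22 years from 2013 to 2035 and 84% household growth; small differences from the quoted 134 GWh come from rounding and the exact year count assumed:

```python
# Reproduce the abstract's projection from its stated inputs.
households_2013 = 77_000
per_household_mwh = 112_000 / households_2013           # ≈ 1.45 MWh per household
per_household_2035 = per_household_mwh * 0.98 ** 22     # 2% annual decline, 22 years
households_2035 = households_2013 * 1.84                # 84% more households
total_2035_gwh = per_household_2035 * households_2035 / 1000
print(round(total_2035_gwh), "GWh")  # lands near the abstract's 134 GWh figure
```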

Keywords: energy efficiency, energy saving potential, renewable energy integration, residential buildings, urban Africa

Procedia PDF Downloads 282
9239 Problem Gambling in the Conceptualization of Health Professionals: A Qualitative Analysis of the Discourses Produced by Psychologists, Psychiatrists and General Practitioners

Authors: T. Marinaci, C. Venuleo

Abstract:

Different conceptualizations of disease affect patient care, yet how health professionals conceptualize problem gambling remains underexplored. This study aims to address this gap: it explores how health professionals conceptualize the gambling problem, addiction, and the goals of the recovery process. In-depth, semi-structured, open-ended interviews were conducted with Italian psychologists, psychiatrists, general practitioners, and support staff (N = 114) working within health centres for the treatment of addiction (public health services or therapeutic communities) or medical offices. A lexical correspondence analysis (LCA) was applied to the verbatim transcripts. The LCA identified two main factorial dimensions, which organize similarity and dissimilarity in the discourses of the interviewees. The first dimension, labelled 'Models of relationship with the problem', concerns two different models of the relationship with the health problem: one related to the request for help and the process of taking charge, and the other related to identifying the psychopathology underlying the disorder. The second dimension, labelled 'Organisers of the intervention', reflects the dialectic between two ways of addressing the problem. On the one hand, the intervention is organized around the gambling dynamics and their immediate life consequences (whatever the user's request is); on the other hand, it is organized around the procedures and tools that characterize the health service (whatever the user's problem is, and regardless of the specifics of the user's request). The results highlight how, despite the differences, the respondents share a central assumption: understanding the gambling problem implies reference to the gambler's identity more than, for instance, to the relational, social, cultural, or political context in which the gambler lives. A passive stance is attributed to the user, who plays no role in defining the goal of the intervention.
The results will be discussed to highlight the relationship between professional models and users' ways of understanding and dealing with gambling-related problems.

Keywords: cultural models, health professionals, intervention models, problem gambling

Procedia PDF Downloads 151
9238 Probing Syntax Information in Word Representations with Deep Metric Learning

Authors: Bowen Ding, Yihao Kuang

Abstract:

In recent years, with the development of large-scale pre-trained language models, building vector representations of text through deep neural network models has become standard practice for natural language processing tasks. From performance on downstream tasks, we know that the text representations constructed by these models contain linguistic information, but the encoding mode and its extent are unclear. In this work, a structural probe is proposed to detect whether the vector representation produced by a deep neural network embeds a syntax tree. The probe is trained with a deep metric learning method, so that the distance between word vectors in the metric space it defines encodes the distance between words on the syntax tree, and the norm of a word vector encodes the depth of the word on the syntax tree. Experimental results on ELMo and BERT show that the syntax tree is encoded in their parameters and in the word representations they produce.
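The distance part of the probe's objective can be sketched as follows; the word vectors, the chain-shaped gold tree, and the probe rank are toy assumptions, and random search stands in for the gradient-based training such probes actually use:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 5 "word vectors" (standing in for ELMo/BERT activations; random here)
# and gold tree distances for a chain-shaped parse -- assumptions for illustration.
n, dim, rank = 5, 16, 4
H = rng.normal(size=(n, dim))
tree_dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]).astype(float)

B = rng.normal(scale=0.1, size=(rank, dim))  # the linear probe to be learned

def probe_dists(B):
    """Squared L2 distances between words in the probe's metric space."""
    P = H @ B.T
    diff = P[:, None, :] - P[None, :, :]
    return np.sum(diff ** 2, axis=-1)

def loss(B):
    """L1 gap between probe distances and gold syntax-tree distances."""
    return np.mean(np.abs(probe_dists(B) - tree_dist))

# Published probes minimize this loss by gradient descent; random search is
# used here only to keep the sketch dependency-free.
init = loss(B)
for _ in range(2000):
    cand = B + 0.02 * rng.normal(size=B.shape)
    if loss(cand) < loss(B):
        B = cand

print("probe loss:", init, "->", loss(B))
```

If the representations genuinely encode the tree, a low-rank B exists that drives this loss down; for representations without syntactic structure, no linear map can reproduce the tree metric.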

Keywords: deep metric learning, syntax tree probing, natural language processing, word representations

Procedia PDF Downloads 61
9237 Prediction of Bodyweight of Cattle by Artificial Neural Networks Using Digital Images

Authors: Yalçın Bozkurt

Abstract:

Prediction models were developed for the accurate prediction of bodyweight (BW) from digital images of beef cattle body dimensions using artificial neural networks (ANNs). For this purpose, data were collected at a private slaughterhouse: digital images and the weight of each live animal were taken just before slaughter, and body dimensions such as digital wither height (DJWH), digital body length (DJBL), digital body depth (DJBD), digital hip width (DJHW), digital hip height (DJHH), and digital pin bone length (DJPL) were determined from the images, yielding 1,069 observations for each trait. Prediction models were then developed by ANN. Digital body measurements were analysed by ANN for bodyweight prediction, and the R² values for DJBL, DJWH, DJHW, DJBD, DJHH, and DJPL were approximately 94.32%, 91.31%, 80.70%, 83.61%, 89.45%, and 70.56%, respectively. It can be concluded that, in management situations where BW cannot be measured, it can be predicted accurately by measuring DJBL and DJWH alone or together with DJBD and even DJHH, and that different models may be needed to predict BW under different feeding and environmental conditions and in different breeds.
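The kind of R² figure reported above can be illustrated with a synthetic single-measurement regression; the coefficients, ranges, and noise level below are assumptions for the sketch, not the study's data or fitted model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in: bodyweight roughly linear in body length, plus noise.
n = 200
body_length = rng.uniform(120, 180, size=n)                          # cm
bodyweight = 3.2 * body_length - 250 + rng.normal(scale=15, size=n)  # kg

# Fit a linear model and compute the coefficient of determination R^2.
X = np.column_stack([body_length, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, bodyweight, rcond=None)
pred = X @ coef
ss_res = np.sum((bodyweight - pred) ** 2)
ss_tot = np.sum((bodyweight - np.mean(bodyweight)) ** 2)
r2 = 1 - ss_res / ss_tot
print("R^2:", round(r2, 3))
```

An ANN replaces the linear fit with a nonlinear one, but R² is computed the same way: one minus the ratio of residual to total variance.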

Keywords: artificial neural networks, bodyweight, cattle, digital body measurements

Procedia PDF Downloads 364
9236 Control of Doxorubicin Release Rate from Magnetic PLGA Nanoparticles Using a Non-Permanent Magnetic Field

Authors: Inês N. Peça, A. Bicho, Rui Gardner, M. Margarida Cardoso

Abstract:

Inorganic/organic nanocomplexes offer tremendous scope for future biomedical applications, including imaging, disease diagnosis, and drug delivery. The combination of Fe3O4 with biocompatible polymers to produce smart drug delivery systems for pharmaceutical formulation presents a powerful tool for targeting anti-cancer drugs to specific tumor sites through the application of an external magnetic field. In the present study, we focused on evaluating the effect of magnetic field application time on the rate of drug release from iron oxide polymeric nanoparticles. Doxorubicin, an anticancer drug, was selected as the model drug loaded into the nanoparticles. Nanoparticles composed of poly(D,L-lactide-co-glycolide) (PLGA), a biocompatible polymer already approved by the FDA, containing iron oxide nanoparticles (MNP) for magnetic targeting and doxorubicin (DOX) were synthesized by the o/w solvent extraction/evaporation method and characterized by scanning electron microscopy (SEM), dynamic light scattering (DLS), inductively coupled plasma-atomic emission spectrometry, and Fourier-transform infrared spectroscopy. The produced particles had smooth surfaces and spherical shapes, with sizes between 400 and 600 nm. The effect of the magnetic doxorubicin-loaded PLGA nanoparticles on cell viability was investigated in mammalian CHO cell cultures. The results showed that unloaded magnetic PLGA nanoparticles were nontoxic, while magnetic particles without a polymeric coating showed a high level of toxicity. Concerning therapeutic activity, doxorubicin-loaded magnetic particles caused a remarkable enhancement of cell inhibition rates compared to their non-magnetic counterparts. In vitro drug release studies performed under a non-permanent magnetic field show that the application time and the on/off cycle duration have a great influence on both the final amount and the rate of drug release.
In order to determine the mechanism of drug release, the data obtained from the release curves were fitted to the semi-empirical equation of the Korsmeyer-Peppas model, which may be used to describe Fickian and non-Fickian release behaviour. The doxorubicin release mechanism was shown to be governed mainly by Fickian diffusion. The results obtained show that the rate of drug release from the produced magnetic nanoparticles can be modulated through the duration of magnetic field application.
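For reference, the Korsmeyer-Peppas model referred to above has the semi-empirical form

```latex
\frac{M_t}{M_\infty} = k\, t^{\,n}
```

where \(M_t/M_\infty\) is the fraction of drug released at time \(t\), \(k\) is a kinetic constant, and \(n\) is the release exponent used to classify the transport mechanism. The Fickian threshold for \(n\) depends on device geometry (approximately 0.5 for thin films and approximately 0.43 for spheres); exponents at or below this threshold indicate Fickian diffusion, while larger values indicate anomalous, non-Fickian transport.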

Keywords: drug delivery, magnetic nanoparticles, PLGA nanoparticles, controlled release rate

Procedia PDF Downloads 256
9235 Forecasting Equity Premium Out-of-Sample with Sophisticated Regression Training Techniques

Authors: Jonathan Iworiso

Abstract:

Forecasting the equity premium out-of-sample is a major concern for researchers in finance and emerging markets. The quest for a superior model that can forecast the equity premium with significant economic gains has resulted in several controversies among scholars over the choice of variables and suitable techniques. This research focuses mainly on the application of regression training (RT) techniques to forecast the monthly equity premium out-of-sample recursively with an expanding-window method. A broad category of sophisticated regression models involving model complexity was employed. The RT models, which include Ridge, Forward-Backward (FOBA) Ridge, the Least Absolute Shrinkage and Selection Operator (LASSO), Relaxed LASSO, Elastic Net, and Least Angle Regression, were trained and used to forecast the equity premium out-of-sample. The empirical investigation of the RT models demonstrates significant evidence of equity premium predictability, both statistically and economically, relative to the benchmark historical average, delivering significant utility gains. The forecasts provide meaningful economic information on mean-variance portfolio investment for investors who are timing the market to earn future gains at minimal risk. Thus, the forecasting models appear to benefit an investor in a market setting who optimally reallocates a monthly portfolio between equities and risk-free treasury bills using equity premium forecasts, at minimal risk.
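The recursive expanding-window procedure with one of the RT models (ridge) can be sketched as follows; the synthetic predictor, the premium series, and the penalty are illustrative assumptions, not the study's data or tuning:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic monthly predictor and equity-premium series (illustrative only, not
# real market data; the predictive slope of 0.8 is an assumption for the demo).
T = 120
x = rng.normal(size=T)
premium = np.empty(T)
premium[0] = rng.normal()
premium[1:] = 0.8 * x[:-1] + rng.normal(scale=1.0, size=T - 1)

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: solve (X'X + lam*I) w = X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Expanding window: at each month t, train on all months up to t,
# then forecast the premium for month t + 1.
start = 60
model_fc, bench_fc, actual = [], [], []
for t in range(start, T - 1):
    X = np.column_stack([x[:t], np.ones(t)])   # predictor one month ahead of target
    y = premium[1:t + 1]                       # premium led by one month
    w = ridge_fit(X, y)
    model_fc.append(w[0] * x[t] + w[1])
    bench_fc.append(np.mean(premium[:t + 1]))  # historical-average benchmark
    actual.append(premium[t + 1])

mse_model = np.mean((np.array(actual) - np.array(model_fc)) ** 2)
mse_bench = np.mean((np.array(actual) - np.array(bench_fc)) ** 2)
print("out-of-sample MSE, ridge vs historical average:", mse_model, mse_bench)
```

The statistical-predictability claim in the abstract corresponds to the model's out-of-sample error beating the historical-average benchmark, which is exactly the comparison printed above.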

Keywords: regression training, out-of-sample forecasts, expanding window, statistical predictability, economic significance, utility gains

Procedia PDF Downloads 100