Search results for: sensory processing sensitivity (SPS)

960 Hybrid Energy System for the German Mining Industry: An Optimized Model

Authors: Kateryna Zharan, Jan C. Bongaerts

Abstract:

In recent years, the economic attractiveness of renewable energy (RE) for the mining industry, especially for off-grid mines, together with the negative environmental impact of fossil energy, has stimulated the use of RE for mining needs. Since remote-area mines have higher energy expenses than mines connected to a grid, integrating RE may bring a mine economic benefits. The literature review reveals a lack of business models for adopting RE at mines. The main aim of this paper is to develop an optimized model of RE integration into the German mining industry (GMI). With around 800 million tonnes of resources extracted annually, Germany ranks among the 15 major mining countries in the world. Accordingly, the mining potential of Germany is evaluated in this paper as a prospective market for RE implementation. The GMI has been classified in order to establish the location of resources, the quantity and types of mines, the amount of extracted resources, and the access of mines to energy resources. Additionally, weather conditions have been analyzed in order to determine where wind and solar generation technologies can be integrated into a mine with the highest efficiency. Although the electricity demand of the GMI is almost completely covered by grid connections, a hybrid energy system (HES) based on a mix of RE and fossil energy is developed in order to demonstrate environmental and economic benefits. The HES for the GMI combines wind turbines, solar PV, batteries, and diesel generation. The model has been calculated using the HOMER software. Furthermore, the demonstrated HES contains a forecasting model that predicts solar and wind generation in advance. The main result of the HES, the reduction in CO2 emissions, is estimated in order to make mineral processing more environmentally friendly.
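
To make the dispatch logic of such a HES concrete, the toy simulation below balances an assumed flat mine load against synthetic PV and wind profiles, using the battery first and diesel as the backstop; every figure (load, resources, emission factor) is invented for illustration, and the paper's actual optimization was carried out in HOMER.

```python
# Illustrative toy dispatch for a PV/wind/battery/diesel hybrid system.
# All load, resource, and emission figures are invented for demonstration.
import numpy as np

HOURS = 24
load = np.full(HOURS, 2000.0)                     # mine load, kW (assumed flat)
pv = 1500 * np.clip(np.sin(np.pi * (np.arange(HOURS) - 6) / 12), 0, None)
wind = 800 + 300 * np.cos(2 * np.pi * np.arange(HOURS) / 24)   # kW, synthetic

batt_capacity = 4000.0   # kWh (assumed)
batt_soc = 2000.0        # initial state of charge, kWh
diesel_co2 = 0.7         # kg CO2 per kWh of diesel generation (assumed)

diesel_kwh = 0.0
for h in range(HOURS):
    balance = pv[h] + wind[h] - load[h]           # surplus (+) or deficit (-)
    if balance >= 0:
        batt_soc = min(batt_capacity, batt_soc + balance)   # charge battery
    else:
        deficit = -balance
        discharge = min(batt_soc, deficit)                  # battery first
        batt_soc -= discharge
        diesel_kwh += deficit - discharge                   # diesel covers the rest

baseline_co2 = diesel_co2 * load.sum()            # all-diesel reference case
hybrid_co2 = diesel_co2 * diesel_kwh
print(f"CO2 avoided: {baseline_co2 - hybrid_co2:.0f} kg/day "
      f"({100 * (1 - hybrid_co2 / baseline_co2):.0f}% reduction)")
```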

Keywords: diesel generation, German mining industry, hybrid energy system, hybrid optimization model for electric renewables, optimized model, renewable energy

Procedia PDF Downloads 344
959 Inflammatory and Cardio Hypertrophic Remodeling Biomarkers in Patients with Fabry Disease

Authors: Margarita Ivanova, Julia Dao, Andrew Friedman, Neil Kasaci, Rekha Gopal, Ozlem Goker-Alpan

Abstract:

In Fabry disease (FD), α-galactosidase A (α-Gal A) deficiency leads to the accumulation of globotriaosylceramide (Lyso-Gb3 and Gb3), triggering a pathologic cascade that determines the severity of organ damage. The heart is one of several organs highly sensitive to α-Gal A deficiency. A subgroup of patients with significant residual α-Gal A activity and primary cardiac involvement is occasionally referred to as the “cardiac variant.” Cardiovascular complications are the most frequently encountered, contribute substantially to morbidity, and are the leading cause of premature death in male and female patients with FD. The deposition of Lyso-Gb3 and Gb3 within the myocardium affects cardiac function, with resultant progressive cardiovascular pathology. Gb3 and Lyso-Gb3 accumulation at the cellular level triggers a cascade of events leading to end-stage fibrosis. In cardiac tissue, Lyso-Gb3 deposition is associated with an increased release of inflammatory factors and transforming growth factors. Infiltration of lymphocytes and macrophages into endomyocardial tissue indicates that inflammation plays a significant role in cardiac damage. Moreover, accumulated data suggest that chronic inflammation leads to multisystemic FD pathology even under enzyme replacement therapy (ERT). NF-κB activation plays a subsequent role in the inflammatory response to cardiac dysfunction and advanced heart failure in the general population. TNF-α/NF-κB signaling protects the myocardium when evoked by ischemic preconditioning; however, this protective effect depends on the concentration of TNF-α. Thus, we hypothesize that TNF-α is a critical factor in determining the grade of cardio-pathology. Cardiac hypertrophy requires expansion of the coronary vasculature to maintain a sufficient supply of nutrients and oxygen. Coronary activation of angiogenesis and fibrosis plays a vital role in cardiac vascularization, hypertrophy, and tissue remodeling. We suggest that the interaction between the inflammatory pathways and cardiac vascularization is a bi-directional process controlled by secreted cytokines and growth factors. The coordination of these two processes has never been explored in FD. In a cohort of 40 patients with FD, biomarkers associated with inflammation and cardio-hypertrophic remodeling were studied. FD patients were categorized into three groups based on LVmass/DSA, LVEF, and ECG abnormalities: FD with no cardiac complications, FD with moderate cardiac complications, and FD with severe cardiac complications. Serum levels of NF-κB, TNF-α, IL-6, IL-2, MCP-1, IFN-γ, VEGF, IGF-1, TGF-β, and FGF2 were quantified by enzyme-linked immunosorbent assay (ELISA). Among the biomarkers, MCP-1, IFN-γ, VEGF, TNF-α, and TGF-β were elevated in FD patients. Some of these biomarkers also have the potential to correlate with cardiac pathology in FD. Conclusion: The study provides information about the role of inflammatory pathways and biomarkers of cardio-hypertrophic remodeling in FD patients. It will also help reveal the mechanisms that link the intracellular accumulation of Lyso-Gb3 and Gb3 to the development of cardiomyopathy with myocardial thickening and resultant fibrosis.

Keywords: biomarkers, Fabry disease, inflammation, growth factors

Procedia PDF Downloads 82
958 Quantitative Evaluation of Mitral Regurgitation by Using Color Doppler Ultrasound

Authors: Shang-Yu Chiang, Yu-Shan Tsai, Shih-Hsien Sung, Chung-Ming Lo

Abstract:

Mitral regurgitation (MR) is a heart disorder in which the mitral valve does not close properly when the heart pumps out blood. MR is the most common form of valvular heart disease in the adult population. The echocardiographic diagnosis of MR is straightforward due to well-known clinical evidence. In determining MR severity, quantification of sonographic findings is useful for clinical decision making. Clinically, the vena contracta is a standard measure for MR evaluation. The vena contracta is the point in a blood stream where the diameter of the stream is smallest and the velocity is at its maximum. Its quantification, i.e., the vena contracta width (VCW) at the mitral valve, provides a numeric measurement for severity assessment. However, manually delineating the VCW may not be accurate enough; the result depends strongly on operator experience. Therefore, this study proposed an automatic method to quantify the VCW for evaluating MR severity. In color Doppler ultrasound, the VCW can be observed where blood flows toward the probe, appearing as a red or yellow area whose brightness represents the flow rate. In the experiment, colors were first transformed into HSV (hue, saturation, and value) space to align closely with the way human vision perceives red and yellow. By fitting an ellipse to the high-flow-rate area in the left atrium, the angle between the mitral valve and the ultrasound probe was calculated to obtain the vertical shortest diameter as the VCW. Taking the manual measurement as the standard, the method differed by only 0.02 (0.38 vs. 0.36) to 0.03 (0.42 vs. 0.45) cm. The results showed that the proposed automatic VCW extraction can be efficient and accurate for clinical use. The process also has the potential to reduce intra- and inter-observer variability in measuring subtle distances.
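
The color-segmentation step described above can be sketched as follows; the HSV thresholds, file name, and axis interpretation are illustrative assumptions rather than the authors' exact values.

```python
# Minimal sketch of HSV-based segmentation of the high-flow region.
# Thresholds and input file are placeholders, not the study's values.
import cv2
import numpy as np

frame = cv2.imread("doppler_frame.png")           # hypothetical exported still
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)      # hue/saturation/value space

# Red and yellow hues mark flow toward the probe; value (brightness)
# tracks the flow rate. These bounds are illustrative.
mask_red = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255))
mask_yellow = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))
mask = cv2.bitwise_or(mask_red, mask_yellow)

# Fit an ellipse to the largest high-flow region; the shorter axis,
# projected perpendicular to the valve plane, approximates the VCW (pixels).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
if len(largest) >= 5:                             # fitEllipse needs >= 5 points
    (cx, cy), axes, angle = cv2.fitEllipse(largest)
    print(f"centre=({cx:.0f},{cy:.0f}) axes={min(axes):.1f}x{max(axes):.1f} px "
          f"angle={angle:.1f} deg")
```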

Keywords: mitral regurgitation, vena contracta, color Doppler, image processing

Procedia PDF Downloads 370
957 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features

Authors: Bushra Zafar, Usman Qamar

Abstract:

Large sample sizes and high dimensionality undermine the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting knowledge from a variety of databases; they provide supervised learning in the form of classification, designing models that describe vital data classes, with the structure of the classifier based on the class attribute. Classification efficiency and accuracy are often greatly influenced by noisy and undesirable features in real-world data sets. The inherent nature of a data set can mask its quality and leave few practical approaches for analysis. To our knowledge, we present for the first time an approach for investigating the structure and quality of data sets by providing a targeted analysis of the localization of noisy and irrelevant features. Machine learning relies on feature selection as a pre-processing step, which allows a small subset to be selected from a larger number of features, reducing the search space according to a given evaluation criterion. The primary objective of this study is to narrow the scope of a given data sample by searching for a small set of important features that may yield good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is proposed, with an external classifier used for discriminative feature selection. Features are selected based on their number of occurrences in the chosen chromosomes. Sample data sets are used to demonstrate the proposed idea effectively. The proposed method improved the average accuracy across different data sets to about 95%. Experimental results illustrate that the proposed algorithm increases the accuracy of prediction of different diseases.
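
A compact sketch of the wrapper idea, pairing a genetic algorithm over binary feature masks with a KNN classifier as the evaluator, is shown below; the population size, rates, and benchmark dataset are illustrative choices, not the paper's exact setup.

```python
# Wrapper-based feature selection: a GA evolves boolean feature masks,
# scored by cross-validated KNN accuracy (the "wrapper" evaluation).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

def fitness(mask):
    if not mask.any():
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)   # external classifier
    return cross_val_score(knn, X[:, mask], y, cv=3).mean()

pop = rng.random((20, n_features)) < 0.5        # random initial chromosomes
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[:10]]                   # keep the fitter half
    cut = rng.integers(1, n_features, size=10)  # one-point crossover
    children = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 10][c:]])
                         for i, c in enumerate(cut)])
    children ^= rng.random(children.shape) < 0.02   # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(f"selected {best.sum()} of {n_features} features, "
      f"CV accuracy = {fitness(best):.3f}")
```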

Keywords: data mining, genetic algorithm, KNN algorithms, wrapper-based feature selection

Procedia PDF Downloads 316
956 Water Management Scheme: Panacea to Development Using Nigeria’s University of Ibadan Water Supply Scheme as a Case Study

Authors: Sunday Olufemi Adesogan

Abstract:

The supply of potable water is a very important index of national development. Water tariffs depend on the treatment cost, which carries the highest percentage of the total operating cost in any water supply scheme. In order to keep water tariffs as low as possible, treatment costs have to be minimized. The University of Ibadan, Nigeria, water supply scheme consists of a treatment plant with three distribution stations (Amina Way, Kurumi, and Lander) and two raw water sources (Awba Dam and Eleyele Dam). An operational study of the scheme was carried out to ascertain the efficiency of the supply of potable water on the campus and to justify the need for water supply schemes in tertiary institutions. The study involved regular collection, processing, and analysis of periodic operational data. The data collected included supply readings (daily water production) and consumers' meter readings for a period of 22 months (October 2013 - July 2015), as well as the operating hours of the plants and personnel. Applying the required mathematical equations, the total loss in the distribution system was determined and translated into monetary terms. The adequacy of the operational functions was also determined. The study revealed that water supply schemes are justified in tertiary institutions. It was also found that approximately 10.7 million Nigerian naira (N) was lost to leakages during the 22-month study period, that the system's storage capacity is no longer adequate, especially for peak water production, that the capacity of the system as a whole is insufficient for the present university population, and that the existing water supply system is not being operated in an optimal manner, especially due to personnel, power, and system ageing constraints.
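
The loss calculation implied above reduces to production minus metered consumption, priced at a tariff; a minimal sketch with assumed figures:

```python
# Distribution loss = production - metered consumption, valued at a tariff.
# All figures below are assumptions, not the study's actual data.
production_m3 = 1_250_000      # total water produced over the period (assumed)
consumption_m3 = 1_100_000     # total metered consumption (assumed)
tariff_naira_per_m3 = 71.0     # assumed tariff

loss_m3 = production_m3 - consumption_m3
loss_pct = 100 * loss_m3 / production_m3
loss_naira = loss_m3 * tariff_naira_per_m3
print(f"loss: {loss_m3} m3 ({loss_pct:.1f}%), value: N{loss_naira:,.0f}")
```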

Keywords: development, panacea, supply, water

Procedia PDF Downloads 209
954 Simplified INS/GPS Integration Algorithm in Land Vehicle Navigation

Authors: Othman Maklouf, Abdunnaser Tresh

Abstract:

Land vehicle navigation is a subject of great interest today. The Global Positioning System (GPS) is the main navigation system for positioning in such systems. GPS alone is incapable of providing continuous and reliable positioning because of its inherent dependency on external electromagnetic signals. Inertial navigation (INS) is the implementation of inertial sensors to determine the position and orientation of a vehicle. The availability of low-cost micro-electro-mechanical-system (MEMS) inertial sensors now makes it feasible to develop an INS using an inertial measurement unit (IMU). INS has unbounded error growth, since the error accumulates at each step. Usually, GPS and INS are integrated in a loosely coupled scheme. With the development of low-cost MEMS inertial sensors and GPS technology, integrated INS/GPS systems are beginning to meet the growing demands for lower cost, smaller size, and seamless navigation solutions for land vehicles. Although MEMS inertial sensors are very inexpensive compared to conventional sensors, their cost (especially that of MEMS gyros) is still not acceptable for many low-end civilian applications (for example, commercial car navigation or personal location systems). An efficient way to reduce the expense of these systems is to reduce the number of gyros and accelerometers, and therefore to use a partial IMU (ParIMU) configuration. For land vehicle use, the most important gyroscope is the vertical gyro that senses the heading of the vehicle, together with two horizontal accelerometers for determining the velocity of the vehicle. This paper presents a field experiment for a low-cost strapdown ParIMU/GPS combination, with data post-processing for the determination of the 2-D components of position (trajectory), velocity, and heading. In the present approach, we have neglected earth rotation and gravity variations because of the poor gyroscope sensitivities of our low-cost IMU and because of the relatively small area of the trajectory.
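
A simplified sketch of the loosely coupled scheme in 2-D is given below: the INS dead-reckons between epochs and a Kalman update blends in periodic GPS position fixes; the noise levels and motion profile are invented for illustration.

```python
# Loosely coupled INS/GPS fusion sketch: constant-velocity prediction
# (standing in for INS mechanization) plus GPS position updates.
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],        # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1., 0, 0, 0],        # GPS observes position only
              [0., 1, 0, 0]])
Q = np.eye(4) * 0.01                # INS process noise (assumed)
R = np.eye(2) * 25.0                # GPS noise, ~5 m sigma (assumed)

x = np.zeros(4)
P = np.eye(4)
rng = np.random.default_rng(1)

for k in range(100):
    x = F @ x                                  # prediction (dead reckoning)
    P = F @ P @ F.T + Q
    if k % 10 == 0:                            # GPS fix every 1 s
        z = np.array([k * dt * 5.0, 0.0]) + rng.normal(0, 5, 2)  # 5 m/s along x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P

print("fused position estimate:", x[:2])
```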

Keywords: GPS, IMU, Kalman filter, materials engineering

Procedia PDF Downloads 422
954 Numerical Investigation of Turbulent Inflow Strategy in Wind Energy Applications

Authors: Arijit Saha, Hassan Kassem, Leo Hoening

Abstract:

Ongoing climate change demands the increasing use of renewable energies. Wind energy plays an important role in this context, since it can be applied almost everywhere in the world. To reduce the costs of wind turbines and make them more competitive, simulations are very important, since experiments are often too costly, if possible at all. A wind turbine in a vast open area experiences turbulence generated by the atmosphere, so it was of utmost interest in this research to generate that turbulence in the computational simulation domain through various inlet turbulence generation methods, such as precursor cyclic and Kaimal Spectrum Exponential Coherence (KSEC). To be able to validate computational fluid dynamics simulations of wind turbines against experimental data, it is crucial to set up conditions in the simulation as close to reality as possible. The present work therefore aims at investigating the turbulent inflow strategy and boundary conditions of KSEC and providing a comparative analysis alongside the precursor cyclic method for large eddy simulation within the context of wind energy applications. For the generation of the turbulent box through the KSEC method, the constrained data were first collected from an auxiliary channel flow and further processing was performed with the open-source tool PyconTurb, whereas for the precursor cyclic method the data from the auxiliary channel alone were sufficient. The functionality of these methods was studied through various statistical properties, such as variance and turbulence intensity, with respect to different bulk Reynolds numbers, and a conclusion was drawn on the feasibility of the KSEC method. Furthermore, it was found necessary to verify the obtained data against a DNS case setup to confirm their applicability to real-field CFD simulations.
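
For reference, the Kaimal spectral form that underlies KSEC-type inflow generation can be evaluated in a few lines; the wind speed, turbulence level, and length scale below are example values, not the study's settings.

```python
# IEC-style Kaimal spectrum: S(f) = 4*sigma^2*(L/U) / (1 + 6*f*L/U)^(5/3).
# All parameter values are illustrative assumptions.
import numpy as np

U = 8.0        # mean wind speed, m/s (assumed)
sigma = 1.2    # streamwise turbulence standard deviation, m/s (assumed)
L = 340.2      # integral length scale, m (a typical IEC value, u-component)

f = np.logspace(-3, 1, 400)                               # frequency axis, Hz
S = 4 * sigma**2 * (L / U) / (1 + 6 * f * L / U) ** (5 / 3)

# Sanity check: integrating S over f should approximately recover sigma^2
# (the truncated low-frequency band accounts for the small shortfall).
var_est = np.sum(0.5 * (S[1:] + S[:-1]) * np.diff(f))
print(f"variance from spectrum = {var_est:.2f}, target sigma^2 = {sigma**2:.2f}")
```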

Keywords: inlet turbulence generation, CFD, precursor cyclic, KSEC, large eddy simulation, PyconTurb

Procedia PDF Downloads 96
953 Behavioral Patterns of Adopting Digitalized Services (E-Sport versus Sports Spectating) Using Agent-Based Modeling

Authors: Justyna P. Majewska, Szymon M. Truskolaski

Abstract:

The growing importance of digitalized services in the so-called new economy, including the e-sports industry, has been observable in recent years. Various demographic and technological changes lead consumers to modify their needs, not regarding the services themselves but the method of their application (attracting customers, forms of payment, new content, etc.). In the case of leisure related to competitive spectating, there is a growing need to participate in events whose content is not sports competition but computer game challenges: e-sport. The literature in this area has so far focused on determining the number of e-sport fans with elements of simple statistical description (mainly demographic characteristics such as age, gender, and place of residence). Meanwhile, the development of the industry is influenced by a combination of many different, intertwined demographic, personality, and psychosocial characteristics of customers, as well as the characteristics of their environment. Therefore, there is a need for a deeper recognition of the determinants of the behavioral patterns underlying customers' selection of digitalized services, which, in the absence of available large data sets, can be achieved by using econometric simulations: multi-agent modeling. The cognitive aim of the study is to reveal internal and external determinants of behavioral patterns of customers, taking into account various variants of economic development (the pace of digitization and technological development, socio-demographic changes, etc.). In the paper, an agent-based model with heterogeneous agents (characteristics of the customers themselves and of their environment) was developed, which allowed a three-stage development scenario to be identified: i) initial interest, ii) standardization, and iii) full professionalization. The probabilities governing the transition process were estimated using the method of simulated moments. The estimation of the agent-based model parameters and the sensitivity analysis reveal crucial factors that have driven the rising trend in e-sport spectating and, in a wider perspective, the development of digitalized services. Among the psychosocial characteristics of customers are the level of familiarity with the rules of games and sports disciplines, active and passive participation history, and the individual perception of challenging activities. Environmental factors include the general reception of games, the number and recognition level of community builders, and the level of technological development of streaming and community-building platforms. However, the crucial factor underlying the good predictive power of the model is the level of professionalization. In the initial interest phase, the entry barriers for new customers are high; they decrease during the standardization phase and increase again in the full professionalization phase, when new customers perceive the participation history as inaccessible. In this case, customers are prone to switch to new methods of service application; in the case of e-sport versus sports, to new content and more modern methods of its delivery. In a wider context, the findings of the paper support the idea of a life cycle of services regarding the methods of their application, from 'traditional' to digitalized.
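
An illustrative sketch of the three-stage adoption dynamic follows; the agent attributes and transition parameters are invented here, whereas the paper estimates such parameters via the method of simulated moments.

```python
# Toy agent-based model: heterogeneous agents adopt e-sport spectating
# under stage-dependent entry barriers and a peer (environment) effect.
import numpy as np

rng = np.random.default_rng(42)
N = 1000
familiarity = rng.random(N)          # familiarity with game rules, 0-1
history = np.zeros(N)                # accumulated participation history

def entry_barrier(stage):
    # High barrier initially, lower under standardization, high again
    # under full professionalization (all values assumed).
    return {"initial": 0.7, "standardization": 0.3, "professionalization": 0.6}[stage]

adopters = np.zeros(N, dtype=bool)
for stage in ["initial", "standardization", "professionalization"]:
    for _ in range(50):                            # 50 ticks per stage
        barrier = entry_barrier(stage)
        peer_effect = adopters.mean()              # environmental factor
        p_adopt = np.clip(familiarity + 0.5 * peer_effect - barrier, 0, 1)
        adopters |= rng.random(N) < 0.05 * p_adopt
        history[adopters] += 1
    print(f"{stage:20s} adopters: {adopters.mean():.1%}")
```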

Keywords: agent-based modeling, digitalized services, e-sport, spectators motives

Procedia PDF Downloads 172
952 Structural Protein-Protein Interactions Network of Breast Cancer Lung and Brain Metastasis Corroborates Conformational Changes of Proteins Lead to Different Signaling

Authors: Farideh Halakou, Emel Sen, Attila Gursoy, Ozlem Keskin

Abstract:

Protein-protein interactions (PPIs) mediate major biological processes in living cells. Studying PPIs as networks and analyzing network properties contributes to the identification of genes and proteins associated with diseases. In this study, we created sub-networks of brain and lung metastasis from the primary tumor in breast cancer. To do so, we used seed genes known to cause metastasis and produced their interactions through a network-topology-based prioritization method named GUILDify. In order to provide experimental support for the sub-networks, we further curated them using the STRING database. We proceeded by modeling structures for the interactions lacking complex forms in the Protein Data Bank (PDB). The functional enrichment analysis shows that KEGG pathways associated with the immune system and infectious diseases, particularly the chemokine signaling pathway, are important for lung metastasis; on the other hand, pathways related to genetic information processing are more involved in brain metastasis. The structural analyses of the sub-networks vividly demonstrated their difference in terms of the specific interfaces used in lung and brain metastasis. Furthermore, the topological analysis identified genes such as RPL5, MMP2, CCR5, and DPP4, which are already known to be associated with lung or brain metastasis. Additionally, we found 6 and 9 putative genes that are specific to lung and brain metastasis, respectively. Our analysis suggests that variations in the genes and pathways contributing to these different breast metastasis types may arise from changes in the tissue microenvironment. To show the benefits of using structural PPI networks instead of the traditional node-and-edge presentation, we inspect two case studies showing the mutual exclusiveness of interactions and the effects of mutations on protein conformation, which lead to different signaling.
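
The seed-based sub-network construction can be illustrated schematically; the edge list below is fabricated for demonstration, whereas the study derived interactions with GUILDify and curated them against STRING.

```python
# Toy sketch: build a metastasis sub-network around seed genes and rank
# nodes topologically. The edges here are invented, not the study's data.
import networkx as nx

edges = [("MMP2", "TIMP2"), ("MMP2", "CCR5"), ("CCR5", "CCL5"),
         ("DPP4", "CCL5"), ("RPL5", "MDM2"), ("MDM2", "TP53")]
ppi = nx.Graph(edges)

seeds = {"MMP2", "CCR5"}                      # hypothetical lung-metastasis seeds
neighborhood = set(seeds)
for seed in seeds:                            # seeds plus first neighbors
    neighborhood |= set(ppi[seed])
sub = ppi.subgraph(neighborhood)

# Simple topological ranking of the sub-network by degree centrality.
for gene, score in sorted(nx.degree_centrality(sub).items(),
                          key=lambda kv: -kv[1]):
    print(f"{gene:6s} {score:.2f}")
```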

Keywords: breast cancer, metastasis, PPI networks, protein conformational changes

Procedia PDF Downloads 244
951 Experimental and Modal Determination of the State-Space Model Parameters of a Uni-Axial Shaker System for Virtual Vibration Testing

Authors: Jonathan Martino, Kristof Harri

Abstract:

In some cases, the increase in computing resources makes simulation methods more affordable. The increase in processing speed also allows real-time analysis, or even faster test analysis, offering a real tool for test prediction and design process optimization. Vibration tests are no exception to this trend. The so-called 'virtual vibration testing' offers a solution, among others, to study the influence of specific loads, to better anticipate the boundary conditions between the exciter and the structure under test, to study the influence of small changes in the structure under test, etc. This article will first present a virtual vibration test model, with a main focus on the shaker model, and will afterwards present the experimental determination of its parameters. The classical way of modeling a shaker is to consider the shaker as a simple mechanical structure augmented by an electrical circuit that makes the shaker move. The shaker is modeled as a two- or three-degree-of-freedom lumped-parameter model, while the electrical circuit takes the coil impedance and the dynamic back-electromotive force into account. The establishment of the equations of this model, describing the dynamics of the shaker, is presented in this article and is strongly related to the internal physical quantities of the shaker. Those quantities are reduced to global parameters, which are estimated through experiments. Different experiments will be carried out in order to design an easy and practical method for the identification of the shaker parameters, leading to a fully functional shaker model. An experimental modal analysis will also be carried out to extract the modal parameters of the shaker and to combine them with the electrical measurements. Finally, this article will conclude with an experimental validation of the model.
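
A minimal sketch of such an electromechanical model, here reduced to a single mechanical degree of freedom coupled to the coil circuit through the force factor Bl, is shown below; the parameter values are placeholders for the experimentally identified ones.

```python
# Single-DOF electrodynamic shaker as a state-space model: coil circuit
# (R, L, back-EMF) coupled to a spring-mass-damper via the force factor Bl.
# All numbers are placeholder values, not identified shaker parameters.
import numpy as np
from scipy import signal

m, c, k = 0.5, 20.0, 1.0e5     # moving mass [kg], damping [Ns/m], stiffness [N/m]
R, L, Bl = 2.0, 1.0e-3, 15.0   # coil resistance [ohm], inductance [H], Bl [N/A]

# States: [armature displacement x, velocity v, coil current i]; input: voltage.
A = np.array([[0.0, 1.0, 0.0],
              [-k / m, -c / m, Bl / m],
              [0.0, -Bl / L, -R / L]])
B = np.array([[0.0], [0.0], [1.0 / L]])
C = np.array([[0.0, 1.0, 0.0]])          # measure armature velocity
D = np.array([[0.0]])

shaker = signal.StateSpace(A, B, C, D)
w, mag, phase = signal.bode(shaker, np.logspace(1, 4, 200))
print(f"peak response {mag.max():.1f} dB near {w[np.argmax(mag)] / (2 * np.pi):.0f} Hz")
```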

Keywords: lumped parameters model, shaker modeling, shaker parameters, state-space, virtual vibration

Procedia PDF Downloads 270
950 A Sustainable Approach for Waste Management: Automotive Waste Transformation into High Value Titanium Nitride Ceramic

Authors: Mohannad Mayyas, Farshid Pahlevani, Veena Sahajwalla

Abstract:

Automotive shredder residue (ASR) is an industrial waste generated during the recycling of end-of-life vehicles. The large and increasing production volumes of ASR and its hazardous content have raised concerns worldwide, leading some countries to impose more restrictions on ASR disposal and encouraging researchers to find efficient solutions for ASR processing. Although a great deal of research work has been carried out, all proposed solutions, to our knowledge, remain commercially and technically unproven. While the volume of waste materials continues to increase, the production of materials from new sustainable sources has become of great importance. Advanced ceramic materials such as nitrides, carbides, and borides are widely used in a variety of applications. Among these ceramics, a great deal of attention has recently been paid to titanium nitride (TiN) owing to its unique characteristics. In our study, we propose a new sustainable approach for ASR management in which TiN nanoparticles with particle sizes ranging from 200 to 315 nm can be synthesized as a by-product. In this approach, TiN is thermally synthesized by nitriding a pressed mixture of ASR incorporated with titanium oxide (TiO2). Results indicated that TiO2 influences and catalyses the degradation reactions of ASR and helps to achieve fast and full decomposition. In addition, the process resulted in TiN ceramics with several unique structures (porous nanostructured, polycrystalline, micro-spherical, and nano-sized structures) that were simply obtained by tuning the ratio of TiO2 to ASR, and a product with an appreciable TiN content of around 85% was achieved after only one hour of nitridation at 1550 °C.

Keywords: automotive shredder residue, nano-ceramics, waste treatment, titanium nitride, thermal conversion

Procedia PDF Downloads 295
949 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection

Authors: Yulan Wu

Abstract:

With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat across multiple domains of society, including politics, the economy, and health. However, much research has concentrated on supervised models trained within specific domains, whose effectiveness diminishes when applied to identify fake news across multiple domains. To solve this problem, some approaches based on domain labels have been proposed: by assigning news to its specific domain in advance, judgments in the corresponding field may be more accurate. However, these approaches disregard the fact that news records can pertain to multiple domains, resulting in a significant loss of valuable information. In addition, the datasets used for training must all be domain-labeled, which creates unnecessary complexity. To solve these problems, an unsupervised domain-knowledge discovery framework for fake news detection is proposed. First, to effectively retain the multi-domain knowledge of the text, a low-dimensional vector capturing domain embeddings is generated for each news text. Subsequently, a feature extraction module utilizing the unsupervisedly discovered domain embeddings is used to extract comprehensive features of the news. Finally, a classifier is employed to determine the authenticity of the news. To verify the proposed framework, tests are conducted on existing, widely used datasets, and the experimental results demonstrate that this method improves detection performance for fake news across multiple domains. Moreover, even on datasets that lack domain labels, the method can still effectively transfer domain knowledge, reducing the time consumed by tagging without sacrificing detection accuracy.
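
A hedged sketch of this pipeline is given below, using topic decomposition of TF-IDF vectors as a stand-in for the paper's domain-embedding module; the texts, labels, and component count are toy choices.

```python
# Sketch: unsupervised "domain" embeddings (soft topic weights) fused
# with text features and fed to a classifier. Toy data throughout; the
# paper's actual embedding and feature modules may differ.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["central bank raises rates", "miracle cure heals all ills",
         "election results certified", "vaccine secretly alters DNA",
         "markets close mixed today", "politician endorses moon hoax"]
labels = np.array([0, 1, 0, 1, 0, 1])          # 0 = real, 1 = fake (toy labels)

tfidf = TfidfVectorizer()
X_text = tfidf.fit_transform(texts)

# Soft, unsupervised domain embedding: each text gets a weight over
# k latent topics instead of a single hard domain label.
domains = NMF(n_components=3, init="nndsvda", random_state=0)
X_domain = domains.fit_transform(X_text)

X = np.hstack([X_text.toarray(), X_domain])    # fuse text + domain features
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```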

Keywords: fake news, deep learning, natural language processing, multiple domains

Procedia PDF Downloads 97
948 Analytical Slope Stability Analysis Based on the Statistical Characterization of Soil Shear Strength

Authors: Bernardo C. P. Albuquerque, Darym J. F. Campos

Abstract:

Our increasing ability to solve complex engineering problems is directly related to the processing capacity of computers, by means of which numerical algorithms can be run quickly and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance; statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to provide incorrect results when skewed data are present, since normal distributions are symmetric about their means. Thus, in order to visualize and quantify this aspect, 9 statistical distributions (symmetric and skewed) have been considered to model a hypothetical slope stability problem. The data modeled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modeled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. Based on this analysis, it is therefore possible to explicitly derive the failure probability considering the friction angle as a random variable, and to compare the stability analysis when the friction angle is modelled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison leads to relevant differences when analyzed in light of risk management.
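
The failure-probability comparison can be sketched by Monte Carlo for an infinite cohesionless slope, where FS = tan(φ)/tan(β); the parameter values are invented, and scipy's mielke (Beta-Kappa/Dagum) family is used as an assumed stand-in for the paper's fitted Dagum distribution.

```python
# Monte Carlo P(FS < 1) for an infinite cohesionless slope under a normal
# and a skewed (Dagum-type) friction-angle model. All parameters assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
beta = np.radians(28.0)                        # slope angle (assumed)
n = 200_000

phi_normal = stats.norm(loc=30.0, scale=3.0).rvs(n, random_state=rng)
phi_skewed = stats.mielke(k=6.0, s=4.0, loc=20.0, scale=11.0).rvs(n, random_state=rng)

for name, phi in [("normal", phi_normal), ("Dagum-type", phi_skewed)]:
    fs = np.tan(np.radians(phi)) / np.tan(beta)   # FS = tan(phi)/tan(beta)
    print(f"{name:11s} P(FS < 1) = {np.mean(fs < 1):.4f}")
```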

Keywords: statistical slope stability analysis, skew distributions, probability of failure, functions of random variables

Procedia PDF Downloads 338
947 Biosynthesis of Silver Nanoparticles Using Zataria multiflora Extract, and Study of Antibacterial Effects on UTI Bacteria (MDR)

Authors: Mohammad Hossein Pazandeh, Monir Doudi, Sona Rostampour Yasouri

Abstract:

Irregular consumption of current antibiotics is increasing antibiotic resistance among urinary pathogens worldwide, and this study was motivated by that community problem. The aim of this study was the biosynthesis of silver nanoparticles from Zataria multiflora extract and the investigation of their antibacterial effect on multidrug-resistant (MDR) gram-negative bacilli common in urinary tract infections (UTI). The plant used in the present research was Zataria multiflora, whose extract was prepared through Soxhlet extraction. The green synthesis conditions of the silver nanoparticles were investigated in terms of three parameters: the extract amount, the concentration of silver nitrate salt, and temperature. The sizes of the nanoparticles were determined by Zetasizer. In order to identify the synthesized silver nanoparticles, transmission electron microscopy (TEM) and X-ray diffraction (XRD) were used. To evaluate the antibacterial effects of the biologically synthesized nanoparticles, different concentrations of silver nanoparticles were studied on 140 MDR bacterial strains causing urinary tract infections: Escherichia coli, Klebsiella pneumoniae, Enterobacter aerogenes, Proteus vulgaris, Citrobacter freundii, Acinetobacter baumannii, and Pseudomonas aeruginosa (20 samples of each genus). Polymerase chain reaction (PCR) tests and laboratory methods (agar well diffusion and microdilution) were used to identify the bacteria and assess their sensitivity to the nanoparticles. The data were analyzed using SPSS software with the nonparametric Kruskal-Wallis and Mann-Whitney tests. Significant effects of silver nitrate concentration, extract amount, and temperature on the nanoparticles were found: by increasing the concentration of silver nitrate, the extract amount, and the temperature, the sizes of the synthesized nanoparticles declined. However, the effect of the above-mentioned factors on the particle diffusion index was not significant. Based on the TEM results, the particles were mainly spherical, with a diameter range of 25 to 50 nm. The XRD analysis indicated the formation of silver nanostructures and nanocrystals. In the agar well diffusion and microdilution tests at 1000 mg/ml, the biologically synthesized nanoparticles showed the highest mean inhibition zone diameter for E. coli (23 mm) and the lowest for Acinetobacter baumannii (15 mm). The MIC was 125 mg/ml for all bacteria except Acinetobacter baumannii (250 mg/ml). Comparing the growth inhibitory effect of chemically and biologically synthesized nanoparticles showed that for the chemical method the highest growth inhibition occurred at a concentration of 62.5 mg/ml. An inhibitory effect on the growth of all MDR bacteria causing urinary infection was observed, and antibacterial activity increased with increasing silver ion concentration in the nanoparticles. In general, biological synthesis can be considered an efficient way not only of making nanoparticles but also of obtaining antibacterial properties; it is more biocompatible and may possess less toxicity than chemically synthesized nanoparticles.

Keywords: biosynthesis, MDR bacteria, silver nanoparticles, UTI

Procedia PDF Downloads 52
946 Chronology and Developments in Inventory Control Best Practices for FMCG Sector

Authors: Roopa Singh, Anurag Singh, Ajay

Abstract:

Agriculture contributes a major share to the national economy of India: about 70% of the Indian economy depends upon agriculture, which forms the main source of income. About 43% of India's geographical area is used for agricultural activity, which involves 65-75% of the total population of India. The given work deals with fast-moving consumer goods (FMCG) industries and their inventories, which use agricultural produce as raw material or input for their final products. Since the beginning of inventory practice, many developments have taken place, which, based on a review of various works, can be categorised into three phases. The first phase is related to the development and utilization of the economic order quantity (EOQ) model and methods for optimizing costs and profits. The second phase deals with inventory optimization methods, with the purpose of balancing capital investment constraints and service level goals. The third and most recent phase has merged inventory control with electrical control theory. Holding inventory is considered a negative, as a large amount of capital is blocked, especially in the mechanical and electrical industries. The case is different, however, for food processing and agro-based industries and their inventories, due to the cyclic variation in the cost of their raw materials, which is the reason for selecting these industries in the present work. The application of control theory to inventory control makes decision-making highly instantaneous for FMCG industries without loss of their projected profits, a loss which occurred during the first and second phases mainly due to the late implementation of decisions. The work also replaces various inventory and work-in-progress (WIP) errors with their monetary values, so that decision-making is fully target-oriented.
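
For the first phase, the classical EOQ computation looks as follows; the demand and cost figures are examples only.

```python
# Classical economic order quantity: EOQ = sqrt(2*D*S/H).
# Demand and cost figures below are illustrative assumptions.
from math import sqrt

D = 12_000   # annual demand, units/year (assumed)
S = 450.0    # ordering cost per order (assumed)
H = 18.0     # holding cost per unit per year (assumed)

eoq = sqrt(2 * D * S / H)
orders_per_year = D / eoq
total_cost = S * orders_per_year + H * eoq / 2   # ordering + holding at optimum
print(f"EOQ = {eoq:.0f} units, {orders_per_year:.1f} orders/year, "
      f"cost = {total_cost:.0f} per year")
```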

Keywords: control theory, inventory control, manufacturing sector, EOQ, feedback, FMCG sector

Procedia PDF Downloads 353
945 Legal Pluralism and Ideology: The Recognition of the Indigenous Justice Administration in Bolivia through the "Indigenismo" and "Decolonisation" Discourses

Authors: Adriana Pereira Arteaga

Abstract:

In many Latin American countries, the transition towards legal pluralism has developed over the last thirty years as part of what is called Latin American constitutionalism. The aim of this paper is to discuss how legal pluralism in its current form in Bolivia may produce exclusion and violence. Legal sources and discourse analysis, as an approach to examining written language in discourse documentation, will be used to develop this paper. With the constitution of 2009, Bolivia was symbolically "re-founded" as a multi-nation state. This shift goes hand in hand with the "indigenista" and "decolonisation" ideologies developing since the early 20th century. Discourses based on these ideologies reflect the rejection of the liberal and western premises on which the Bolivian republic was originally built after independence. According to the "indigenista" movements, the liberal nation-state generates institutions corresponding to a homogenous society. These liberal institutions not only ignore the Bolivian multi-nation reality but also maintain the social structures originating from colonial times, based on prejudices against the indigenous. These claims were elaborated through an image highlighted in the constitution's preamble: the indigenous people humiliated by a cruel western system. This narrative had a considerable impact on people's sensitivities and received great social support. Therefore, the proposal for changing the structures of the nation-state is charged with an emancipatory message of restoring even the pre-Columbian order, an order at times romantically described as the perfect order. Legally, this connotes a rejection of the positivistic national legal system based on individual rights and the promotion of constitutional recognition of indigenous justice administration. The pluralistic constitution is supposed to promote tolerance and a peaceful coexistence among nations, so that the unity and integrity of the country can be maintained. In its current form, legal pluralism in Bolivia is justified by pre-existing rights contained, for example, in International Labour Organization Convention 169, but it rests more on the discursive constructions described above. Over time, these discursive constructions created inconsistencies in putting indigenous justice administration into practice. First, legal pluralism has been developed more at the level of political discourse, so a real interaction between the national and the indigenous jurisdictions cannot be observed; there are no clear coordination and cooperation mechanisms. Second, since the recently reformed constitution is based on deeply felt experiences, little is said about the general legal principles on which a pluralistic administration of justice in Bolivia should be based. Third, basic rights, liberties, and constitutional guarantees are also affected by the antagonized image of the national justice administration. As a result, fundamental rights could be violated on a large scale, because many indigenous justice administration practices run counter to these constitutional rules. These problems are not merely Bolivian but may also be encountered in other countries of the region with similar backgrounds, like Ecuador.

Keywords: discourse, indigenous justice, legal pluralism, multi-nation

Procedia PDF Downloads 447
944 Performance of a Lytic Bacteriophage Cocktail against Pseudomonas aeruginosa in Conditions That Simulate the Cystic Fibrosis Lung Environment

Authors: Isaac Martin, Abigail Lark, Sandra Morales, Eric W. Alton, Jane C. Davies

Abstract:

Objectives: The cystic fibrosis (CF) lung is a unique microbiological niche wherein harmful bacteria persist for many years despite antibiotic therapy. Pseudomonas aeruginosa (Pa), the major culprit leading to lung decline and increased mortality, thrives in the lungs of patients with CF due to several factors that have been linked with poor antibiotic performance. Our group is investigating alternative therapies, including bacteriophage cocktails, with which we have previously demonstrated efficacy against planktonic organisms. In this study, we explored the effects of a 4-phage cocktail on Pa grown in two different conditions intended to mirror the CF lung: a) alongside standard antibiotic treatment in pre-formed biofilms (structures formed by Pa-secreted exopolysaccharides, which provide both physical and cell-division barriers to antimicrobials and host defenses), and b) in an acidic environment postulated to be present in the CF airway due both to the primary defect in bicarbonate secretion and to secondary effects of inflammation. Methods: 16 Pa strains from CF patients at the Royal Brompton Hospital were selected based on sensitivity to a) ceftazidime/tobramycin and b) the phage cocktail in a conventional plaque assay. To assess the efficacy of phages in biofilms, 96-well plates with Pa (5x10⁷ CFU/ml) were incubated in static conditions, allowing adherent bacterial colonies to form for 24 hr. Ceftazidime and tobramycin (both at 2 × MIC) were added, +/- bacteriophage (4x10⁸ PFU/ml), for a further 24 hr. Cell viability and biomass were estimated using fluorescent resazurin and crystal violet assays, respectively. To evaluate the effect of pH, strains were grown planktonically in shaking 96-well plates at pH 6.0, 6.6, 7.0, and 7.5 with tobramycin or phage at varying concentrations. Cell viability was quantified by fluorescent resazurin assay. Results: For the biofilm assay, treatment groups were compared with untreated controls and expressed as percent reduction in cell viability and biomass. Addition of the 4-phage cocktail resulted in a 1.3-fold reduction in cell viability and a 1.7-fold reduction in biomass (p < 0.001) when compared to standard antibiotic treatment alone. Notably, there was a 50 ± 15% reduction in cell viability and a 60 ± 12% reduction in biomass (95% CI) for the 4 biofilms demonstrating the most resistance to antibiotic treatment. 83% of strains tested (n=6) showed decreased bacterial killing by tobramycin at acidic pH (p < 0.01). However, 25% of strains (n=12) showed improved phage killing at acidic pH (p < 0.05), with none showing the pattern of reduced efficacy at acidic pH demonstrated by tobramycin. Conclusion: The 4-phage anti-Pa cocktail performs well in pre-formed biofilms and in acidic environments, two conditions intended to mimic the CF lung. To our knowledge, these are the first data examining the effects of subtle pH changes on phage-mediated bacterial killing in the context of Pa infection. These findings contribute to a growing body of evidence supporting the use of nebulised lytic bacteriophage as a treatment in the context of lung infection.

Keywords: biofilm, cystic fibrosis, pH, Pseudomonas aeruginosa, lytic bacteriophage

Procedia PDF Downloads 173
943 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using convolutional neural networks (CNNs), based on the types of loss functions and optimizers. The CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including cross-entropy, center loss, cosine proximity, and hinge loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach to compare generalization power. It is important to note that we use a subset of LivDet and that the database is the same across all training and testing for each model; this way, we can compare performance, in terms of generalization, on unseen data across all models. The best CNN (AlexNet), with the appropriate loss function and optimizer, achieves more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the models' accuracy together with parameter counts and mean average error rates, to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to our final model.
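
The comparison protocol can be sketched by recompiling one small network under different loss/optimizer pairs; the toy architecture and random tensors below are stand-ins for the actual CNNs and LivDet images.

```python
# Sketch of a loss/optimizer grid: the same small CNN is recompiled with
# different combinations and scored on held-out data. Toy data throughout.
import numpy as np
from tensorflow import keras

X = np.random.rand(200, 64, 64, 1).astype("float32")   # placeholder images
y = keras.utils.to_categorical(np.random.randint(0, 2, 200), 2)

def build_cnn():
    return keras.Sequential([
        keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(2, activation="softmax"),
    ])

for loss in ["categorical_crossentropy", "categorical_hinge"]:
    for optimizer in ["adam", "sgd", "nadam"]:
        model = build_cnn()
        model.compile(loss=loss, optimizer=optimizer, metrics=["accuracy"])
        hist = model.fit(X, y, epochs=2, validation_split=0.2, verbose=0)
        print(f"{loss:26s} {optimizer:6s} "
              f"val_acc={hist.history['val_accuracy'][-1]:.3f}")
```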

Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer

Procedia PDF Downloads 136
942 Nutritional Advantages of Millet (Panicum miliaceum L.) and Opportunities for Its Processing as Value-Added Foods

Authors: Fatima Majeed Almonajim

Abstract:

Panicum miliaceum L. is a plant of the family Gramineae. Worldwide, millets are regarded as a significant grain, yet they are very little exploited. Millet grain is abundant in nutrients and health-beneficial phenolic compounds, making it suitable as food and feed. The plant has received considerable attention for its high content of phenolic compounds, its low glycemic index, the presence of unsaturated fats, and its lack of gluten, all of which are beneficial to human health and have made the plant effective in treating celiac disease and diabetes, lowering blood lipids (cholesterol), and preventing tumors. Moreover, the plant requires little water to grow, a property worth considering. This study provides an overview of the nutritional and health benefits of millet types grown in two areas, Iraq and Iran, aiming to compare the effect of climate on the components of millet. In this research, millet samples collected from both the Babylon (Iraqi) and Isfahan (Iranian) types were extracted, and after HPTLC, the resulting patterns of the two samples were compared. The Iranian millet showed more terpenoid compounds than the Iraqi millet and therefore has a higher priority in boosting the human body's immunity. On the other hand, in view of the essential amino acid content, the Iraqi millet has more nutritional value than the Iranian millet. Also, given the higher amount of histidine in the Iranian millet, coupled with the lack of gluten established in previous studies, we concluded that adding millet to the diet of children, specifically those with irritable bowel syndrome, can be considered beneficial. Therefore, as a component of dairy products, millet can be used in preparing foods for children, such as dried milk.

Keywords: HPTLC, phytochemicals, specialty foods, Panicum miliaceum L., nutrition

Procedia PDF Downloads 95
941 Combined Analysis of Land Use Change and Natural Flow Paths in Flood Analysis

Authors: Nowbuth Manta Devi, Rasmally Mohammed Hussein

Abstract:

Flooding is one of the most devastating climate impacts that many countries are facing. Many different causes have been associated with the intensity of the floods recorded over time: unplanned development, the low carrying capacity of drains, clogged drains, construction in flood plains, or the increasing intensity of rainfall events. While a combination of these causes can certainly aggravate flood conditions, in many cases increasing drainage capacity has not reduced flood risk to the level expected. The present study analyzed the extent to which land use contributes to aggravating the impacts of flooding in a city. Satellite images were analyzed over a period of 20 years at intervals of 5 years. Both unsupervised and supervised classification methods were used with the image processing module of ArcGIS. The unsupervised classification was first compared to the basemap available in ArcGIS to get an initial overview of the results; these results also guided on-site data collection for the supervised classification. The island of Mauritius is small, and there are large variations in land use over small areas, both within built-up areas and in agricultural zones involving food crops. Larger plots of agricultural land under sugar cane plantation are relatively easier to identify; however, the growth stage and health of the plants vary, and this had to be verified during ground truthing. The results show that although there were changes in land use over the span of 20 years, as expected, these were not significant enough to cause a major increase in flood risk levels. A digital elevation model was analyzed for further understanding. It was noted that, over time, development tampered with natural flow paths in addition to increasing the impermeable areas. This situation results in backwater flows, hence increasing flood risks.

Keywords: climate change, flood, natural flow paths, small islands

Procedia PDF Downloads 9
940 Design of a Standard Weather Data Acquisition Device for the Federal University of Technology, Akure Nigeria

Authors: Isaac Kayode Ogunlade

Abstract:

Data acquisition (DAQ) is the process by which physical phenomena from the real world are transformed into electrical signals that are measured and converted into a digital format for processing, analysis, and storage by a computer. The DAQ is designed using a PIC18F4550 microcontroller communicating with a personal computer (PC) through USB (Universal Serial Bus). The research applied knowledge of data acquisition systems and embedded systems to develop a weather data acquisition device that uses an LM35 sensor to measure weather parameters, together with an artificial intelligence approach (artificial neural network, ANN) and a statistical approach (autoregressive integrated moving average, ARIMA) to predict precipitation (rainfall). The device was placed beside a standard device in the Department of Meteorology, Federal University of Technology, Akure (FUTA) to evaluate its performance. Both devices (standard and designed) were subjected to 180 days under the same atmospheric conditions for data collection (temperature, relative humidity, and pressure). The acquired data were trained in the MATLAB R2012b environment using ANN and ARIMA to predict precipitation (rainfall). Root mean square error (RMSE), mean absolute error (MAE), the coefficient of determination (R2), and mean percentage error (MPE) were employed as standard metrics to evaluate the performance of the models in predicting precipitation. The results show that the developed device has an efficiency of 96% and is compatible with personal computers (PCs) and laptops. The simulation results for the acquired data show that the ANN model's precipitation (rainfall) prediction for two months (May and June 2017) had a disparity error of 1.59%, while that of ARIMA was 2.63%. The device will be useful in research, practical laboratories, and industrial environments.
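
The four evaluation metrics named above are straightforward to compute; the observed/predicted rainfall values below are invented for illustration.

```python
# RMSE, MAE, R2, and MPE for a pair of example series (invented values).
import numpy as np

observed = np.array([12.0, 30.5, 8.2, 44.1, 19.7, 25.3])    # mm (example)
predicted = np.array([11.4, 32.0, 9.0, 41.5, 21.0, 24.1])   # mm (example)

err = observed - predicted
rmse = np.sqrt(np.mean(err ** 2))                 # root mean square error
mae = np.mean(np.abs(err))                        # mean absolute error
ss_res = np.sum(err ** 2)
ss_tot = np.sum((observed - observed.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                          # coefficient of determination
mpe = 100 * np.mean(err / observed)               # mean percentage error

print(f"RMSE={rmse:.2f}  MAE={mae:.2f}  R2={r2:.3f}  MPE={mpe:.2f}%")
```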

Keywords: data acquisition system, device design, weather station development, precipitation prediction, (FUTA) standard device

Procedia PDF Downloads 92
939 Efficacy of Phonological Awareness Intervention for People with Language Impairment

Authors: I. Wardana Ketut, I. Suparwa Nyoman

Abstract:

This study investigated the form and characteristics of speech sounds produced by three Balinese subjects who had recovered from aphasia, and intervened in their language impairment from both linguistic and neuronal points of view. The failure to judge speech sounds was caused by impairment of the motor cortex, indicating lesions in the left-hemispheric language zone. The sound articulation phenomena took the form of phoneme deletion, replacement, or assimilation in individual words, and of meaning building in anomic aphasia. Balinese sound patterns were therefore elicited by showing pictures to the subjects and recorded, in order to recognize which individual consonants or vowels they produced unclearly and to find out how the sound disorder occurred. The physiology of sound production by the subjects' speech organs could show not only the accuracy of articulation but also the severity of the lesion they suffered from. The subjects' speech sounds were investigated, classified, and analyzed to establish how impaired the lingual units were, and observed to clarify the weaknesses of sound characters in either place or manner of articulation. Many fricative and stop consonants were replaced by glottal or palatal sounds because cranial nerves such as the facial, trigeminal, and hypoglossal nerves were impaired after the stroke. The phonological intervention was applied through a technique called phonemic articulation drill, and an examination was conducted to identify any resulting changes. The findings show that some weak articulations turned into clearer sounds and that simple language meanings were conveyed. The hierarchy of functional parts of the brain played an important role in language formulation and processing. From these findings, it can be clearly emphasized that this study supports the view that the role of the right hemisphere in recovery from aphasia is associated with functional brain reorganization.

Keywords: aphasia, intervention, phonology, stroke

Procedia PDF Downloads 196
938 Removal of Pb²⁺ from Waste Water Using Nano Silica Spheres Synthesized on CaCO₃ as a Template: Equilibrium and Thermodynamic Studies

Authors: Milton Manyangadze, Joseph Govha, T. Bala Narsaiah, Ch. Shilpa Chakra

Abstract:

The availability of and access to fresh water is today a serious global challenge, a direct result of factors such as rapid industrialization and industrial growth, persistent droughts in some parts of the world (especially sub-Saharan Africa), and population growth. Growth of the chemical processing industry has also increased the levels of pollutants in our water bodies, which include heavy metals, among others. Heavy metals are known to be dangerous to both human and aquatic life and have been linked to several diseases, mainly because they are highly toxic; they are also bio-accumulative and non-biodegradable. Lead, for example, has been linked to a number of health problems, including damage to vital internal body systems such as the nervous and reproductive systems, as well as the kidneys. Against this background, the removal of the toxic heavy metal Pb2+ from waste water was investigated using nano silica hollow spheres (NSHS) as the adsorbent. The synthesis of NSHS was done using a three-stage process: CaCO3 nanoparticles were first prepared as a template, the formed particles were then treated with Na2SiO3 to give a nanocomposite, and finally the template was destroyed using 2.0 M HCl to give NSHS. Characterization of the nanoparticles was done using analytical techniques such as XRD, SEM, and TGA. For the adsorption process, both thermodynamic and equilibrium studies were carried out. The thermodynamic studies determined the Gibbs free energy, enthalpy, and entropy of the adsorption process; the results revealed that the adsorption process was both endothermic and spontaneous. Equilibrium studies were also carried out, in which the Langmuir and Freundlich isotherms were tested; the results showed that the Langmuir model best described the adsorption equilibrium.
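
The isotherm comparison can be sketched with nonlinear least squares; the (Ce, qe) points below are invented placeholders, not the study's measurements.

```python
# Fit Langmuir and Freundlich isotherms to equilibrium data and compare R2.
# The data points are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])     # equilibrium conc., mg/L (example)
qe = np.array([8.1, 13.2, 18.9, 23.5, 26.0])     # uptake, mg/g (example)

def langmuir(C, qmax, KL):
    return qmax * KL * C / (1 + KL * C)

def freundlich(C, KF, n):
    return KF * C ** (1 / n)

for name, model, p0 in [("Langmuir", langmuir, (30, 0.1)),
                        ("Freundlich", freundlich, (5, 2))]:
    params, _ = curve_fit(model, Ce, qe, p0=p0)
    residuals = qe - model(Ce, *params)
    r2 = 1 - np.sum(residuals**2) / np.sum((qe - qe.mean())**2)
    print(f"{name:10s} params={np.round(params, 3)}  R2={r2:.4f}")
```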

Keywords: characterization, endothermic, equilibrium studies, Freundlich, Langmuir, nanoparticles, thermodynamic studies

Procedia PDF Downloads 215
937 Towards the Production of Least Contaminant Grade Biosolids and Biochar via Mild Acid Pre-treatment

Authors: Ibrahim Hakeem

Abstract:

Biosolids are stabilised sewage sludge produced from wastewater treatment processes. Biosolids contain valuable plant nutrients, which facilitate their beneficial reuse on agricultural land. However, increasing levels of legacy and emerging contaminants such as heavy metals (HMs), PFAS, microplastics, pharmaceuticals, and microbial pathogens are restraining the direct land application of biosolids. Pyrolysis of biosolids can effectively degrade microbial and organic contaminants; however, HMs remain a persistent problem in biosolids and their pyrolysis-derived biochar. In this work, we demonstrated the integrated processing of biosolids, involving acid pre-treatment for HMs removal and selective reduction of ash-forming elements, followed by bench-scale pyrolysis of the treated biosolids to produce quality biochar and bio-oil enriched with valuable platform chemicals. Pre-treatment of biosolids using 3% v/v H₂SO₄ under ambient conditions for 30 min reduced the ash content from 30 wt% in raw biosolids to 15 wt% in the treated sample, while removing about 80% of the limiting HMs without degrading the organic matter. The preservation of nutrients and the reduction of HMs concentration and mobility via the developed hydrometallurgical process improved the grade of the treated biosolids for beneficial land reuse. The co-removal of ash-forming elements also enhanced the fluidised-bed pyrolysis of the acid-treated biosolids at 700 °C. Organic matter devolatilisation improved by 40%, and the resulting biochar had a higher surface area (107 m²/g), heating value (15 MJ/kg), fixed carbon (35 wt%), and organic carbon retention (66%, dry ash-free basis) compared to the raw-biosolids biochar (surface area 56 m²/g, heating value 9 MJ/kg, fixed carbon 20 wt%, organic carbon retention 50%). Pre-treatment also improved the development of the biochar's microporous structure and decreased HMs concentration and bioavailability by at least 50% relative to the raw-biosolids biochar. The integrated process is a viable approach to enhancing value recovery from biosolids.
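
As a back-of-the-envelope check on the reported ash figures, here is a minimal sketch under the assumption, supported by the abstract, that the organic matter is fully conserved during the acid wash; the 100 g basis is arbitrary:

```python
# Dry-basis mass balance implied by halving the ash content (30 wt% -> 15 wt%)
# while conserving the organic fraction during the acid pre-treatment.
raw_mass = 100.0                              # g dry biosolids (arbitrary basis)
ash_raw, ash_treated = 0.30, 0.15
organics = raw_mass * (1.0 - ash_raw)         # 70 g, assumed unchanged
treated_mass = organics / (1.0 - ash_treated) # ~82.4 g dry solids remain
ash_removed = raw_mass * ash_raw - treated_mass * ash_treated
print(f"treated mass: {treated_mass:.1f} g; "
      f"{100 * ash_removed / (raw_mass * ash_raw):.0f}% of the ash dissolved")
```

Halving the ash content at constant organics thus implies that roughly 59% of the ash-forming elements dissolved, consistent with the selective demineralisation described above.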

Keywords: biosolids, pyrolysis, biochar, heavy metals

Procedia PDF Downloads 76
936 Thermo-Mechanical Analysis of Composite Structures Utilizing a Beam Finite Element Based on Global-Local Superposition

Authors: Andre S. de Lima, Alfredo R. de Faria, Jose J. R. Faria

Abstract:

Accurate prediction of thermal stresses is particularly important for laminated composite structures, as large temperature changes may occur during fabrication and field application. Normal transverse deformation plays an important role in the prediction of such stresses, especially in problems involving thick laminated plates subjected to uniform temperature loads. Bearing this in mind, the present study investigates the thermo-mechanical behavior of laminated composite structures using a new beam element based on global-local superposition, accounting for through-the-thickness effects. The element formulation superposes, in the thickness direction, a cubic global displacement field and a linear layerwise local displacement distribution, which captures the zig-zag behavior of the stresses and displacements. By enforcing interlaminar stress (normal and shear) and displacement continuity, as well as traction-free conditions at the upper and lower surfaces, the number of degrees of freedom in the model is kept independent of the number of layers. Moreover, the proposed formulation allows transverse shear and normal stresses to be determined directly from the constitutive equations, without post-processing. Numerical results obtained with the beam element were compared to analytical solutions, as well as to results obtained with commercial finite elements, showing satisfactory agreement over a range of length-to-thickness ratios. The results confirm the need for an element with through-the-thickness capabilities and indicate that the present formulation is a promising alternative for such analyses.
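
Schematically, the kinematics described above can be written as below; this is a generic global-local form for illustration, since the abstract does not give the element's exact shape functions:

```latex
% Illustrative axial displacement in layer k (generic global-local form,
% not the authors' exact shape functions); z is the thickness coordinate.
u^{(k)}(x,z) =
  \underbrace{u_0(x) + z\,u_1(x) + z^{2}u_2(x) + z^{3}u_3(x)}_{\text{cubic global field}}
  \;+\;
  \underbrace{\xi_k(z)\,\hat{u}^{(k)}(x)}_{\text{linear layerwise (zig-zag) term}}
```

Enforcing displacement and transverse-stress continuity at every interface, together with the traction-free surface conditions, expresses the layerwise amplitudes in terms of the global unknowns, which is why the number of degrees of freedom stays independent of the layer count.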

Keywords: composite beam element, global-local superposition, laminated composite structures, thermal stresses

Procedia PDF Downloads 156
935 Data Centers’ Temperature Profile Simulation Optimized by Finite Elements and Discretization Methods

Authors: José Alberto García Fernández, Zhimin Du, Xinqiao Jin

Abstract:

Nowadays, the data center industry faces strong pressure to increase processing speed and data-handling capacity while keeping its devices at a suitable working temperature without penalizing that capacity. Consequently, the cooling systems of such facilities use a large amount of energy to dissipate the heat generated inside the servers, and developing new cooling techniques, or perfecting existing ones, would be a great advance for this industry. Installing a matrix of temperature sensors distributed throughout the structure of each server would provide the data required to obtain an instantaneous temperature profile inside it. However, the number of temperature probes required to obtain temperature profiles with sufficient accuracy is very high and expensive. Therefore, less intrusive techniques are employed, in which each point characterizing the server temperature profile is obtained by solving differential equations through simulation, simplifying data collection but increasing the time needed to obtain results. To reduce these calculation times, complicated and slow computational fluid dynamics simulations are replaced by simpler and faster finite element method simulations, which solve Burgers' equation by backward, forward, and central discretization techniques after simplifying the energy and enthalpy conservation differential equations. The discretization methods employed to approximate the first- and second-order derivatives of the resulting Burgers' equation are the key to the accuracy of the results, each scheme carrying its characteristic truncation error.
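
The abstract does not reproduce the simplified equations, but the named schemes are standard. A minimal sketch, assuming the 1D viscous Burgers' equation u_t + u u_x = ν u_xx as a stand-in, that compares backward (upwind), forward (downwind), and central differences for the convective term under explicit time stepping:

```python
import numpy as np

# 1D viscous Burgers' equation u_t + u*u_x = nu*u_xx on [0, 1],
# explicit time stepping; convective derivative discretized three ways.
nx, nu, dt, steps = 101, 0.05, 1e-4, 2000
dx = 1.0 / (nx - 1)
x = np.linspace(0.0, 1.0, nx)

def solve(scheme):
    u = np.sin(np.pi * x)      # initial condition; u = 0 at both walls
    for _ in range(steps):
        un = u.copy()
        # central 2nd-order difference for the diffusive term
        diff = nu * (un[2:] - 2.0 * un[1:-1] + un[:-2]) / dx**2
        if scheme == "backward":    # upwind for u >= 0: stable, O(dx)
            conv = un[1:-1] * (un[1:-1] - un[:-2]) / dx
        elif scheme == "forward":   # downwind: O(dx), held stable here only by viscosity
            conv = un[1:-1] * (un[2:] - un[1:-1]) / dx
        else:                       # central: O(dx^2) but dispersive
            conv = un[1:-1] * (un[2:] - un[:-2]) / (2.0 * dx)
        u[1:-1] = un[1:-1] + dt * (diff - conv)
    return u

for scheme in ("backward", "forward", "central"):
    print(f"{scheme:>8}: max|u| at t={steps*dt:.2f} is {np.abs(solve(scheme)).max():.4f}")
```

The three runs differ only in the truncation error of the convective derivative, which is exactly the accuracy trade-off the abstract attributes to the choice of discretization method.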

Keywords: Burgers' equations, CFD simulation, data center, discretization methods, FEM simulation, temperature profile

Procedia PDF Downloads 169
934 Effects of Non-Diagnostic Haptic Information on Consumers' Product Judgments and Decisions

Authors: Eun Young Park, Jongwon Park

Abstract:

A physical touch of a product can provide ample diagnostic information about the product's attributes and quality. However, consumers' product judgments and purchases can be erroneously influenced by non-diagnostic haptic information. For example, consumers' evaluations of the coffee they drink could be affected by the heaviness of the cup in which it happens to be served. This important issue has received little attention in prior research. The present research contributes to the literature by identifying when and how non-diagnostic haptic information can have an influence, and why such influence occurs. Specifically, five studies experimentally varied the content of non-diagnostic haptic information, such as the weight of a cup (heavy vs. light) and the texture of a cup holder (smooth vs. rough), and then assessed the impact of the manipulation on product judgments and decisions. Results show that non-diagnostic haptic information has a biasing impact on consumer judgments. For example, a heavy (vs. light) cup increases consumers' perception of the richness of the coffee in it, and a rough (vs. smooth) cup-holder texture increases the perceived healthfulness of the fruit juice in it, which in turn increases consumers' purchase intentions. When consumers are cognitively distracted during the touch experience, the impact of the content of haptic information is no longer evident, but the valence (positive vs. negative) of the haptic experience influences product judgments. However, consumers are able to avoid the impact of non-diagnostic haptic information if, and only if, they are both knowledgeable about the product category and undistracted while processing the touch experience. In sum, the nature of the influence of non-diagnostic haptic information (i.e., assimilation effect vs. contrast effect vs. null effect) is determined by the content and valence of the haptic information, whose relative impact depends on whether consumers can identify the content and source of that information. Theoretically, to the best of our knowledge, this research is the first to document empirical evidence of the interplay between cognitive and affective processes in determining the impact of non-diagnostic haptic information. Managerial implications are discussed.

Keywords: consumer behavior, haptic information, product judgments, touch effect

Procedia PDF Downloads 174
933 Studies on the Bioactivity of Different Solvent Extracts of Selected Marine Macroalgae against Fish Pathogens

Authors: Mary Ghobrial, Sahar Wefky

Abstract:

Marine macroalgae have proven to be a rich source of bioactive compounds with biomedical potential, not only for human but also for veterinary medicine. The emergence of microbial disease in aquaculture industries causes serious losses, and the use of commercial antibiotics for fish disease treatment produces undesirable side effects. Marine organisms are a rich source of structurally novel, biologically active metabolites; competition for space and nutrients led to the evolution of antimicrobial defense strategies in the aquatic environment. Interest in marine organisms as a potential and promising source of pharmaceutical agents has increased in recent years. Many bioactive and pharmacologically active substances have been isolated from microalgae, and compounds with antibacterial, antifungal, and antiviral activities have also been detected in green, brown, and red algae. Selected species of marine benthic algae belonging to the Phaeophyta and Rhodophyta, collected from different coastal areas of Alexandria (Egypt), were investigated for their antibacterial and antifungal activities. Macroalgae samples were collected during low tide from the Alexandria Mediterranean coast, air dried in the shade at room temperature, ground using an electric grinder, and soaked in 10 ml of each of the solvents acetone, ethanol, methanol, and hexane. Antimicrobial activity was evaluated using the well-cut diffusion technique. In vitro screening of organic solvent extracts from the marine macroalgae Laurencia pinnatifida, Pterocladia capillacea, Stypopodium zonale, Halopteris scoparia, and Sargassum hystrix showed specific activity in inhibiting the growth of five virulent strains of bacteria pathogenic to fish (Pseudomonas fluorescens, Aeromonas hydrophila, Vibrio anguillarum, V. tandara, and Escherichia coli) and two fungi (Aspergillus flavus and A. niger). Results showed that the acetone and ethanol extracts of all test macroalgae exhibited antibacterial activity, while the acetone extract of the brown alga Sargassum hystrix displayed the highest antifungal activity. The seaweed extracts inhibited bacteria more strongly than fungi, and species of the Rhodophyta showed greater activity against the test bacteria than against the fungi. Gas-liquid chromatography coupled with mass spectrometry allows good qualitative and quantitative analysis of the fractionated extracts, with high sensitivity to minor components. The results indicated that the main common component in the acetone extracts of L. pinnatifida and P. capillacea is 4-hydroxy-4-methyl-2-pentanone, representing 64.38% and 58.60% of the extracts, respectively. Thus, the extracts derived from the red macroalgae were more efficient than those obtained from the brown macroalgae in combating bacterial pathogens rather than pathogenic fungi. The most promising species overall was the red alga Laurencia pinnatifida. In conclusion, the present study demonstrates the potential of red and brown macroalgae extracts for the development of anti-pathogenic agents for use in fish aquaculture.

Keywords: bacteria, fungi, extracts, solvents

Procedia PDF Downloads 437
932 Review on the Future Economic Potential Stemming from Global Electronic Waste Generation and Sustainable Recycling Practices

Authors: Shamim Ahsan

Abstract:

Global digital advances, together with consumers' strong inclination for state-of-the-art digital technologies, are causing overwhelming social and environmental challenges for the global community. In recent years, not only have the electronics industries advanced economically at a steadfast rate, but the generation of e-waste has also outpaced the growth of all other types of waste. The estimated global e-waste volume is expected to reach 65.4 million tons annually by 2017. Formal recycling practices in developed countries entail economic liabilities, opening paths for illegal trafficking to developing countries, where informal, crude management of large volumes of e-waste is becoming an emergent environmental and health challenge. Conversely, several studies have shown that both formal and informal recycling of e-waste has the potential for economic returns in developed and developing countries alike. Research on China illustrates that the recycling potential of its large e-waste volumes could grow from ∼16 (10−22) billion US$ in 2010 to an anticipated ∼73.4 (44.5−103.4) billion US$ by 2030. In another study, an economic analysis of 14 common categories of waste electrical and electronic equipment (WEEE) calculated an overall worth of €2.15 billion to European markets, with a potential rise to €3.67 billion as volumes increase. These economic returns and environmental protection approaches are feasible only when sustainable policy options are embraced alongside stricter regulatory mechanisms. This study critically reviews current research to show how global e-waste generation and sustainable recycling practices demonstrate future economic development potential, in terms of both quantity and processing capacity, while also triggering some complex environmental challenges.

Keywords: e-waste, generation, economic potential, recycling

Procedia PDF Downloads 305
931 Multi-scale Spatial and Unified Temporal Feature-fusion Network for Multivariate Time Series Anomaly Detection

Authors: Hang Yang, Jichao Li, Kewei Yang, Tianyang Lei

Abstract:

Multivariate time series anomaly detection is a significant research topic in the field of data mining, with a wide range of applications across industrial sectors such as road traffic, financial logistics, and corporate production. The inherent spatial dependencies and temporal characteristics of multivariate time series make the anomaly detection task challenging. Previous studies have typically assumed that all variables belong to the same spatial hierarchy, neglecting multi-level spatial relationships. To address this challenge, this paper proposes a multi-scale spatial and unified temporal feature-fusion network, denoted MSUT-Net, for multivariate time series anomaly detection. The proposed model employs a multi-level modeling approach incorporating both temporal and spatial modules. The spatial module is designed to capture the spatial characteristics of multivariate time series data, utilizing an adaptive graph structure learning model to identify the multi-level spatial relationships between data variables and their attributes. The temporal module consists of a unified temporal processing module tasked with capturing the temporal features of multivariate time series; it simultaneously identifies temporal dependencies among different variables. Extensive testing on multiple publicly available datasets confirms that MSUT-Net achieves superior performance on the majority of them. Our method can model and accurately detect system data with multi-level spatial relationships from a spatial-temporal perspective, providing a novel perspective for anomaly detection analysis.
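
The abstract does not detail MSUT-Net's layers, so the following PyTorch-style sketch shows only the general pattern commonly used for adaptive graph structure learning, where pairwise similarities of trainable node embeddings define the adjacency matrix; all names and sizes here are hypothetical:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGraphLearner(nn.Module):
    """Learns an adjacency matrix over the variables of a multivariate time
    series from trainable node embeddings (a common pattern in graph-based
    anomaly detectors; not necessarily MSUT-Net's exact design)."""

    def __init__(self, num_nodes: int, emb_dim: int = 16, top_k: int = 5):
        super().__init__()
        self.emb = nn.Parameter(torch.randn(num_nodes, emb_dim))
        self.top_k = top_k

    def forward(self) -> torch.Tensor:
        sim = self.emb @ self.emb.t()          # pairwise node similarity
        sim.fill_diagonal_(-float("inf"))      # exclude self-loops
        # keep each node's top-k neighbours, mask the rest, normalize rows
        masked = torch.full_like(sim, -float("inf"))
        idx = sim.topk(self.top_k, dim=-1).indices
        masked.scatter_(-1, idx, sim.gather(-1, idx))
        return F.softmax(masked, dim=-1)       # row-stochastic adjacency

# Usage: learn a graph over, e.g., 20 sensor channels; the resulting
# adjacency would feed a graph convolution in the spatial module.
A = AdaptiveGraphLearner(num_nodes=20)()
print(A.shape, torch.allclose(A.sum(dim=-1), torch.ones(20)))
```

Because the embeddings are trained end-to-end with the detector, the learned adjacency can recover variable relationships that are not given a priori, which is the role the abstract assigns to the adaptive graph structure learning model.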

Keywords: data mining, industrial system, multivariate time series, anomaly detection

Procedia PDF Downloads 16