Search results for: single error upset
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6312

2082 Experimental Investigation on the Shear Strength Parameters of Sand-Slag Mixtures

Authors: Ayad Salih Sabbar, Amin Chegenizadeh, Hamid Nikraz

Abstract:

Utilizing waste materials in civil engineering applications has a positive influence on the environment by reducing carbon dioxide emissions and issues associated with waste disposal. Granulated blast furnace slag (GBFS) is a by-product of the iron and steel industry, with millions of tons of slag produced annually worldwide. Slag has been widely used in structural engineering and for stabilizing clay soils; however, studies on the effect of slag on sandy soils are scarce. This article investigates the effect of slag content on shear strength parameters through direct shear tests and unconsolidated undrained triaxial tests on mixtures of Perth sand and slag. For this purpose, sand-slag mixtures with slag contents of 2%, 4%, and 6% by weight were tested in direct shear under three normal stress values, namely 100 kPa, 150 kPa, and 200 kPa. Unconsolidated undrained triaxial tests were performed under a single confining pressure of 100 kPa and a relative density of 80%. The internal friction angles and shear stresses of the mixtures were determined via the direct shear tests, demonstrating that shear stresses increased with increasing normal stress and that the internal friction angles and cohesion increased with increasing slag content. There were no significant differences in shear strength parameters when the slag content rose from 4% to 6%. The unconsolidated undrained triaxial tests demonstrated that shear strength increased with increasing slag content.

Keywords: direct shear, shear strength, slag, UU test

Procedia PDF Downloads 476
2081 Modelling and Control of Binary Distillation Column

Authors: Narava Manose

Abstract:

Distillation is a very old separation technology for separating liquid mixtures that can be traced back to the chemists of Alexandria in the first century A.D. Today, distillation is the most important industrial separation technology. By the eleventh century, distillation was being used in Italy to produce alcoholic beverages. At that time, distillation was probably a batch process based on the use of just a single stage, the boiler. The word distillation is derived from the Latin word destillare, which means dripping or trickling down. By at least the sixteenth century, it was known that the extent of separation could be improved by providing multiple vapor-liquid contacts (stages) in a so-called Rectifactorium. The term rectification is derived from the Latin words rectefacere, meaning to improve. Modern distillation derives its ability to produce almost pure products from the use of multi-stage contacting. Throughout the twentieth century, multistage distillation was by far the most widely used industrial method for separating liquid mixtures of chemical components. The basic principle behind this technique relies on the different boiling temperatures of the various components of the mixture, allowing separation of the vapor of the most volatile component from the liquid of the other component(s). In this work, we developed a simple non-linear model of a binary distillation column using Skogestad's equations in Simulink and computed the steady-state operating point around which to base our analysis and controller design. However, the model contains two integrators because the condenser and reboiler levels are not controlled. One particular way of stabilizing the column is the LV-configuration, where we use D to control M_D and B to control M_B; such a model is given in cola_lv.m, where we have used two P-controllers with gains equal to 10.
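The level-stabilization idea described above can be sketched in a toy simulation: the two holdups M_D and M_B are pure integrators, and two P-controllers with gain 10 (as in cola_lv.m) drive them back to their setpoints via D and B. This is a minimal sketch, not Skogestad's actual column model; the flow values V, L, F and the setpoints below are illustrative assumptions.

```python
def simulate_lv(Kc=10.0, dt=0.01, steps=5000):
    # Assumed internal flows (illustrative): vapor V, reflux L, feed F
    V, L, F = 3.0, 2.5, 1.0
    MD, MB = 0.8, 0.2              # initial holdups, off their setpoints
    MD_set, MB_set = 0.5, 0.5
    for _ in range(steps):
        # LV-configuration: D controls M_D, B controls M_B (P-control, gain Kc)
        D = (V - L) + Kc * (MD - MD_set)
        B = (L + F - V) + Kc * (MB - MB_set)
        MD += dt * (V - L - D)         # condenser level integrator
        MB += dt * (L + F - V - B)     # reboiler level integrator
    return MD, MB
```

Because each closed-loop level obeys dM/dt = -Kc (M - M_set), both integrators decay exponentially to their setpoints, which is exactly why the P-controllers stabilize the otherwise marginally stable column model.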

Keywords: modelling, distillation column, control, binary distillation

Procedia PDF Downloads 272
2080 Human Gesture Recognition for Real-Time Control of Humanoid Robot

Authors: S. Aswath, Chinmaya Krishna Tilak, Amal Suresh, Ganesh Udupa

Abstract:

There are many technologies for controlling a humanoid robot, but the use of electromyogram (EMG) electrodes has its own importance in setting up the control system. An EMG-based control system helps to control robotic devices with more fidelity and precision. In this paper, the development of an electromyogram-based interface for human gesture recognition for the control of a humanoid robot is presented. To recognize control signs in the gestures, a single-channel EMG sensor is positioned on the muscles of the human body. Instead of using a remote control unit, the humanoid robot is controlled by various gestures performed by the human. The EMG electrodes attached to the muscles generate an analog signal due to the nerve impulses generated in the moving muscles. The analog signals acquired from the muscles are supplied to a differential muscle sensor that processes the signal to generate one suitable for the microcontroller to gain control over the humanoid robot. The signal from the differential muscle sensor is converted to digital form using the ADC of the microcontroller, which outputs its decision to the CM-530 humanoid robot controller through a Zigbee wireless interface. The output decision of the CM-530 processor is sent to a motor driver in order to drive the servo motors in the required direction for human-like actions. This method of gaining control of a humanoid robot could be used for performing actions with more accuracy and ease. In addition, a study has been conducted to investigate the controllability and ease of use of the interface and the employed gestures.
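The decision step of such a pipeline can be sketched as a simple envelope threshold on the digitized EMG samples. This is an illustrative assumption about the microcontroller's logic, not the authors' actual firmware; the function name, window length, and threshold are hypothetical.

```python
import numpy as np

def emg_command(signal, window=50, threshold=0.5):
    # Rectify the raw EMG samples, smooth them with a moving-average
    # envelope, and map the envelope peak to a robot command.
    envelope = np.convolve(np.abs(signal), np.ones(window) / window, mode="valid")
    return "MOVE" if envelope.max() > threshold else "IDLE"
```

On a real microcontroller the same logic would run sample-by-sample on the ADC stream, with the command string replaced by a Zigbee packet to the CM-530 controller.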

Keywords: electromyogram, gesture, muscle sensor, humanoid robot, microcontroller, Zigbee

Procedia PDF Downloads 403
2079 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life

Authors: Desplanches Maxime

Abstract:

Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves infusing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. Furthermore, the database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model. This database serves as a crucial asset in advancing our understanding of aging states. 
Beyond its capability for precise remaining life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.
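The dimensionality-reduction-plus-regression pipeline described above can be sketched on synthetic data. This is a hedged illustration: the "aging database" below is randomly generated (not the Teesmat-calibrated Newman model output), t-SNE is omitted, and a plain least-squares regressor stands in for the paper's random forest / SVM step.

```python
import numpy as np

rng = np.random.default_rng(1)

def pca_project(X, k):
    # Principal-component projection via SVD of the centred data
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Synthetic "aging database": 200 cells x 10 non-destructive indicators,
# with remaining life driven by the two highest-variance indicators.
X = rng.normal(size=(200, 10))
X[:, 0] *= 3.0
X[:, 1] *= 2.0
life = 1000.0 + 50.0 * X[:, 0] - 30.0 * X[:, 1] + rng.normal(scale=1.0, size=200)

Z = pca_project(X, 5)                     # reduced aging features
A = np.column_stack([np.ones(len(Z)), Z])  # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, life, rcond=None)
rel_err = float(np.mean(np.abs(A @ coef - life) / life))
```

On this toy data the mean relative error lands well inside the 5% margin quoted in the abstract, since the life-determining directions dominate the leading principal components.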

Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression

Procedia PDF Downloads 65
2078 Optimization Based Extreme Learning Machine for Watermarking of an Image in DWT Domain

Authors: Ram Pal Singh, Vikash Chaudhary, Monika Verma

Abstract:

In this paper, we propose an optimization-based Extreme Learning Machine (ELM) for watermarking the B-channel of a color image in the discrete wavelet transform (DWT) domain. ELM, a regularization algorithm, is based on generalized single-hidden-layer feed-forward neural networks (SLFNs); however, the hidden-layer parameters, generally called the feature mapping in the context of ELM, need not be tuned. This paper presents the embedding and extraction of a watermark with the help of ELM, and the results are compared with machine learning models already used for watermarking. Here, a cover image is divided into a suitable number of non-overlapping blocks of the required size, and the DWT is applied to each block to transform it into the low-frequency sub-band domain. ELM provides a unified learning platform in which a feature mapping, that is, the mapping between the hidden layer and the output layer of the SLFN, is used for watermark embedding and extraction in a cover image. ELM has widespread applications ranging from binary and multiclass classification to regression and function estimation. Unlike SVM-based algorithms, which achieve suboptimal solutions with high computational complexity, ELM can provide better generalization performance with very low complexity. The efficacy of the optimization-based ELM algorithm is measured using quantitative and qualitative parameters on the watermarked image, even when the image is subjected to different types of geometrical and conventional attacks.
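The core ELM mechanism, a random untuned hidden layer followed by a closed-form least-squares solve for the output weights, can be sketched in a few lines. The DWT/watermarking step is omitted here; a toy 1-D regression stands in, and the hidden-layer size and weight scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50, reg=1e-6):
    # Random, untuned feature mapping: the defining trait of ELM
    W = rng.normal(scale=2.0, size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    # Output weights by regularized least squares (closed form, no iteration)
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

The single linear solve is what gives ELM its low computational complexity relative to iteratively trained SVM-based schemes mentioned in the abstract.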

Keywords: BER, DWT, extreme learning machine (ELM), PSNR

Procedia PDF Downloads 308
2077 Evaluation of a Potential Metabolism-Mediated Drug-Drug Interaction between Carvedilol and Fluvoxamine in Rats

Authors: Ana-Maria Gheldiu, Bianca M. Abrudan, Maria A. Neag, Laurian Vlase, Dana M. Muntean

Abstract:

Background information: The objective of this study was to investigate the effect of multiple-dose fluvoxamine on the pharmacokinetic profile of single-dose carvedilol in rats, in order to evaluate this possible drug-drug pharmacokinetic interaction. Methods: A preclinical study in 28 white male Wistar rats was conducted. Each rat was cannulated on the femoral vein prior to being connected to the BASi Culex ABC®. Carvedilol was orally administered to rats (3.57 mg/kg body mass (b.m.)) in the absence of fluvoxamine or after pre-treatment with multiple oral doses of fluvoxamine (14.28 mg/kg b.m.). The plasma concentrations of carvedilol were determined by high performance liquid chromatography-tandem mass spectrometry. The pharmacokinetic parameters of carvedilol were analyzed by a non-compartmental method. Results: After carvedilol co-administration with fluvoxamine, an approximately 2-fold increase in the exposure of carvedilol was observed, based on the significantly elevated value of the total area under the concentration versus time curve (AUC₀₋∞). Moreover, an increase of approximately 145% in the peak plasma concentration was found, as well as an increase of approximately 230% in the half-life of carvedilol. Conclusion: Fluvoxamine co-administration led to a significant alteration of carvedilol's pharmacokinetic profile in rats; these effects could be explained by a drug-drug interaction mediated by CYP2D6 inhibition. Acknowledgement: This work was supported by CNCS Romania – project PNII-RU-TE-2014-4-0242.

Keywords: carvedilol, fluvoxamine, drug-drug pharmacokinetic interaction, rats

Procedia PDF Downloads 268
2076 Incidence, Pattern and Risk Factors of Congenital Heart Diseases in Neonates in a Tertiary Care Hospital, Egyptian Study

Authors: Gehan Hussein, Hams Ahmad, Baher Matta, Yasmeen Mansi, Mohamad Fawzi

Abstract:

Background: Congenital heart disease (CHD) is a common problem worldwide, with variable incidence in different countries. The exact etiology is unknown and is suggested to be multifactorial. We aimed to study the incidence of various CHDs in a neonatal intensive care unit (NICU) of a tertiary care hospital in Egypt and the possible associations with various risk factors. Methods: A prospective study was conducted over a period of one year (2013/2014) at the NICU of KasrAlAini School of Medicine, Cairo University. A questionnaire about possible maternal and/or paternal risk factors for CHD, clinical examination, and bedside echocardiography were performed. Cases were classified into two groups: group 1 without CHD and group 2 with CHD. Results: Of 723 neonates admitted to the NICU, 180 cases were proved to have CHD; 58% of them were males. Patent ductus arteriosus (PDA) was the most common CHD (70%), followed by atrial septal defect (ASD, 8%), while Fallot tetralogy and single ventricle were the least common (0.45% each). CHD was found in 30% of consanguineous parents. Maternal age ≥ 35 years at the time of conception was associated with an increased incidence of PDA (p = 0.45). Maternal diabetes and insulin intake were significantly associated with cases of CHD (p = 0.02 and 0.001, respectively); maternal hypertension and hypothyroidism were both associated with VSD, but the differences did not reach statistical significance (p = 0.36 and 0.44, respectively). Maternal passive smoking was significantly associated with PDA (p = 0.03). Conclusion: The most frequent CHD in the studied population was PDA, followed by ASD. Maternal conditions such as diabetes were associated with VSD occurrence.

Keywords: NICU, risk factors, congenital heart disease, echocardiography

Procedia PDF Downloads 186
2075 An Exploratory Study into the Information-Seeking Behaviour of Egyptian Beggars

Authors: Essam Mansour

Abstract:

The key purpose of this study is to provide first-hand information about beggars in Egypt, especially from the perspective of their information-seeking behaviour, including their information needs. The researcher investigates the information-seeking behaviour of Egyptian beggars with regard to their thoughts, perceptions, motivations, attitudes, habits and preferences, as well as the challenges that may impede their use of information. The research method used was an adapted form of snowball sampling of a heterogeneous demographic group of participants in beggary in Egypt. This sampling was used to select focus groups to explore a range of relevant issues. Data on the demographic characteristics of Egyptian beggars showed that they tend to be men, mostly with no formal education, with an average age in the thirties, labeled as low-income persons, mostly single and mostly Muslim. A large number of Egyptian beggars were seeking information to meet their basic as well as daily needs, although some of them were not able to identify their information needs clearly. The information-seeking behaviour profile of a very large number of Egyptian beggars indicated a preference for informal sources of information over formal ones to solve different problems and meet the challenges they face during beggary, depending on assistive devices such as mobile phones. The high degree of illiteracy and the lack of awareness of basic information rights and information needs were the most important problems Egyptian beggars face in accessing information. The study recommended further research into the role of the library in the education of beggars. It also recommended that beggars' awareness of their information rights be promoted through educational programs that help them value the role of information in their lives.

Keywords: user studies, information-seeking behaviour, information needs, information sources, beggars, Egypt

Procedia PDF Downloads 314
2074 Supplier Risk Management: A Multivariate Statistical Modelling and Portfolio Optimization Based Approach for Supplier Delivery Performance Development

Authors: Jiahui Yang, John Quigley, Lesley Walls

Abstract:

In this paper, the authors develop a stochastic model regarding the investment in supplier delivery performance development from a buyer’s perspective. The authors propose a multivariate model through a Multinomial-Dirichlet distribution within an Empirical Bayesian inference framework, representing both the epistemic and aleatory uncertainties in deliveries. A closed form solution is obtained and the lower and upper bound for both optimal investment level and expected profit under uncertainty are derived. The theoretical properties provide decision makers with useful insights regarding supplier delivery performance improvement problems where multiple delivery statuses are involved. The authors also extend the model from a single supplier investment into a supplier portfolio, using a Lagrangian method to obtain a theoretical expression for an optimal investment level and overall expected profit. The model enables a buyer to know how the marginal expected profit/investment level of each supplier changes with respect to the budget and which supplier should be invested in when additional budget is available. An application of this model is illustrated in a simulation study. Overall, the main contribution of this study is to provide an optimal investment decision making framework for supplier development, taking into account multiple delivery statuses as well as multiple projects.
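The conjugate update at the heart of such a Multinomial-Dirichlet model can be sketched in a few lines. This is a minimal sketch of the inference step only (no investment optimization); the three delivery statuses and the counts in the usage note are illustrative assumptions, not the paper's data.

```python
import numpy as np

def posterior_mean(prior, counts):
    # Multinomial-Dirichlet conjugacy: with a Dirichlet(prior) belief over
    # the delivery-status probabilities and observed multinomial counts,
    # the posterior is Dirichlet(prior + counts); return its mean.
    alpha = np.asarray(prior, dtype=float) + np.asarray(counts, dtype=float)
    return alpha / alpha.sum()
```

For example, with a uniform prior over three hypothetical statuses (on-time, late, failed) and observed counts of 8, 1, 1 deliveries, the posterior mean delivery-status probabilities are 9/13, 2/13, 2/13; the buyer's expected profit calculations in the paper build on exactly this kind of posterior.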

Keywords: decision making, empirical Bayesian, portfolio optimization, supplier development, supply chain management

Procedia PDF Downloads 284
2073 Comparison of the Polyphenolic Profile of a Berry from Two Different Sources, Using an Optimized Extraction Method

Authors: G. Torabian, A. Fathi, P. Valtchev, F. Dehghani

Abstract:

The superior polyphenol content of Sambucus nigra berries gives them high health potential for the production of nutraceutical products. Numerous factors influence the polyphenol content of the final products, including the berries' source and the subsequent processing steps. The aim of this study is to compare the polyphenol content of berries from two different sources and to optimise the polyphenol extraction process from elderberries. Berries from source B showed more acceptable physical properties than those from source A; a single berry from source B was double the size and weight (both wet and dry weight) of a source A berry. Despite the favourable physical characteristics of source B berries, their polyphenolic profile was inferior: source A berries had a 2.3-fold higher total anthocyanin content and nearly twice the total phenolic content and total flavonoid content of source B. Moreover, the results of this study showed that almost 50 percent of the phenolic content of the berries is entrapped within their skin and pulp and potentially cannot be extracted by press juicing. To address this challenge and increase the total polyphenol yield of the extract, we used a cold-shock blade grinding method to break the cell walls. The results showed that using cultivars with higher phenolic content, as well as using the whole fruit including juice, skin and pulp, can increase the polyphenol yield significantly and thus may boost the potential of using elderberries in therapeutic products.

Keywords: different sources, elderberry, grinding, juicing, polyphenols

Procedia PDF Downloads 290
2072 A Methodology for Automatic Diversification of Document Categories

Authors: Dasom Kim, Chen Liu, Myungsu Lim, Su-Hyeon Jeon, ByeoungKug Jeon, Kee-Young Kwahk, Namgyu Kim

Abstract:

Recently, numerous documents, including unstructured data and text, have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, categorization was performed manually. However, manual categorization not only cannot guarantee accuracy but also requires a large amount of time and huge costs. Many studies have been conducted on the automatic creation of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics, because they assume that one document can be categorized into only one category. To overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, they are also limited in that their learning process involves training on a multi-categorized document set. These methods therefore cannot be applied to the multi-categorization of most documents unless multi-categorized training sets are provided. To remove the requirement of a multi-categorized training set imposed by traditional multi-categorization algorithms, we previously proposed a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. In this paper, we design a survey-based verification scenario for estimating the accuracy of our automatic categorization methodology.
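One simple way to extend a single category to multiple categories via topic relationships, in the spirit of the methodology described above, is to assign every category whose topic profile is sufficiently similar to the document's topic distribution. This is a hedged sketch under assumed inputs, not the authors' actual algorithm: the topic vectors, category profiles, and threshold are hypothetical.

```python
import numpy as np

def extend_categories(doc_topics, category_topics, threshold=0.7):
    # Assign every category whose topic profile has cosine similarity
    # with the document's topic distribution at or above the threshold.
    d = doc_topics / np.linalg.norm(doc_topics)
    labels = []
    for name, vec in category_topics.items():
        v = vec / np.linalg.norm(vec)
        if float(d @ v) >= threshold:
            labels.append(name)
    return labels
```

A document dominated by one topic keeps a single label, while a genuinely mixed-topic document picks up additional labels as the threshold is lowered.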

Keywords: big data analysis, document classification, multi-category, text mining, topic analysis

Procedia PDF Downloads 269
2071 The Use of Corpora in Improving Modal Verb Treatment in English as Foreign Language Textbooks

Authors: Lexi Li, Vanessa H. K. Pang

Abstract:

This study aims to demonstrate how native and learner corpora can be used to enhance the treatment of modal verbs in EFL textbooks in mainland China. It contributes to a corpus-informed and learner-centered design of grammar presentation in EFL textbooks that enhances the authenticity and appropriateness of textbook language for target learners. The linguistic focus is will, would, can, could, may, might, shall, should, and must. The native corpus is the spoken component of BNC2014 (hereafter BNCS2014). The spoken part is chosen because the pedagogical purpose of the textbooks is communication-oriented. Using the standard query option of CQPweb, 5% of each of the nine modals was sampled from BNCS2014. The learner corpus is the POS-tagged Ten-thousand English Compositions of Chinese Learners (TECCL). All the essays under the 'secondary school' section were selected. A series of five secondary coursebooks comprises the textbook corpus. All the data in both the learner and the textbook corpora were retrieved through the concordance functions of WordSmith Tools (version 5.0). Data analysis was divided into two parts. The first part compared the patterns of modal verbs in the textbook corpus and BNCS2014 with respect to distributional features, semantic functions, and co-occurring constructions to examine whether the textbooks reflect the authentic use of English. Secondly, the learner corpus was analyzed in terms of the use (distributional features, semantic functions, and co-occurring constructions) and the misuse (syntactic errors, e.g., she can sings*.) of the nine modal verbs to uncover potential difficulties that confront learners. The analysis of distribution indicates several discrepancies between the textbook corpus and BNCS2014. The four most frequent modal verbs in BNCS2014 are can, would, will, and could, while can, will, should, and could are the top four in the textbooks. Most strikingly, there is an unusually high proportion of can (41.1%) in the textbooks.
The results on the different meanings show that will, would and must are the most problematic. For example, for will, the textbooks contain 20% more occurrences of 'volition' and 20% fewer of 'prediction' than BNCS2014. Regarding co-occurring structures, the textbooks over-represent the structure 'modal + do' across the nine modal verbs. Another major finding is that the structure 'modal + have done', which frequently co-occurs with could, would, should, and must, is underused in the textbooks. Moreover, these four modal verbs are the most difficult for learners, as the error analysis shows. This study demonstrates how the synergy of native and learner corpora can be harnessed to improve the EFL textbook presentation of modal verbs so that textbooks provide not only authentic language used in natural discourse but also an appropriate design tailored to the needs of target learners.
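A distributional comparison like the one above can be reproduced in a few lines once the corpus texts are in hand. This is a minimal sketch: the tiny sample sentence in the usage note is illustrative, and a real study would read whole corpus files and use proper POS-tagged tokens rather than a crude regex tokenizer.

```python
import re
from collections import Counter

# The nine modal verbs under study
MODALS = ["will", "would", "can", "could", "may", "might", "shall", "should", "must"]

def modal_distribution(text):
    # Tokenize crudely, keep only the nine modal verbs,
    # and return each modal's share of all modal tokens.
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t in MODALS)
    total = sum(counts.values())
    return {m: counts[m] / total for m in MODALS} if total else {}
```

Running this over the textbook corpus and over BNCS2014 separately, and comparing the two dictionaries, yields exactly the kind of proportion discrepancy (e.g. the 41.1% share of can) that the study reports.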

Keywords: English as Foreign Language, EFL textbooks, learner corpus, modal verbs, native corpus

Procedia PDF Downloads 138
2070 Observation of the Flow Behavior for a Rising Droplet in a Mini-Slot

Authors: H. Soltani, J. Hadfield, M. Redmond, D. S. Nobes

Abstract:

The passage of oil droplets through a vertical mini-slot was investigated in this study. An oil-in-water emulsion can undergo coalescence of finer oil droplets, forming droplets of a size that needs to be considered individually. This occurs in a number of industrial processes and has important consequences at a scale where both body and surface forces are relevant. In the study, droplets of two diameters were generated: one smaller than the slot width and a relatively larger one for which the oil droplet can interact directly with the slot wall. To monitor fluid motion, a particle shadow velocimetry (PSV) imaging technique was used to study the fluid flow inside and around a single oil droplet rising in a net co-flow. The droplet was transparent canola oil, and the surrounding working fluid was glycerol, adjusted to match the refractive index of the two fluids. Particles seeded in both fluids were observed with the PSV system, allowing the capture of the velocity field both within the droplet and in the surroundings. The effect of droplet size on the droplet's internal circulation was observed. Part of the study related to the potential generation of flow structures, such as the von Karman vortex shedding already observed in droplets rising in infinite reservoirs, and their interaction with the mini-channel. Results show that two counter-rotating vortices exist inside the droplets as they pass through the slot. The vorticity map analysis shows that the droplet of relatively larger size has a stronger internal circulation.
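The vorticity-map step mentioned above, computing the out-of-plane vorticity from a planar PSV velocity field, can be sketched with finite differences. This is a generic sketch, not the authors' processing code; the grid and the rigid-body test field in the usage note are illustrative assumptions.

```python
import numpy as np

def vorticity(u, v, dx, dy):
    # Out-of-plane vorticity w_z = dv/dx - du/dy on a uniform grid,
    # using central differences (np.gradient); arrays are indexed [y, x].
    dv_dx = np.gradient(v, dx, axis=1)
    du_dy = np.gradient(u, dy, axis=0)
    return dv_dx - du_dy
```

As a sanity check, a rigid-body rotation field u = -y, v = x has uniform vorticity 2, so counter-rotating vortices inside a droplet show up as regions of opposite sign in such a map.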

Keywords: rising droplet, rectangular orifice, particle shadow velocimetry, match refractive index

Procedia PDF Downloads 168
2069 High-Dimensional Single-Cell Imaging Maps Inflammatory Cell Types in Pulmonary Arterial Hypertension

Authors: Selena Ferrian, Erin Mccaffrey, Toshie Saito, Aiqin Cao, Noah Greenwald, Mark Robert Nicolls, Trevor Bruce, Roham T. Zamanian, Patricia Del Rosario, Marlene Rabinovitch, Michael Angelo

Abstract:

Recent experimental and clinical observations are advancing immunotherapies to clinical trials in pulmonary arterial hypertension (PAH). However, comprehensive mapping of the immune landscape in pulmonary arteries (PAs) is necessary to understand how immune cell subsets interact to induce pulmonary vascular pathology. We used multiplexed ion beam imaging by time-of-flight (MIBI-TOF) to interrogate the immune landscape in PAs from idiopathic (IPAH) and hereditary (HPAH) PAH patients. Massive immune infiltration in I/HPAH was observed with intramural infiltration linked to PA occlusive changes. The spatial context of CD11c+DCs expressing SAMHD1, TIM-3 and IDO-1 within immune-enriched microenvironments and neutrophils were associated with greater immune activation in HPAH. Furthermore, CD11c-DC3s (mo-DC-like cells) within a smooth muscle cell (SMC) enriched microenvironment were linked to vessel score, proliferating SMCs, and inflamed endothelial cells. Experimental data in cultured cells reinforced a causal relationship between neutrophils and mo-DCs in mediating pulmonary arterial SMC proliferation. These findings merit consideration in developing effective immunotherapies for PAH.

Keywords: pulmonary arterial hypertension, vascular remodeling, indoleamine 2-3-dioxygenase 1 (IDO-1), neutrophils, monocyte-derived dendritic cells, BMPR2 mutation, interferon gamma (IFN-γ)

Procedia PDF Downloads 170
2068 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption and GDP for Iran: Time Series Analysis, 1980-2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of carbon dioxide (CO2) emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, and electricity), CO2 emissions and gross domestic product (GDP) for Iran, using time series analysis over the years 1980-2010. To investigate the relationships between the variables, this paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, Johansen's maximum likelihood method for cointegration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables. All the variables in this study show very strong significant effects on GDP in the country in the long term. The long-run equilibrium in the VECM suggests that all the energy consumption variables in this study have significant impacts on GDP in the long term. The consumption of petroleum products and the direct combustion of crude oil and natural gas decreased GDP, while coal and electricity use enhanced GDP between 1980 and 2010 in Iran. In the short term, only electricity use enhances GDP, in line with its long-run effects. All variables in this study, except CO2 emissions, show significant effects on GDP in the long term. The long-run equilibrium in the VECM suggests that the consumption of petroleum products and the direct combustion of crude oil and natural gas have positive impacts on GDP, while the consumption of electricity and coal has adverse impacts on GDP in the long term. 
In the short run, electricity use enhances GDP over the period 1980-2010 in Iran. Overall, the results partly support arguments that there are relationships between energy use and economic output, but the associations can differ by the source of energy in the case of Iran over the period 1980-2010. However, there is no significant relationship between CO2 emissions and GDP, or between CO2 emissions and energy use, in either the short term or the long term.
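The first step of such an analysis, the ADF stationarity test, can be sketched in its simplest form as a regression of Δy_t on y_{t−1}: a strongly negative t-statistic on the lagged level points toward stationarity. This minimal version omits the augmentation lags, trend terms, and Dickey-Fuller critical values a real study would use (e.g. via statsmodels), and the two simulated series are illustrative.

```python
import numpy as np

def adf_stat(y):
    # Regress dy_t on a constant and y_{t-1}; return the t-statistic
    # on the lagged level (no augmentation lags, for clarity).
    dy = np.diff(y)
    ylag = y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

# Illustrative series: a stationary AR(1) and a non-stationary random walk
rng = np.random.default_rng(42)
e = rng.normal(size=500)
stationary = np.empty(500)
stationary[0] = 0.0
for t in range(1, 500):
    stationary[t] = 0.5 * stationary[t - 1] + e[t]
random_walk = np.cumsum(e)
```

The stationary series yields a large negative statistic while the random walk does not, which is the basis for deciding whether the GDP, energy and emissions series need differencing before the Johansen/VECM stage.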

Keywords: CO2 emissions, energy consumption, GDP, Iran, time series analysis

Procedia PDF Downloads 590
2067 Configuration as a Service in Multi-Tenant Enterprise Resource Planning System

Authors: Mona Misfer Alshardan, Djamal Ziani

Abstract:

Enterprise resource planning (ERP) systems are organizations' tickets to the global market. With the implementation of ERP, organizations can manage and coordinate all functions, processes, resources and data from different departments within a single software system. However, many organizations consider the cost of traditional ERP to be expensive and look for affordable alternative solutions within their budget. One of these alternatives is providing ERP over a software as a service (SaaS) model, which could be considered a cost-effective solution compared to a traditional ERP system. A key feature of any SaaS system is the multi-tenancy architecture, where multiple customers (tenants) share the system software. However, different organizations have different requirements, so SaaS developers accommodate each tenant's unique requirements by allowing tenant-level customization or configuration. While customization requires source-code changes and, in most cases, programming experience, the configuration process allows users to change many features within a predefined scope in an easy and controlled manner. The literature provides many techniques for accomplishing the configuration process in different SaaS systems. However, the nature and complexity of SaaS ERP demand more attention to the details of the configuration process, which is only briefly described in previous research. Thus, this research builds on the existing knowledge of configuration in SaaS to define specifically the configuration borders in SaaS ERP and to design a configuration service that considers the different configuration aspects. The proposed architecture ensures the ease of the configuration process by using wizard technology, while privacy and performance are guaranteed by adopting the database isolation technique.
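The key contrast above, configuration within a predefined scope versus free-form customization, can be sketched as a per-tenant settings object that rejects values outside the allowed scope. The keys, allowed values, and class names below are hypothetical illustrations, not the paper's actual service design.

```python
# Hypothetical predefined configuration scope for one SaaS ERP module;
# a configuration wizard would only ever offer these choices.
ALLOWED = {
    "currency": {"USD", "EUR", "SAR"},
    "fiscal_year_start": {"01-01", "04-01", "07-01"},
    "invoice_numbering": {"sequential", "per-branch"},
}

class TenantConfig:
    """Per-tenant settings, changeable only within the predefined scope."""

    def __init__(self, tenant_id):
        self.tenant_id = tenant_id
        self.settings = {}

    def set_option(self, key, value):
        # Configuration (not customization): no source-code change is
        # possible, and out-of-scope values are refused.
        if key not in ALLOWED or value not in ALLOWED[key]:
            raise ValueError(f"{key}={value!r} is outside the predefined scope")
        self.settings[key] = value
```

Each tenant holds its own `TenantConfig` (backed, in the paper's design, by an isolated database), so one tenant's choices never leak into another's.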

Keywords: configuration, software as a service, multi-tenancy, ERP

Procedia PDF Downloads 392
2066 Study of Aqueous Solutions: A Dielectric Spectroscopy Approach

Authors: Kumbharkhane Ashok

Abstract:

Time domain dielectric relaxation spectroscopy (TDRS) probes the interaction of a macroscopic sample with a time-dependent electric field. The resulting complex permittivity spectrum characterizes the amplitude (voltage) and time scale of the charge-density fluctuations within the sample. These fluctuations may arise from the reorientation of the permanent dipole moments of individual molecules or from the rotation of dipolar moieties in flexible molecules such as polymers. The time scale of these fluctuations depends on the sample and its relaxation mechanism; relaxation times range from a few picoseconds in low-viscosity liquids to hours in glasses. The DRS technique therefore covers an extensive range of dynamical processes, with a corresponding frequency range from 10⁻⁴ Hz to 10¹² Hz. This inherent ability to monitor the cooperative motion of a molecular ensemble distinguishes dielectric relaxation from methods like NMR or Raman spectroscopy, which yield information on the motions of individual molecules. An experimental setup for the time domain reflectometry (TDR) technique from 10 MHz to 30 GHz has been developed for aqueous solutions. The technique is simple and covers a wide band of frequencies in a single measurement. Dielectric relaxation spectroscopy is especially sensitive to intermolecular interactions. The complex permittivity spectra of aqueous solutions have been fitted with the Cole-Davidson (CD) model to determine static dielectric constants and relaxation times over the entire concentration range. The heterogeneous molecular interactions in aqueous solutions are discussed through the Kirkwood correlation factor and excess properties.
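Fitting a Cole-Davidson model to a complex permittivity spectrum is, in essence, a nonlinear least-squares problem. The sketch below is a minimal illustration of that idea, not the authors' code; the spectrum values, starting guesses, and bounds are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def cole_davidson(omega, eps_inf, eps_s, tau, beta):
    """Complex permittivity of the Cole-Davidson model."""
    return eps_inf + (eps_s - eps_inf) / (1 + 1j * omega * tau) ** beta

# Synthetic "measured" spectrum over roughly 10 MHz - 30 GHz
# (illustrative values only; not data from the abstract).
omega = 2 * np.pi * np.logspace(7, 10.5, 60)
true_params = (5.0, 78.4, 8.3e-12, 0.95)      # eps_inf, eps_s, tau [s], beta
measured = cole_davidson(omega, *true_params)

def residuals(p):
    # Fit log10(tau) so all parameters live on comparable scales.
    eps_inf, eps_s, log_tau, beta = p
    model = cole_davidson(omega, eps_inf, eps_s, 10.0 ** log_tau, beta)
    diff = model - measured
    # Stack real and imaginary parts into one real residual vector.
    return np.concatenate([diff.real, diff.imag])

fit = least_squares(residuals, x0=(4.0, 70.0, -11.3, 0.8),
                    bounds=([1, 1, -13, 0.1], [10, 100, -10, 1.0]))
eps_inf, eps_s, tau, beta = fit.x[0], fit.x[1], 10.0 ** fit.x[2], fit.x[3]
```

Stacking the real and imaginary parts lets a real-valued optimizer fit the full complex spectrum at once, which is the usual trick for this kind of model.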

Keywords: liquid, aqueous solutions, time domain reflectometry

Procedia PDF Downloads 438
2065 Evaluation of DNA Oxidation and Chemical DNA Damage Using Electrochemiluminescent Enzyme/DNA Microfluidic Array

Authors: Itti Bist, Snehasis Bhakta, Di Jiang, Tia E. Keyes, Aaron Martin, Robert J. Forster, James F. Rusling

Abstract:

DNA damage from metabolites of lipophilic drugs and pollutants, generated by enzymes, represents a major toxicity pathway in humans. These metabolites can react with DNA to form either 8-oxo-7,8-dihydro-2-deoxyguanosine (8-oxodG), the oxidative product of DNA, or covalent DNA adducts, both of which are genotoxic and hence considered important biomarkers for detecting cancer in humans. Detecting reactions of metabolites with DNA is therefore an effective approach for the safety assessment of new chemicals and drugs. Here we describe a novel electrochemiluminescent (ECL) sensor array that can detect DNA oxidation and chemical DNA damage in a single array, providing a more accurate diagnostic tool for genotoxicity screening. DNA and enzymes are assembled layer by layer on a pyrolytic graphite array, housed in a microfluidic device, for sequential detection of the two types of DNA damage. Multiple enzyme reactions are run on test compounds using the array, generating toxic metabolites in situ. These metabolites react with DNA in the films to cause DNA oxidation and chemical DNA damage, which are detected by an ECL-generating osmium compound and a ruthenium polymer, respectively. The method is further validated by forming 8-oxodG and DNA adducts using similar DNA/enzyme films on magnetic bead biocolloid reactors, hydrolyzing the DNA, and analyzing it by liquid chromatography-mass spectrometry (LC-MS). Hence, this combined DNA/enzyme array/LC-MS approach can efficiently explore metabolic genotoxic pathways for drugs and environmental chemicals.

Keywords: biosensor, electrochemiluminescence, DNA damage, microfluidic array

Procedia PDF Downloads 363
2064 The Effect of Crack Size, Orientation and Number on the Elastic Modulus of a Cracked Body

Authors: Mark T. Hanson, Alan T. Varughese

Abstract:

Osteoporosis is a disease affecting bone quality, which in turn can increase the risk of low-energy fractures. Treatment of osteoporosis with bisphosphonates has the beneficial effect of increasing bone mass, while at the same time it has been linked to the formation of atypical femoral fractures. This has led to increased study of micro-fractures in the bones of patients on bisphosphonate treatment. One of the mechanics-related issues identified in this regard is the loss of stiffness in bones containing one or many micro-fractures. Different theories based on fracture mechanics have been put forth to determine the effect of crack presence on elastic properties such as modulus; however, deterministic validation of these results has not been forthcoming. The present analysis seeks to provide this deterministic evaluation of a fracture's effect on the elastic modulus. In particular, the effect of crack size, crack orientation, and crack number on the elastic modulus is investigated. The finite element method is used to explicitly determine the elastic modulus reduction caused by the presence of cracks in a representative volume element. Single cracks of various lengths and orientations are examined, as well as cases of multiple cracks. Cracks in tension as well as under shear stress are considered. Although the focus is predominantly two-dimensional, some three-dimensional results are also presented. The results show the explicit reduction in modulus caused by the crack size, orientation, and number noted above, and they allow the interpretation of the various theories that currently exist in the literature.

Keywords: cracks, elastic, fracture, modulus

Procedia PDF Downloads 107
2063 Landsat Data from Pre Crop Season to Estimate the Area to Be Planted with Summer Crops

Authors: Valdir Moura, Raniele dos Anjos de Souza, Fernando Gomes de Souza, Jose Vagner da Silva, Jerry Adriani Johann

Abstract:

The estimated area of land to be planted with annual crops and its stratification by municipality are important variables in crop forecasting. In Brazil, this information is currently obtained by the Brazilian Institute of Geography and Statistics (IBGE) and published in the report Assessment of the Agricultural Production. Due to the high cloud cover in the main crop growing season (October to March), it is difficult to acquire good orbital images; one alternative is therefore to work with remote sensing data from dates before the growing season. This work presents the use of multitemporal Landsat data gathered in July and September (before the summer growing season) to estimate the area of land to be planted with summer crops in an area of São Paulo State, Brazil. Geographic information systems (GIS) and digital image processing techniques were applied to the available data. Supervised and unsupervised classifications were used for data in digital number and reflectance formats and for the multitemporal Normalized Difference Vegetation Index (NDVI) images. The objective was to discriminate the tracts most likely to be planted with summer crops. Classification accuracies were evaluated using a sampling system developed specifically for this study region, and the estimated areas were corrected using the error matrix derived from these evaluations. The classification techniques reached an excellent level according to the kappa index. The proportion of crops stratified by municipality was derived from field work during the growing season; these proportion coefficients were applied to the area of land to be planted with summer crops (derived from Landsat data), making it possible to derive the area of each summer crop by municipality. The discrepancies between official statistics and our results were attributed to the sampling and stratification procedures. Nevertheless, this methodology can be improved to provide good crop area estimates from remote sensing data despite the cloud cover during the growing season.
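The NDVI images and the kappa-index accuracy assessment mentioned above follow standard formulas. A minimal sketch of both, shown only as an illustration (not the authors' processing chain; the confusion-matrix values are hypothetical):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

def kappa(confusion):
    """Cohen's kappa index of agreement from a square confusion matrix."""
    cm = np.asarray(confusion, dtype=float)
    total = cm.sum()
    observed = np.trace(cm) / total                              # overall accuracy
    expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total**2  # chance agreement
    return (observed - expected) / (1 - expected)
```

The kappa index compares observed agreement against chance agreement from the row and column totals, which is why it is preferred over raw accuracy for classification assessment.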

Keywords: area intended for summer culture, estimated area planted, agriculture, Landsat, planting schedule

Procedia PDF Downloads 149
2062 Tomato-Weed Classification by RetinaNet One-Step Neural Network

Authors: Dionisio Andujar, Juan López-Correa, Hugo Moreno, Angela Ri

Abstract:

The increased number of weeds in tomato crops greatly lowers yields. Weed identification by means of machine learning is important for carrying out site-specific control, and the latest advances in computer vision are a powerful tool for facing the problem. The analysis of RGB (red, green, blue) images through artificial neural networks has developed rapidly in recent years, providing new methods for weed classification. The development of algorithms for crop and weed species classification aims at a real-time classification system using object detection algorithms based on convolutional neural networks. The study site was located in commercial fields. The classification system has been tested; the procedure can detect and classify weed seedlings in tomato fields. The input to the neural network was a set of 10,000 RGB images with a natural infestation of Cyperus rotundus L., Echinochloa crus-galli L., Setaria italica L., Portulaca oleracea L., and Solanum nigrum L. The validation process was done with a random selection of RGB images containing the aforementioned species. The mean average precision (mAP) was established as the metric for object detection, and the results showed agreements higher than 95%. The system will provide the input for an online spraying system; this work thus plays an important role in site-specific weed management by reducing herbicide use in a single step.
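The mAP metric used above scores a detection as correct when its intersection over union (IoU) with a ground-truth box exceeds a threshold. A minimal IoU helper, shown only to illustrate the metric's basic building block (boxes and coordinates are hypothetical):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

At a common threshold of IoU >= 0.5, a predicted box counts as a true positive for the matched class; precision-recall curves per class are then averaged into mAP.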

Keywords: deep learning, object detection, cnn, tomato, weeds

Procedia PDF Downloads 101
2061 Time Parameter Based for the Detection of Catastrophic Faults in Analog Circuits

Authors: Arabi Abderrazak, Bourouba Nacerdine, Ayad Mouloud, Belaout Abdeslam

Abstract:

In this paper, a new time-mode test technique for analog circuits is proposed for the detection of single catastrophic faults. The test is performed to overcome the problem of catastrophic faults escaping a DC-mode test applied to the inverter amplifier in previous research. The circuit under test is a second-order low-pass filter built around this type of amplifier but performing a function that differs from that of the previous test. The approach is based on two key elements. The first is a single square pulse selected as the input test vector to stimulate the fault effect at the circuit output response. The second is the conversion of the filter response into a sequence of square pulses by an analog comparator, achieved through a fixed reference threshold voltage. The measured durations of the first three response pulses serve as the fault detection parameter on one hand, and as a fault signature helping to fully establish an analog circuit fault diagnosis on the other. The results obtained so far are very promising, since the approach has lifted the fault coverage ratio in both modes to over 90% and has revealed the harmful side of faults that was masked in a DC-mode test.
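The comparator-based conversion and pulse-duration measurement can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the threshold, sampling grid, and test signal are hypothetical.

```python
import numpy as np

def pulse_durations(signal, t, threshold, n=3):
    """Durations of the first n 'high' pulses after comparing signal to a threshold,
    mimicking an analog comparator followed by pulse-width measurement."""
    high = signal > threshold
    edges = np.diff(high.astype(int))
    rises = np.where(edges == 1)[0] + 1    # indices where the output goes high
    falls = np.where(edges == -1)[0] + 1   # indices where it goes low again
    durations = []
    for r in rises:
        after = falls[falls > r]
        if after.size == 0:
            break
        durations.append(t[after[0]] - t[r])
        if len(durations) == n:
            break
    return durations
```

A catastrophic fault that shifts the filter response changes these three durations, so the triple acts as a compact fault signature to compare against the fault-free reference.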

Keywords: analog circuits, analog faults diagnosis, catastrophic faults, fault detection

Procedia PDF Downloads 437
2060 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors

Authors: Jakob Krause

Abstract:

The past financial crisis has shown that contemporary risk management models provide an unjustified sense of security and fail miserably in the situations in which they are needed most. In this paper, we start from the assumption that risk is a notion that changes over time, so past data points have only limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by balancing two opposing forces: estimator convergence, which incentivizes us to use as much data as possible, and the aforementioned non-representativeness, which does the opposite. In this endeavor, the cornerstone assumption of access to identically distributed random variables is weakened and replaced by the assumption that the law of the data-generating process changes over time. We thus give a quantitative theory of statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the latest iteration of proposals by the Basel Committee on Banking Regulation. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe; in the formal description of physical systems, the level of assumptions can be much higher. It follows that every concept carried over from the natural sciences to economics must be checked for plausibility in its new surroundings. Most of probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate; however, only independence has so far been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness. Subsequently, the data set is identified that on average minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far-reaching in a variety of fields. In the paper itself, we apply the results to analyze a paragraph in the Basel III framework on banking regulation with severe implications for financial stability. Beyond finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and modeling limited understanding and learning behavior in economics.

Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling

Procedia PDF Downloads 140
2059 Safety and Efficacy of Laparoscopic D2 Gastrectomy for Advanced Gastric Cancers: Single Unit Experience

Authors: S. M. P Manjula, Ishara Amarathunga, Aryan Nath Koura, Jaideepraj Rao

Abstract:

Background: Laparoscopic D2 gastrectomy for non-metastatic advanced gastric cancer (AGC) has become a controversial topic, with conflicting views from experts in the field. The lack of consensus mainly concerns the feasibility of the dissection and its safety and efficacy. Method: Data from all D2 gastrectomies performed in our unit (both subtotal and total) from December 2009 to December 2013 were retrospectively analysed from a prospectively maintained computer database. Patients with pathological stage IIA and above were considered advanced gastric cancers; those who underwent curative-intent D2 gastrectomy were included for analysis (n=46). Four patients were excluded because peritoneal fluid cytology was positive for cancer cells, and one patient was excluded because the microscopic resection margin was positive (R1) after curative resection. Thirty-day morbidity and mortality, operative time, lymph node harvest, and survival (disease-free and overall) were analyzed. Results: Complete curative resection was achieved in 40 patients. The mean age of the study population was 62.2 years (range 32-88), and the male-to-female ratio was 23:17. Thirty-day mortality was 1/40 and morbidity 6/40. Average operative time was 203.7 minutes (185-400), and the average lymph node harvest was 40.5 (18-91). Disease-free survival of AGC in this study population was 16.75 months (1-49). Average hospital stay was 6.8 days (3-31). Conclusion: Laparoscopic dissection is effective, feasible, and safe in AGC.

Keywords: laparoscopy, advanced gastric cancer, safety, efficacy

Procedia PDF Downloads 332
2058 Population Structure Analysis of Pakistani Indigenous Cattle Population by Using High Density SNP Array

Authors: Hamid Mustafa, Huson J. Heather, Kim Eiusoo, McClure Matt, Khalid Javed, Talat Nasser Pasha, Afzal Ali, Adeela Ajmal, Tad Sonstegard

Abstract:

Genetic differences associated with speciation, breed formation, or local adaptation can help preserve animals and utilize them effectively in selection programs. Analyses of population structure and breed diversity have provided insight into the origin and evolution of cattle. In this study, we used a high-density panel of SNP markers to examine population structure and diversity among Pakistani indigenous cattle breeds. In total, 25 individuals from three cattle populations, Achi (n=8), Bhagnari (n=4), and Cholistani (n=13), were genotyped for 777,962 single nucleotide polymorphism (SNP) markers. Population structure was examined using the linkage model in the program STRUCTURE. After characterizing SNP polymorphism in the different populations, we performed a detailed analysis of genetic structure at both the individual and population levels. The whole-genome SNP panel identified several levels of population substructure in the set of examined cattle breeds. We further searched for spatial patterns of genetic diversity among these breeds under the recently developed spatial principal component analysis framework. Overall, such high-throughput genotyping data confirmed a clear partitioning of the cattle genetic diversity into distinct breeds. The complex historical origins associated with both natural and artificial selection have led to the differentiation of numerous cattle breeds displaying a broad phenotypic variety over a short period of time.

Keywords: Pakistan, cattle, genetic diversity, population structure

Procedia PDF Downloads 613
2057 Optimization the Conditions of Electrophoretic Deposition Fabrication of Graphene-Based Electrode to Consider Applications in Electro-Optical Sensors

Authors: Sepehr Lajevardi Esfahani, Shohre Rouhani, Zahra Ranjbar

Abstract:

Graphene has gained much attention owing to its unique optical and electrical properties. Charge carriers in graphene sheets (GS) obey a linear dispersion relation near the Fermi energy and behave as massless Dirac fermions, resulting in unusual attributes such as the quantum Hall effect and the ambipolar electric field effect. Graphene also exhibits non-dispersive transport characteristics with an extremely high electron mobility (15,000 cm²/(V·s)) at room temperature. Recently, several advances have been achieved in the fabrication of single- or multilayer GS for functional device applications in optoelectronics, such as field-effect transistors, ultrasensitive sensors, and organic photovoltaic cells. In addition to device applications, graphene can also serve as a reinforcement to enhance the mechanical, thermal, or electrical properties of composite materials. Electrophoretic deposition (EPD) is an attractive method for developing various coatings and films; it is readily applied to any powdered solid that forms a stable suspension, and the deposition parameters can be controlled to obtain various thicknesses. In this study, the graphene electrodeposition conditions were optimized. Results were obtained from SEM, sheet resistance measurement, and AFM characterization. The minimum sheet resistance of the electrodeposited reduced graphene oxide layers is achieved at 2 V for 10 s, followed by annealing at 200 °C for 1 minute.

Keywords: electrophoretic deposition (EPD), graphene oxide (GO), electrical conductivity, electro-optical devices

Procedia PDF Downloads 182
2056 Prevalence and Associated Factors of Attention Deficit Hyperactivity Disorder among Children Age 6 to 17 Years Old Living in Girja District, Oromia Regional State, Rural Ethiopia: Community Based Cross-Sectional Study

Authors: Hirbaye Mokona, Abebaw Gebeyehu, Aemro Zerihun

Abstract:

Introduction: Attention deficit hyperactivity disorder (ADHD) is a serious public health problem affecting millions of children throughout the world. Method: A cross-sectional study was conducted from May to June 2015 among children aged 6 to 17 years living in the rural area of Girja district. A multi-stage cluster sampling technique was used to select 1,302 study participants, and the Disruptive Behavior Disorder rating scale was used to collect the data. Data were coded, entered, and cleaned with Epi-Data version 3.1 and analyzed with SPSS version 20. Logistic regression analysis was used, and variables with P-values less than 0.05 in the multivariable logistic regression were considered statistically significant. Results: The prevalence of ADHD among children aged 6 to 17 years was 7.3%. Children who were male [AOR=1.81, 95%CI: (1.13, 2.91)], living with a single parent [AOR=5.0, 95%CI: (2.35, 10.65)], of a particular birth order/rank [AOR=2.35, 95%CI: (1.30, 4.25)], of low family socio-economic status [AOR=2.43, 95%CI: (1.29, 4.59)], whose mothers used alcohol/khat during pregnancy [AOR=3.14, 95%CI: (1.37, 7.37)], or who experienced complications at delivery [AOR=3.56, 95%CI: (1.19, 10.64)] were more likely to develop ADHD. Conclusion: In this study, the prevalence of ADHD was similar to the worldwide prevalence. Prevention and early management of its modifiable risk factors should be carried out alongside increasing community awareness.
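The adjusted odds ratios above come from multivariable logistic regression. For intuition, a crude (unadjusted) odds ratio with its 95% confidence interval can be computed from a 2x2 table; the sketch below is illustrative only, and the counts are hypothetical, not from this study.

```python
import math

def odds_ratio(a, b, c, d):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) by the Woolf method.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)
```

An adjusted OR (AOR) differs in that it is exp(coefficient) from a logistic model that controls for the other covariates, rather than a raw 2x2 ratio.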

Keywords: attention deficit hyperactivity disorder, ADHD, associated factors, children, prevalence

Procedia PDF Downloads 182
2055 Development of an Instrument for Measurement of Thermal Conductivity and Thermal Diffusivity of Tropical Fruit Juice

Authors: T. Ewetumo, K. D. Adedayo, Festus Ben

Abstract:

Knowledge of the thermal properties of foods is of fundamental importance in the food industry for the design of processing equipment. However, for tropical fruit juice there is very little information in the literature, seriously hampering processing procedures. This work describes the development of an instrument for automated measurement of the thermal conductivity and thermal diffusivity of tropical fruit juice using a transient thermal probe technique based on the line heat source principle. The system consists of two thermocouple sensors, a constant current source, a heater, a thermocouple amplifier, a microcontroller, a microSD card shield, and an intelligent liquid crystal display. A fixed distance of 6.50 mm was maintained between the two probes. When heat is applied, the temperature rise at the heater probe is measured at 4 s intervals for 240 s. The measuring element conforms as closely as possible to an infinite line source of heat in an infinite fluid. Under these conditions, thermal conductivity and thermal diffusivity are measured simultaneously: thermal conductivity is determined from the slope of a plot of the temperature rise of the heating element against the logarithm of time, while thermal diffusivity is determined from the time the sample takes to attain its peak temperature over a fixed diffusion distance. A constant current source was designed to apply a power input of 16.33 W/m to the probe throughout the experiment. The thermal probe was interfaced with a digital display and data logger using an application program written in C++. The instrument was calibrated by determining the thermal properties of distilled water; error due to convection was avoided by adding 1.5% agar to the water. The instrument has been used to measure the thermal properties of banana, orange, and watermelon. Thermal conductivity values of 0.593, 0.598, and 0.586 W/(m·°C) and thermal diffusivity values of 1.053 × 10⁻⁷, 1.086 × 10⁻⁷, and 0.959 × 10⁻⁷ m²/s were obtained for banana, orange, and watermelon, respectively. Measured values were stored on a microSD card. The instrument performed very well, with statistical analysis (ANOVA) showing no significant difference (p>0.05) between literature standards and the averages estimated with the developed instrument for each sample.
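The line-heat-source relation underlying the conductivity measurement is k = q / (4π · slope), where the slope is taken from a linear fit of temperature rise against ln(time). A minimal sketch under ideal assumptions, using synthetic data rather than the instrument's readings:

```python
import numpy as np

Q = 16.33  # line power input in W/m, as stated in the abstract

def thermal_conductivity(t, delta_t, q=Q):
    """Line-heat-source estimate: k = q / (4*pi*slope), where slope is
    the least-squares slope of temperature rise versus ln(time)."""
    slope, _ = np.polyfit(np.log(t), delta_t, 1)
    return q / (4.0 * np.pi * slope)

# Synthetic check: an ideal line-source response with k = 0.59 W/(m.degC),
# sampled every 4 s for 240 s as in the abstract.
k_true = 0.59
t = np.arange(4.0, 244.0, 4.0)
dT = Q / (4.0 * np.pi * k_true) * np.log(t) + 1.2  # constant offset is harmless
```

The constant offset in the ideal response absorbs the probe and contact terms; only the slope against ln(t) matters, which is why the method is robust to them.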

Keywords: thermal conductivity, thermal diffusivity, tropical fruit juice, diffusion equation

Procedia PDF Downloads 352
2054 Surface Modification of Titanium Alloy with Laser Treatment

Authors: Nassier A. Nassir, Robert Birch, D. Rico Sierra, S. P. Edwardson, G. Dearden, Zhongwei Guan

Abstract:

The effect of laser surface treatment parameters on the residual strength of a titanium alloy has been investigated. The influence of the laser surface treatment on the bonding strength between the titanium and poly-ether-ketone-ketone (PEKK) surfaces was also evaluated and compared to that of titanium foils without surface treatment, in order to optimize the laser parameters. Material characterization using an optical microscope was carried out to study the microstructure and to measure the mean roughness of the titanium surface. The results showed that surface roughness depends significantly on the laser power, increasing with increasing power. Moreover, the tensile tests showed no significant drop in tensile strength for the treated samples compared to the virgin ones. To optimize the laser parameters and the corresponding surface roughness, single-lap shear tests were conducted on pairs of laser-treated titanium strips. The results showed that the bonding shear strength between the titanium alloy and the PEKK film increased with surface roughness up to a specific limit; beyond this point, interestingly, the laser parameters had no significant effect on the bonding strength. This evidence suggests that very high laser power is not necessary to achieve good bonding strength between the titanium alloy and the PEKK film.

Keywords: bonding strength, laser surface treatment, PEKK, poly-ether-ketone-ketone, titanium alloy

Procedia PDF Downloads 335
2053 Happiness and Its Political Consequences: A Proposal for a Socially Constructed Object

Authors: Luciano E. Sewaybricker

Abstract:

Psychology has faced many challenges in claiming its right to study happiness. Probably the major issue has been presenting a clear definition of happiness, which has a long history outside the scientific field and has been used imprecisely in daily life. Even after years of great improvement, different meanings of happiness are still seen in academic studies. This scenario raises the question of whether any definition is consistent enough to sustain the recent findings on the psychological processes behind happiness. Moreover, does it make sense to seek a single definition of happiness? By investigating the history of happiness and the theoretical foundations of Positive Psychology, it can be argued that it is proper for happiness to be polysemic. Since Ancient Greece, most attempts to outline happiness have consisted of an appreciation of the "best way to live" and consequently require a delineation of the most important things in life. Beyond this generic definition, it is hard to find consensus about happiness. In fact, what is considered important to happiness, and how much, depends on social influence. This compels happiness to vary between groups, between historical periods, and even for the same person over time. Therefore, the same psychological processes will not necessarily lie behind all forms of happiness. Consequently, three assumptions should be considered when studying happiness: it is intrinsic to happiness to be transitory and socially influenced; happiness refers not only to what is possible in the present but also to an ideal future; and when someone (including a scientist) talks about happiness, they both describe and prescribe a better way to live. Because any attempt to define happiness will be limited in space and time, it is more suitable to study its variations than its universalities. This may have considerable consequences for political agendas on happiness evaluation and maximization, such as Gross National Happiness and utilitarian initiatives. Happiness policies should be understood as an arbitrary choice amongst all kinds of happiness and as prescriptive of what "the best way to live" should be.

Keywords: happiness, politics, positive psychology, well-being

Procedia PDF Downloads 255