Search results for: complex programming case study
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 54138

528 Quantitative Comparisons of Different Approaches for Rotor Identification

Authors: Elizabeth M. Annoni, Elena G. Tolkacheva

Abstract:

Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia that is a known prognostic marker for stroke, heart failure and death. Reentrant mechanisms of rotor formation, which are stable electrical sources of cardiac excitation, are believed to cause AF. No existing commercial mapping systems have been demonstrated to consistently and accurately predict rotor locations outside of the pulmonary veins in patients with persistent AF. There is a clear need for robust spatio-temporal techniques that can consistently identify rotors using unique characteristics of the electrical recordings at the pivot point that can be applied to clinical intracardiac mapping. Recently, we have developed four new signal analysis approaches – Shannon entropy (SE), Kurtosis (Kt), multi-scale frequency (MSF), and multi-scale entropy (MSE) – to identify the pivot points of rotors. These proposed techniques utilize different cardiac signal characteristics (other than local activation) to uncover the intrinsic complexity of the electrical activity in the rotors, which are not taken into account in current mapping methods. We validated these techniques using high-resolution optical mapping experiments in which direct visualization and identification of rotors in ex-vivo Langendorff-perfused hearts were possible. Episodes of ventricular tachycardia (VT) were induced using burst pacing, and two examples of rotors were used showing 3-sec episodes of a single stationary rotor and figure-8 reentry with one rotor being stationary and one meandering. Movies were captured at a rate of 600 frames per second for 3 sec. with 64x64 pixel resolution. These optical mapping movies were used to evaluate the performance and robustness of SE, Kt, MSF and MSE techniques with respect to the following clinical limitations: different time of recordings, different spatial resolution, and the presence of meandering rotors. 
To quantitatively compare the results, the SE, Kt, MSF and MSE techniques were compared to the “true” rotor(s) identified using the phase map. Accuracy was calculated for each approach as the duration of the time series and the spatial resolution were reduced. The time series duration was decreased from its original length of 3 sec down to 2, 1, and 0.5 sec. The spatial resolution of the original VT episodes was decreased from 64x64 pixels to 32x32, 16x16, and 8x8 pixels by uniformly removing pixels from the optical mapping video. Our results demonstrate that Kt, MSF and MSE were able to accurately identify the pivot point of the rotor under all three clinical limitations. The MSE approach demonstrated the best overall performance, but Kt was the best in identifying the pivot point of the meandering rotor. Artifacts mildly affected the performance of the Kt, MSF and MSE techniques, but had a strong negative impact on the performance of SE. The results of our study motivate further validation of the SE, Kt, MSF and MSE techniques using intra-atrial electrograms from paroxysmal and persistent AF patients to see if these approaches can identify pivot points in a clinical setting. More accurate rotor localization could significantly increase the efficacy of catheter ablation to treat AF, resulting in a higher success rate for single procedures.
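As a rough illustration of the entropy-based pivot detection described above, the sketch below computes Shannon entropy from an amplitude histogram of a single pixel's time series. The bin count and the toy signal are illustrative assumptions, not the study's actual parameters or data.

```python
import math
from collections import Counter

def shannon_entropy(series, n_bins=8):
    """Shannon entropy (bits) of one pixel's optical signal, computed
    from a histogram of its amplitude values; more disordered activity
    spreads amplitudes over more bins and raises the entropy."""
    lo, hi = min(series), max(series)
    if hi == lo:
        return 0.0  # flat signal carries no information
    width = (hi - lo) / n_bins
    bins = Counter(min(int((v - lo) / width), n_bins - 1) for v in series)
    n = len(series)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

# Toy pixel: 3 sec at 600 frames/sec, as in the optical mapping movies.
signal = [math.sin(2 * math.pi * 8 * t / 600) for t in range(1800)]
se = shannon_entropy(signal)  # bounded by log2(n_bins) = 3 bits
```

In the full method an entropy value would be computed per pixel of the 64x64 grid, and the spatial map of values compared against the phase-map pivot location.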

Keywords: Atrial Fibrillation, Optical Mapping, Signal Processing, Rotors

Procedia PDF Downloads 301
527 Volatility Index, Fear Sentiment and Cross-Section of Stock Returns: Indian Evidence

Authors: Pratap Chandra Pati, Prabina Rajib, Parama Barai

Abstract:

Traditional finance theory neglects the role of the sentiment factor in asset pricing. However, the behavioral approach to asset pricing, based on the noise trader model and limits to arbitrage, includes investor sentiment as a priced risk factor in the asset pricing model. Investor sentiment has a stronger effect on stocks that are vulnerable to speculation, hard to value and risky to arbitrage: small stocks, high volatility stocks, growth stocks, distressed stocks, young stocks and non-dividend-paying stocks. Since its introduction by the Chicago Board Options Exchange (CBOE) in 1993, the volatility index (VIX) has been used as a measure of expected future volatility in the stock market and also as a measure of investor sentiment. The CBOE VIX index, in particular, is often referred to as the ‘investors’ fear gauge’ by the public media and prior literature. Upward spikes in the volatility index are associated with bouts of market turmoil and uncertainty. High levels of the volatility index indicate fear, anxiety and pessimistic expectations of investors about the stock market; low levels, on the contrary, reflect a confident and optimistic attitude. Based on the above discussion, we investigate whether market-wide fear, as measured by the volatility index, is a priced factor in the standard asset pricing model for the Indian stock market. First, we investigate the performance and validity of the Fama and French three-factor model and the Carhart four-factor model in the Indian stock market. Second, we explore whether the India volatility index, as a proxy for fear-based market sentiment, affects the cross-section of stock returns after controlling for well-established risk factors such as market excess return, size, book-to-market, and momentum. Asset pricing tests are performed using monthly data on CNX 500 index constituent stocks listed on the National Stock Exchange of India Limited (NSE) over a sample period that extends from January 2008 to March 2017.
To examine whether the India volatility index, as an indicator of fear sentiment, is a priced risk factor, changes in India VIX are included as an explanatory variable in the Fama-French three-factor model as well as the Carhart four-factor model. For the empirical testing, we use three different sets of test portfolios as the dependent variable in the asset pricing regressions. The first portfolio set is the 4x4 sorts on size and B/M ratio. The second portfolio set is the 4x4 sorts on size and the sensitivity beta of change in IVIX. The third portfolio set is the 2x3x2 independent triple-sort on size, B/M and the sensitivity beta of change in IVIX. We find evidence that size, value and momentum factors continue to exist in the Indian stock market. However, the VIX index does not constitute a priced risk factor in the cross-section of returns. The inseparability of volatility and jump risk in the VIX is a possible explanation of the current findings in the study.
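To make the regression setup concrete, here is a stdlib-only sketch of the kind of time-series regression described: portfolio excess returns regressed on market, SMB, HML and the change in India VIX. The numbers, the plain normal-equations solver and the column layout are illustrative assumptions, not the paper's data or estimates.

```python
def ols(y, X):
    """Solve b = (X'X)^-1 X'y via normal equations and Gaussian
    elimination with partial pivoting (adequate for a small sketch)."""
    k, n = len(X[0]), len(y)
    A = [[sum(X[t][i] * X[t][j] for t in range(n)) for j in range(k)]
         for i in range(k)]
    c = [sum(X[t][i] * y[t] for t in range(n)) for i in range(k)]
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        c[i], c[p] = c[p], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for j in range(i, k):
                A[r][j] -= f * A[i][j]
            c[r] -= f * c[i]
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

# Columns: intercept, MKT, SMB, HML, dVIX (all toy monthly values).
X = [[1.0, m, s, h, v] for m, s, h, v in
     [(0.012, 0.003, 0.001, -0.051), (-0.021, 0.002, -0.004, 0.083),
      (0.017, -0.001, 0.002, -0.019), (0.004, 0.004, 0.000, 0.012),
      (-0.008, -0.003, 0.001, 0.046), (0.023, 0.001, -0.002, -0.064)]]
y = [0.012, -0.018, 0.014, 0.006, -0.008, 0.019]
betas = ols(y, X)  # betas[4] is the loading on the change in India VIX
```

A non-zero, statistically significant loading on the dVIX column (across the sorted test portfolios) is what a priced fear factor would look like; the study reports that this loading is not significant.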

Keywords: India VIX, Fama-French model, Carhart four-factor model, asset pricing

Procedia PDF Downloads 226
526 Solar Power Forecasting for the Bidding Zones of the Italian Electricity Market with an Analog Ensemble Approach

Authors: Elena Collino, Dario A. Ronzio, Goffredo Decimi, Maurizio Riva

Abstract:

The rapid increase of renewable energy in Italy is led by wind and solar installations. The 2017 Italian energy strategy foresees a further development of these sustainable technologies, especially solar, a fact that has resulted in new opportunities and challenges to deal with. The growth of renewables makes it possible to meet the European requirements regarding energy and environmental policy, but these types of sources are difficult to manage because they are intermittent and non-programmable. Operationally, these characteristics can lead to instability in the voltage profile and increased uncertainty in energy reserve scheduling. The growing renewable production therefore demands ever closer attention, especially from the Transmission System Operator (TSO). The TSO, in fact, once the market outcome has been determined, provides daily orders on energy dispatch over extended areas defined mainly on the basis of power transmission limitations. In Italy, six market zones are defined: Northern Italy, Central-Northern Italy, Central-Southern Italy, Southern Italy, Sardinia, and Sicily. Accurate hourly renewable power forecasting for the day ahead over these extended areas brings an improvement both in terms of dispatching and reserve management. In this study, an operational forecasting tool for the hourly solar output of the six Italian market zones is presented, and its performance is analysed. The implementation is carried out by means of a numerical weather prediction model coupled with a statistical post-processing step that derives the power forecast from the meteorological projection. The weather forecast is obtained from the limited area model RAMS over the Italian territory, initialized with IFS-ECMWF boundary conditions. The post-processing calculates the solar power production with the Analog Ensemble technique (AN).
This statistical approach forecasts the production using a probability distribution of the measured production registered in the past when the weather scenario looked very similar to the forecasted one. The similarity is evaluated for the components of the solar radiation: global (GHI), diffuse (DIF) and direct normal (DNI) irradiation, together with the corresponding azimuth and zenith solar angles. These are, in fact, the main factors that affect solar production. Considering that the AN performance is strictly related to the length and quality of the historical data, a training period of more than one year has been used. The training set is made of historical Numerical Weather Prediction (NWP) forecasts at 12 UTC for the GHI, DIF and DNI variables over the Italian territory, together with the corresponding hourly measured production for each of the six zones. The AN technique makes it possible to estimate the aggregate solar production in an area without information about the technological characteristics of all the solar parks present in it; besides, this information is often only partially available. Every day, the hourly solar power forecast for the six Italian market zones is made publicly available through a website.
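A stripped-down sketch of the Analog Ensemble idea described above: for today's NWP forecast, find the most similar historical forecasts (compared here on GHI, DIF and DNI only) and use the measured production observed on those past days as the predictive distribution. The Euclidean similarity metric, the equal weighting and the numbers are simplifying assumptions, not the operational tool's configuration.

```python
def analog_ensemble(forecast, history, k=3):
    """history: list of (past_forecast, measured_production) pairs.
    Returns a point forecast (median of the analogs' production) plus
    the empirical distribution given by the k nearest analogs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    analogs = sorted(history, key=lambda h: dist(h[0], forecast))[:k]
    productions = sorted(p for _, p in analogs)
    return productions[len(productions) // 2], productions

# Toy training set: (GHI, DIF, DNI) in W/m^2 -> zonal production in MW.
history = [
    ((800.0, 100.0, 700.0), 510.0),  # clear day -> high production
    ((820.0, 110.0, 690.0), 530.0),
    ((790.0, 120.0, 660.0), 495.0),
    ((300.0, 250.0, 60.0), 140.0),   # overcast day -> low production
    ((280.0, 240.0, 50.0), 120.0),
]
point, ensemble = analog_ensemble((810.0, 105.0, 695.0), history)
```

Because the analogs carry real measured production, the method aggregates over a zone without needing per-plant technical data, which is the property the abstract highlights.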

Keywords: analog ensemble, electricity market, PV forecast, solar energy

Procedia PDF Downloads 126
525 Profiling of the Cell-Cycle Related Genes in Response to Efavirenz, a Non-Nucleoside Reverse Transcriptase Inhibitor in Human Lung Cancer

Authors: Rahaba Marima, Clement Penny

Abstract:

Health-related quality of life (HRQoL) for HIV-positive patients has improved since the introduction of highly active antiretroviral treatment (HAART). However, in the present HAART era, HIV co-morbidities such as lung cancer, a non-AIDS (NAIDS) defining cancer, have been documented to be on the rise. Under normal physiological conditions, cells grow, repair and proliferate through the cell-cycle, as cellular homeostasis is important in the maintenance and proper regulation of tissues and organs. By contrast, deregulation of the cell-cycle is a hallmark of cancer, including lung cancer. The association between lung cancer and the use of HAART components such as Efavirenz (EFV) is poorly understood. This study aimed at elucidating the effects of EFV on cell-cycle gene expression in lung cancer. For this purpose, a human cell-cycle gene array composed of 84 genes was evaluated on both normal lung fibroblast (MRC-5) cells and lung adenocarcinoma (A549) cells in response to 13 µM EFV or 0.01% vehicle. A ±2-fold up- or down-regulation was used as the basis of target selection, with p < 0.05. Additionally, RT-qPCR was done to validate the gene array results. Next, the in-silico bioinformatics tools Search Tool for the Retrieval of Interacting Genes/Proteins (STRING), Reactome, the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway and Ingenuity Pathway Analysis (IPA) were used for gene/gene interaction studies as well as to map the molecular and biological pathways influenced by the identified targets. Interestingly, DNA damage response (DDR) pathway genes such as p53, Ataxia telangiectasia mutated and Rad3 related (ATR), Growth arrest and DNA damage inducible alpha (GADD45A), HUS1 checkpoint homolog (HUS1) and radiation (RAD) genes were shown to be upregulated following EFV treatment, as revealed by STRING analysis.
Additionally, functional enrichment analysis by the KEGG pathway revealed that most of the differentially expressed gene targets function at cell-cycle checkpoints, such as p21, Aurora kinase B (AURKB) and Mitotic Arrest Deficient-Like 2 (MAD2L2). Core analysis by IPA revealed that p53 downstream targets such as survivin, Bcl2, and cyclin/cyclin-dependent kinase (CDK) complexes are down-regulated following exposure to EFV. Furthermore, Reactome analysis showed a significant increase in cellular response to stress genes, DNA repair genes, and apoptosis genes, as observed in both normal and cancerous cells. These findings implicate genotoxic effects of EFV on lung cells, provoking the DDR pathway. Notably, constitutive expression of this pathway (DDR) often leads to uncontrolled cell proliferation and eventually tumourigenesis, which may be attributed to the effect of HAART components (such as EFV) on human cancers. Targeting the cell-cycle and its regulation holds promise as a therapeutic intervention against potential HAART-associated carcinogenesis, particularly lung cancer.
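The target-selection rule stated above (fold change of at least ±2 with p < 0.05) can be sketched in a few lines. The gene names and values below are invented for illustration only and are not the study's array results.

```python
def select_targets(results, fold_cutoff=2.0, alpha=0.05):
    """results: {gene: (fold_change, p_value)}. Fold changes below 1
    are treated as down-regulation (e.g. 0.4 means a 2.5-fold drop)."""
    hits = {}
    for gene, (fc, p) in results.items():
        magnitude = fc if fc >= 1 else 1.0 / fc
        if magnitude >= fold_cutoff and p < alpha:
            hits[gene] = "up" if fc >= 1 else "down"
    return hits

# Hypothetical array readout (not the paper's data).
toy = {
    "GADD45A": (3.1, 0.01),  # up-regulated and significant
    "ATR":     (2.4, 0.03),
    "CCNB1":   (0.4, 0.02),  # down-regulated (2.5-fold drop)
    "HUS1":    (2.2, 0.20),  # large change but not significant
    "ACTB":    (1.1, 0.01),  # significant but below the fold cutoff
}
hits = select_targets(toy)
```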

Keywords: cell-cycle, DNA damage response, Efavirenz, lung cancer

Procedia PDF Downloads 120
524 ExactData Smart Tool For Marketing Analysis

Authors: Aleksandra Jonas, Aleksandra Gronowska, Maciej Ścigacz, Szymon Jadczak

Abstract:

Exact Data is a smart tool which helps with the creation of meaningful marketing content. It does this by analyzing the text of an advertisement before and after its publication on social media sites like Facebook or Instagram. In our research, we focus on four areas of natural language processing (NLP): grammar correction, sentiment analysis, irony detection and advertisement interpretation. Our research has identified a considerable lack of NLP tools for the Polish language which specifically aid online marketers. In light of this, our research team has set out to create a robust and versatile NLP tool for the Polish language. The primary objective of our research is to develop a tool that can perform a range of language processing tasks in this language, such as sentiment analysis, text classification, text correction and text interpretation. Our team has been working diligently to create a tool that is accurate, reliable, and adaptable to the specific linguistic features of Polish, and that can provide valuable insights for a wide range of marketers' needs. In addition to the Polish language version, we are also developing an English version of the tool, which will enable us to expand the reach and impact of our research to a wider audience. Another area of focus in our research involves tackling the challenge of the limited availability of linguistically diverse corpora for non-English languages, which presents a significant barrier in the development of NLP applications. One approach we have been pursuing is the translation of existing English corpora, which would enable us to use the wealth of linguistic resources available in English for other languages. Furthermore, we are looking into other methods, such as gathering language samples from social media platforms.
By analyzing the language used in social media posts, we can collect a wide range of data that reflects the unique linguistic characteristics of specific regions and communities, which can then be used to enhance the accuracy and performance of NLP algorithms for non-English languages. In doing so, we hope to broaden the scope and capabilities of NLP applications. Our research focuses on several key NLP techniques, including sentiment analysis, text classification, text interpretation and text correction. To ensure the best possible performance for these techniques, we are evaluating and comparing different approaches and strategies for implementing them. We are exploring a range of methods, including transformers and convolutional neural networks (CNNs), to determine which ones are most effective for different types of NLP tasks. By analyzing the strengths and weaknesses of each approach, we can identify the most effective techniques for specific use cases and further enhance the performance of our tool. Our research aims to create a tool which can provide a comprehensive analysis of advertising effectiveness, allowing marketers to identify areas for improvement and optimize their advertising strategies. The results of this study suggest that a smart tool for advertisement analysis can provide valuable insights for businesses seeking to create effective advertising campaigns.
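One of the techniques named above, sentiment-oriented text classification, can be reduced to a toy naive Bayes bag-of-words model for illustration. The invented English examples below stand in for a real corpus; the production tool described in the abstract relies on far richer models (transformers, CNNs) and Polish-language data.

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (text, label). Returns per-label smoothed word
    log-probabilities (Laplace add-one smoothing over a shared vocab)."""
    counts = {}
    for text, label in docs:
        counts.setdefault(label, Counter()).update(text.lower().split())
    vocab = {w for c in counts.values() for w in c}
    model = {}
    for label, c in counts.items():
        total = sum(c.values()) + len(vocab)
        model[label] = {w: math.log((c[w] + 1) / total) for w in vocab}
    return model

def classify(model, text):
    words = text.lower().split()
    # Out-of-vocabulary words contribute 0 for every label, so they
    # simply drop out of the comparison in this toy version.
    return max(model, key=lambda lb: sum(model[lb].get(w, 0) for w in words))

docs = [("great product love it", "pos"), ("awful waste of money", "neg"),
        ("love the quality", "pos"), ("terrible awful service", "neg")]
model = train_nb(docs)
```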

Keywords: NLP, AI, IT, language, marketing, analysis

Procedia PDF Downloads 54
523 Higher Education Benefits and Undocumented Students: An Explanatory Model of Policy Adoption

Authors: Jeremy Ritchey

Abstract:

Undocumented immigrants in the U.S. face many challenges when looking to progress in society, especially when pursuing post-secondary education. The majority of research done on state-level policy adoption pertaining to undocumented students' higher-education pursuits, specifically in-state resident tuition and financial aid eligibility policies, has framed the discussion on the potential and actual impacts which implementation can and has achieved. What is missing is a model to view the social, political and demographic landscapes upon which such policies (in their various forms) find a route to legislative enactment. This research looks to address this gap in the field by investigating the correlations and significant state-level variables which can be operationalized to construct a framework for the adoption of these specific policies. In the process, the analysis will show that past unexamined conceptualizations of how such policies come to fruition may be limited or contradictory when compared to available data. Drawing on the principles of Policy Innovation and Policy Diffusion theory, this study uses variables collected via Michigan State University's Correlates of State Policy Project, a collaboratively compiled and ongoing database project centered around annual variables (1900-2016) collected from all 50 states and relevant to policy research. Using established variable groupings (demographic, political, social capital measurements, and educational system measurements) from the period 2000 to 2014 (2001 being when such policies began), one can see how these data correlate with the adoption of policies related to undocumented students and in-state college tuition. After regression analysis, the results will illuminate which variables appear significant and to what effect, so as to help formulate a model explaining when adoption occurs and when it does not.
Early results have shown that traditionally held conceptions of the conservative and liberal identities of a state, as they relate to the likelihood of such policies being adopted, did not fall in line with the collected data: Democratic and liberally identified states were, overall, less likely to adopt pro-undocumented higher education policies than Republican and conservatively identified states, and vice versa. While further analysis is needed to improve the model's explanatory power, preliminary findings show promise in widening our understanding of policy adoption factors in this realm of policies, compared to the gap in such knowledge in the field's current publications. The model also looks to serve as an important tool for policymakers in framing such potential policies in a way that is congruent with the relevant state-level determining factors while being sensitive to the most apparent sources of potential friction. While additional variable groups and individual variables will ultimately need to be added and controlled for, this research has already begun to demonstrate how shallow or unexamined reasoning behind policy adoption in this area needs to be addressed, or else erroneous conceptions risk leaking into the foundation of this growing and ever more important field.
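The state-level adoption analysis described above amounts to regressing a binary adoption outcome on state covariates. Below is a minimal logistic-regression sketch fitted by plain gradient descent; the covariates, coefficients and data are invented for illustration (their signs imply nothing about the study's actual direction of effects, which the abstract notes ran counter to expectation), and a real analysis would draw on the Correlates of State Policy variables.

```python
import math

def fit_logit(X, y, lr=0.5, steps=2000):
    """Batch gradient descent on the logistic log-loss."""
    w = [0.0] * len(X[0])
    n = len(y)
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(a * b for a, b in zip(w, xi))))
            for j, xj in enumerate(xi):
                grad[j] += (p - yi) * xj / n
        w = [wj - lr * g for wj, g in zip(w, grad)]
    return w

# Columns: intercept, hypothetical ideology score, unemployment rate
# (both standardized); y = 1 if the state adopted the policy (toy data).
X = [[1, 1.2, 0.3], [1, 0.8, -0.5], [1, -1.0, 0.9], [1, -1.3, 0.2],
     [1, 0.5, -1.2], [1, -0.2, 1.1], [1, 1.5, 0.0], [1, -0.9, -0.4]]
y = [1, 1, 0, 0, 1, 0, 1, 0]
w = fit_logit(X, y)
predict = lambda xi: 1.0 / (1.0 + math.exp(-sum(a * b for a, b in zip(w, xi))))
```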

Keywords: policy adoption, in-state tuition, higher education, undocumented immigrants

Procedia PDF Downloads 86
522 An Interoperability Concept for Detect and Avoid and Collision Avoidance Systems: Results from a Human-In-The-Loop Simulation

Authors: Robert Rorie, Lisa Fern

Abstract:

The integration of Unmanned Aircraft Systems (UAS) into the National Airspace System (NAS) poses a variety of technical challenges to UAS developers and aviation regulators. In response to growing demand for access to civil airspace in the United States, the Federal Aviation Administration (FAA) has produced a roadmap identifying key areas requiring further research and development. One such technical challenge is the development of a ‘detect and avoid’ system (DAA; previously referred to as ‘sense and avoid’) to replace the ‘see and avoid’ requirement in manned aviation. The purpose of the DAA system is to support the pilot, situated at a ground control station (GCS) rather than in the cockpit of the aircraft, in maintaining ‘well clear’ of nearby aircraft through the use of GCS displays and alerts. In addition to its primary function of aiding the pilot in maintaining well clear, the DAA system must also safely interoperate with existing NAS systems and operations, such as the airspace management procedures of air traffic controllers (ATC) and collision avoidance (CA) systems currently in use by manned aircraft, namely the Traffic alert and Collision Avoidance System (TCAS) II. It is anticipated that many UAS architectures will integrate both a DAA system and a TCAS II. It is therefore necessary to explicitly study the integration of DAA and TCAS II alerting structures and maneuver guidance formats to ensure that pilots understand the appropriate type and urgency of their response to the various alerts. This paper presents a concept of interoperability for the two systems. The concept was developed with the goal of avoiding any negative impact on the performance level of TCAS II (understanding that TCAS II must largely be left as-is) while retaining a DAA system that still effectively enables pilots to maintain well clear, and, as a result, successfully reduces the frequency of collision hazards. 
The interoperability concept described in the paper focuses primarily on facilitating the transition from a late-stage DAA encounter (where a loss of well clear is imminent) to a TCAS II corrective Resolution Advisory (RA), which requires pilot compliance with the directive RA guidance (e.g., climb, descend) within five seconds of its issuance. The interoperability concept was presented to 10 participants (6 active UAS pilots and 4 active commercial pilots) in a medium-fidelity, human-in-the-loop simulation designed to stress different aspects of the DAA and TCAS II systems. Pilot response times, compliance rates and subjective assessments were recorded. Results indicated that pilots exhibited comprehension of, and appropriate prioritization within, the DAA-TCAS II combined alert structure. Pilots demonstrated a high rate of compliance with TCAS II RAs and were also seen to respond to corrective RAs within the five second requirement established for manned aircraft. The DAA system presented under test was also shown to be effective in supporting pilots’ ability to maintain well clear in the overwhelming majority of cases in which pilots had sufficient time to respond. The paper ends with a discussion of next steps for research on integrating UAS into civil airspace.
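The compliance measure implied above can be illustrated with a small helper: a response to a corrective RA counts as compliant when the correct maneuver is initiated within the five-second requirement. The 5-second limit follows the text; the trial records are invented for illustration, not the simulation's data.

```python
def ra_compliance(trials, limit_s=5.0):
    """trials: list of (response_time_s, correct_maneuver) tuples.
    Returns (compliance rate, mean response time of compliant trials)."""
    compliant = [t for t, correct in trials if correct and t <= limit_s]
    mean_rt = sum(compliant) / len(compliant) if compliant else None
    return len(compliant) / len(trials), mean_rt

# Hypothetical trials: one over the time limit, one wrong maneuver.
trials = [(3.2, True), (4.8, True), (5.6, True), (2.9, True), (4.1, False)]
rate, mean_rt = ra_compliance(trials)
```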

Keywords: detect and avoid, interoperability, traffic alert and collision avoidance system (TCAS II), unmanned aircraft systems

Procedia PDF Downloads 244
521 Development and Evaluation of Economical Self-cleaning Cement

Authors: Anil Saini, Jatinder Kumar Ratan

Abstract:

Nowadays, a key issue for the scientific community is to devise innovative technologies for the sustainable control of urban pollution. In urban cities, a large surface area of masonry structures, buildings, and pavements is exposed to the open environment, and this area could be utilized for the control of air pollution if it were built from photocatalytically active cement-based construction materials such as concrete, mortars, paints, and blocks. Photocatalytically active cement is formulated by incorporating a photocatalyst in the cement matrix, and such cement is generally known as self-cleaning cement. In the literature, self-cleaning cement has been synthesized by incorporating nanosized TiO₂ (n-TiO₂) as a photocatalyst in the formulation of the cement. However, the utilization of n-TiO₂ for the formulation of self-cleaning cement has the drawbacks of nano-toxicity, higher cost, and agglomeration as far as commercial production and applications are concerned. The use of microsized TiO₂ (m-TiO₂) in place of n-TiO₂ for the commercial manufacture of self-cleaning cement could avoid the above-mentioned problems. However, m-TiO₂ is less photocatalytically active than n-TiO₂ due to its smaller surface area, higher band gap, and increased recombination rate. As such, the use of m-TiO₂ in the formulation of self-cleaning cement may lead to a reduction in photocatalytic activity, reducing the self-cleaning, depolluting, and antimicrobial abilities of the resultant cement material. Improving the photoactivity of m-TiO₂-based self-cleaning cement is therefore the key issue for its practical application in the present scenario. The current work proposes the use of surface-fluorinated m-TiO₂ in the formulation of self-cleaning cement to enhance its photocatalytic activity.
The calcined dolomite, a constructional material, has also been utilized as a co-adsorbent along with the surface-fluorinated m-TiO₂ in the formulation of the self-cleaning cement to enhance the photocatalytic performance. The surface-fluorinated m-TiO₂, calcined dolomite, and the formulated self-cleaning cement were characterized using diffuse reflectance spectroscopy (DRS), X-ray diffraction analysis (XRD), field emission-scanning electron microscopy (FE-SEM), energy dispersive X-ray spectroscopy (EDS), X-ray photoelectron spectroscopy (XPS), scanning electron microscopy (SEM), BET (Brunauer–Emmett–Teller) surface area, and energy dispersive X-ray fluorescence spectrometry (EDXRF). The self-cleaning property of the as-prepared self-cleaning cement was evaluated using the methylene blue (MB) test. The depolluting ability of the formulated self-cleaning cement was assessed through a continuous NOx removal test. The antimicrobial activity of the self-cleaning cement was appraised using the zone-of-inhibition method. The as-prepared self-cleaning cement, obtained by uniform mixing of 87% clinker, 10% calcined dolomite, and 3% surface-fluorinated m-TiO₂, showed a remarkable self-cleaning property by providing 53.9% degradation of the coated MB dye. The self-cleaning cement also exhibited a noteworthy depolluting ability by removing 5.5% of NOx from the air. The inactivation of B. subtilis bacteria in the presence of light confirmed the significant antimicrobial property of the formulated self-cleaning cement. The self-cleaning, depolluting, and antimicrobial results are attributed to the synergetic effect of surface-fluorinated m-TiO₂ and calcined dolomite in the cement matrix. The present study opens a route for further research into the facile and economical formulation of self-cleaning cement.

Keywords: microsized-titanium dioxide (m-TiO₂), self-cleaning cement, photocatalysis, surface-fluorination

Procedia PDF Downloads 136
520 Assessing Spatial Associations of Mortality Patterns in Municipalities of the Czech Republic

Authors: Jitka Rychtarikova

Abstract:

Regional differences in mortality in the Czech Republic (CR) may be moderate from a broader European perspective, but important discrepancies in life expectancy can be found between smaller territorial units. In this study, territorial units are based on the Administrative Districts of Municipalities with Extended Powers (MEP), a definition that came into force on January 1, 2003. There are 205 such units plus the city of Prague. The MEP represents the smallest unit for which mortality patterns based on life tables can be investigated, and the Czech Statistical Office has been calculating such life tables (every five years) since 2004. MEP life tables from 2009-2013 for males and females allowed the investigation of three main life cycles with the use of temporary life expectancies between the exact ages of 0 and 35, and 35 and 65, and the life expectancy at exact age 65. The results showed regional survival inequalities primarily at adult and older ages. Consequently, only mortality indicators for the adult and elderly population were related to unlinked 2011 census data for the same age groups. The most relevant socio-economic factors taken from the census are: having a partner, educational level and unemployment rate. The unemployment rate was measured for adults aged 35-64 completed years. Exploratory spatial data analysis methods were used to detect regional patterns in spatially contiguous MEP units. The presence of spatial non-stationarity (spatial autocorrelation) of mortality levels for male and female adults (35-64) and elderly males and females (65+) was tested using global Moran’s I. Spatial autocorrelation of mortality patterns was mapped using local Moran’s I with the intention to depict clusters of low or high mortality and spatial outliers for the two age groups (35-64 and 65+). The highest Moran’s I was observed for male temporary life expectancy between exact ages 35 and 65 (0.52) and the lowest among women for life expectancy at 65 (0.26).
Generally, men showed stronger spatial autocorrelation compared to women. The relationship between mortality indicators such as life expectancies and socio-economic factors, namely the percentage of males/females having a partner, the percentage of males/females with at least higher secondary education, and the percentage of unemployed males/females in the economically active population aged 35-64 years, was evaluated using multiple regression (OLS). The results were then compared to outputs from geographically weighted regression (GWR). In the Czech Republic, there are two broader territories, North-West Bohemia (NWB) and North Moravia (NM), in which excess mortality is well established. Results of the t-test of the spatial regression showed that for males aged 35-64 the association between mortality and unemployment (when adjusted for education and partnership) was stronger in NM than in NWB, while educational level impacted the length of survival more in NWB. Geographic variation and relationships in mortality across the CR MEP will also be tested using the spatial Durbin approach. The calculations were conducted by means of ArcGIS 10.6 and SAS 9.4.
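The global Moran's I statistic used above has a compact closed form: I = (n/S0) * (sum_ij w_ij (x_i - mean)(x_j - mean)) / (sum_i (x_i - mean)^2). The sketch below implements it directly; the four-unit chain and its weights are toy values, not the Czech municipality data.

```python
def morans_i(values, W):
    """values: attribute per areal unit; W: symmetric spatial weights
    matrix with W[i][j] > 0 when units i and j are neighbours."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in W)  # total weight
    num = sum(W[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)

# Four units on a line, each unit neighbouring the adjacent one(s).
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
clustered = [70.0, 71.0, 78.0, 79.0]    # similar neighbours -> I > 0
alternating = [70.0, 79.0, 70.0, 79.0]  # dissimilar neighbours -> I < 0
```

Positive values (like the reported 0.52 for male temporary life expectancy) indicate that units with similar mortality cluster in space; negative values indicate neighbouring units tend to differ.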

Keywords: Czech Republic, mortality, municipality, socio-economic factors, spatial analysis

Procedia PDF Downloads 93
519 Consumers Attitude toward the Latest Trends in Decreasing Energy Consumption of Washing Machine

Authors: Farnaz Alborzi, Angelika Schmitz, Rainer Stamminger

Abstract:

Reducing water temperatures in the wash phase of a washing programme and increasing the overall cycle duration are the latest trends in decreasing the energy consumption of washing programmes. Since the implementation of the new energy efficiency classes in 2010, manufacturers seem to apply this strategy of lower temperatures combined with longer programme durations extensively to realise the energy savings needed to meet the requirements of the highest possible energy efficiency class. A semi-representative on-line survey in eleven European countries (Czech Republic, Finland, France, Germany, Hungary, Italy, Poland, Romania, Spain, Sweden and the United Kingdom) was conducted by Bonn University in 2015 to shed light on consumer opinion and behaviour regarding the effects of lower washing temperatures and longer cycle durations on consumers’ acceptance of the programme. The risk of a long wash cycle is that consumers might not use the energy-efficient Standard programmes, perceiving this option as inconvenient, and therefore switch to shorter but more energy-consuming programmes. Furthermore, washing at a lower temperature may lead to the problem of cross-contamination. The washing behaviour of over 5,000 households was studied in this survey to provide support and guidance for manufacturers and policy designers. Qualified households were chosen following a predefined quota: involvement in laundry washing: substantial; distribution of gender: more than 50% female; selected age groups: 20–39 years, 40–59 years, 60–74 years; household size: 1, 2, 3, 4 and more than 4 people. Furthermore, Eurostat data for each country were used to calculate the population distribution in the respective age class and household size as quotas for the survey distribution in each country. Before starting the analyses, the validity of each dataset was controlled with the aid of control questions.
After excluding outliers, the panel diminished from 5,100 to 4,843 households. The primary outcome of the study is that European consumers are willing to save water and energy in laundry washing but are reluctant to use long programme cycles, since they do not believe that long cycles can save energy. However, the results of our survey do not confirm a relation between the frequency of using Standard cotton (Eco) or Energy-saving programmes and the duration of those programmes. This might be explained by the fact that the majority of washing programmes used by consumers do not take very long; perhaps consumers simply choose an additional time-reduction option when selecting those programmes, and this finding might change if the Energy-saving programmes took longer. It may therefore be assumed that introducing the programme duration as a new measure on a revised energy label would strongly influence the consumer at the point of sale. Furthermore, the results of the survey confirm that consumers are more willing to use lower-temperature programmes to save energy than to accept longer programme cycles, and the majority of them accept deviation from the nominal temperature of the programme as long as the results are good.
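The quota construction described above (Eurostat population shares for age class and household size translated into target counts per country) amounts to proportional allocation. The following is a minimal illustrative sketch, assuming stratum shares given as fractions and largest-remainder rounding; the rounding rule is an assumption for the example, not necessarily the survey's actual method:

```python
def quota_counts(total, shares):
    """Allocate a target panel size across strata in proportion to
    population shares (e.g. a Eurostat age-class distribution),
    using largest-remainder rounding so the counts sum to the total."""
    raw = {k: total * s for k, s in shares.items()}
    counts = {k: int(v) for k, v in raw.items()}
    leftover = total - sum(counts.values())
    # hand the remaining units to the strata with the largest fractional parts
    for k in sorted(raw, key=lambda k: raw[k] - counts[k], reverse=True)[:leftover]:
        counts[k] += 1
    return counts
```

For instance, a country panel of 10 households with age-class shares 0.5/0.3/0.2 yields quotas of 5, 3 and 2, and the counts always sum exactly to the panel size.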

Keywords: duration, energy-saving, standard programmes, washing temperature

Procedia PDF Downloads 200
518 Polymer Composites Containing Gold Nanoparticles for Biomedical Use

Authors: Bozena Tyliszczak, Anna Drabczyk, Sonia Kudlacik-Kramarczyk, Agnieszka Sobczak-Kupiec

Abstract:

Introduction: Nanomaterials have become some of the leading materials in the synthesis of various compounds. This is because nano-sized materials exhibit properties different from those of their macroscopic equivalents: the change in size is reflected in changes in optical, electrical or mechanical properties. Among nanomaterials, particular attention is currently directed to gold nanoparticles. They find application in a wide range of areas, including cosmetology and pharmacy. Additionally, nanogold may be a component of modern wound dressings, whose antibacterial activity is beneficial from the viewpoint of the wound healing process. The specific properties of this type of nanomaterial mean that it may also be applied in cancer treatment. The development of new drug-delivery techniques is currently an important research subject for many scientists, because as fields such as medicine and pharmacy advance, the need for better and more effective methods of administering drugs constantly grows. One solution may be the use of drug carriers: materials that combine with the active substance and lead it directly to the desired place. The role of such a carrier may be played by gold nanoparticles, which are able to bond covalently with many organic substances. This allows the combination of nanoparticles with active substances; therefore, gold nanoparticles are widely used in the preparation of nanocomposites for medical purposes, with special emphasis on drug delivery. Methodology: As part of the presented research, the synthesis of composites was carried out. The composites consisted of a polymer matrix and gold nanoparticles introduced into the polymer network. The synthesis was conducted with the use of a crosslinking agent and a photoinitiator, and the materials were obtained by means of a photopolymerization process.
Next, incubation studies were conducted using selected liquids that simulate fluids occurring in the human body. These studies allow the biocompatibility of the tested composites to be determined in relation to the selected environments. The chemical structure of the composites was then characterized, as well as their sorption properties. Conclusions: The conducted research allowed a preliminary characterization of the prepared polymer composites containing gold nanoparticles from the viewpoint of their application for biomedical use. The tested materials were characterized by biocompatibility in the tested environments. Moreover, the synthesized composites exhibited relatively high swelling capacity, which is essential for their potential application as drug carriers: during such an application, the composite swells and at the same time releases the active substance introduced into its interior, so it is important to check the swelling ability of such a material. Acknowledgements: The authors would like to thank The National Science Centre (Grant no: UMO - 2016/21/D/ST8/01697) for providing financial support to this project. This paper is based upon work from COST Action (CA18113), supported by COST (European Cooperation in Science and Technology).

Keywords: nanocomposites, gold nanoparticles, drug carriers, swelling properties

Procedia PDF Downloads 89
517 The Potential Role of Some Nutrients and Drugs in Providing Protection from Neurotoxicity Induced by Aluminium in Rats

Authors: Azza A. Ali, Abeer I. Abd El-Fattah, Shaimaa S. Hussein, Hanan A. Abd El-Samea, Karema Abu-Elfotuh

Abstract:

Background: Aluminium (Al) represents an environmental risk factor. Exposure to high levels of Al causes neurotoxic effects and various diseases. Vinpocetine is widely used to improve cognitive function; it possesses memory-protective and memory-enhancing properties and the ability to increase cerebral blood flow and glucose uptake. The cocoa bean is a rich source of iron as well as a potent antioxidant; it can protect against the impact of free radicals, reduce stress and depression, and promote better memory and concentration. Wheatgrass is primarily used as a concentrated source of nutrients; it contains vitamins, minerals, carbohydrates and amino acids and possesses antioxidant and anti-inflammatory activities. Coenzyme Q10 (CoQ10) is an intracellular antioxidant and mitochondrial membrane stabilizer that is effective in improving cognitive disorders and has been used as an anti-aging agent. Zinc is a structural element of many proteins and a signaling messenger released by neural activity at many central excitatory synapses. Objective: To study the role of nutrients and drugs such as vinpocetine, cocoa, wheatgrass, CoQ10 and zinc against neurotoxicity induced by Al in rats, and to compare their potency in providing protection. Methods: Seven groups of rats were used. The Al-toxicity model groups received AlCl3 (70 mg/kg, IP) daily for three weeks, while the control group received saline. All Al-toxicity groups except one (non-treated) were co-administered orally, together with AlCl3, one of the following treatments: vinpocetine (20 mg/kg), cocoa powder (24 mg/kg), wheatgrass (100 mg/kg), CoQ10 (200 mg/kg) or zinc (32 mg/kg). Biochemical changes in the rat brain, such as acetylcholinesterase (AChE), Aβ, brain-derived neurotrophic factor (BDNF), inflammatory mediators (TNF-α, IL-1β) and oxidative parameters (MDA, SOD, TAC), were estimated for all groups, in addition to histopathological examinations of different brain regions.
Results: Neurotoxicity and neurodegeneration in the rat brain after three weeks of Al exposure were indicated by the significant increase in Aβ, AChE, MDA, TNF-α, IL-1β and DNA fragmentation together with the significant decrease in SOD, TAC and BDNF, and were confirmed by the histopathological changes in the brain. On the other hand, co-administration of vinpocetine, cocoa, wheatgrass, CoQ10 or zinc together with AlCl3 provided protection against the hazards of Al-induced neurotoxicity and neurodegeneration; this protection was indicated by the decrease in Aβ, AChE, MDA, TNF-α, IL-1β and DNA fragmentation together with the increase in SOD, TAC and BDNF, and was confirmed by histopathological examinations of different brain regions. Vinpocetine and cocoa showed the most pronounced protection, while zinc provided the least protective effect of the nutrients and drugs used. Conclusion: Different degrees of protection from Al-induced neurotoxicity and neuronal degeneration can be achieved through the co-administration of certain nutrients and drugs during exposure. Vinpocetine and cocoa provided more protection than wheatgrass, CoQ10 or zinc, with zinc showing the least protective effect.

Keywords: aluminum, neurotoxicity, vinpocetine, cocoa, wheat grass, coenzyme Q10, zinc, rats

Procedia PDF Downloads 219
516 The Effects of in vitro Digestion on Cheese Bioactivity; Comparing Adult and Elderly Simulated in vitro Gastrointestinal Digestion Models

Authors: A. M. Plante, F. O’Halloran, A. L. McCarthy

Abstract:

By 2050 it is projected that 2 billion people worldwide will be more than 60 years old. Older adults have unique dietary requirements, and aging is associated with physiological changes that affect appetite, sensory perception, metabolism and digestion. Therefore, it is essential that foods recommended and designed for older adults promote healthy aging. To assess cheese as a functional food for the elderly, a range of commercial cheese products was selected and compared for antioxidant properties. Cheeses from various milk sources (bovine, goat, sheep) with different textures and fat contents, including cheddar, feta, goats, brie, roquefort, halloumi, wensleydale and gouda, were initially digested with two different simulated in vitro gastrointestinal digestion (SGID) models. One SGID model represented a validated in vitro adult digestion system; the second, an elderly SGID, was designed to account for the physiological changes associated with aging. The antioxidant potential of all cheese digestates was investigated using in vitro chemical-based antioxidant assays: 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging, ferric reducing antioxidant power (FRAP) and total phenolic content (TPC). All adult model digestates had high antioxidant activity across both the DPPH ( > 70%) and FRAP ( > 700 µM Fe²⁺/kg.fw) assays. Following in vitro digestion using the elderly SGID model, full-fat red cheddar, low-fat white cheddar, roquefort, halloumi, wensleydale and gouda digestates had significantly lower (p ≤ 0.05) DPPH radical scavenging properties compared to the adult model digestates. Full-fat white cheddar had higher DPPH radical scavenging activity following elderly SGID digestion than the adult model digestate, but the difference was not significant. All other cheese digestates from the elderly model were comparable to those from the adult model in terms of radical scavenging activity.
The FRAP of all elderly digestates was significantly lower (p ≤ 0.05) than that of the adult digestates. Goats cheese was significantly higher (p ≤ 0.05) in FRAP (718 µM Fe²⁺/kg.fw) than all other digestates in the elderly model. TPC levels in the soft cheeses (feta, goats) and low-fat cheeses (red cheddar, white cheddar) were significantly lower (p ≤ 0.05) in the elderly digestates than in the adult digestates. There was no significant difference in TPC levels between the elderly and adult models for the full-fat cheddar (red, white), roquefort, wensleydale, gouda and brie digestates. Halloumi was the only cheese with significantly higher TPC levels following elderly digestion compared to adult digestion. Low-fat red cheddar had significantly higher (p ≤ 0.05) TPC levels than all other digestates in both the adult and elderly digestive systems. Findings from this study demonstrate that aging has an impact on the bioactivity of cheese, as antioxidant activity and TPC levels were lower following in vitro elderly digestion than with the adult model. For older adults, soft cheese, particularly goats cheese, was associated with high radical scavenging and reducing power, while roquefort cheese had low antioxidant activity. Elderly digestates of halloumi and low-fat red cheddar were also associated with high TPC levels. Cheese has potential as a functional food for the elderly; however, bioactivity can vary depending on the cheese matrix. Funding for this research was provided by the RISAM Scholarship Scheme, Cork Institute of Technology, Ireland.

Keywords: antioxidants, cheese, in-vitro digestion, older adults

Procedia PDF Downloads 197
515 Encapsulation of Venlafaxine-Dowex® Resinate: A Once Daily Multiple Unit Formulation

Authors: Salwa Mohamed Salah Eldin, Howida Kamal Ibrahim

Abstract:

Introduction: Major depressive disorder affects a high proportion of the world’s population, imposing a substantial cost burden on health care. Extended-release venlafaxine is more convenient and may reduce discontinuation syndrome; once-daily dosing also reduces the potential for adverse events such as nausea due to a reduced Cmax. Venlafaxine is an effective first-line agent in the treatment of depression, and a once-daily formulation was designed to enhance patient compliance. Complexing with a resin was proposed to improve loading of the water-soluble drug. The formulated systems were thoroughly evaluated in vitro to prove superiority over previous trials and were compared to the commercial extended-release product in experimental animals. Materials and Methods: Venlafaxine resinates were prepared using Dowex®50WX4-400 and Dowex®50WX8-100 at a drug-to-resin weight ratio of 1:1. The prepared resinates were evaluated for drug content, particle shape and surface properties, and in vitro release profile in gradient pH; the release kinetics and mechanism were also evaluated. Venlafaxine-Dowex® resinates were encapsulated using an O/W solvent evaporation technique. Poly-ε-caprolactone, poly(D,L-lactide-co-glycolide) ester, poly(D,L-lactide) ester and Eudragit®RS100 were used as coating polymers, alone and in combination. The drug-resinate microcapsules were evaluated for morphology, entrapment efficiency and in vitro release profile. The selected formula was tested in rabbits in a randomized, single-dose, two-way crossover study against Effexor-XR tablets under fasting conditions. Results and Discussion: The equilibrium time was 30 min for Dowex®50WX4-400 and 90 min for Dowex®50WX8-100, with drug loading of 93.96% and 83.56%, respectively. Both drug-Dowex® resinates were efficient in sustaining venlafaxine release in comparison to the free drug (up to 8 h).
The Dowex®50WX4-400-based venlafaxine resinate was selected for further encapsulation to optimize the release profile for once-daily dosing and to lower the burst effect. The selected formula (coated with a 50/50 mixture of Eudragit RS and PLGA) was chosen by applying a group of mathematical equations against target values: it recorded the minimum burst effect, the maximum mean dissolution time (MDT) and a Q24h (percentage of drug released after 24 hours) between 95 and 100%. The 90% confidence intervals for the test/reference mean ratios of the log-transformed AUC0–24 and AUC0−∞ data fall within (0.80–1.25), which satisfies the bioequivalence criteria. Conclusion: The optimized formula could be a promising extended-release form of the water-soluble, short-half-life venlafaxine. Being a multiple-unit formulation, it lowers the probability of dose dumping and reduces inter-subject variability in absorption.
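The bioequivalence conclusion rests on the standard average-bioequivalence rule: the 90% confidence interval for the test/reference geometric-mean ratio of the log-transformed AUC must lie entirely within 0.80–1.25. The following is a minimal sketch of that check, assuming per-subject differences of log-transformed AUCs from a paired crossover and a caller-supplied t quantile; the study's actual statistical model is not given, so this is illustrative only:

```python
import math
from statistics import mean, stdev

def be_interval(log_diffs, t_crit):
    """90% CI for the test/reference geometric-mean ratio, computed from
    per-subject differences of log-transformed AUCs (paired design).
    t_crit is the two-sided 90% t quantile for n-1 degrees of freedom."""
    n = len(log_diffs)
    m = mean(log_diffs)
    se = stdev(log_diffs) / math.sqrt(n)
    # back-transform the CI on the log scale to a ratio scale
    return math.exp(m - t_crit * se), math.exp(m + t_crit * se)

def bioequivalent(lo, hi):
    """Average-bioequivalence criterion: CI within 0.80-1.25."""
    return 0.80 <= lo and hi <= 1.25
```

With both AUC0–24 and AUC0−∞ intervals passing this criterion, the test formulation is declared bioequivalent to the reference, as reported above.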

Keywords: biodegradable polymers, cation-exchange resin, microencapsulation, venlafaxine HCl

Procedia PDF Downloads 374
514 Impact of Stress and Protein Malnutrition on the Potential Role of Epigallocatechin-3-Gallate in Providing Protection from Nephrotoxicity and Hepatotoxicity Induced by Aluminum in Rats

Authors: Azza A. Ali, Mona G. Khalil, Hemat A. Elariny, Shereen S. El Shaer

Abstract:

Background: Aluminium (Al) is a very abundant metal in the earth’s crust. It is a constituent of cooking utensils, medicines, cosmetics, some foods and food additives, and its salts are widely used in the purification of drinking water. Excessive and prolonged exposure to Al causes oxidative stress and impairment of many physiological functions, and its accumulation in the liver and kidney causes hepatotoxicity and nephrotoxicity. Social isolation (SI) and protein malnutrition (PM) also increase oxidative stress and may enhance the toxicity of Al as well as the degeneration of the liver and kidney. Epigallocatechin-3-gallate (EGCG), the most abundant catechin in green tea, has strong antioxidant and anti-inflammatory activities and can protect against oxidative stress-induced degeneration. Objective: To study the influence of stress or PM on Al-induced nephrotoxicity and hepatotoxicity in rats, as well as on the potential role of EGCG in providing protection. Methods: Rats received AlCl3 (70 mg/kg, IP) daily for three weeks (Al-toxicity groups), except for one normal control group, which received saline. The Al-toxicity groups were divided into four treated and four untreated groups; the treated rats received EGCG (10 mg/kg, IP) together with AlCl3. One group each of the treated and untreated rats served as controls, and the others were subjected either to stress (mild, using isolation, or severe, using electric shock) or to PM (10% casein diet). Specimens of liver and kidney were used to assess levels of inflammatory mediators (TNF-α, IL-6β, nuclear factor kappa B (NF-κB)), oxidative stress markers (MDA, SOD, TAC, NO), caspase-3 and DNA fragmentation, as well as for histopathological examinations.
Biochemical changes were also measured in the serum: total lipids, cholesterol, triglycerides, glucose, proteins, bilirubin, creatinine and urea, as well as the levels of alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP) and lactate dehydrogenase (LDH). Results: Al-induced nephrotoxicity and hepatotoxicity were enhanced in rats exposed to stress or to PM, with the influence of stress being more pronounced than that of PM. Al toxicity was indicated by increases in liver and kidney MDA, NO, TNF-α, IL-6β, NF-κB, caspase-3 and DNA fragmentation, in ALT, AST, ALP and LDH, and in serum total lipids, cholesterol, triglycerides, glucose, bilirubin, creatinine and urea, together with decreases in total proteins, SOD and TAC. EGCG protected against these hazards, as indicated by decreases in MDA, NO, TNF-α, IL-6β, NF-κB, caspase-3 and DNA fragmentation, as well as in ALT, AST, ALP, LDH, total lipids, cholesterol, triglycerides, glucose, bilirubin, creatinine and urea, together with increases in total proteins, SOD and TAC, and as confirmed by histopathological examinations. The protection was more pronounced under severe stress than under mild stress, and more pronounced under stress than under PM. Conclusion: Stress has a greater adverse impact on Al-induced nephrotoxicity and hepatotoxicity than PM, and thus magnifies the protective role of EGCG. Consequently, administration of EGCG is advised with excessive Al exposure to avoid nephrotoxicity and hepatotoxicity, especially in populations subject to stress or PM.

Keywords: aluminum, stress, protein malnutrition, nephrotoxicity, hepatotoxicity, epigallocatechin-3-gallate, rats

Procedia PDF Downloads 290
513 Clinical Application of Measurement of Eyeball Movement for Diagnose of Autism

Authors: Ippei Torii, Kaoruko Ohtani, Takahito Niwa, Naohiro Ishii

Abstract:

This paper presents the development of an objective index for diagnosing autism using measurements of subtle eyeball movement. Assessment of developmental disabilities varies, and diagnosis depends on the subjective judgment of professionals; therefore, a supplementary inspection method that enables anyone to obtain the same quantitative judgment is needed. In conventional autism studies, diagnoses are based on comparisons of the time spent gazing at an object, but the results are inconsistent. First, we divided the pupil into four parts from the center using measurements of subtle eyeball movement and compared the number of pixels in the overlapping parts based on an afterimage. We then developed an objective evaluation indicator that distinguishes autistic from non-autistic people more clearly than conventional methods by analyzing the differences in subtle eyeball movements between the right and left eyes. Even when a person gazes at one point and the eyeballs remain fixed on that point, the eyes perform subtle fixational movements (i.e., tremor, drift, microsaccades) to keep the retinal image clear. Microsaccades, in particular, are linked to neural activity and reflect the mechanisms by which the brain processes vision. We converted the differences between these movements into numbers as follows: 1) Select the pixels indicating the subject's pupil from the captured frame images. 2) Set up a reference image, the afterimage, from the pixels indicating the subject's pupil. 3) Divide the subject's pupil into four parts from the center in the acquired frame image. 4) Select the pixels in each divided part and count the number of pixels overlapping the present pixels, based on the afterimage. 5) Process the images at 24–30 fps from a camera and convert the amount of change in the pixels of the subtle movements of the right and left eyeballs into numbers.
The difference in the area of the amount of change is obtained by measuring the difference between the afterimage in consecutive frames and the present frame; we take this amount of change as the quantity of subtle eyeball movement. This method makes it possible to express changes in eyeball vibration as numerical values. By comparing these values between the right and left eyes, we found a difference in how much each eye moves. We compared this difference between non-autistic and autistic people and analyzed the results. Our research subjects consisted of 8 children and 10 adults with autism, and 6 children and 18 adults with no disability. We measured the values during pursuit movements and fixations, converted the difference in subtle movements between the right and left eyes into a graph, and defined it as a multidimensional measure. We then set the identification border using the density function of the distribution, the cumulative frequency function, and the ROC curve. With this, we established an objective index to determine autism, normal, false positive, and false negative classifications.
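The pixel-counting procedure in steps 1)–5) can be sketched compactly. The following is an illustrative reconstruction, not the authors' code: the quadrant split of a boolean pupil mask and the `change_amount` measure (afterimage pixels unmatched in the current frame) are assumptions made for the example:

```python
import numpy as np

def quadrant_overlap(afterimage, frame):
    """Count, per pupil quadrant, the pixels of the current frame that
    overlap the reference afterimage (both are boolean pupil masks)."""
    h, w = afterimage.shape
    cy, cx = h // 2, w // 2  # divide the pupil into four parts from the center
    quads = [(slice(0, cy), slice(0, cx)), (slice(0, cy), slice(cx, w)),
             (slice(cy, h), slice(0, cx)), (slice(cy, h), slice(cx, w))]
    return [int((afterimage[ys, xs] & frame[ys, xs]).sum()) for ys, xs in quads]

def change_amount(afterimage, frame):
    """Amount of change: afterimage pixels not matched in this frame."""
    return int(afterimage.sum()) - sum(quadrant_overlap(afterimage, frame))
```

Accumulating `change_amount` over the 24–30 fps frame stream, separately for the left and right eye, would yield the per-eye movement quantities that the study compares.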

Keywords: subtle eyeball movement, autism, microsaccade, pursuit eye movements, ROC curve

Procedia PDF Downloads 255
512 Comparative Evaluation of High Pure Mn3O4 Preparation Technique between the Conventional Process from Electrolytic Manganese and a Sustainable Approach Directly from Low-Grade Rhodochrosite

Authors: Fang Lian, Zefang Chenli, Laijun Ma, Lei Mao

Abstract:

To date, the electrolytic process has been the popular way to prepare high-purity Mn and MnO2 (EMD). However, the conventional preparation of high-purity manganese oxides such as Mn3O4 from electrolytic manganese metal is characterized by a long production cycle, high pollution discharge and high energy consumption, especially when starting from low-grade rhodochrosite, the main resource for exploitation and application in China. Moreover, Mn3O4 prepared from electrolytic manganese consists of large particles with a single, uncontrollable morphology and weak chemical activity. On the other hand, hydrometallurgical methods combined with thermal decomposition, hydrothermal synthesis and sol-gel processes have been widely studied because of their high efficiency, low consumption and low cost. The key problem in directly preparing the manganese oxide series from low-grade rhodochrosite, however, is the complete removal of multiple impurities such as iron, silicon, calcium and magnesium. It is urgent to develop a sustainable approach to the high-purity manganese oxide series characterized by a short process, high efficiency, environmental friendliness and economic benefit. In our work, the preparation of high-purity Mn3O4 directly from low-grade rhodochrosite ore (13.86%) was studied and improved intensively, including an effective leaching process and a short purifying process. Based on the common-ion effect, repeated leaching of rhodochrosite with sulfuric acid is proposed to improve the solubility of Mn2+ and inhibit the dissolution of the impurities Ca2+ and Mg2+; the repeated leaching also makes full use of the sulfuric acid and lowers the cost of the raw material. With the aid of theoretical calculation, Ba(OH)2 was chosen to adjust the pH value of the manganese sulfate solution and BaF2 to remove Ca2+ and Mg2+ completely in the purifying process.
Herein, the recovery ratio of manganese and the removal ratio of the impurities were evaluated via chemical titration and ICP analysis, respectively, and a comparison between the conventional preparation technique from electrolytic manganese and the sustainable approach directly from low-grade rhodochrosite was made. The results demonstrate that the extraction ratio and the recovery ratio of manganese reached 94.3% and 92.7%, respectively. The heavy-metal impurities have been decreased to less than 1 ppm, and the content of calcium, magnesium and sodium to less than 20 ppm, which meets the standards of high-purity reagents for energy and electronic materials. Compared with the conventional technique from electrolytic manganese, the power consumption has been reduced to ≤2000 kWh/t(product) in our short-process approach. Moreover, the comprehensive recovery rate of manganese increases significantly, and the wastewater generated from our short-process approach contains a low content of ammonia/nitrogen, about 500 mg/t(product), and no toxic emissions. Our study contributes to the sustainable application of low-grade manganese ore. Acknowledgements: The authors are grateful to the National Science and Technology Support Program of China (No.2015BAB01B02) for financial support of this work.

Keywords: leaching, high purity, low-grade rhodochrosite, manganese oxide, purifying process, recovery ratio

Procedia PDF Downloads 213
511 Psychodiagnostic Tool Development for Measurement of Social Responsibility in Ukrainian Organizations

Authors: Olena Kovalchuk

Abstract:

How Ukrainian companies understand social responsibility issues is a controversial question, and one practical use of social responsibility is the development of a diagnostic tool for educational, business or scientific purposes. The purpose of this research is therefore to develop a tool for measuring social responsibility in organizations. Methodology: A 21-item questionnaire, the “Organization Social Responsibility Scale”, was developed. The tool was adapted for the Ukrainian sample and based on the “Perceived Role of Ethics and Social Responsibility” questionnaire, which connects ethical and socially responsible behavior to different aspects of organizational effectiveness. After surveying the respondents, factor analysis was performed using principal components with orthogonal VARIMAX rotation. On the basis of the results, the 21-item questionnaire was finalized (Cronbach’s alpha 0.768; inter-item correlations 0.34). Participants: 121 managers at all levels of Ukrainian organizations (57 males; 65 females) took part in the research. Results: The factor analysis revealed five ethical dilemmas concerning the compatibility of social responsibility and profit in Ukrainian organizations, which we attempt to interpret below: — Social responsibility vs. profit. Corporate social responsibility can be a way to reduce operational costs; a firm’s first priority is employees’ morale; being ethical and socially responsible is the priority of the organization. The highest-loading item is "Corporate social responsibility can reduce operational costs", with a loading of 0.768. — Profit vs. social responsibility. Efficiency is much more important to a firm than ethics or social responsibility; making a profit is the most important concern for a firm. The dominant item is "Efficiency is much more important to a firm than whether or not the firm is seen as ethical or socially responsible".
The loading of this factor is 0.793. — A balanced combination of social responsibility and profit. An organization with a social responsibility policy is more attractive to its stakeholders. The highest-loading item is "Social responsibility and profitability can be compatible", with a loading of 0.802. — Role of social responsibility in successful organizational performance. Understanding the value of social responsibility and business ethics; the well-being and welfare of society. The dominant item is "Good ethics is often good business", with a loading of 0.727. — Global vision of social responsibility. Issues related to global social responsibility and sustainability; innovative approaches to poverty reduction; awareness of climate change; a global vision for successful business. The dominant item is "The overall effectiveness of a business can be determined to a great extent by the degree to which it is ethical and socially responsible", with a loading of 0.842. Theoretical contribution: the perspective of the study is to develop a tool for measuring social responsibility in organizations and to test the questionnaire’s adequacy for the social and cultural context. Practical implications: the results can be applied in designing a training programme for business school students to form a global vision for successful business and the ability to solve ethical dilemmas in managerial practice. Researchers interested in social responsibility issues are welcome to join the project.
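The reported reliability figure (Cronbach’s alpha of 0.768) follows from the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale). The sketch below is illustrative and assumes a respondents-by-items score matrix rather than the authors' actual survey data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents x items score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Perfectly correlated items give alpha = 1, and values around 0.7 or above, as obtained for this 21-item scale, are conventionally taken to indicate acceptable internal consistency.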

Keywords: corporate social responsibility, Cronbach’s alpha, ethical behaviour, psychodiagnostic tool

Procedia PDF Downloads 334
510 Life-Saving Design Strategies for Nursing Homes and Long-Term Care Facilities

Authors: Jason M. Hegenauer, Nicholas Fucci

Abstract:

In the late 1990s, a major deinstitutionalization movement of elderly patients took place, and the design of long-term care facilities in the United States has not been adequately re-analyzed since. Over the last 25 years, major innovations in construction methods, technology, and medicine have drastically changed the landscape of healthcare architecture. In light of recent events, and the expected growth of the elderly population as the baby-boomer generation ages, it is evident that reconsideration of these facilities is essential for proper care. The global response has been effective in stifling the pandemic; however, widespread disease still poses an imminent threat. Having witnessed the devastation Covid-19 has wrought throughout nursing homes and long-term care facilities, it is evident that current strategies for protecting our most vulnerable populations are not enough. Light renovation of existing facilities and previously overlooked considerations for new construction projects can drastically lower the risk at nursing homes and long-term care facilities; for example, a reconfigured entry sequence can supplement several features that have long been essential to the design of these facilities. This research focuses on several aspects identified as needing improvement, including indoor environment quality, security measures incorporated into healthcare architecture and design, and architectural mitigation strategies for sick building syndrome. The results of this study have been compiled as 'best practices' for the design of future healthcare construction projects focused on the health, safety, and quality of life of the residents of these facilities.
These design strategies, which can easily be implemented through renovation of existing facilities and in new construction projects, minimize the risk of infection and the spread of disease while allowing routine functions to continue with minimal impact, should future lockdowns arise. Under the lockdown procedures implemented during the Covid-19 pandemic, isolation of residents caused great unrest and worry for family members and friends cut off from their loved ones. At this time, data are still being reported, leaving infection and death rates inconclusive; however, recent projections in some states attribute as much as 60% of all deaths in the state to long-term care facilities. The population of these facilities consists of residents who are elderly, immunocompromised, and living with underlying chronic medical conditions; according to the Centers for Disease Control, these populations are particularly susceptible to infection and serious illness. The obligation to protect our most vulnerable population cannot be overlooked, and the harsh measures recently taken in response to the Covid-19 pandemic prove that the design strategies currently used are inadequate.

Keywords: building security, healthcare architecture and design, indoor environment quality, new construction, renovation, sick building syndrome

Procedia PDF Downloads 69
509 Effect of Polymer Coated Urea on Nutrient Efficiency and Nitrate Leaching Using Maize and Annual Ryegrass

Authors: Amrei Voelkner, Nils Peters, Thomas Mannheim

Abstract:

The worldwide exponential growth of the population and the simultaneous increase in food production require the strategic realization of sustainable, improved cultivation systems to ensure the fertility of arable land and to guarantee the food supply for the whole world. To fulfill this target, large quantities of fertilizers have to be applied to fields, but the long-term environmental impacts remain uncertain. A combined system is therefore needed that increases nutrient availability for plants while reducing nutrient losses to the environment (e.g. NO3- by leaching). To enhance nutrient efficiency, polymer-coated fertilizers with controlled release behavior have been developed. This kind of fertilizer ensures a delayed release of nutrients to synchronize the nutrient supply with the demand of different crops. In recent decades, research focused primarily on semi-permeable polyurethane coatings, which remain in the soil for a long period after complete dissolution of the fertilizer core. With the implementation of the new European Regulation Directive, the replacement of non-degradable synthetic polymers by degradable coatings becomes necessary. It was, therefore, the objective of this study to develop a fully biodegradable polymer coating (degrading to CO2 and H2O according to ISO 17556) and to compare the retarding effect of the biodegradable coatings with commercially available non-degradable products. To investigate the effect of ten selected coated urea fertilizers on the yield of annual ryegrass and maize, the fresh and dry mass and the percentages of total nitrogen and main nutrients were analyzed in greenhouse experiments in sixfold replication using near-infrared spectroscopy. For the experiments, a homogenized and air-dried loamy sand (Cambic Luvisol) was given a basic fertilization of P, K, Mg and S. 
To investigate the effect of increasing the nitrogen level, three levels (80%, 100%, 120%) were established, whereas the impact of CRF granules was determined at an N-level of 100%. Additionally, leaching of NO3- from pots planted with annual ryegrass was examined to evaluate the retention of urea by the polymer coating. For this, leachate from Kick-Brauckmann pots was collected daily and analyzed for total nitrogen, NO3- and NH4+ in twofold repetition once a week using near-infrared spectroscopy. We conclude from the results that the coated fertilizers have a clear impact on the yield of annual ryegrass and maize. Compared to the control, an increase in fresh and dry mass was observed. In part, the non-degradable coatings showed a retarding effect over a longer period, which was, however, reflected in a lower fresh and dry mass. The percentage of leached-out nitrate was reduced markedly. In conclusion, coated fertilizers of all polymer types might contribute to a reduction of negative environmental impacts in addition to their fertilizing effect.

Keywords: biodegradable polymers, coating, enhanced efficiency fertilizers, nitrate leaching

Procedia PDF Downloads 248
508 Policy Views of Sustainable Integrated Solution for Increased Synergy between Light Railways and Electrical Distribution Network

Authors: Mansoureh Zangiabadi, Shamil Velji, Rajendra Kelkar, Neal Wade, Volker Pickert

Abstract:

The EU has set itself a long-term goal of reducing greenhouse gas emissions by 80-95% compared to 1990 levels by 2050, as set out in the Energy Roadmap 2050. This paper reports on the European Union H2020-funded E-Lobster project, which demonstrates tools and technologies, both software and hardware, for integrating the distribution grid and railway power systems using power electronics (Smart Soft Open Point, sSOP) and local energy storage. In this context, this paper describes the existing policies and regulatory frameworks of the energy market at the European level, with a special focus at the national level on the countries where the members of the consortium are located and where the demonstration activities will be implemented. Given the disciplinary approach of E-Lobster, the main policy areas investigated include electricity, the energy market, energy efficiency, transport and smart cities. Energy storage will play a key role in enabling the EU to develop a low-carbon electricity system. In recent years, Energy Storage Systems (ESSs) have been gaining importance due to emerging applications, especially the electrification of the transportation sector and the grid integration of volatile renewables. The need for storage systems has led to performance improvements and a significant price decline in ESS technologies. This opens a new market where ESSs can be a reliable and economical solution. One such emerging market for ESSs is R+G management, which will be investigated and demonstrated within the E-Lobster project. The surplus of energy in one type of power system (e.g., due to metro braking) might be directly transferred to the other power system (or vice versa). However, this usually happens at unfavourable moments, when the recipient does not need additional power. Thus, the role of the ESS is to enhance the advantages of interconnecting railway power systems and distribution grids by offering an additional energy buffer. 
Consequently, a surplus or deficit of energy in, e.g., a railway power system need not be immediately transferred to or from the distribution grid; it can be stored and used when it is really needed. This assures better management of the energy exchange between railway power systems and distribution grids and leads to more efficient loss reduction. In this framework, identifying the existing policies and regulatory frameworks is crucial for the project activities and for the future development of business models for the E-Lobster solutions. The projections carried out by the European Commission, the Member States and stakeholders, together with their analysis, indicate trends, challenges, opportunities and the structural changes needed to design policy measures that provide an appropriate framework for investors. This study will be used as a reference for the discussion in the envisaged workshops with stakeholders (DSOs and transport managers) in the E-Lobster project.
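The buffering logic described above can be sketched in a few lines. The following toy simulation (all figures hypothetical, not E-Lobster data) stores braking surplus and discharges it only when the recipient system actually demands power:

```python
def simulate_buffer(surplus, demand, capacity):
    """Toy model of an energy storage buffer between a railway power
    system and a distribution grid (all figures hypothetical).

    surplus:  energy injected each step (e.g. metro braking), kWh
    demand:   energy requested each step by the recipient system, kWh
    capacity: usable storage capacity, kWh
    Returns (served, spilled): energy delivered from storage, and
    surplus lost because the store was already full.
    """
    stored, served, spilled = 0.0, 0.0, 0.0
    for s, d in zip(surplus, demand):
        # charge with braking surplus, clipped at capacity
        room = capacity - stored
        stored += min(s, room)
        spilled += max(s - room, 0.0)
        # discharge only when the recipient actually needs power
        take = min(d, stored)
        stored -= take
        served += take
    return served, spilled

# Braking surplus arrives while demand is zero; without storage it is lost.
served, spilled = simulate_buffer(surplus=[5, 5, 0, 0],
                                  demand=[0, 0, 4, 4],
                                  capacity=8)
```

With these numbers the buffer shifts 8 kWh of braking energy to the later demand steps and spills 2 kWh once the store is full, illustrating why the storage sizing matters.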

Keywords: light railway, electrical distribution network, electrical energy storage, policy

Procedia PDF Downloads 109
507 Numerical Analysis of NOₓ Emission in Staged Combustion for the Optimization of Once-Through-Steam-Generators

Authors: Adrien Chatel, Ehsan Askari Mahvelati, Laurent Fitschy

Abstract:

Once-Through-Steam-Generators (OTSGs) are commonly used in the oil-sand industry in the heavy fuel oil extraction process. They are composed of three main parts: the burner, the radiant section, and the convective section. Natural gas is burned in staged diffusive flames stabilized by the burner. The heat generated by the combustion is transferred to the water flowing through the piping system in the radiant and convective sections. The steam produced within the pipes is then directed into the ground to reduce the oil viscosity and allow its pumping. With the rapid development of the oil-sand industry, the number of OTSGs in operation has increased, as have the associated emissions of environmental pollutants, especially nitrogen oxides (NOₓ). To limit environmental degradation, various international environmental agencies have established regulations on pollutant discharge and pushed to reduce NOₓ release. To meet these constraints, OTSG constructors have to rely on increasingly advanced tools to study and predict NOₓ emissions. With the increase in computational resources, Computational Fluid Dynamics (CFD) has emerged as a flexible tool to analyze the combustion and pollutant formation process. Moreover, to optimize the burner operating conditions with regard to NOₓ emissions, field characterization and measurements are usually performed. However, such experimental campaigns are particularly time-consuming and sometimes impossible for industrial plants with strict operation schedules. The application of CFD therefore seems more adequate for providing guidelines on the NOₓ emission and reduction problem. In the present work, two different software packages are employed to simulate the combustion process in an OTSG, namely the commercial software ANSYS Fluent and the open-source software OpenFOAM. 
The RANS (Reynolds-Averaged Navier-Stokes) equations, closed by the k-epsilon turbulence model and combined with the Eddy Dissipation Concept for combustion modelling, are solved. A mesh sensitivity analysis is performed to assess the independence of the solution from the mesh. In the first part, the results given by the two software packages are compared and confronted with experimental data as a means of assessing the numerical modelling. Flame temperatures and chemical composition are used as reference fields for this validation. Results show fair agreement between experimental and numerical data. In the last part, OpenFOAM is employed to simulate several operating conditions, and an Emission Characteristic Map of the combustion system is generated. The sources of high NOₓ production inside the OTSG are pinpointed and correlated to the physics of the flow. CFD is, therefore, a useful tool for providing insight into NOₓ emission phenomena in OTSGs: sources of high NOₓ production can be identified, and operating conditions can be adjusted accordingly. With the help of RANS simulations, an Emission Characteristic Map can be produced and then used as a guide for field tune-up.
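A mesh sensitivity analysis of the kind mentioned above is commonly quantified with Roache's Grid Convergence Index (GCI). The following is a minimal sketch of that procedure, assuming three systematically refined meshes and using illustrative peak-temperature values rather than the study's data:

```python
import math

def gci(f_fine, f_med, f_coarse, r=2.0, safety=1.25):
    """Roache's Grid Convergence Index for a mesh sensitivity check.

    f_fine, f_med, f_coarse: a solution quantity (e.g. peak flame
    temperature) on three systematically refined meshes; r is the
    refinement ratio between meshes.
    Returns (observed_order, gci_fine_percent).
    """
    # observed order of convergence from the three solutions
    p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)
    # relative error between the two finest meshes
    eps = abs((f_med - f_fine) / f_fine)
    # GCI on the fine mesh, with the usual safety factor of 1.25
    gci_fine = safety * eps / (r ** p - 1.0)
    return p, 100.0 * gci_fine

# Hypothetical peak temperatures (K) on fine/medium/coarse meshes:
p, gci_pct = gci(f_fine=1830.0, f_med=1838.0, f_coarse=1870.0)
```

A GCI well below 1% on the fine mesh is the usual justification for calling a solution mesh-independent; the values here are chosen only to make the arithmetic visible.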

Keywords: combustion, computational fluid dynamics, nitrogen oxides emission, once-through-steam-generators

Procedia PDF Downloads 89
506 Analysis of Taxonomic Compositions, Metabolic Pathways and Antibiotic Resistance Genes in Fish Gut Microbiome by Shotgun Metagenomics

Authors: Anuj Tyagi, Balwinder Singh, Naveen Kumar B. T., Niraj K. Singh

Abstract:

Characterization of the diverse microbial communities in a specific environment plays a crucial role in better understanding their functional relationship with the ecosystem. It is now well established that the gut microbiome of fish is not a simple replication of the microbiota of the surrounding habitat; extensive species, dietary, physiological and metabolic variations among fishes may have a significant impact on its composition. Moreover, the overuse of antibiotics in human, veterinary and aquaculture medicine has led to the rapid emergence and propagation of antibiotic resistance genes (ARGs) in the aquatic environment. Microbial communities harboring specific ARGs not only gain a preferential edge during selective antibiotic exposure but also pose a significant risk of ARG transfer to non-resistant bacteria within confined environments. This phenomenon may lead to the emergence of habitat-specific microbial resistomes and, subsequently, of virulent antibiotic-resistant pathogens, with severe consequences for fish and consumer health. In this study, the gut microbiota of a freshwater carp (Labeo rohita) was investigated by shotgun metagenomics to understand its taxonomic composition and functional capabilities. Metagenomic DNA, extracted from the fish gut, was sequenced on an Illumina NextSeq to generate paired-end (PE) 2 x 150 bp reads. After QC of the raw sequencing data with Trimmomatic, taxonomic analysis with the Kraken2 taxonomic sequence classification system revealed the presence of 36 phyla, 326 families and 985 genera in the fish gut microbiome. At the phylum level, Proteobacteria accounted for more than three-fourths of the total bacterial population, followed by Actinobacteria (14%) and Cyanobacteria (3%). Commonly used probiotic bacteria (Bacillus, Lactobacillus, Streptococcus, and Lactococcus) were far less prevalent in the fish gut. 
After sequencing data assembly with the MEGAHIT v1.1.2 assembler and annotation with the PROKKA automated pipeline, pathway analysis revealed the presence of 1,608 MetaCyc pathways in the fish gut microbiome. Biosynthesis pathways were the most dominant (51%), followed by degradation (39%), energy metabolism (4%) and fermentation (2%). Almost one-third (33%) of the biosynthesis pathways were involved in the synthesis of secondary metabolites. Metabolic pathways for the biosynthesis of 35 antibiotic types were also present, accounting for 5% of the overall metabolic pathways in the fish gut microbiome. Fifty-one different types of ARGs, belonging to 15 antimicrobial resistance (AMR) gene families and conferring resistance against 24 antibiotic types, were detected in the fish gut. More than 90% of the ARGs in the fish gut microbiome were against beta-lactams (penicillins, cephalosporins, penems, and monobactams). Resistance against tetracyclines, macrolides, fluoroquinolones, and phenicols ranged from 0.7% to 1.3%. Some ARGs for multi-drug resistance were also located on sequences of plasmid origin. The presence of pathogenic bacteria and of ARGs on plasmid sequences suggests a potential risk of horizontal gene transfer in the confined gut environment.
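Phylum-level percentages like those reported above are obtained by tallying a Kraken2 report. The following is a minimal sketch of that tallying step, assuming the standard tab-separated Kraken2 report layout (percent, clade reads, direct reads, rank code, taxid, name); the example lines are illustrative, not the study's data:

```python
def phylum_breakdown(report_lines):
    """Extract phylum-level relative abundances from a Kraken2-style
    report. Columns: percent of reads in clade, clade read count,
    reads assigned directly, rank code, NCBI taxid, indented name."""
    out = {}
    for line in report_lines:
        pct, _clade, _direct, rank, _taxid, name = line.split("\t")
        if rank == "P":  # 'P' is the phylum rank code in Kraken2 reports
            out[name.strip()] = float(pct)
    return out

# Illustrative report fragment (numbers invented for the example):
report = [
    "76.10\t761000\t120\tP\t1224\t  Proteobacteria",
    "14.00\t140000\t85\tP\t201174\t  Actinobacteria",
    "3.00\t30000\t40\tP\t1117\t  Cyanobacteria",
]
abundance = phylum_breakdown(report)
```

The same pass over the report with rank codes "F" or "G" would yield the family- and genus-level counts quoted in the abstract.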

Keywords: antibiotic resistance, fish gut, metabolic pathways, microbial diversity

Procedia PDF Downloads 111
505 Morphotropic Phase Boundary in Ferromagnets: Unusual Magnetoelastic Behavior In Tb₁₋ₓNdₓCo₂

Authors: Adil Murtaza, Muhammad Tahir Khan, Awais Ghani, Chao Zhou, Sen Yang, Xiaoping Song

Abstract:

The morphotropic phase boundary (MPB), a boundary between two different crystallographic symmetries in the composition-temperature phase diagram, has been widely studied in ferroelectrics and has recently drawn interest in ferromagnets as a route to enhanced field-induced strain. At the MPB, the free energy of the system is flattened, which allows the polarization to rotate freely and hence results in a high magnetoelastic response (e.g., high magnetization, low coercivity, and large magnetostriction). Based on the same mechanism, we designed an MPB in the ferromagnetic Tb₁₋ₓNdₓCo₂ system. The temperature-dependent magnetization curves showed spin reorientation (SR), which can be explained by a two-sublattice model. Contrary to previously reported ferromagnetic MPB systems, the MPB composition Tb₀.₃₅Nd₀.₆₅Co₂ exhibits a low saturation magnetization (MS), indicating a compensation of the Tb and Nd magnetic moments at the MPB. The coercive field (HC) under a low magnetic field and the first anisotropy constant (K₁) show minimum values at the MPB composition x=0.65. A detailed spin configuration diagram is provided for Tb₁₋ₓNdₓCo₂ around the composition of anisotropy compensation; this can guide the development of novel magnetostrictive materials. The anisotropic magnetostriction (λS) first decreased until x=0.8 and then continuously increased in the negative direction with further increase of the Nd concentration. In addition, a large ratio between the magnetostriction and the absolute value of the first anisotropy constant (λS/K₁) appears at the MPB, indicating that Tb₀.₃₅Nd₀.₆₅Co₂ has good magnetostrictive properties. The present work shows an anomalous type of MPB in ferromagnetic materials, revealing that an MPB can also lead to a weakening of magnetoelastic behavior, as shown in the ferromagnetic Tb₁₋ₓNdₓCo₂ system. 
Our work indicates the universal presence of MPBs in ferromagnetic materials and highlights differences between ferromagnetic MPB systems that are important for substantially improving magnetic and magnetostrictive properties. Based on the results of this study, similar MPB effects might be achieved in other ferroic systems and exploited for technological applications. The finding of a magnetic MPB in a ferromagnetic system has several important implications. First, it provides a better understanding of spin reorientation transitions (SRTs): such ferro-ferro transitions involve not only a reorientation of the magnetization but also a change of crystal symmetry upon magnetic ordering. Second, near the MPB the free energy is flattened, corresponding to a low energy barrier for magnetization rotation and an enhanced magnetoelastic response. Third, to attain large magnetostriction with the MPB approach, the two terminal compounds should have different easy magnetization directions below the Curie temperature Tc, so that the magnetization anisotropy is weakened at the MPB (as in ferroelectrics) and magnetic domain switching is eased; in addition, the lattice distortion difference between the two terminal compounds should be large enough (e.g., the lattice distortion of the R symmetry much greater than that of the T symmetry). The MPB composition then corresponds to a nearly isotropic state with a large 'net' lattice distortion, which is reflected in a higher value of the magnetostriction.

Keywords: magnetization, magnetostriction, morphotropic phase boundary (MPB), phase transition

Procedia PDF Downloads 116
504 Deep Learning Framework for Predicting Bus Travel Times with Multiple Bus Routes: A Single-Step Multi-Station Forecasting Approach

Authors: Muhammad Ahnaf Zahin, Yaw Adu-Gyamfi

Abstract:

Bus transit is a crucial component of transportation networks, especially in urban areas. Any intelligent transportation system must have accurate real-time information on bus travel times, since this minimizes waiting times for passengers at stations along a route, improves service reliability, and significantly optimizes travel patterns. Bus agencies must enhance the quality of their information services to serve their passengers better and draw in more travelers, since people waiting at bus stops are frequently anxious about when the bus will arrive at their starting point and when it will reach their destination. To address this issue, various models for predicting bus travel times have been developed recently, but most focus on smaller road networks because of their relatively poor performance on vast, high-density urban networks. This paper develops a deep learning-based architecture using a single-step multi-station forecasting approach to predict average bus travel times for numerous routes, stops, and trips on a large-scale network, using heterogeneous bus transit data collected from the GTFS database. Data was gathered over one week from multiple bus routes in Saint Louis, Missouri. In this study, a Gated Recurrent Unit (GRU) neural network was used to predict the mean vehicle travel times for different hours of the day for multiple stations along multiple routes. The number of historical time steps and the prediction horizon were set to 5 and 1, respectively, meaning that five hours of historical average travel time data were used to predict the average travel time for the following hour. Spatial and temporal information and the historical average travel times were taken from the dataset as model inputs. Station distances and sequence numbers were used as adjacency matrices for the spatial inputs, and the time of day (hour) was used for the temporal inputs. 
Other inputs, including volatility information such as standard deviation and variance of journey durations, were also included in the model to make it more robust. The model's performance was evaluated based on a metric called mean absolute percentage error (MAPE). The observed prediction errors for various routes, trips, and stations remained consistent throughout the day. The results showed that the developed model could predict travel times more accurately during peak traffic hours, having a MAPE of around 14%, and performed less accurately during the latter part of the day. In the context of a complicated transportation network in high-density urban areas, the model showed its applicability for real-time travel time prediction of public transportation and ensured the high quality of the predictions generated by the model.
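The MAPE metric used for evaluation above can be stated compactly. A minimal sketch, with hypothetical travel times rather than the study's data:

```python
def mape(actual, predicted):
    """Mean absolute percentage error: the average of |error| relative
    to the observed value, expressed as a percentage."""
    assert len(actual) == len(predicted) and all(a != 0 for a in actual)
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical hourly mean travel times (minutes): observed vs predicted.
observed = [30.0, 25.0, 40.0, 35.0]
forecast = [33.0, 24.0, 44.0, 35.0]
err = mape(observed, forecast)
```

Because each error is scaled by the observed value, MAPE is comparable across routes with very different trip lengths, which is why it suits a multi-route evaluation like the one described.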

Keywords: gated recurrent unit, mean absolute percentage error, single-step forecasting, travel time prediction

Procedia PDF Downloads 47
503 Inclusion Body Refolding at High Concentration for Large-Scale Applications

Authors: J. Gabrielczyk, J. Kluitmann, T. Dammeyer, H. J. Jördening

Abstract:

High-level expression of proteins in bacteria often causes the production of insoluble protein aggregates, called inclusion bodies (IBs). They contain mainly one type of protein and offer an easy and efficient route to purified protein. On the other hand, proteins in IBs are normally devoid of function and therefore need special treatment to become active. Most refolding techniques aim at diluting the solubilizing chaotropic agents. Unfortunately, optimal refolding conditions have to be found empirically for every protein, and for large-scale applications a simple refolding process with high yields and high final enzyme concentrations is still missing. The constructed plasmid pASK-IBA63b containing the sequence of fructosyltransferase (FTF, EC 2.4.1.162) from Bacillus subtilis NCIMB 11871 was transformed into E. coli BL21 (DE3) Rosetta. The bacteria were cultivated in a fed-batch bioreactor, and the FTF produced was obtained mainly as IBs. For refolding experiments, five different amounts of IBs were solubilized in urea buffer at protein concentrations of 0.2-8.5 g/L. Solubilizates were refolded by batch or continuous dialysis. The refolding yield was determined by measuring the protein concentration of the clear supernatant before and after dialysis, and particle size was measured by dynamic light scattering. The particle size measurements revealed that solubilization of the aggregates is achieved at urea concentrations of 5 M or higher, which was confirmed by absorption spectroscopy. All results confirm previous findings that refolding yields depend on the initial protein concentration: as initial concentrations rose from 0.2 to 8.5 g/L, yields dropped from 67% to 12% in batch dialysis and from 72% to 19% in continuous dialysis. Often-used additives such as sucrose and glycerol had no effect on refolding yields. 
Buffer screening indicated a significant increase in the activity, and also the temperature stability, of FTF with citrate/phosphate buffer. By adding citrate to the dialysis buffer, we were able to increase the refolding yields to 82-47% in the batch and 90-74% in the continuous process. Further experiments showed that, in general, higher ionic strength of the buffers had a major impact on refolding yields; doubling the buffer concentration increased the yields up to threefold. Finally, we achieved correspondingly high refolding yields while reducing the chamber volume, and thus the amount of buffer needed, by 75%. The refolded enzyme had an optimal activity of 12.5 ± 0.3 x 10⁴ units/g. However, detailed experiments with native FTF revealed reaggregation of the molecules and a loss in specific activity depending on enzyme concentration and particle size. For that reason, we are currently focusing on a process of simultaneous enzyme refolding and immobilization. The results of this study show a new approach to finding optimal refolding conditions for inclusion bodies at high concentrations. Straightforward buffer screening and an increase in ionic strength can improve the refolding yield of the target protein by up to 400%. Gentle removal of the chaotrope with continuous dialysis increases the yields by an additional 65%, independent of the refolding buffer applied. In general, time is the crucial parameter for the successful refolding of solubilized proteins.

Keywords: dialysis, inclusion body, refolding, solubilization

Procedia PDF Downloads 274
502 Barbie in India: A Study of Effects of Barbie in Psychological and Social Health

Authors: Suhrita Saha

Abstract:

Barbie is a fashion doll manufactured by the American toy company Mattel Inc., and it made its debut at the American International Toy Fair in New York on 9 March 1959. From fashion doll to symbol of fetishistic commodification, Barbie has come a long way. A Barbie doll is sold every three seconds across the world, which makes the billion-dollar brand the world's most popular doll for girls. The 11.5-inch moulded plastic doll corresponds to a height of 5 feet 9 inches at 1/6 scale. Her vital statistics have been estimated at 36 inches (chest), 18 inches (waist) and 33 inches (hips). Her weight is permanently set at 110 pounds, which would be 35 pounds underweight. Ruth Handler, the creator of Barbie, wanted a doll that represented adulthood and allowed children to imagine themselves as teenagers or adults. While Barbie might have been intended to be independent, imaginative and innovative, her physical uniqueness does not confine the doll to the status of a plaything. She is a cultural icon, but one with far-reaching critical implications. The doll is a commodity bearing more social value than practical use value. The way Barbie is produced represents the industrialization and commodification of the process of symbolic production, and this symbolic production and consumption is a standardized, planned one that produces stereotypical 'pseudo-individuality' and suppresses cultural alternatives. Children are both subjected to, and arise as subjects in, this consumer context. A highly gendered, physiologically dissected, sexually charged symbolism is imposed upon children (both male and female), their childhood, their social worlds, their identity, and their relationship formation. Barbie is also very popular among Indian children. While the doll is essentially an imaginative representation of the West, it is internalized by Indian sensibilities. 
Through observation and questionnaire-based interviews with a sample of adolescent children (primarily female, a few male) and parents (primarily mothers) in Kolkata, an Indian metropolis, the paper puts forth findings of sociological relevance. 1. Barbie creates, recreates, and accentuates already existing divides between binaries such as male-female, fat-thin, sexy-nonsexy, beauty-brain and more. 2. The Indian girl child, in her process of association with Barbie, wants to be like her and commodifies her own self; the male child also readily accepts this standardized commodification. The definition of beauty is thus based on prejudice and stereotype. 3. Not being able to become Barbie creates both psychological and physiological health issues, ranging from anorexia to obesity, as well as personality disorders. 4. From being a plaything, Barbie becomes the game maker. Barbie, along with many other forms of simulation, further creates a consumer culture and a market for all kinds of fitness-related hyper-enchantment and subsequent disillusionment. The construct becomes the reality, and the real gets lost in the play world. The paper thus argues that Barbie, far from being an innocuous doll, becomes a social construct with long-term and irreversible adverse impacts.

Keywords: barbie, commodification, personality disorder, stereotype

Procedia PDF Downloads 308
501 Radish Sprout Growth Dependency on LED Color in Plant Factory Experiment

Authors: Tatsuya Kasuga, Hidehisa Shimada, Kimio Oguchi

Abstract:

Recent rapid progress in ICT (Information and Communication Technology) has advanced the penetration of sensor networks (SNs) and their attractive applications, and agriculture is one of the fields well able to benefit from ICT. Plant factories, which use computers and AI (Artificial Intelligence) to control several parameters related to plant growth in closed areas, such as air temperature, humidity, water, culture medium concentration, and artificial lighting, are being researched in order to obtain stable and safe production of vegetables and medicinal plants all year round, anywhere, and to attain self-sufficiency in food. By providing isolation from the natural environment, a plant factory can achieve higher productivity and safer products. However, the biggest issue with plant factories is the return on investment: profits are tenuous because of the large initial investments and running costs (i.e., electric power) incurred. At present, LED (Light Emitting Diode) lights are being adopted because they are more energy-efficient and encourage photosynthesis better than the fluorescent lamps used in the past. However, further cost reduction is essential. This paper introduces experiments that reveal which color of LED lighting best enhances the growth of cultured radish sprouts. Radish sprouts were cultivated in an experimental environment formed by a hydroponics kit with three cultivation shelves (28 samples per shelf), each with an artificial lighting rack. Seven LED arrays of different colors (white, blue, yellow-green, green, yellow, orange, and red) were compared with a fluorescent lamp as the control. Lighting duration was set to 12 hours a day, and normal water with no fertilizer was circulated. Seven days after germination, the length, weight and leaf area of each sample were measured, and the electrical power consumption of each lighting arrangement was also measured. Results and discussion: As to average sample length, no clear difference was observed with respect to color. 
As regards weight, the orange LED was less effective, and the difference was significant (p < 0.05). As to leaf area, the blue, yellow and orange LEDs were significantly less effective. However, all LEDs offered higher productivity per watt consumed than the fluorescent lamp; among them, the blue LED array attained the best results in terms of length, weight and leaf area per watt consumed. Conclusion and future work: An experiment on radish sprout cultivation under seven different-color LED arrays showed no clear difference in terms of sample size. However, if electrical power consumption is considered, LEDs offered about twice the growth rate of the fluorescent lamp, with blue LEDs showing the best performance. Further cost reduction, e.g., lower-power lighting, remains a big issue for actual system deployment. An automatic plant monitoring system with sensors is another study target.
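The per-watt comparison behind these conclusions is simple arithmetic: divide the growth measure by the lamp's power draw and rank. A minimal sketch with hypothetical weights and power figures (not the experiment's measurements):

```python
def growth_per_watt(samples):
    """Rank light sources by mean sprout weight per watt consumed.
    samples maps a light-source name to (mean weight in g, power in W);
    all figures here are hypothetical."""
    scores = {name: weight_g / power_w
              for name, (weight_g, power_w) in samples.items()}
    # highest growth-per-watt first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = growth_per_watt({
    "fluorescent": (1.20, 20.0),
    "blue LED":    (1.15, 9.0),
    "orange LED":  (0.90, 9.0),
})
```

With numbers like these, a lamp that grows slightly lighter sprouts can still win once its much lower power draw is taken into account, which is the effect the abstract reports for the LEDs.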

Keywords: electric power consumption, LED color, LED lighting, plant factory

Procedia PDF Downloads 162
500 Measuring Enterprise Growth: Pitfalls and Implications

Authors: N. Šarlija, S. Pfeifer, M. Jeger, A. Bilandžić

Abstract:

Enterprise growth is generally considered a key driver of competitiveness, employment, economic development and social inclusion. As such, it is perceived to be a highly desirable outcome of entrepreneurship by scholars and decision makers. The huge academic debate has resulted in a multitude of theoretical frameworks focused on explaining growth stages, determinants and future prospects. It has been widely accepted that enterprise growth is most likely nonlinear, temporal and related to a variety of factors reflecting the individual, firm, organizational, industry or environmental determinants of growth. However, the factors that affect growth are not easily captured, the instruments to measure those factors are often arbitrary, and causality between variables and growth is elusive, indicating that growth is not easily modeled. Furthermore, in line with the heterogeneous nature of the growth phenomenon, there is a vast number of measurement constructs assessing growth, and these are used interchangeably. Differences among growth measures, at the conceptual as well as the operationalization level, can hinder theory development, which emphasizes the need for more empirically robust studies. In line with these highlights, the purpose of this paper is threefold: firstly, to compare the structure and performance of three growth prediction models based on the main growth measures, namely revenue, employment and asset growth; secondly, to explore the prospects of financial indicators, as exact, visible, standardized and accessible variables, to serve as determinants of enterprise growth; and finally, to contribute to the understanding of the implications for research results and recommendations for growth caused by different growth measures. The models include a range of financial indicators as lagged determinants of the enterprises' performance during 2008-2013, extracted from the national register of financial statements of SMEs in Croatia. 
The design and testing stages of the modeling used logistic regression procedures. The findings confirm that growth prediction models based on different measures of growth have different sets of predictors. Moreover, the relationship between a particular predictor and a growth measure is inconsistent: the same predictor positively related to one growth measure may exert a negative effect on a different growth measure. Overall, financial indicators alone can serve as a good proxy of growth and yield adequate predictive power in the models. The paper sheds light on both the methodology and the conceptual framework of enterprise growth by using a range of variables which serve as a proxy for the multitude of internal and external determinants but, unlike them, are accessible, available, exact and free of perceptual nuances in building up the model. The selection of the growth measure appears to have a significant impact on the implications and recommendations related to growth. Furthermore, the paper points out potential pitfalls of measuring and predicting growth. Overall, the results and implications of the study are relevant for advancing academic debates on growth-related methodology and can contribute to evidence-based decisions of policy makers.
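The modeling design described above, one logistic regression per growth measure over the same lagged financial indicators, can be illustrated with a minimal sketch. The indicator names, the synthetic data and the hand-rolled gradient-descent fit are all illustrative assumptions, not the paper's actual variables or procedure; the synthetic targets are deliberately driven by different indicators to mimic the finding that predictor sets (and even predictor signs) differ across growth measures.

```python
# Illustrative sketch: separate logistic models for each growth measure.
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical lagged financial indicators (e.g. liquidity, leverage, ROA)
X = rng.normal(size=(n, 3))

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression (no regularization)."""
    m = X.shape[0]
    Xb = np.hstack([np.ones((m, 1)), X])          # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))         # predicted probabilities
        w -= lr * Xb.T @ (p - y) / m              # gradient step
    return w

# Binary targets (grew vs. did not grow), synthetic and driven by
# different indicators for each growth measure.
targets = {
    "revenue_growth":    (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(float),
    "employment_growth": (X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(float),
    "assets_growth":     (-X[:, 0] + X[:, 2] + 0.5 * rng.normal(size=n) > 0).astype(float),
}
coefs = {name: fit_logistic(X, y) for name, y in targets.items()}
for name, w in coefs.items():
    print(name, np.round(w[1:], 2))   # fitted coefficients per indicator
```

In this toy setup the first indicator comes out with a positive coefficient for revenue growth but a negative one for assets growth, the kind of sign inconsistency across growth measures that the abstract reports.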

Keywords: growth measurement constructs, logistic regression, prediction of growth potential, small and medium-sized enterprises

Procedia PDF Downloads 228
499 Photophysics and Torsional Dynamics of Thioflavin T in Deep Eutectic Solvents

Authors: Rajesh Kumar Gautam, Debabrata Seth

Abstract:

Thioflavin-T (ThT) plays a key role as a biologically active fluorescent sensor for amyloid fibrils. ThT-based methods have been developed to detect and analyze several diseases, such as neurodegenerative disorders, Alzheimer's, Parkinson's, and type II diabetes. ThT is used as a fluorescent marker to detect the formation of amyloid fibrils: in the presence of amyloid fibrils, ThT becomes highly fluorescent. ThT undergoes a twisting motion around the C-C bond connecting its two adjacent benzothiazole and dimethylaniline aromatic rings, which is predominantly affected by the micro-viscosity of the local environment. The present study articulates the photophysics and torsional dynamics of the biologically active molecule ThT in deep eutectic solvents (DESs). DESs are environmentally friendly, low-cost and biodegradable alternatives to ionic liquids. A DES resembles an ionic liquid, but the constituents of a DES include hydrogen bond donor and acceptor species in addition to ions. Due to the presence of the H-bonding network within a DES, it exhibits structural heterogeneity. Herein, we have prepared two different DESs by mixing urea with choline chloride and with N,N-diethyl ethanol ammonium chloride at ~340 K. It has been reported that a deep eutectic mixture of choline chloride with urea gives a liquid with a freezing point of 12°C. We experimented with two different concentrations of ThT and observed that at the higher concentration (50 µM) ThT forms aggregates in the DES. The photophysics of ThT as a function of temperature were explored using steady-state and picosecond time-resolved fluorescence emission spectroscopy. From the spectroscopic analysis, we observed that with rising temperature the fluorescence quantum yield and lifetime of ThT gradually decrease; this is the cumulative effect of thermal quenching and an increase in the torsional rate constant.
The fluorescence quantum yield and lifetime values were always higher for DES-II (urea and N,N-diethyl ethanol ammonium chloride) than for DES-I (urea and choline chloride), mainly due to the structural heterogeneity of the medium. This was further confirmed by comparing the activation energy of viscous flow with the activation energy of non-radiative decay. In less viscous media, ThT undergoes a very fast twisting process that leads to deactivation from the photoexcited state, and this torsional motion accelerates with increasing temperature. We conclude that, besides the bulk viscosity of the media, the structural heterogeneity of the medium plays a crucial role in guiding the photophysics of ThT in DESs. The analysis of the experimental data was carried out in the temperature range 288 K ≤ T ≤ 333 K. The present article aims to provide insight into DESs as media for studying the photophysical processes of the amyloid-fibril-sensing molecule ThT.
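The activation-energy comparison described above can be sketched numerically: from temperature-dependent lifetimes and quantum yields one obtains the non-radiative (torsional) rate constant, k_nr = 1/τ − k_r with k_r = Φ/τ, and an Arrhenius fit of ln k_nr versus 1/T yields its activation energy. The numbers below are invented for illustration only; they are not the paper's data, merely values of a plausible magnitude for ThT.

```python
# Illustrative Arrhenius analysis of the non-radiative (torsional) decay.
import numpy as np

R = 8.314                                                 # gas constant, J mol^-1 K^-1
T = np.array([288.0, 298.0, 308.0, 318.0, 328.0])         # temperatures, K
tau = np.array([0.90, 0.70, 0.55, 0.44, 0.36]) * 1e-9     # lifetimes, s (hypothetical)
phi = np.array([0.045, 0.035, 0.027, 0.022, 0.018])       # quantum yields (hypothetical)

k_r = phi / tau              # radiative rate constant:      k_r  = Phi / tau
k_nr = 1.0 / tau - k_r       # non-radiative rate constant:  k_nr = 1/tau - k_r

# Arrhenius form  ln k_nr = ln A - Ea / (R T):
# a linear fit of ln k_nr against 1/T gives slope = -Ea / R.
slope, intercept = np.polyfit(1.0 / T, np.log(k_nr), 1)
Ea = -slope * R              # activation energy of non-radiative decay, J/mol
print(f"Ea(non-radiative) ≈ {Ea / 1000:.1f} kJ/mol")
```

With these invented inputs the fit returns an Ea of roughly 18-19 kJ/mol; in the study this quantity is compared against the activation energy of viscous flow of each DES to separate the bulk-viscosity contribution from structural heterogeneity.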

Keywords: deep eutectic solvent, photophysics, Thioflavin T, the torsional rate constant

Procedia PDF Downloads 141