Search results for: features reduction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8286

1806 Rapid Detection of the Etiology of Infection as Bacterial or Viral Using Infrared Spectroscopy of White Blood Cells

Authors: Uraib Sharaha, Guy Beck, Joseph Kapelushnik, Adam H. Agbaria, Itshak Lapidot, Shaul Mordechai, Ahmad Salman, Mahmoud Huleihel

Abstract:

Infectious diseases have placed a significant burden on public health and the economic stability of societies all over the world for centuries. Reliable detection of the causative agent of an infection is not possible based on clinical features alone, since many of these infections share similar symptoms, including fever, sneezing, inflammation, vomiting, diarrhea, and fatigue. Moreover, physicians usually encounter difficulties in distinguishing between viral and bacterial infections based on symptoms. Therefore, there is an ongoing need for sensitive, specific, and rapid methods for identifying the etiology of infection. This intricate issue has perplexed doctors and researchers, since it has serious repercussions. In this study, we evaluated the potential of mid-infrared spectroscopy for rapid and reliable identification of bacterial and viral infections based on simple peripheral blood samples. Fourier transform infrared (FTIR) spectroscopy is considered a successful diagnostic method in the biological and medical fields. Many studies have confirmed the great potential of combining FTIR spectroscopy with machine learning as a powerful diagnostic tool in medicine, since it is a very sensitive method that can detect and monitor molecular and biochemical changes in biological samples. We believe that this method could play a major role in improving the health situation, raising the level of health in the community, and reducing the economic burden on the health sector resulting from the indiscriminate use of antibiotics. We collected peripheral blood samples from 364 young patients, of whom 93 were controls, 126 had bacterial infections, and 145 had viral infections; all were under 18 years of age and had been diagnosed with a fever-producing illness. Our preliminary results showed that it is possible to determine the infectious agent with success rates of 82% for sensitivity and 80% for specificity, based on the white blood cell (WBC) data.
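
To make the classification step concrete, the sketch below shows one common FTIR workflow: principal-component feature reduction followed by a linear classifier, with sensitivity and specificity computed from cross-validated predictions. The spectra are synthetic and the pipeline is an assumption, since the abstract does not name the classifier used.

```python
# Illustrative only: PCA feature reduction + linear discriminant analysis on
# synthetic "spectra". All data, shapes and parameters are hypothetical.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n_wavenumbers = 900                      # e.g. one absorbance point per cm^-1
X_bact = rng.normal(0.0, 1.0, (126, n_wavenumbers)) + 0.4  # 126 bacterial samples
X_vir = rng.normal(0.0, 1.0, (145, n_wavenumbers))         # 145 viral samples
X = np.vstack([X_bact, X_vir])
y = np.array([1] * 126 + [0] * 145)      # 1 = bacterial, 0 = viral

model = make_pipeline(StandardScaler(), PCA(n_components=20),
                      LinearDiscriminantAnalysis())
y_pred = cross_val_predict(model, X, y, cv=5)

tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```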

Keywords: infectious diseases, FTIR spectroscopy, viral infections, bacterial infections

Procedia PDF Downloads 111
1805 Synthesis of TiO₂/Graphene Nanocomposites with Excellent Visible-Light Photocatalytic Activity Based on Chemical Exfoliation Method

Authors: Nhan N. T. Ton, Anh T. N. Dao, Kouichirou Katou, Toshiaki Taniike

Abstract:

Facile electron-hole recombination and a broad band gap are two major drawbacks of titanium dioxide (TiO₂) when applied in visible-light photocatalysis. Hybridization of TiO₂ with graphene is a promising strategy to mitigate these drawbacks. Recently, there have been many reports on the synthesis of TiO₂/graphene nanocomposites, most of which use graphene oxide (GO) as the starting material. However, the reduction of GO introduces a large number of defects into the graphene framework. In addition, the sensitivity of titanium alkoxide to the water that GO usually contains significantly obstructs the uniform and controlled growth of TiO₂ on graphene. Here, we demonstrate a novel technique to synthesize TiO₂/graphene nanocomposites without the use of GO. A graphene dispersion was obtained through the chemical exfoliation of graphite in titanium tetra-n-butoxide with the aid of ultrasonication. The dispersion was directly used for the sol-gel reaction in the presence of different catalysts. A TiO₂/reduced graphene oxide (TiO₂/rGO) nanocomposite prepared from GO by a solvothermal method and the commercial TiO₂-P25 were used as references. It was found that the titanium alkoxide afforded a graphene dispersion of high quality, with a trace amount of defects and few-layer dispersed graphene. Moreover, the sol-gel reaction from this dispersion led to TiO₂/graphene nanocomposites with characteristics promising for visible-light photocatalysis, including: (I) the formation of a TiO₂ nanolayer (thickness ranging from 1 nm to 5 nm) that uniformly and thinly covered the graphene sheets, (II) a trace amount of defects in the graphene framework (low ID/IG ratio: 0.21), (III) a significant extension of the absorption edge into the visible-light region (a remarkable extension of the absorption edge to 578 nm beside the usual edge at 360 nm), and (IV) a dramatic suppression of electron-hole recombination (the lowest photoluminescence intensity compared to the reference samples). These advantages were successfully demonstrated in the photocatalytic decomposition of methylene blue under visible-light irradiation. The TiO₂/graphene nanocomposites exhibited 15 and 5 times higher activity than TiO₂-P25 and the TiO₂/rGO nanocomposite, respectively.

Keywords: chemical exfoliation, photocatalyst, TiO₂/graphene, sol-gel reaction

Procedia PDF Downloads 137
1804 Crop Losses, Produce Storage and Food Security, the Nexus: Attaining Sustainable Maize Production in Nigeria

Authors: Charles Iledun Oyewole, Harira Shuaib

Abstract:

While fulfilling the food security needs of an increasing population such as Nigeria's remains a major global concern, more than one-third of the crop harvested is lost or wasted during harvesting or in postharvest operations. Reducing harvest and postharvest losses, especially in developing countries, could be a sustainable way to increase food availability, eliminate hunger, and improve farmers' livelihoods. Nigeria is one of the countries in sub-Saharan Africa with insufficient food production and a high food import bill, which has had debilitating effects on the country's economy. One of the goals of Nigeria's agricultural development policy is to ensure that the nation produces enough food and becomes less dependent on importation, so as to ensure adequate and affordable food for all. Maize could fill the food gap in Nigeria's effort to beat hunger and food insecurity. Maize is the most important cereal after rice, and its production contributes immensely to food availability on the tables of many Nigerians. Maize grain is a primary source of food for a large percentage of the Nigerian populace; thus, considerable pre- and post-harvest waste of this valuable food constitutes a major agricultural bottleneck, and the reduction of pre- and post-harvest losses is now a common food security strategy. In surveys conducted, as much as 60% of maize output was found to be lost in the field and during storage due to technical inefficiency. Field losses due to rodent damage alone can account for 10%-60% of grain losses, depending on the location. While the use of scientific storage methods can reduce storage losses to below 2%, timely harvesting of the crop can check field losses resulting from rodent damage or pest infestation. A push for increased crop production must be complemented by available and affordable post-harvest technologies that reduce losses on farmers' fields as well as in storage.

Keywords: government policy, maize, population increase, storage, sustainable food production, yield, yield losses

Procedia PDF Downloads 120
1803 Implementation of Quality Function Deployment to Incorporate Customer Value in the Conceptual Design Stage of Construction Projects

Authors: Ayedh Alqahtani

Abstract:

Many construction firms in Saudi Arabia dedicated to building projects agree that the most important factor in the real estate market is the value that they can give to their customers. These firms understand customer value in different ways. Value can be defined as the size of the building project in relation to its cost, the design quality of the materials used in finish work, or other features of the building's rooms, such as the bathroom. Value can also be understood as something suitable for the money the client is investing in the new property. A quality tool is required to support companies in reaching a design solution for the building project and in understanding and managing the customer's needs. The Quality Function Deployment (QFD) method can play this role, since the main difference between QFD and conventional quality management tools is that QFD is a valuable and very flexible design tool that takes the voice of the customer (VOC) into account. Currently, organizations and agencies are seeking models that deal better with uncertainty and that are flexible and easy to use. The primary aim of this research project is to incorporate customer requirements into the conceptual design of construction projects. Towards this goal, QFD is selected due to its capability to integrate design requirements to meet the customer's needs. To develop the QFD, this research focuses on the contribution of the different (significantly weighted) input factors that represent the main variables influencing QFD, and on subsequent analysis of the techniques used to measure them. First, this research will review the literature to determine the current practice of QFD in construction projects. Then, the researcher will review the literature to identify the current customers of residential projects and gather information on customers' requirements for the design of residential buildings. After that, a qualitative survey will be conducted to rank customers' needs and capture the views of stakeholder practitioners on how these needs can affect their satisfaction. Moreover, a qualitative focus group with the members of the design team will be conducted to determine the improvement levels and technical details for the design of residential buildings. Finally, the QFD will be developed to establish the degree of significance of the design solutions.
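
As a concrete illustration of how QFD turns ranked customer needs into design priorities, the sketch below computes the usual House of Quality technical priorities as importance-weighted column sums of the relationship matrix. The needs, design attributes, weights, and 9/3/1 relationship scores are hypothetical, not taken from the study.

```python
# A minimal, hypothetical House of Quality calculation.
import numpy as np

customer_needs = ["spacious rooms", "quality finishes", "low running cost"]
importance = np.array([5, 4, 3])              # e.g. from a ranking survey (1-5)

design_attributes = ["floor area", "finish material grade", "insulation level"]
# Rows: needs; columns: design attributes; 9 = strong, 3 = medium, 1 = weak link.
relationship = np.array([
    [9, 1, 0],
    [1, 9, 3],
    [0, 3, 9],
])

priority = importance @ relationship          # weighted column sums
for attr, score in zip(design_attributes, priority / priority.sum()):
    print(f"{attr}: {score:.0%} of design effort")
```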

Keywords: quality function deployment, construction projects, Saudi Arabia, quality tools

Procedia PDF Downloads 101
1802 Effect of Geometric Imperfections on the Vibration Response of Hexagonal Lattices

Authors: P. Caimmi, E. Bele, A. Abolfathi

Abstract:

Lattice materials are cellular structures composed of a periodic network of beams. They offer high weight-specific mechanical properties and lend themselves to numerous weight-sensitive applications. The periodic internal structure responds to external vibrations through characteristic frequency bandgaps, making these materials suitable for the reduction of noise and vibration. However, deviation from architectural homogeneity, due to, e.g., manufacturing imperfections, has a strong influence on the mechanical properties and vibration response of these materials. In this work, we present results on the influence of geometric imperfections on the vibration response of hexagonal lattices. Three classes of geometrical variables are used: the characteristics of the architecture (relative density, ligament length/cell size ratio), imperfection type (degree of non-periodicity, cracks, hard inclusions), and defect morphology (size, distribution). Test specimens with controlled size and distribution of imperfections are manufactured through selective laser sintering. The Frequency Response Functions (FRFs), in the form of accelerance, are measured, and the modal shapes are captured with a high-speed camera. The finite element method is used to provide insight into the extension of these results to semi-infinite lattices. An updating procedure is conducted to increase the agreement of the numerical simulation results with the experimental measurements; this is achieved by updating the boundary conditions and the material stiffness. Variations in the FRFs of periodic structures due to changes in the relative density of the constituent unit cell are analysed, and the effects of geometric imperfections on the dynamic response of periodic structures are investigated. The findings open up the opportunity to tailor these lattice materials to achieve optimal amplitude attenuation in specific frequency ranges.
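
For readers unfamiliar with accelerance measurement, the sketch below estimates an FRF from force and acceleration records with the standard H1 estimator, H1(f) = Sxy(f)/Sxx(f). The single-mode structure and all signal parameters are synthetic stand-ins for the laser-sintered specimens.

```python
# H1 FRF estimation sketch on a synthetic lightly damped single-mode system.
import numpy as np
from scipy.signal import csd, welch, lfilter

fs = 4096                                   # sampling rate, Hz (assumed)
rng = np.random.default_rng(1)
force = rng.normal(size=fs * 8)             # broadband excitation, 8 s record

# Toy structure: a resonance near 250 Hz (discrete pole at radius r).
f0, r = 250.0, 0.995
a = [1.0, -2 * r * np.cos(2 * np.pi * f0 / fs), r * r]
accel = lfilter([1.0], a, force) + 0.01 * rng.normal(size=force.size)

f, Sxx = welch(force, fs, nperseg=2048)       # input auto-spectrum
_, Sxy = csd(force, accel, fs, nperseg=2048)  # input-output cross-spectrum
H1 = Sxy / Sxx                                # accelerance estimate

print(f"peak accelerance near {f[np.argmax(np.abs(H1))]:.0f} Hz")
```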

Keywords: lattice architectures, geometric imperfections, vibration attenuation, experimental modal analysis

Procedia PDF Downloads 103
1801 Regeneration Nature of Rumex Species Root Fragment as Affected by Desiccation

Authors: Khalid Alshallash

Abstract:

Small fragments of the roots of some Rumex species, including R. obtusifolius and R. crispus, have been found to regenerate readily, contributing to the severity of infestations by these very common, widespread, and difficult-to-control perennial weeds of agricultural crops and grasslands. Their root fragments are usually created during routine agricultural practices. We found that fresh root fragments of both species, containing 65-70% moisture, progressively lose their moisture content when desiccated under controlled growth-room conditions matching summer weather in southeast England, with the greatest reduction occurring in the first 48 hours. The probability of shoot emergence and the time taken for emergence under glasshouse conditions were also affected significantly by desiccation, with R. obtusifolius least affected for up to 48 hours; however, the effects converged after 120 hours. In contrast, R. obtusifolius was significantly slower to emerge after up to 48 hours of desiccation, with the effects again converging after longer periods; R. crispus entirely failed to emerge after 120 hours. The dry weight of emerged shoots was not significantly different between the species until fragments were desiccated for 96 hours, when that of R. obtusifolius was significantly reduced. After 120 hours, R. obtusifolius did not emerge either. In outdoor trials, desiccation for 24 or 48 hours had less effect on emergence when fragments were planted at the soil surface or at up to 10 cm depth than at greater depths. In both species, emergence was significantly lower when desiccated fragments were planted at 15 or 20 cm. The time taken for emergence was not significantly different between the species until fragments were planted at 15 or 20 cm, when R. obtusifolius was slower than R. crispus and was slowed further by increasing desiccation. Similar interactions between increasing soil depth and increasing desiccation were found in the reductions in dry weight, number of tillers, and leaf area, with R. obtusifolius generally, but not exclusively, better able to withstand the more extreme trial conditions. Our findings suggest that infestations of these highly troublesome weeds may be partly controlled by appropriate agricultural practices, notably exposing cut fragments to drying environmental conditions followed by deep burial.

Keywords: regeneration, root fragment, Rumex crispus, Rumex obtusifolius

Procedia PDF Downloads 72
1800 Solid Lipid Nanoparticles of Levamisole Hydrochloride

Authors: Surendra Agrawal, Pravina Gurjar, Supriya Bhide, Ram Gaud

Abstract:

Levamisole hydrochloride is a prominent anticancer drug in the treatment of colon cancer, but it has shown toxic effects due to poor bioavailability and poor cellular uptake by tumor cells. Levamisole is also an unstable drug. Incorporating this molecule in solid lipids may minimize its exposure to the aqueous environment and partly immobilize the drug molecules within the lipid matrix, both of which may protect the encapsulated drug against degradation. The objectives of the study were to enhance bioavailability by sustaining drug release and to reduce the toxicities associated with the therapy. The solubility of the drug was determined in different lipids to select the components of the solid lipid nanoparticles (SLNs). Pseudoternary phase diagrams were created using the aqueous titration method. Formulations were subjected to particle size and stability evaluation to select the final test formulations, which were characterized for average particle size, zeta potential, in-vitro drug release, and percentage transmittance to optimize the final formulation. SLNs of levamisole hydrochloride were prepared by the nanoprecipitation method. Glyceryl behenate (Compritol 888 ATO) was used as the core, with Tween 80 as surfactant and lecithin as co-surfactant in a 1:1 ratio. The entrapment efficiency (EE) was found to be 45.89%. The particle size was in the range of 100-600 nm. The zeta potential of the formulation was -17.0 mV, indicating the stability of the product. The in-vitro release study showed that 66% of the drug was released over 24 hours at pH 7.2, suggesting that the formulation can give controlled action in the intestinal environment. At pH 5.0, it showed 64% release, indicating that the drug can also be released in the acidic environment of tumor cells. In conclusion, the results revealed SLNs to be a promising approach for sustaining drug release so as to increase the bioavailability and cellular uptake of the drug, with a reduction in toxic effects as the dose is reduced through controlled delivery.
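
The entrapment efficiency quoted above is conventionally obtained by assaying the free (unentrapped) drug after separating the nanoparticles; a minimal sketch follows, with masses chosen purely for illustration so that the arithmetic reproduces the reported figure.

```python
# EE% = (total drug - free drug) / total drug * 100
# Both masses below are illustrative assumptions, not the study's assay data.
total_drug_mg = 100.0    # drug added to the formulation (assumed)
free_drug_mg = 54.11     # unentrapped drug found in the supernatant (assumed)

ee_percent = (total_drug_mg - free_drug_mg) / total_drug_mg * 100
print(f"Entrapment efficiency = {ee_percent:.2f}%")   # -> 45.89%
```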

Keywords: SLN, nanoparticulate delivery of levamisole, pharmacy, pharmaceutical sciences

Procedia PDF Downloads 409
1799 Phosphate Use Efficiency in Plants: A GWAS Approach to Identify the Pathways Involved

Authors: Azizah M. Nahari, Peter Doerner

Abstract:

Phosphate (Pi) is one of the essential macronutrients in plant growth and development, and it plays a central role in metabolic processes in plants, particularly photosynthesis and respiration. Limitation of crop productivity by Pi is widespread and is likely to increase in the future. Application of Pi fertilizers has improved soil Pi fertility and crop production; however, it has also caused environmental damage. Therefore, in order to reduce dependence on unsustainable Pi fertilizers, a better understanding of phosphate use efficiency (PUE) is required for engineering nutrient-efficient crop plants. Enhanced Pi efficiency can be achieved through improved productivity per unit of Pi taken up. We aim to identify, by using association mapping, general features of the most important loci that contribute to increased PUE, allowing us to delineate the physiological pathways involved in defining this trait in the model plant Arabidopsis. As PUE is in part determined by the efficiency of uptake, we designed a hydroponic system to avoid confounding effects due to differences in root system architecture leading to differences in Pi uptake. In this system, 18 parental lines and 217 lines of the MAGIC population (a Multiparent Advanced Generation Inter-Cross) were grown under high and low Pi availability conditions. The results revealed a large variation in PUE among the parental lines, indicating that the MAGIC population is well suited to identifying PUE loci and pathways. Two of the 18 parental lines had the highest PUE under low Pi, while some lines responded strongly and increased PUE with increased Pi. Examination of the 217 MAGIC lines also revealed considerable variance in PUE. A general feature was the trend of most lines to exhibit higher PUE when grown under low Pi conditions. Association mapping is currently in progress, but initial observations indicate that a wide variety of physiological processes influence PUE in Arabidopsis. The combination of hydroponic growth methods and genome-wide association mapping is a powerful tool for identifying the physiological pathways underpinning complex quantitative traits in plants.

Keywords: hydroponic system growth, phosphate use efficiency (PUE), genome-wide association mapping, MAGIC population

Procedia PDF Downloads 298
1798 Scheduling Building Projects: The Chronographical Modeling Concept

Authors: Adel Francis

Abstract:

Most scheduling methods and software apply critical path logic. This logic schedules activities, applies constraints between these activities, and tries to optimize and level the allocated resources. The extensive use of this logic produces a complex and erroneous network that is hard to present, follow, and update. Planning and managing building projects should tackle the coordination of works and the management of limited spaces, traffic, and supplies. Activities cannot be performed without the resources available, and resources cannot be used beyond the capacity of the workplaces; otherwise, workspace congestion will negatively affect the flow of work. The objective of space planning is to link the spatial and temporal aspects, promote efficient use of the site, define optimal site occupancy rates, and ensure suitable rotation of the workforce among the different spaces. Chronographic scheduling modelling belongs to this category and models construction operations as well as their processes, logical constraints, and association and organizational models, which help to better illustrate the schedule information using multiple flexible approaches. The model defines three categories of areas (punctual, surface, and linear) and five different layers (space creation, systems, closing off space, finishing, and reduction of space). Chronographical modelling is a more complete communication method, having the ability to alternate from one visual approach to another through the manipulation of graphics via a set of parameters and their associated values. Each individual approach can help to schedule a certain project type or specialty. Visual communication can also be improved through layering, sheeting, juxtaposition, alterations, and permutations, allowing for groupings, hierarchies, and classification of project information. In this way, the graphic representation becomes a living, transformable image, showing valuable information in a clear and comprehensible manner, simplifying site management while utilizing the visual space as efficiently as possible.
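
For contrast with the chronographic approach, the critical path logic criticized above reduces to a forward/backward pass over the activity network; the toy sketch below makes that explicit. The activities and durations are invented.

```python
# Minimal CPM forward/backward pass on a hypothetical four-activity network.
durations = {"A": 3, "B": 2, "C": 4, "D": 2}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

# Forward pass: earliest start (es) and earliest finish (ef).
es, ef = {}, {}
for act in ["A", "B", "C", "D"]:                     # topological order
    es[act] = max((ef[p] for p in predecessors[act]), default=0)
    ef[act] = es[act] + durations[act]

# Backward pass: latest start (ls) and latest finish (lf).
project_end = max(ef.values())
ls, lf = {}, {}
for act in ["D", "C", "B", "A"]:
    succ = [s for s, preds in predecessors.items() if act in preds]
    lf[act] = min((ls[s] for s in succ), default=project_end)
    ls[act] = lf[act] - durations[act]

critical = [a for a in durations if es[a] == ls[a]]  # zero total float
print("critical path:", " -> ".join(critical))       # A -> C -> D
```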

Keywords: building projects, chronographic modelling, CPM, critical path, precedence diagram, scheduling

Procedia PDF Downloads 130
1797 Teaching Business Process Management using IBM’s INNOV8 BPM Simulation Game

Authors: Hossam Ali-Hassan, Michael Bliemel

Abstract:

This poster reflects upon our experiences using INNOV8, IBM's Business Process Management (BPM) simulation game, in online MBA and undergraduate MIS classes over a period of two years. The game is designed to give both business and information technology players a better understanding of how effective BPM impacts an entire business ecosystem. The game includes three different scenarios: Smarter Traffic, in which players evaluate existing traffic patterns and re-route traffic based on incoming metrics; Smarter Customer Service, in which players develop more efficient ways to respond to customers in a call centre environment; and Smarter Supply Chains, in which players balance supply and demand and reduce environmental impact in a traditional supply chain model. We use the game as an experiential learning tool, where students act as managers making real-time changes to business processes to meet changing business demands and environments. The students learn how information technology (IT) and information systems (IS) can be used to intelligently solve different problems, and how computer simulations can be used to test different scenarios or models of business decisions without having to make the potentially costly and/or disruptive changes to actual business processes. Moreover, when students play the three different scenarios, they quickly see how practical process improvements can help meet profitability, customer satisfaction, and environmental goals while addressing real problems faced by municipalities and businesses today. After spending approximately two hours in the game, students reflect on their experience and apply several BPM principles presented in their textbook through a structured set of assignment questions. For each final scenario, students submit a screenshot of their solution followed by one paragraph explaining what criteria they were trying to optimize and why they picked their input variables. In this poster, we outline the course and module learning objectives to place the use of the game into context. We illustrate key features of the INNOV8 simulation game and describe how we used them to reinforce theoretical concepts. The poster also illustrates examples from the simulation, the assignment, and the learning outcomes.

Keywords: experiential learning, business process management, BPM, INNOV8, simulation, game

Procedia PDF Downloads 313
1796 Effect of a High Dose of Vitamin C in Reducing Serum Uric Acid: A Comparative Study between Hyperuricemic and Gouty Patients in Jeddah

Authors: Firas S. Azzeh

Abstract:

Background: Vitamin C is a water-soluble vitamin that is necessary for normal growth and development. Hyperuricemia is commonly detected in subjects with abnormal purine metabolism. Prolonged hyperuricemia is an important risk factor for joint damage and is often associated with gout. Objectives: To compare the effect of a high dose of vitamin C supplements on uric acid levels between hyperuricemic and gouty patients in Jeddah, Saudi Arabia, as well as to determine the effect of vitamin C on serum creatinine level and glomerular filtration rate (GFR). Subjects and Methods: This comparative study started in April 2013 and lasted until March 2014. A convenience sample of 30 adults was recruited from Doctor Abdulrahman Taha Bakhsh Hospital in Jeddah (Saudi Arabia). Eligible persons were assigned to two study groups: a hyperuricemic group (n=15) and a gouty group (n=15). Subjects were accepted into the study after completing the consent form. Each participant consumed 500 mg/day of vitamin C as chewable tablets. All participants were followed up for 2 months. Twelve-hour fasting blood samples were collected three times from each participant during the study period: at the beginning and after each month of the study. Uric acid, serum creatinine, and GFR were measured. Results: In the gouty group, uric acid increased insignificantly after 2 months, by about +0.3 mg/dl. On the other hand, the hyperuricemic group showed a decrease (P ≤ 0.05) in uric acid after 2 months, by about -0.78 mg/dl. Serum creatinine levels decreased insignificantly for all participants during the study period, which led to an insignificant increase in GFR for all participants. Conclusion: Supplementation with 500 mg/day of vitamin C for 2 months significantly reduced serum uric acid in hyperuricemic patients and insignificantly increased serum uric acid in gouty patients. The ineffectiveness of vitamin C supplements in patients with established gout could be related to a number of potential reasons.

Keywords: vitamin C, hyperuricemia, gout, creatinine, GFR

Procedia PDF Downloads 364
1795 A Longitudinal Case Study of Greek as a Second Language

Authors: M. Vassou, A. Karasimos

Abstract:

A primary concern in the field of Second Language Acquisition (SLA) research is to determine the innate mechanisms of second language learning and acquisition through the systematic study of a learner's interlanguage. Errors emerge while a learner attempts to communicate using the target language, and they can be seen either as the observable linguistic product of the latent cognitive and language processes of mental representations or as an indispensable learning mechanism. Therefore, the study of the learner's erroneous forms may depict the various strategies and mechanisms that take place during the language acquisition process, resulting in deviations from the target-language norms and difficulties in communication. Mapping the erroneous utterances of a late adult learner in the process of acquiring Greek as a second language constitutes one of the main aims of this study. For our research purposes, we created an error-tagged learner corpus composed of the participant's written texts produced throughout a period of 4 years of instructed language acquisition. Error analysis and interlanguage theory constitute the methodological and theoretical framework, respectively. The research questions pertain to the learner's most frequent errors per linguistic category and per year, as well as his choices concerning the Greek article system. According to the quantitative analysis of the data, the most frequent errors are observed in the categories of the stress system and syntax, whereas a significant fluctuation and/or gradual reduction throughout the 4 years of instructed acquisition indicates the emergence of developmental stages. The findings with regard to article usage bespeak fossilization of erroneous structures in certain contexts. In general, our results point towards the existence and further development of an established learner (inter-)language system governed not only by mother-tongue and target-language influences but also by the learner's assumptions and set of rules resulting from a complex cognitive process. It is expected that this study will contribute not only to knowledge in the field of Greek as a second language and SLA generally but also provide insight into the cognitive mechanisms and strategies developed by multilingual learners in late adulthood.

Keywords: Greek as a second language, error analysis, interlanguage, late adult learner

Procedia PDF Downloads 108
1794 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood

Authors: Randa Alharbi, Vladislav Vyshemirsky

Abstract:

Systems biology is an important field of science which focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their functions, and their interactions. A well-designed model requires selecting a suitable mechanism which can capture the main features of the system, defining its essential components, and representing an appropriate law that can define the interactions between those components. Complex biological systems exhibit stochastic behaviour; thus, probabilistic models are suitable for describing and analysing them. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model: it describes the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time can be obtained from the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet, inference in such a complex system is challenging, as it requires the evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation (ABC) is a common approach for tackling such inference, which relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, which uses sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper, we discuss the efficiency and possible practical issues of each method, taking their computational time into account. We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
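
As a minimal illustration of likelihood-free inference for a CTMC, the sketch below runs ABC rejection sampling on a simple birth-death process simulated with the Gillespie algorithm; the Repressilator itself has more species and reactions, and all rates, priors, and tolerances here are illustrative assumptions.

```python
# ABC rejection sampling for a birth-death CTMC (a stand-in model).
import numpy as np

rng = np.random.default_rng(0)

def gillespie_birth_death(birth, death, x0=10, t_end=10.0):
    """Simulate one CTMC trajectory; return the population at t_end."""
    t, x = 0.0, x0
    while True:
        rates = np.array([birth, death * x])
        total = rates.sum()
        if total == 0.0:
            return x
        t += rng.exponential(1.0 / total)        # exponential waiting time
        if t >= t_end:
            return x
        x += 1 if rng.random() < rates[0] / total else -1

# "Observed" data, generated with a death rate we then try to recover.
true_death = 0.3
observed = np.array([gillespie_birth_death(5.0, true_death) for _ in range(20)])

accepted = []
for _ in range(1000):
    theta = rng.uniform(0.1, 1.0)                # prior on the death rate
    sim = np.array([gillespie_birth_death(5.0, theta) for _ in range(20)])
    # ABC: accept if the summary statistics (here: the means) are close.
    if abs(sim.mean() - observed.mean()) < 1.5:  # tolerance epsilon
        accepted.append(theta)

print(f"ABC posterior mean ~ {np.mean(accepted):.2f} (true {true_death})")
```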

Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)

Procedia PDF Downloads 185
1793 Corpus-Based Neural Machine Translation: An Empirical Study of a Multilingual Corpus for Machine Translation of Opaque Idioms on the Cloud AutoML Platform

Authors: Khadija Refouh

Abstract:

Culture-bound expressions have been a bottleneck for natural language processing (NLP) and comprehension, especially in the case of machine translation (MT). In the last decade, the field of machine translation has greatly advanced. Neural machine translation (NMT) has recently achieved considerable improvement in translation quality, outperforming previous traditional translation systems in many language pairs. NMT applies artificial intelligence (AI) and deep neural networks to language processing. Despite this development, serious challenges remain when NMT translates culture-bound expressions, especially for low-resource language pairs such as Arabic-English and Arabic-French, which is not the case for well-established language pairs such as English-French. Machine translation of opaque idioms from English into French is likely to be more accurate than translating them from English into Arabic. For example, the Google Translate application translated the sentence “What a bad weather! It runs cats and dogs.” to “يا له من طقس سيء! تمطر القطط والكلاب” in the target language Arabic, which is an inaccurate literal translation, whereas the translation of the same sentence into the target language French was “Quel mauvais temps! Il pleut des cordes.”, where the application used the accurate corresponding French idiom. This paper aims to perform NMT experiments towards better translation of opaque idioms using a high-quality, clean multilingual corpus. This corpus will be collected analytically from human-generated idiom translations. AutoML Translation, a Google neural machine translation platform, is used as a custom translation model to improve the translation of opaque idioms. The automatic evaluation of the custom model will be compared to that of Google NMT using the Bilingual Evaluation Understudy (BLEU) score. BLEU is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Human evaluation is integrated to test the reliability of the BLEU score. The researcher will examine syntactical, lexical, and semantic features using Halliday's functional theory.
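
The sketch below shows how a corpus-level BLEU score of the kind described above can be computed with NLTK; the tokenized idiom translations are invented examples, not the study's data.

```python
# Corpus-level BLEU with smoothing (avoids zero n-gram counts on short texts).
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = [
    [["il", "pleut", "des", "cordes"]],           # human reference(s) per sentence
    [["ce", "n'est", "pas", "ma", "tasse", "de", "the"]],
]
hypotheses = [
    ["il", "pleut", "des", "cordes"],             # candidate MT outputs
    ["ce", "n'est", "pas", "mon", "truc"],
]

smooth = SmoothingFunction().method1
score = corpus_bleu(references, hypotheses, smoothing_function=smooth)
print(f"BLEU = {score:.3f}")
```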

Keywords: multilingual corpora, natural language processing (NLP), neural machine translation (NMT), opaque idioms

Procedia PDF Downloads 114
1792 Corporate Social Responsibility (CSR) and Energy Efficiency: Empirical Evidence from the Manufacturing Sector of India

Authors: Baikunthanath Sahoo, Santosh Kumar Sahu, Krishna Malakar

Abstract:

With the growing emphasis on global environmental sustainability and green business management, business research has moved towards Corporate Social Responsibility (CSR). In addition to international and national treaties, businesses have started pursuing environmental protection and energy efficiency through CSR as part of business strategy, in response to climate change. Considering India's ambitious emission reduction targets and rapid economic development, this study explores the effect of CSR on the energy efficiency management of manufacturing firms in India. Using firm-level data, a panel fixed-effects model shows that the CSR dummy variable negatively influences energy intensity; that is, CSR firms are, technically, more energy efficient. The results demonstrate that, in the presence of CSR, all the production economics variables are significant. The results also show that environmental expenditure does not in itself improve energy efficiency, possibly because very few firms are motivated to make such expenditure and it is not common to all sectors. The interaction-effect model confirms that, without considering the CSR dummy as an intervening variable, only manufacturers of chemicals and chemical products and manufacturers of pharmaceutical, medicinal chemical, and botanical products have low energy intensity, but after considering CSR in their business practices, firms in all six sub-sectors become energy efficient. The empirical results also validate that firms continuously engaged in CSR activities are highly energy efficient. This is an important motivational factor for firms to become economically and environmentally sustainable in the corporate world. This analysis would help business practitioners learn how to manage today's profitability and tomorrow's sustainability to achieve a comparative advantage in an emerging market economy. The paper concludes that reducing energy consumption as part of firms' social responsibility to care for the environment will require the collaborative efforts of business, society, and policy bodies.
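
A hedged sketch of the kind of panel fixed-effects specification described above is given below, using firm and year dummies in statsmodels; the variable names, controls, and synthetic data are assumptions, since the abstract does not list the exact regressors.

```python
# Panel fixed effects via dummies, with firm-clustered standard errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_firms, n_years = 50, 6
n = n_firms * n_years
df = pd.DataFrame({
    "firm_id": np.repeat(np.arange(n_firms), n_years),
    "year": np.tile(np.arange(2010, 2010 + n_years), n_firms),
    "csr": rng.integers(0, 2, n),        # CSR engagement varies across firm-years
    "capital": rng.normal(size=n),
    "labour": rng.normal(size=n),
})
df["energy_intensity"] = 1.0 - 0.3 * df["csr"] + rng.normal(0, 0.5, n)

res = smf.ols(
    "energy_intensity ~ csr + capital + labour + C(firm_id) + C(year)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})

# A negative, significant `csr` coefficient mirrors the paper's finding that
# CSR firms have lower energy intensity (i.e., are more energy efficient).
print(res.params["csr"], res.pvalues["csr"])
```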

Keywords: CSR, energy efficiency, Indian manufacturing sector, business strategy

Procedia PDF Downloads 62
1791 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam

Authors: Sahand Golmohammadi, Sana Hosseini Shirazi

Abstract:

Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, various rock engineering classification methods are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and for determining support systems of underground structures in rock, including tunnels. In this method, six main parameters of the rock mass are required: the rock quality designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water parameter (Jw), and stress reduction factor (SRF). In this regard, in order to achieve a reasonable and optimal design, identifying the parameters most effective for the stability of the mentioned structures is one of the most important goals and most necessary actions in rock engineering. Therefore, it is necessary to study the relationships between the parameters of a system, how they interact with each other, and, ultimately, the whole system. In this research, we attempted to determine the most effective parameters (key parameters) among the six rock mass parameters of the Q-system using the rock engineering system (RES) method, in order to improve the relationships between the parameters in the calculation of the Q value. The RES is, in fact, a method by which one can determine the degree of cause and effect of a system's parameters by constructing an interaction matrix. In this research, the geomechanical data collected from the water conveyor tunnel of Azad Dam were used to construct the interaction matrix of the Q-system. For this purpose, instead of using the conventional coding methods, which are always accompanied by defects such as uncertainty, the Q-system interaction matrix is coded using a technique that is, in essence, a statistical analysis of the data: determining the correlation coefficients between the parameters. In this way, the effect of each parameter on the system is evaluated with greater certainty. The results of this study show that the interaction matrix thus formed provides a reasonable estimate of the effective parameters in the Q-system. Among the six parameters, SRF and Jr have the maximum and minimum impact on the system, respectively, while RQD and Jw are the most and least impacted by the system, respectively. Therefore, by developing this method, we can obtain a more accurate rock mass classification relation by weighting the required parameters in the Q-system.
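
The coding idea described above can be sketched in a few lines: the off-diagonal interaction-matrix entries are taken as correlation coefficients between parameters, and the RES cause (C) and effect (E) coordinates are the row and column sums. The parameter data below are random placeholders for the tunnel measurements.

```python
# Correlation-coded RES interaction matrix sketch for the six Q-parameters.
import numpy as np

params = ["RQD", "Jn", "Jr", "Ja", "Jw", "SRF"]
rng = np.random.default_rng(2)
data = rng.normal(size=(80, 6))          # 80 hypothetical survey stations

corr = np.abs(np.corrcoef(data, rowvar=False))
np.fill_diagonal(corr, 0.0)              # a parameter does not interact with itself

cause = corr.sum(axis=1)                 # influence of parameter i on the system
effect = corr.sum(axis=0)                # influence of the system on parameter i
for name, c, e in zip(params, cause, effect):
    print(f"{name}: C = {c:.2f}, E = {e:.2f}, C+E = {c + e:.2f}")
```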

Keywords: Q-system, rock engineering system, statistical analysis, rock mass, tunnel

Procedia PDF Downloads 46
1790 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome

Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler

Abstract:

Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to their multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting the 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the Veterans Affairs Informatics and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created based on the patient history data in the 2-year window prior to the index MCVP, and a temporal image was created from these variables for each individual patient. To generate the explanation for the DNN model, we defined a new concept called the impact score, based on the impact of the presence/value of clinical conditions on the predicted outcome. Like the (log) odds ratios reported by a logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman's rho = 0.74), which helped validate our explanation.
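
The abstract does not give the exact formula for the impact score, so the sketch below illustrates one plausible occlusion-style variant (toggle a condition and average the change in predicted risk), together with the Spearman check against logistic-regression coefficients, all on synthetic data.

```python
# Occlusion-style "impact scores" for a black-box classifier, validated
# against LR coefficients via Spearman correlation. Entirely illustrative.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(2000, 10)).astype(float)   # 10 binary conditions
beta = rng.normal(0, 1.5, 10)
y = (rng.random(2000) < 1 / (1 + np.exp(-(X @ beta - 1)))).astype(int)

dnn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
lr = LogisticRegression().fit(X, y)

impact = []
for j in range(10):                       # toggle condition j for everyone
    X_on, X_off = X.copy(), X.copy()
    X_on[:, j], X_off[:, j] = 1.0, 0.0
    impact.append(np.mean(dnn.predict_proba(X_on)[:, 1]
                          - dnn.predict_proba(X_off)[:, 1]))

rho, _ = spearmanr(impact, lr.coef_.ravel())
print(f"Spearman rho between impact scores and LR coefficients: {rho:.2f}")
```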

Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model

Procedia PDF Downloads 135
1789 Regional Flood Frequency Analysis in Narmada Basin: A Case Study

Authors: Ankit Shah, R. K. Shrivastava

Abstract:

Floods and droughts are two main features of hydrology which affect human life. Floods are natural disasters which cause millions of rupees' worth of damage each year in India and across the world. Floods cause destruction in the form of loss of life and property. An accurate estimate of the flood damage potential is a key element of an effective, nationwide flood damage abatement program. Also, the increase in demand for water due to growth in population, industry, and agriculture has shown that, though water is a renewable resource, it cannot be taken for granted. We have to optimize the use of water according to circumstances and conditions, and we need to harness it, which can be done through the construction of hydraulic structures. For the safe and proper functioning of hydraulic structures, we need to predict flood magnitudes and their impact. Hydraulic structures play a key role in harnessing and optimizing flood water, which in turn results in the safe and maximal use of the water available. Hydraulic structures are mainly constructed at ungauged sites. There are two methods by which we can estimate floods, viz. the generation of unit hydrographs and flood frequency analysis. In this study, regional flood frequency analysis has been employed. There are many methods for regional flood frequency analysis, viz. the index flood method, Natural Environment Research Council (NERC) methods, the multiple regression method, etc. However, none of these methods can be considered universal for every situation and location. The Narmada basin is located in Central India. It is drained by many tributaries, most of which are ungauged; therefore, it is very difficult to estimate floods on these tributaries and in the main river. In the present study, artificial neural networks (ANNs) and the multiple regression method are used for the determination of regional flood frequency. The annual peak flood data of 20 gauging sites in the Narmada basin are used to determine the regional flood relationships. The homogeneity of the considered sites is determined using the index flood method. The flood relationships obtained by the two methods are compared with each other, and it is found that the ANN is more reliable than the multiple regression method for the present study area.
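
The sketch below illustrates the ANN-versus-multiple-regression comparison workflow on synthetic pooled peak-flow data; the catchment descriptors and the flood relation are assumptions, and on this toy data either model may come out ahead.

```python
# ANN (MLP) vs. multiple linear regression for regional flood estimation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
n = 200                                   # pooled annual peaks across sites
area = rng.uniform(50, 5000, n)           # catchment area, km^2 (assumed)
rain = rng.uniform(700, 1600, n)          # mean annual rainfall, mm (assumed)
peak = 0.05 * area**0.8 * (rain / 1000) ** 1.2 * rng.lognormal(0, 0.2, n)

X = np.column_stack([np.log(area), np.log(rain)])
y = np.log(peak)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000,
                   random_state=0).fit(X_tr, y_tr)

print(f"MLR R^2 = {r2_score(y_te, mlr.predict(X_te)):.3f}, "
      f"ANN R^2 = {r2_score(y_te, ann.predict(X_te)):.3f}")
```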

Keywords: artificial neural network, index flood method, multi layer perceptrons, multiple regression, Narmada basin, regional flood frequency

Procedia PDF Downloads 392
1788 A Dataset of Program Educational Objectives Mapped to ABET Outcomes: Data Cleansing, Exploratory Data Analysis and Modeling

Authors: Addin Osman, Anwar Ali Yahya, Mohammed Basit Kamal

Abstract:

Datasets or collections are becoming important assets in themselves, and they can now be accepted as a primary intellectual output of research. The quality and usefulness of a dataset depend mainly on the context under which it has been collected, processed, analyzed, validated, and interpreted. This paper presents a collection of program educational objectives mapped to student outcomes, collected from self-study reports prepared by 32 engineering programs accredited by ABET. The manual mapping (classification) of these data is a notoriously tedious, time-consuming process; in addition, it requires experts in the area, who are mostly unavailable. The operational settings under which the collection was produced are described. The collection has been cleansed and preprocessed, some features have been selected, and preliminary exploratory data analysis has been performed so as to illustrate the properties and usefulness of the collection. Finally, the collection has been benchmarked using nine of the most widely used supervised multi-label classification techniques (Binary Relevance, Label Powerset, Classifier Chains, Pruned Sets, Random k-label sets, Ensemble of Classifier Chains, Ensemble of Pruned Sets, Multi-Label k-Nearest Neighbors, and Back-Propagation Multi-Label Learning). The techniques have been compared using five well-known measures, including Accuracy, Hamming Loss, Micro-F, and Macro-F. The Ensemble of Classifier Chains and the Ensemble of Pruned Sets achieved encouraging performance compared to the other multi-label classification methods tested, while the Classifier Chains method showed the worst performance. To recap, the benchmark has achieved promising results by utilizing the preliminary exploratory data analysis performed on the collection, proposing new trends for research, and providing a baseline for future studies.
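
As an illustration of the benchmark setup, the sketch below runs the Binary Relevance baseline (one independent binary classifier per ABET outcome) and reports exact-match accuracy, Hamming loss, and micro/macro-F1 with scikit-learn; the feature and label matrices are synthetic stand-ins for the objective texts.

```python
# Binary Relevance baseline and multi-label metrics on synthetic data.
import numpy as np
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, hamming_loss, f1_score

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 50))            # e.g. TF-IDF features of objectives
Y = (X[:, :11] + rng.normal(0, 1, (300, 11)) > 0).astype(int)  # 11 outcomes

# Binary Relevance = one independent binary classifier per label.
br = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
Y_pred = br.predict(X)

print(f"exact-match accuracy = {accuracy_score(Y, Y_pred):.3f}")
print(f"Hamming loss         = {hamming_loss(Y, Y_pred):.3f}")
print(f"micro-F1             = {f1_score(Y, Y_pred, average='micro'):.3f}")
print(f"macro-F1             = {f1_score(Y, Y_pred, average='macro'):.3f}")
```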

Keywords: ABET, accreditation, benchmark collection, machine learning, program educational objectives, student outcomes, supervised multi-label classification, text mining

Procedia PDF Downloads 147
1787 Subtitling in the Classroom: Combining Language Mediation, ICT and Audiovisual Material

Authors: Rossella Resi

Abstract:

This paper describes a project carried out in an Italian school with English-learning pupils, combining three didactic tools which are attested to be relevant for the success of young learners' language curricula: the use of technology, intralingual and interlingual mediation (according to the CEFR), and the cultural dimension. The aim of this project was to test a technological, hands-on translation activity like subtitling in a formal teaching context and to exploit its potential as a motivational tool for developing listening, writing, translation, and cross-cultural skills among language learners. The activities proposed involved the use of professional subtitling software called Aegisub and culture-specific films. The workshop was optional, so motivation was entirely based on the pleasure of engaging in the use of a realistic subtitling program and on the challenge of meeting the constraints that a real-life work situation might involve. Twelve pupils aged between 16 and 18 attended the afternoon workshop. The workshop was organized in three parts. (i) An introduction, in which the learners were introduced to the concept and constraints of subtitling and provided with a few basic rules on spotting and segmentation; during this session, learners also had time to familiarize themselves with the main software features. (ii) Three subtitling activities carried out in plenum or in groups: in the first activity, the learners experienced the technical dimensions of subtitling and were provided with a short video segment together with its transcription to be segmented and time-spotted; the second activity also involved oral comprehension, as learners had to understand and transcribe a video segment before subtitling it; and the third activity embedded the translation of a provided transcription, including segmentation and spotting of subtitles. (iii) The workshop ended with a small final project, at which point learners were able to complete a short subtitling assignment (transcription, translation, segmenting, and spotting) on their own with a similar video interview. The results of these assignments exceeded expectations, since the learners were highly motivated by the authentic and original nature of the assignment. The subtitled videos were evaluated and watched in the regular classroom together with the other students who did not take part in the workshop.
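
To make "spotting" and "segmentation" concrete for readers who have not used subtitling software, the sketch below emits cues in the simple SRT layout (Aegisub natively uses the richer .ass format); the timings and text are invented.

```python
# Emit subtitle cues in SRT format: index, in/out timecodes, segmented text.
def srt_timestamp(seconds: float) -> str:
    h, rem = divmod(int(seconds * 1000), 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

cues = [  # (start s, end s, text): one readable segment per cue
    (1.2, 4.0, "Welcome to the workshop."),
    (4.4, 7.9, "Today we will subtitle a short interview."),
]
for i, (start, end, text) in enumerate(cues, 1):
    print(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
```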

Keywords: ICT, L2, language learning, language mediation, subtitling

Procedia PDF Downloads 393
1786 Modelling and Simulation of Natural Gas-Fired Power Plant Integrated to a CO2 Capture Plant

Authors: Ebuwa Osagie, Chet Biliyok, Yeung Hoi

Abstract:

The regeneration energy requirement, and ways to reduce it, is the main focus of most current CO2 capture research, and the post-combustion carbon capture (PCC) option has been identified as the most suitable for natural gas-fired power plants. From current research and development (R&D) activities worldwide, two main areas are being examined in order to reduce the regeneration energy requirement of amine-based PCC, namely: (a) development of new solvents with better overall performance than 30 wt% aqueous monoethanolamine (MEA) solution, which is considered the baseline solvent for solvent-based PCC, and (b) integration of the PCC plant with the power plant. In scaling up a PCC pilot plant to the size required for a commercial-scale natural gas-fired power plant, process modelling and simulation are essential. In this work, an integrated process made up of a 482 MWe natural gas-fired power plant and an MEA-based PCC plant, with four absorber columns and a single stripper column, has been developed, validated, modelled, and simulated with Aspen Plus® V8.4. The gas turbine, the heat recovery steam generator, and the steam cycle were modelled based on a 2010 US DOE report, while the MEA-based PCC plant was modelled as a rate-based process. The scaling of the amine plant was performed using a rate-based calculation, in preference to the equilibrium-based approach, for 90% CO2 capture. The power plant was integrated with the PCC plant in three ways: (i) the flue gas stream from the power plant is divided equally into four streams, each fed into one of the four absorbers in the PCC plant; (ii) steam drawn off from the IP/LP cross-over pipe in the steam cycle of the power plant is used to regenerate the solvent in the reboiler; and (iii) condensate is returned from the reboiler to the power plant. The integration of the PCC plant with the NGCC plant reduced the power plant output by 73.56 MWe and the net efficiency of the integrated system by 7.3 percentage points. A secondary aim of this study is the parametric studies performed to assess the impacts of the natural gas on the overall performance of the integrated process, achieved through investigation of the capture efficiencies.
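
A back-of-envelope check of the reported integration penalty is sketched below; the fuel heat input is not given in the abstract, so it is inferred from an assumed baseline efficiency, and the result only approximates the reported 7.3 percentage points.

```python
# Efficiency penalty from a fixed fuel input and a reduced power output.
p_base_mwe = 482.0            # baseline net power output, from the abstract
p_loss_mwe = 73.56            # output reduction due to PCC integration
eta_base = 0.50               # assumed baseline NGCC net efficiency (illustrative)

q_fuel_mw = p_base_mwe / eta_base                    # implied fuel heat input
eta_integrated = (p_base_mwe - p_loss_mwe) / q_fuel_mw
penalty_points = (eta_base - eta_integrated) * 100
print(f"integrated efficiency ~ {eta_integrated:.1%}; "
      f"penalty ~ {penalty_points:.1f} points (reported: 7.3)")
```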

Keywords: natural gas-fired, power plant, MEA, CO2 capture, modelling, simulation

Procedia PDF Downloads 420
1785 Assessing the Impact of Quinoa Cultivation Adopted to Produce a Secure Food Crop and Poverty Reduction by Farmers in Rural Pakistan

Authors: Ejaz Ashraf, Raheel Babar, Muhammad Yaseen, Hafiz Khurram Shurjeel, Nosheen Fatima

Abstract:

The main purpose of this study was to assess farmers' adoption of quinoa cultivation after they had been taught through the training and visit extension approach. In the 21st century, population structure, climate, food requirements, and people's eating habits are changing rapidly. In this scenario, farmers must play a key role in sustainable crop development and production through the adoption of new crops, which may also help to overcome food insecurity and reduce poverty in rural areas. Quinoa cultivation in Pakistan is at an early stage, and there is a need to raise awareness among farmers to grow the crop. In mid-2015, a training and visit extension approach was used to raise awareness and convince farmers to grow quinoa in the area. During the training and visit extension program, 80 farmers were randomly selected for training in quinoa cultivation. Later on, these farmers trained 60 more farmers living in their neighborhoods. After six months, a survey was conducted with all 140 farmers to assess the impact of the training and visit program on the respondents' adoption of the quinoa crop. The survey instrument was developed with the help of a literature review and experts on the crop. The validity and reliability of the instrument were checked before full data collection. The data were analyzed using SPSS. Multiple regression analysis was used for the interpretation of the survey results, which indicated that factors like information/training and changes in agronomic and plant protection practices play a key role in the respondents' adoption of quinoa cultivation. In addition, the model explains more than 50% of the variation in the respondents' adoption level. It is concluded that farmers need timely information for improved knowledge of agronomic and plant protection practices in order to adopt cultivation of the quinoa crop in the area.

Keywords: farmers, quinoa, adoption, contact, training and visit

Procedia PDF Downloads 333
1784 Experimental Investigation on the Effect of Cross Flow on Discharge Coefficient of an Orifice

Authors: Mathew Saxon A, Aneeh Rajan, Sajeev P

Abstract:

Many fluid flow applications employ different types of orifices to control the flow rate or to reduce the pressure. Discharge coefficients generally vary from 0.6 to 0.95, depending on the type of orifice. Tabulated values of the discharge coefficients of various types of orifices are available and can be used in most common applications. However, the upstream and downstream flow conditions of an orifice are hardly considered while choosing its discharge coefficient, even though the literature shows that the discharge coefficient can be affected by the presence of cross flow. Cross flow is defined as the condition wherein a fluid is injected nearly perpendicular to a flowing fluid. Most researchers have worked on water being injected into a cross flow of water. The present work deals with water-to-gas systems, in which water is injected in a normal direction into a flowing stream of gas. The test article used in the current work, called a thermal regulator, is used in a liquid rocket engine to reduce the temperature of hot gas tapped from the gas generator by injecting water into it, so that cooler gas can be supplied to the turbine. In a thermal regulator, water is injected through an orifice in a normal direction into the hot gas stream, but the injection orifice had been calibrated under backpressure, by maintaining a stagnant gas medium downstream. The motivation for the present study arose from the observation of a lower Cd of the orifice in flight compared to the calibrated Cd. A systematic experimental investigation is carried out in this paper to study the effect of cross flow on the discharge coefficient of an orifice in a water-to-gas system. The study reveals that there is an appreciable reduction in the discharge coefficient with cross flow compared to that without cross flow. It is found that the discharge coefficient greatly depends on the ratio of the momentum of the injected water to the momentum of the gas cross flow. The effective discharge coefficients of different orifices were normalized using the discharge coefficient without cross flow, and it is observed that the normalized curves of the effective discharge coefficient versus momentum ratio collapse into a single curve. Further, an equation is formulated from the test data to predict the effective discharge coefficient with cross flow using the calibrated Cd value without cross flow.
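
The two quantities the study correlates can be written down directly: Cd compares the measured flow with the ideal lossless flow through the orifice area, and the momentum ratio compares the water jet's momentum flux with that of the gas cross flow. The sketch below uses illustrative operating values, not the test conditions.

```python
# Discharge coefficient and water-to-gas momentum ratio (illustrative values).
import math

rho_w, rho_g = 1000.0, 5.0         # water / hot-gas densities, kg/m^3 (assumed)
d = 2.0e-3                         # orifice diameter, m (assumed)
dp = 5.0e5                         # pressure drop across the orifice, Pa (assumed)
m_dot_w = 0.075                    # measured water mass flow, kg/s (assumed)
v_gas = 60.0                       # gas cross-flow velocity, m/s (assumed)

area = math.pi * d**2 / 4
m_dot_ideal = area * math.sqrt(2 * rho_w * dp)      # ideal (lossless) flow rate
cd = m_dot_w / m_dot_ideal                          # effective discharge coeff.

v_jet = m_dot_w / (rho_w * area)                    # water injection velocity
momentum_ratio = (rho_w * v_jet**2) / (rho_g * v_gas**2)
print(f"Cd = {cd:.2f}, momentum ratio J = {momentum_ratio:.1f}")
```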

Keywords: cross flow, discharge coefficient, orifice, momentum ratio

Procedia PDF Downloads 116
1783 Influence of the Quality of the Recycled Aggregates in Concrete Pavement

Authors: Viviana Letelier, Ester Tarela, Bianca Lopez, Pedro Muñoz, Giacomo Moriconi

Abstract:

The environmental impact of construction has become a global concern during the last decades, and several alternatives have been proposed and studied to minimize this impact in different areas. Reusing aggregates from old concrete to manufacture new concrete not only reduces this impact but also helps optimize resource management. The effect of the origin of the reused aggregates on recycled concrete pavement is studied here for two different source materials. Using the mix design applied by a pavement company, coarse aggregates in the 6.3-25 mm fraction are replaced by recycled aggregates from two different origins: old concrete pavements, with strength similar to that of the control concrete, and precast concrete pipes, with strength lower than that of the control concrete. The replacement percentages tested are 30%, 40%, and 50% in both cases. Compressive strength tests are performed after 7, 14, 28, and 90 curing days; flexural strength and elastic modulus tests after 28 and 90 curing days. Results show that the influence of the quality of the origin concrete on the mechanical properties of recycled concrete is not negligible. Concretes with up to 50% recycled aggregates from concrete pavement have compressive strengths similar to those of the control concrete and slightly lower flexural strengths that, in all cases, exceed the minimum of 5 MPa after 28 curing days established by the Chilean regulation for pavement concretes. On the other hand, concretes with recycled aggregates from precast concrete pipes show significantly lower compressive strengths after 28 curing days. The differences with respect to the control concrete increase with the percentage of replacement, reaching a 13% reduction when 50% of the aggregates are replaced. The flexural strength also suffers significant reductions that increase with the percentage of replacement, and after 28 curing days only the 30% replacement series meets the Chilean regulation; nevertheless, after 90 curing days, all series meet the regulation requirements. The results show not only the importance of the quality of the origin concrete, but also the significance of the curing time, which may allow the use of lower-quality recycled material without important strength losses.
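The pass/fail logic reported above reduces to simple arithmetic; the sketch below reproduces it with invented strength values (only the 13% reduction at 50% replacement echoes a figure from the abstract).

```python
# Illustrative check of the two criteria discussed: relative compressive-
# strength loss vs. replacement ratio, and the 5 MPa flexural floor of the
# Chilean pavement regulation. All strength values are hypothetical.
control_fc = 40.0  # MPa, hypothetical control compressive strength at 28 days

# Hypothetical 28-day strengths for precast-pipe-origin aggregates
recycled_fc = {30: 37.5, 40: 36.0, 50: 34.8}  # % replacement -> MPa
for pct, fc in recycled_fc.items():
    loss = 100.0 * (control_fc - fc) / control_fc
    print(f"{pct}% replacement: {loss:.0f}% compressive-strength reduction")
    # e.g. a 13% reduction at 50% replacement matches the reported trend

flexural_28d = {30: 5.2, 40: 4.8, 50: 4.6}  # MPa, hypothetical
for pct, fr in flexural_28d.items():
    verdict = "meets" if fr >= 5.0 else "fails"
    print(f"{pct}% replacement: {fr} MPa -> {verdict} the 5 MPa requirement")
```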

Keywords: flexural strength of recycled concrete, mechanical properties of recycled concrete, recycled aggregates, recycled concrete pavements

Procedia PDF Downloads 226
1782 Heteroatom-Doped Binary Metal Oxide Modified Carbon as a Bifunctional Electrocatalyst for All-Vanadium Redox Flow Batteries

Authors: Anteneh Wodaje Bayeh, Daniel Manaye Kabtamu, Chen-Hao Wang

Abstract:

As one of the most promising electrochemical energy storage systems, vanadium redox flow batteries (VRFBs) have received increasing attention owing to their attractive features for large-scale storage applications. However, their high production cost and relatively low energy efficiency still limit their feasibility, so improving efficiency and reducing cost are of great interest for practical implementation. One of the key components of VRFBs that greatly influences both efficiency and final cost is the electrode, which provides the reaction sites for the redox couples (VO²⁺/VO₂⁺ and V²⁺/V³⁺). Carbon-based materials are considered the most feasible electrode materials in the VRFB because of their wide operating range, good permeability, large surface area, and reasonable cost. However, owing to their limited electrochemical activity and reversibility and the poor wettability caused by their hydrophobic surfaces, the performance of cells employing carbon-based electrodes has remained limited. To address these challenges, we synthesized a heteroatom-doped bimetallic oxide grown on the carbon surface through a one-step approach. When applied to VRFBs, the prepared electrode exhibits a significant electrocatalytic effect toward the VO²⁺/VO₂⁺ and V³⁺/V²⁺ redox reactions compared with pristine carbon. It is found that the presence of the heteroatom on the metal oxide promotes the adsorption of vanadium ions, and the controlled morphology of the bimetallic oxide exposes more active sites for the vanadium redox reactions. Hence, the prepared electrode displays the best electrochemical performance, with energy and voltage efficiencies of 74.8% and 78.9%, respectively, much higher than the 59.8% and 63.2% obtained with pristine carbon at high current density. Moreover, the electrode exhibits good durability and stability in an acidic electrolyte during long-term operation for 1000 cycles at high current density.
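For reference, the efficiencies quoted above follow from the standard flow-battery definitions; the sketch below computes them from hypothetical charge/discharge data chosen to land near the reported values.

```python
# Sketch of how the reported cell efficiencies are computed from cycling
# data (standard VRFB definitions; the numbers are illustrative only).
def coulombic_efficiency(q_discharge_ah, q_charge_ah):
    """CE = discharge capacity / charge capacity."""
    return q_discharge_ah / q_charge_ah

def voltage_efficiency(v_discharge_mean, v_charge_mean):
    """VE = mean discharge voltage / mean charge voltage."""
    return v_discharge_mean / v_charge_mean

def energy_efficiency(ce, ve):
    """EE = CE * VE."""
    return ce * ve

ce = coulombic_efficiency(q_discharge_ah=1.90, q_charge_ah=2.00)    # 95.0%
ve = voltage_efficiency(v_discharge_mean=1.26, v_charge_mean=1.60)  # ~78.8%
ee = energy_efficiency(ce, ve)                                      # ~74.8%
print(f"CE = {ce:.1%}, VE = {ve:.1%}, EE = {ee:.1%}")
# cf. the abstract's EE of 74.8% and VE of 78.9% for the doped electrode
```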

Keywords: VRFB, VO²⁺/VO₂⁺ and V³⁺/V²⁺ redox couples, graphite felt, heteroatom doping

Procedia PDF Downloads 67
1781 Opportunities and Challenges for Decarbonizing Steel Production by Creating Markets for ‘Green Steel’ Products

Authors: Hasan Muslemani, Xi Liang, Kathi Kaesehage, Francisco Ascui, Jeffrey Wilson

Abstract:

The creation of a market for lower-carbon steel products, here called 'green steel', has been identified as an important means of supporting the introduction of breakthrough emission-reduction technologies into the steel sector. However, the definition of what 'green' entails in the context of steel production, the implications for the competitiveness of green steel products in local and international markets, and the market mechanisms necessary to support their successful market penetration remain poorly explored. This paper addresses this gap through semi-structured interviews with international sustainability experts and commercial managers from leading steel trade associations, research institutes, and steelmakers. Our findings show an urgent need to establish a set of standards defining what 'greenness' means in the steelmaking context; standards that avoid market disruptions, unintended consequences, and opportunities for greenwashing. We also highlight that the introduction of green steel products will affect product competitiveness on three levels: 1) between primary and secondary steelmaking routes, 2) with traditional, less green steel, and 3) with other substitutable materials (e.g. cement and plastics). The paper emphasises the need for steelmakers to adopt a transitional approach to deploying different low-carbon technologies, based on their stage of technological maturity, applicability in particular country contexts, capacity to reduce emissions over time, and the ability of the investment community to support their deployment. We further identify market mechanisms to support green steel production, including carbon border adjustments and public procurement, highlighting the need for a combination of complementary policies to ensure the products' roll-out. The study also shows that the auto industry is a likely candidate for green steel consumption, where a market would be supported by price premiums paid by willing consumers, such as buyers of high-end luxury vehicles.

Keywords: green steel, decarbonisation, business model innovation, market analysis

Procedia PDF Downloads 109
1780 Preliminary Study of the Cost-Effectiveness of Green Walls: Analyzing Cases from the Perspective of Life Cycle

Authors: Jyun-Huei Huang, Ting-I Lee

Abstract:

The urban heat island effect derives from the reduction of vegetative cover caused by urban development. Because plants can improve air quality and the microclimate, green walls have been applied as a sustainable design approach to cool buildings: by greening vertical surfaces with plants, they lower indoor temperatures and, as a result, reduce the energy used for air conditioning. Based on their structure, green walls fall into two categories: green façades and living walls. A green façade uses the climbing ability of the plant itself, while a living wall assembles planter modules. The latter is widely adopted in public spaces, as it is time-effective and less constrained by the site. Although a living wall saves cooling energy, it is not necessarily cost-effective from a life-cycle perspective; an Italian study shows that the overall benefit of a living wall exceeds its costs only 47 years after its establishment. In Taiwan, urban greening policies encourage green walls by citing their energy-saving benefits while neglecting their poor cost-effectiveness. This research therefore aims to understand how suppliers and consumers perceive the cost-effectiveness of their living wall products from the life-cycle viewpoint. It adopts semi-structured interviews and field observations of the maintenance of the products, and by comparing the two sets of results it generates insights for sustainable urban greening policies. The preliminary finding is that stakeholders do not have a holistic sense of life cycle or cost-effectiveness. Most importantly, a well-maintained living wall often involves high inputs simply because a maintenance budget is available, and it is thus less sustainable. In conclusion, without a comprehensive sense of cost-effectiveness throughout a product's life cycle, it is very difficult for suppliers and consumers to maintain a living wall system while achieving sustainability.
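The break-even reasoning cited from the Italian study can be sketched as a discounted cash-flow comparison; all figures below are hypothetical, chosen only to show why payback can take decades.

```python
# Toy life-cycle sketch of the break-even logic discussed: cumulative
# discounted energy savings vs. installation plus yearly maintenance.
# All figures are invented; the cited Italian study reported ~47 years.
def breakeven_year(install_cost, annual_maintenance, annual_saving,
                   discount_rate=0.03, horizon=60):
    """Return the first year in which cumulative discounted net benefit
    turns non-negative, or None if it never does within the horizon."""
    net = -install_cost
    for year in range(1, horizon + 1):
        factor = (1 + discount_rate) ** -year
        net += (annual_saving - annual_maintenance) * factor
        if net >= 0:
            return year
    return None

year = breakeven_year(install_cost=600.0,     # per m2, hypothetical
                      annual_maintenance=15.0,
                      annual_saving=40.0)
print(year)  # with these inputs the wall pays back only after decades
```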

Keywords: case study, maintenance, post-occupancy evaluation, vertical greening

Procedia PDF Downloads 244
1779 The Selectivities of Pharmaceutical Spending Containment: Social Profit, Incentivization Games and State Power

Authors: Ben Main, Piotr Ozieranski

Abstract:

State government spending on pharmaceuticals stands at 1 trillion USD globally, prompting criticism of the pharmaceutical industry's monetization of drug efficacy, product cost overvaluation, and health injustice. This paper elucidates the mechanisms behind a state-institutional response to this problem through the sociological lens of the strategic relational approach to state power. Drawing on 30 expert interviews and on legal and policy documents, it explains how state elites in New Zealand have successfully pursued a 30-year "pharmaceutical spending containment policy". Proceeding from Jessop's notion of strategic "selectivity", which encompasses analysis of the features enabling state actors to harness state structures, a theoretical explanation is advanced. First, a strategic context is described, consisting of the dynamics of pharmaceutical dealmaking between the state bureaucracy, pharmaceutical pricing strategies (and their effects), and the industry. Centrally, the pricing strategy of "bundling" (deals for packages of drugs that combine older and newer patented products) reflects how state managers have instigated an "incentivization game" played by state and industry actors, including HTA professionals, over pharmaceutical products both current and in development. Second, a protective context is described, comprising successive legislative-judicial responses to the strategic context and characterized by regulation and the societalisation of commercial law. Third, within the policy, the achievement of increased pharmaceutical coverage (pharmaceutical "mix") alongside contained spending is conceptualized as a state defence of a "social profit". As such, in contrast to scholarly expectations that political and economic cultures of neo-liberalism drive pharmaceutical policy-making processes, New Zealand's state elites are shown to have taken an approach antipathetic to neo-liberalism within an overall capitalist economy. The paper contributes an analysis of state pricing strategies and how they are embedded in state regulatory structures. Additionally, through an analysis of the interconnections of state power and pharmaceutical value, Abraham's neo-liberal corporate bias model for pharmaceutical policy analysis is problematised.

Keywords: pharmaceutical governance, pharmaceutical bureaucracy, pricing strategies, state power, value theory

Procedia PDF Downloads 51
1778 Shared Decision Making in Oropharyngeal Cancer: The Development of a Decision Aid for Resectable Oropharyngeal Carcinoma, a Mixed Methods Study

Authors: Anne N. Heirman, Lisette van der Molen, Richard Dirven, Gyorgi B. Halmos, Michiel W.M. van den Brekel

Abstract:

Background: Due to the rising incidence of oropharyngeal squamous cell cancer (OPSCC), many patients face a choice between transoral (robotic) surgery and radiotherapy, which offer equal survival and oncological outcomes; functional outcomes also differ little between the treatments over the years. This study identifies the wants and needs of patients and caregivers in order to develop a comprehensible patient decision aid (PDA). Methods: The development of the PDA is based on the International Patient Decision Aid Standards criteria. In phase 1, the relevant literature was reviewed and compared with current counseling papers, and ten post-treatment patients and ten doctors from four head and neck centers in the Netherlands were interviewed; the interviews were transcribed verbatim and analyzed. With these results, the first draft of the PDA was developed. In phase 2, the first draft was tested for comprehensibility and usability, and in phase 3 it was tested for feasibility; after this phase, the final version of the PDA was produced. Results: All doctors and patients agreed that a PDA was needed. Phase 1 showed that 50% of patients felt well informed after standard care and 35% missed information about treatment possibilities; side effects and functional outcomes were rated as the most important factors for decision-making. With this information, the first version was developed. In phase 2, doctors and patients stated that they were satisfied with its comprehensibility and usability but found there was too much text, so the PDA was revised to reduce text and add graphics. After these revisions, all doctors found the PDA feasible and thought it would contribute to regular counseling. Patients were satisfied with the result and wished they had seen it before their treatment. Conclusion: Decision-making in OPSCC should focus on differences in side effects and functional outcomes. Patients and doctors found the PDA to be of great value, and future research will explore its benefits in clinical practice.

Keywords: head-and-neck oncology, oropharyngeal cancer, patient decision aid, development, shared decision making

Procedia PDF Downloads 130
1777 Towards Learning Query Expansion

Authors: Ahlem Bouziri, Chiraz Latiri, Eric Gaussier

Abstract:

The steady growth in the size of textual document collections is a key driver of progress for modern information retrieval techniques, whose effectiveness and efficiency are constantly challenged. Given a user query, the number of retrieved documents can be overwhelmingly large, hampering their efficient exploitation by the user; moreover, retaining only relevant documents in a query answer is of paramount importance for effectively meeting the user's needs. In this situation, query expansion offers an interesting solution for obtaining a complete answer while preserving the quality of the retained documents, relying mainly on an accurate choice of the terms added to the initial query. Query expansion takes advantage of large text volumes by extracting statistical information about index term co-occurrences and using it to make user queries better fit the real information needs. In this respect, a promising direction is the application of data mining methods to extract dependencies between terms, namely a generic basis of association rules between terms. The key feature of our approach is a better trade-off between the size of the mining result and the knowledge it conveys. Thus, faced with the huge number of derived association rules, and in order to select the optimal combination of query terms from the generic basis, we propose to model the problem as a classification problem and solve it using a learning algorithm such as SVM or k-means. For this purpose, we first generate a training set using a genetic-algorithm-based approach that explores the association rules space in order to find an optimal set of expansion terms that improves the MAP of the search results. The experiments were performed on the SDA 95 collection, a data collection for information retrieval, and the results were better in terms of both MAP and NDCG. The main observation is that intelligently hybridizing text mining techniques and query expansion allows us to combine the good features of both. As this is a preliminary attempt in this direction, there is large scope for enhancing the proposed method.
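As a rough illustration of the rule-mining and expansion steps (not the authors' implementation, which additionally learns rule selection with SVM/k-means and seeds its training data via a genetic algorithm), the following sketch mines term-to-term association rules from a toy corpus by support and confidence and expands a query with the consequents of the rules it fires.

```python
# Minimal sketch of association-rule query expansion on a toy corpus:
# mine term -> term rules by support/confidence, then add the consequents
# of rules fired by the query. Thresholds and corpus are illustrative.
from itertools import permutations

docs = [
    {"query", "expansion", "retrieval"},
    {"query", "retrieval", "index"},
    {"association", "rules", "mining"},
    {"query", "expansion", "rules"},
]

def mine_rules(docs, min_support=0.25, min_confidence=0.6):
    """Return {antecedent: {consequents}} for rules meeting both thresholds."""
    n = len(docs)
    vocab = set().union(*docs)
    rules = {}
    for a, b in permutations(vocab, 2):
        support_ab = sum(1 for d in docs if a in d and b in d) / n
        support_a = sum(1 for d in docs if a in d) / n
        if support_ab >= min_support and support_ab / support_a >= min_confidence:
            rules.setdefault(a, set()).add(b)  # rule a -> b
    return rules

def expand(query_terms, rules):
    """Add the consequents of every rule whose antecedent is in the query."""
    expanded = set(query_terms)
    for t in query_terms:
        expanded |= rules.get(t, set())
    return expanded

rules = mine_rules(docs)
print(expand({"query"}, rules))  # adds 'retrieval' and 'expansion' here
```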

Keywords: supervised learning, classification, query expansion, association rules

Procedia PDF Downloads 302