Search results for: store patronage
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 445

55 Application of Fatty Acid Salts for Antimicrobial Agents in Koji-Muro

Authors: Aya Tanaka, Mariko Era, Shiho Sakai, Takayoshi Kawahara, Takahide Kanyama, Hiroshi Morita

Abstract:

Objectives: Aspergillus niger and Aspergillus oryzae are used as koji fungi in brewing. Since koji-muro (the room in which koji is made) has a low level of airtightness, microbial contamination has long been a concern in alcoholic beverage production. We therefore focused on fatty acid salts, the main components of soap, which have been reported to show antibacterial and antifungal activity. This study examined antimicrobial activities against Aspergillus and Bacillus spp. and aimed to assess the effectiveness of fatty acid salts as antimicrobial agents in koji-muro. Materials & Methods: A. niger NBRC 31628, A. oryzae NBRC 5238, A. oryzae (Akita Konno store) and Bacillus subtilis NBRC 3335 were chosen as test strains. Nine fatty acid salts, namely potassium butyrate (C4K), caproate (C6K), caprylate (C8K), caprate (C10K), laurate (C12K), myristate (C14K), oleate (C18:1K), linoleate (C18:2K) and linolenate (C18:3K), at 350 mM and pH 10.5 were tested for antimicrobial activity. Fatty acid salts (FASs) and spore suspensions were prepared in plastic tubes. The spore suspension of each fungus (3.0×10⁴ spores/mL) or the bacterial suspension (3.0×10⁵ CFU/mL) was mixed with each of the fatty acid salts (final concentration 175 mM). The mixtures were incubated at 25 ℃. Samples were counted at 0, 10, 60, and 180 min by plating (100 µL) on potato dextrose agar. Fungal and bacterial colonies were counted after incubation for 1 or 2 days at 30 ℃. The MIC (minimum inhibitory concentration) is defined as the lowest concentration of drug sufficient to inhibit visible growth of spores after 10 min of incubation. MICs against fungi and bacteria were determined using the two-fold dilution method. Each fatty acid salt was separately inoculated with 400 µL of Aspergillus spp. or B. subtilis NBRC 3335 at 3.0×10⁴ spores/mL or 3.0×10⁵ CFU/mL. Results: No obvious change was observed for the tested fatty acid salts against A. niger and A. oryzae. 
However, C12K showed an antibacterial effect of 5 log-units within 10 min of incubation against B. subtilis; that is, C12K suppressed 99.999 % of bacterial growth. C10K showed an antibacterial effect of 5 log-units after 180 min against B. subtilis, and C18:1K, C18:2K and C18:3K each showed an antibacterial effect of 5 log-units within 10 min against B. subtilis. However, saturated fatty acid salts are lower in cost than unsaturated fatty acid salts. These results suggest that C12K has potential in the field of koji-muro. The antimicrobial activity against other fungi and bacteria should be evaluated in the future.
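The arithmetic behind the reported reductions can be sketched in a few lines of Python; the counts below are hypothetical illustrations of the assay, not the paper's raw data:

```python
import math

def log_reduction(initial_cfu, surviving_cfu):
    """Log-unit reduction between the initial and surviving viable counts."""
    return math.log10(initial_cfu / surviving_cfu)

def percent_suppression(initial_cfu, surviving_cfu):
    """Percentage of cells inactivated."""
    return 100 * (1 - surviving_cfu / initial_cfu)

# A 5 log-unit reduction corresponds to 99.999 % suppression of growth,
# e.g. a hypothetical inoculum of 3.0e5 CFU/mL reduced to 3 CFU/mL:
print(log_reduction(3.0e5, 3.0))  # 5.0
print(percent_suppression(3.0e5, 3.0))
```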

Keywords: Aspergillus, antimicrobial, fatty acid salts, koji-muro

Procedia PDF Downloads 523
54 Data Analysis Tool for Predicting Water Scarcity in Industry

Authors: Tassadit Issaadi Hamitouche, Nicolas Gillard, Jean Petit, Valerie Lavaste, Celine Mayousse

Abstract:

Water is a fundamental resource for industry. It is taken from the environment, either from municipal distribution networks or from natural water sources such as seas, oceans, rivers and aquifers. Once used, water is discharged into the environment, or reprocessed at the plant or at treatment plants. These withdrawals and discharges have a direct impact on natural water resources. The impacts can concern the quantity of water available, the quality of the water used, or effects that are more complex to measure and less direct, such as the health of the population downstream of the watercourse. Based on the analysis of data (meteorological data, river characteristics, physicochemical substances), we wish to predict water stress episodes and anticipate prefectoral decrees, which can impact the performance of plants; propose improvement solutions; help industrialists choose the location of a new plant; visualize possible interactions between companies to optimize exchanges and encourage the pooling of water treatment solutions; and set up circular economies around the issue of water. The development of a system for the collection, processing, and use of data related to water resources requires its specific functional constraints to be made explicit. The system must be able to store a large amount of data from sensors (the main type of data in plants and their environment). In addition, manufacturers need 'near-real-time' processing of information in order to make the best decisions (to be rapidly notified of an event that would have a significant impact on water resources). Finally, the visualization of data must be adapted to its temporal and geographical dimensions. 
In this study, we set up an infrastructure centered on the TICK application stack (Telegraf, InfluxDB, Chronograf, and Kapacitor), a set of loosely coupled but tightly integrated open source projects designed to manage huge amounts of time-stamped information. The software architecture is coupled with the cross-industry standard process for data mining (CRISP-DM) methodology. The robust architecture and the methodology used have demonstrated their effectiveness on the case study of learning the level of a river with a 7-day horizon. The management of water and of the activities within the plants that depend on this resource should be considerably improved thanks, on the one hand, to the learning that allows the anticipation of periods of water stress and, on the other hand, to the information system, which is able to warn decision-makers with alerts created from the formalization of prefectoral decrees.
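As an illustration of the ingestion path, sensor readings enter the TICK stack as InfluxDB line-protocol records (the wire format Telegraf writes into InfluxDB). A minimal sketch; the measurement, tag and field names are hypothetical choices, not those of the study:

```python
import time

def to_line_protocol(measurement, tags, fields, ts_ns=None):
    """Format one sensor reading as an InfluxDB line-protocol record:
    measurement,tag=... field=... timestamp_ns"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    ts = ts_ns if ts_ns is not None else time.time_ns()
    return f"{measurement},{tag_str} {field_str} {ts}"

# Hypothetical river-level reading from a plant's intake sensor:
print(to_line_protocol("river_level",
                       {"site": "plant_A", "river": "loire"},
                       {"level_m": 2.41, "flow_m3s": 130.5},
                       ts_ns=1600000000000000000))
# river_level,river=loire,site=plant_A flow_m3s=130.5,level_m=2.41 1600000000000000000
```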

Keywords: data mining, industry, machine learning, shortage, water resources

Procedia PDF Downloads 93
53 Renewable Natural Gas Production from Biomass and Applications in Industry

Authors: Sarah Alamolhoda, Kevin J. Smith, Xiaotao Bi, Naoko Ellis

Abstract:

For millennia, biomass has been the most important source of fuel used to produce energy. Energy derived from biomass is renewable through the re-growth of biomass. Various technologies are used to convert biomass into potentially renewable products, including combustion, gasification, pyrolysis and fermentation. Gasification is the incomplete combustion of biomass in a controlled environment that results in valuable products such as syngas, bio-oil and biochar. Syngas is a combustible gas consisting of hydrogen (H₂), carbon monoxide (CO), carbon dioxide (CO₂), and traces of methane (CH₄) and nitrogen (N₂). Cleaned syngas can be used as a turbine fuel to generate electricity, as a raw material for hydrogen and synthetic natural gas production, or as the anode gas of solid oxide fuel cells. In this work, syngas produced by woody biomass gasification in British Columbia, Canada, was introduced into two consecutive fixed bed reactors to perform a catalytic water gas shift reaction followed by a catalytic methanation reaction. The water gas shift reaction is a well-established industrial process and is used to increase the hydrogen content of the syngas before the methanation process. Catalysts were used in the process since both reactions are reversible and exothermic, and are thermodynamically preferred at lower temperatures while kinetically favored at elevated temperatures. The water gas shift reactor and the methanation reactor were packed with a Cu-based catalyst and a Ni-based catalyst, respectively. Simulated syngas with different percentages of CO, H₂, CH₄, and CO₂ was fed to the reactors to investigate the effect of operating conditions in the unit. The water gas shift experiments were run at temperatures of 150 ˚C to 200 ˚C and pressures of 550 kPa to 830 kPa. Similarly, methanation experiments were run at temperatures of 300 ˚C to 400 ˚C and pressures of 2340 kPa to 3450 kPa. 
The methanation reaction reached 98% CO conversion at 340 ˚C and 3450 kPa, at which more than half of the CO was converted to CH₄. Increasing the reaction temperature reduced the CO conversion and increased the CH₄ selectivity. The process was designed to be renewable and to release low greenhouse gas emissions. Syngas is a clean-burning fuel; moreover, through the water gas shift reaction, toxic CO was removed and hydrogen was produced as a green fuel. In the methanation process, the syngas energy was transformed into a fuel with a higher energy density (per volume), reducing the amount of fuel that must flow through the equipment and improving the process efficiency. Natural gas is about 3.5 times more energy-dense (energy/volume) than hydrogen and is easier to store and transport. When modification of existing infrastructure is not practical, blending renewable hydrogen into natural gas (with up to 15% hydrogen content) preserves efficiency while reducing the greenhouse gas emission footprint.
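The conversion and selectivity figures follow from a simple mole balance on the overall reactions (water gas shift: CO + H₂O → CO₂ + H₂; methanation: CO + 3H₂ → CH₄ + H₂O). The molar flows below are hypothetical numbers chosen to be consistent with the reported 98% conversion, not measured data:

```python
def co_conversion(co_in, co_out):
    """Fraction of inlet CO consumed across the reactor."""
    return (co_in - co_out) / co_in

def ch4_selectivity(ch4_formed, co_converted):
    """Fraction of converted CO ending up as CH4
    (methanation: CO + 3 H2 -> CH4 + H2O)."""
    return ch4_formed / co_converted

# Hypothetical molar flows (mol/h): 100 CO in, 2 CO out, 55 CH4 formed
co_in, co_out, ch4 = 100.0, 2.0, 55.0
x = co_conversion(co_in, co_out)           # 0.98
s = ch4_selectivity(ch4, co_in - co_out)   # about 0.56, i.e. just over half
print(f"CO conversion = {x:.2f}, CH4 selectivity = {s:.2f}")
```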

Keywords: renewable natural gas, methane, hydrogen, gasification, syngas, catalysis, fuel

Procedia PDF Downloads 75
52 Preservation and Promotion of Lao Traditional Food as Luangprabang Province Unique Culture and Tradition in Accordance With One District One Product Policy

Authors: Lamphong Volady

Abstract:

The primary purpose of this study was to explore the traditional cuisine (local food) of Luangprabang Province in line with the Lao PDR’s One District One Product Policy. Another purpose was to examine the channels used to present local food, the reasons to preserve and promote local food, and local food preservation and promotion strategies. The study also aimed at testing correlation hypotheses: whether there is a statistically significant relationship between enjoyment of having local food and, respectively, willingness to promote local cuisines to become international cuisines, attractiveness of consuming local food, problems in preserving and promoting local food, and local people’s occupations. Convergent parallel mixed methods were employed in this study. The results identified several dishes as local food of Luangprabang Province, namely Jeow Bon (chilli dipping sauce), Or Lam or aw lahm (stewed buffalo skin, herbs, Mai sakaan), Kai Pan (dried river weed), Tam Mak Houng Luangprabang (papaya salad), Nang (dried buffalo skin salad), Sai Oor (sausage), Laap Sin Koay Sai Mar-Keua Pao (beef salad with roasted eggplants), Orm Born (taro leaf stew), Oor Nor Mai (bamboo shoot sausage), Jeow Nam Poo (pickled crab chillies), Mok Dok Kae (a steamed or roasted Dok Kae wrap), Nor Sa Wan, Kao Noom Kee Noo, and Kao Noom Ba Bin. The study also showed that YouTube, Facebook, and TikTok were the social channels or platforms used to introduce traditional food, along with television, smartphones, word of mouth, Lao food fairs and other provincial events. The study also found that local food should be preserved and promoted, since traditional food comprises not only ancestral, ancient, traditional, and local cuisines, but also wisdom, uniqueness, and national cuisine. People feel attracted to consuming local food because it is delicious, unique, clean, nutritious, non-contaminated and natural. 
The study showed that a lack of funds to produce local food, inadequate raw materials, a lack of materials to store products, insufficient places of production and a lack of engagement from related organizations were the main problems in preserving and promoting traditional food. Finally, the results revealed a statistically significant weak relationship between enjoyment of having local food and willingness to promote local cuisines to become international cuisines (R² = 4.5%, p-value < 0.001), and a statistically significant moderate relationship between enjoyment of having local food and attractiveness of consuming local food (R² = 7.8%, p-value < 0.001). However, there is no statistically significant relationship between enjoyment of having local food and problems in preserving and promoting local food (R² = 1.8%, p-value = 0.086), nor between enjoyment of having local food and local people’s occupations (R² = 0.0%, p-value = 0.929).
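The reported R² values are the squares of Pearson correlation coefficients between paired survey responses. A minimal sketch of that computation, using made-up Likert-scale responses rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 5-point Likert responses: enjoyment vs. willingness
enjoy   = [5, 4, 5, 3, 4, 2, 5, 3, 4, 4]
willing = [4, 4, 5, 3, 3, 3, 5, 2, 4, 3]
r = pearson_r(enjoy, willing)
print(f"r = {r:.3f}, R^2 = {r ** 2:.1%}")
```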

Keywords: local food, preservation, promotion, traditional food, cuisines

Procedia PDF Downloads 41
51 Telemedicine Services in Ophthalmology: A Review of Studies

Authors: Nasim Hashemi, Abbas Sheikhtaheri

Abstract:

Telemedicine is the use of telecommunication and information technologies to provide health care services to people in remote areas, services that would often not be consistently available in distant rural communities. Teleophthalmology is a branch of telemedicine that delivers eye care through digital medical equipment and telecommunications technology. Teleophthalmology can thus overcome geographical barriers and improve the quality, accessibility, and affordability of eye health care services. Since teleophthalmology has been widely applied in recent years, the aim of this study was to determine its different applications around the world. To this end, three bibliographic databases (Medline, ScienceDirect, Scopus) were comprehensively searched with these keywords: eye care, eye health care, primary eye care, diagnosis, detection, and screening of different eye diseases, in conjunction with telemedicine, telehealth, teleophthalmology, e-services, and information technology. All types of papers were included in the study with no time restriction, and the search covered the literature up to 2015. Finally, 70 articles were surveyed. We classified the results based on the 'type of eye problems covered' and the 'type of telemedicine services'. Based on the review, from the perspective of health care levels, there are three levels of eye health care: primary, secondary and tertiary eye care. From the perspective of eye care services, the main application of teleophthalmology in primary eye care was the diagnosis of different eye diseases such as diabetic retinopathy, macular edema, strabismus and age-related macular degeneration. The main application of teleophthalmology in secondary and tertiary eye care was the screening of eye problems, i.e., diabetic retinopathy, astigmatism, and glaucoma screening. 
Teleconsultation between health care providers and ophthalmologists, as well as education and training sessions for patients, were other types of teleophthalmology worldwide. Real-time, store-and-forward and hybrid methods were the main forms of communication from the perspective of 'teleophthalmology mode', chosen based on the IT infrastructure between the sending and receiving centers. For specialists, early detection of serious age-related ophthalmic diseases in the population, screening of eye disease processes, consultation in emergency cases and comprehensive eye examination were the most important benefits of teleophthalmology. The cost-effectiveness of teleophthalmology projects, resulting from reduced transportation and accommodation costs, access to affordable eye care services and the possibility of receiving specialist opinions, was also a main advantage for patients. Teleophthalmology brings valuable secondary and tertiary care to remote areas. Applying teleophthalmology for detection, treatment and screening purposes, and expanding its use to new applications such as eye surgery, will be a key tool for promoting public health and integrating eye care into primary health care.

Keywords: applications, telehealth, telemedicine, teleophthalmology

Procedia PDF Downloads 342
50 Coastal Resources Spatial Planning and Potential Oil Risk Analysis: Case Study of Misratah’s Coastal Resources, Libya

Authors: Abduladim Maitieg, Kevin Lynch, Mark Johnson

Abstract:

The goal of the Libyan Environmental General Authority (EGA) and the National Oil Corporation (Department of Health, Safety & Environment) during the last 5 years has been to adopt a common approach to coastal and marine spatial planning. Protection and planning of the coastal zone is significant for Libya, due to the length of its coast, the high rate of oil export, and the potential negative impacts of spills on coastal and marine habitats. Coastal resource scenarios constitute an important tool for exploring the long-term and short-term consequences of oil spill impact and the available response options, providing an integrated perspective on mitigation. To investigate this, this paper reviews the Misratah coastal parameters to present the physical and human controls and attributes of coastal habitats, as the first step in understanding how they may be damaged by an oil spill. This paper also investigates coastal resources, providing a better understanding of the resources and of the factors that impact the integrity of the ecosystem. The study therefore describes the potential spatial distribution of oil spill risk and the value of coastal resources, and presents spatial maps of coastal resources and their vulnerability to oil spills along the coast. It proposes an analysis of coastal resource condition at a local level in the Misratah region of the Mediterranean Sea, considering the implementation of coastal and marine spatial planning over time as an indication of the will to manage urban development. Analysis of oil spill contamination and its impact on coastal resources depends on (1) the oil spill sequence, (2) the oil spill location, and (3) the oil spill movement near the coastal area. The resulting maps show the natural resources, socio-economic activity, and environmental resources along the coast, together with oil spill locations. Moreover, the study provides a significant geodatabase, which is required for coastal sensitivity index mapping and coastal management studies. 
The outcome of the study provides the information necessary to set an Environmental Sensitivity Index (ESI) for the Misratah shoreline, which can be used for the management of coastal resources and for setting boundaries for each coastal sensitivity sector, as well as to help planners measure the impact of oil spills on coastal resources. Geographic Information System (GIS) tools were used to store and illustrate the spatial convergence of existing socio-economic activities, such as fishing, tourism, and the salt industry, and ecosystem components, such as sea turtle nesting areas, Sabkha habitats, and migratory bird feeding sites. These geodatabases help planners investigate the vulnerability of coastal resources to an oil spill.
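A tiny sketch of the kind of spatial query such a geodatabase supports: flagging resources that fall inside a circular impact zone around a spill location. The coordinates, site names and radius below are invented for illustration; a real analysis would use GIS overlay on actual shoreline data:

```python
import math

def within_radius(spill, resource, radius_km):
    """True if a resource lies inside a simple circular impact zone
    around a spill location (planar approximation, coordinates in km)."""
    dx = spill[0] - resource[0]
    dy = spill[1] - resource[1]
    return math.hypot(dx, dy) <= radius_km

# Hypothetical local coordinates (km) for Misratah-area resources:
resources = {
    "turtle_nesting_area": (12.0, 3.0),
    "salt_industry": (40.0, 1.0),
    "fishing_harbour": (14.5, 0.5),
}
spill = (13.0, 1.0)
at_risk = [name for name, xy in resources.items()
           if within_radius(spill, xy, radius_km=5.0)]
print(at_risk)  # ['turtle_nesting_area', 'fishing_harbour']
```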

Keywords: coastal and marine spatial planning advancement training, GIS mapping, human uses, ecosystem components, Misratah coast, Libya, oil spill

Procedia PDF Downloads 336
49 Antimicrobial Activity of Fatty Acid Salts against Microbes for Food Safety

Authors: Aya Tanaka, Mariko Era, Manami Masuda, Yui Okuno, Takayoshi Kawahara, Takahide Kanyama, Hiroshi Morita

Abstract:

Objectives— Fungi and bacteria are present in a wide range of natural environments. They breed in foods such as vegetables and fruit, causing spoilage and deterioration of these foods in some cases. Furthermore, some species of fungi and bacteria are known to cause food intoxication or allergic reactions in some individuals. To prevent fungal and bacterial contamination, various fungicides and bactericides have been developed that inhibit fungal and bacterial growth. Fungicides and bactericides must show high antifungal and antibacterial activity, sustainable activity, and a high degree of safety. We therefore focused on fatty acid salts, the main components of soap, and especially on C10K and C12K. This study aimed to determine the effectiveness of fatty acid salts as antimicrobial agents for food safety. Materials and Methods— Cladosporium cladosporioides NBRC 30314, Penicillium pinophilum NBRC 6345, Aspergillus oryzae (Akita Konno store), Rhizopus oryzae NBRC 4716, Fusarium oxysporum NBRC 31631, Escherichia coli NBRC 3972, Bacillus subtilis NBRC 3335, Staphylococcus aureus NBRC 12732, Pseudomonas aeruginosa NBRC 13275 and Serratia marcescens NBRC 102204 were chosen as the test fungi and bacteria. Hartmannella vermiformis NBRC 50599 and Acanthamoeba castellanii NBRC 30010 were chosen as the test amoebae. Nine fatty acid salts, including potassium caprate (C10K) and laurate (C12K), at 350 mM and pH 10.5 were tested for antimicrobial activity. The spore suspension of each fungus (3.0×10⁴ spores/mL) or the bacterial suspension (3.0×10⁵, 3.0×10⁶ or 3.0×10⁷ CFU/mL) was mixed with each of the fatty acid salts (final concentration 175 mM). Samples were counted at 0, 10, 60, and 180 min by plating (100 µL) on potato dextrose agar or nutrient agar. Fungal and bacterial colonies were counted after incubation for 1 or 2 days at 30 °C. Results— C10K showed antifungal activity of 4 log-units within 10 min of incubation against fungi other than A. oryzae. 
C12K showed antifungal activity of 4 log-units within 10 min against fungi other than P. pinophilum and A. oryzae. C10K and C12K did not show high anti-yeast activity. C10K showed antibacterial activity of 6 or 7 log-units within 10 min against bacteria other than B. subtilis, and C12K showed antibacterial activity of 5 to 7 log-units within 10 min against bacteria other than S. marcescens. C12K also showed anti-amoeba activity of 4 log-units within 10 min against H. vermiformis. These results suggest that C10K and C12K have potential in the field of food safety.
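The viable counts behind these log-unit figures come from plate counting: colonies on the agar plate are scaled back by the plated volume and the dilution applied. A minimal sketch with hypothetical numbers:

```python
def cfu_per_ml(colonies, plated_volume_ml, dilution_factor=1):
    """Back-calculate the viable count of the sampled suspension
    from a plate count (100 uL plated -> plated_volume_ml = 0.1)."""
    return colonies * dilution_factor / plated_volume_ml

# Hypothetical plate: 30 colonies from 100 uL of a 1:1000 dilution
print(cfu_per_ml(30, 0.1, dilution_factor=1000))  # 300000.0 CFU/mL
```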

Keywords: food safety, microbes, antimicrobial, fatty acid salts

Procedia PDF Downloads 457
48 Evaluation of Iron Application Method to Remediate Coastal Marine Sediment

Authors: Ahmad Seiar Yasser

Abstract:

Sediment is an important habitat for organisms and acts as a storehouse for nutrients in aquatic ecosystems. Hydrogen sulfide is produced by microorganisms in the water column and sediments, and is highly toxic and fatal to benthic organisms. However, iron has the capacity to regulate the formation of sulfide by poising the redox sequence and by forming insoluble iron sulfide and pyrite compounds. We therefore conducted two experiments to evaluate the remediation efficiency of iron application to organically enriched sediments and to improve the sediment environment. The experiments were carried out in the laboratory using intact sediment cores taken from Mikawa Bay, Japan, every month from June to September 2017 and in October 2018. In Experiment 1, after the cores were collected, iron powder or iron hydroxide was applied to the surface sediment at 5 g/m² or 5.6 g/m², respectively. In Experiment 2, we experimentally investigated the removal of hydrogen sulfide using steelmaking slag of two size fractions (2 mm or less, and 2 to 5 mm). Both experiments were conducted in the laboratory with the same boundary conditions. The overlying water was replaced with deoxygenated filtered seawater, and the cores were sealed with a top cap to maintain anoxic conditions, with a stirrer circulating the overlying water gently. The incubation experiments were set up as three treatments, including the control; each treatment was replicated and conducted at the same temperature as the in-situ conditions. Water samples were collected at appropriate time intervals to measure the dissolved sulfide concentrations in the overlying water by the methylene blue method. Sediment quality was also analyzed after the completion of the experiment. After the 21-day incubation, the experimental results using iron powder and ferric hydroxide revealed that the application of these iron-containing materials significantly reduced the sulfide release flux from the sediment into the overlying water. 
The average dissolved sulfide concentration in the overlying water of the treatment groups decreased significantly (p = .0001), while no significant change was observed in the control group after the 21-day incubation. The application of iron to the sediment is therefore a promising method to remediate contaminated sediments in a eutrophic water body, with ferric hydroxide having the better hydrogen sulfide removal effect. The experiments using steelmaking slag also clarified that capping with slag (2 mm or less, and 2 to 5 mm) is an effective technique for the remediation of organically enriched bottom sediments containing hydrogen sulfide, because it induces a chemical reaction between Fe and the sulfides in the sediments that does not occur under natural conditions; the finer slag (2 mm or less) had the better hydrogen sulfide removal effect. For economic reasons, the application of steelmaking slag to the sediment is a promising method to remediate contaminated sediments in a eutrophic water body.
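The sulfide release flux compared between treatments can be estimated from the change in overlying-water concentration, the water volume, the sediment surface area and the elapsed time. A sketch with hypothetical core dimensions and concentrations, not the measured values:

```python
def sulfide_release_flux(c0_umol_l, c1_umol_l, volume_l, area_m2, days):
    """Sulfide release flux from sediment into overlying water,
    in umol m^-2 d^-1, from start/end methylene-blue concentrations."""
    return (c1_umol_l - c0_umol_l) * volume_l / (area_m2 * days)

# Hypothetical core: 0.5 L of overlying water above 0.002 m^2 of sediment;
# dissolved sulfide rises from 2 to 30 umol/L over the 21-day incubation
print(sulfide_release_flux(2.0, 30.0, 0.5, 0.002, 21))  # about 333
```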

Keywords: sedimentary, H₂S, iron, iron hydroxide

Procedia PDF Downloads 137
47 Evotrader: Bitcoin Trading Using Evolutionary Algorithms on Technical Analysis and Social Sentiment Data

Authors: Martin Pellon Consunji

Abstract:

Due to the rise in popularity of Bitcoin and other crypto assets as a store of wealth and speculative investment, there is an ever-growing demand for automated trading tools, such as bots, to gain an advantage over the market. Traditionally, trading in the stock market was done by professionals with years of training who understood patterns and exploited market opportunities to make a profit. Nowadays, however, a larger portion of market participants are at minimum aided by market-data processing bots, which can generally generate more stable signals than the average human trader. The rise in trading bot usage can be credited to the inherent advantages that bots have over humans in processing large amounts of data, their lack of emotions such as fear or greed, and their ability to predict market prices using past data and artificial intelligence; hence, a growing number of approaches have been brought forward to tackle this task. The general limitation of these approaches, however, is that limited historical data does not always determine the future, and that many market participants are still emotion-driven human traders. Moreover, developing markets such as the cryptocurrency space have even less historical data to interpret than most other well-established markets. Because of this, some human traders have gone back to tried-and-tested traditional technical analysis tools for exploiting market patterns and simplifying the broader spectrum of data involved in making market predictions. This paper proposes a method which uses neuroevolution techniques on both sentiment data and the more traditionally human-consumed technical analysis data, in order to obtain a more accurate forecast of future market behavior and to account for the way both automated bots and human traders affect the market prices of Bitcoin and other cryptocurrencies. 
This study’s approach uses evolutionary algorithms to automatically develop increasingly improved populations of bots which, by using the latest inflows of market analysis and sentiment data, evolve to efficiently predict future market price movements. The effectiveness of the approach is validated by testing the system in a simulated historical trading scenario and in a live Bitcoin market trading scenario, and by testing its robustness in other cryptocurrency and stock market scenarios. Experimental results over a 30-day period show that this method outperformed the buy-and-hold strategy by over 260% in terms of net profits, even when taking standard trading fees into consideration.
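A stripped-down sketch of the neuroevolution idea: a population of bots, each reduced here to a weight vector over per-period features (e.g. a technical-indicator signal and a sentiment score), is evolved by selection, crossover and mutation against historical returns. The feature set, data and GA parameters are all invented for illustration; the paper's actual system is considerably richer:

```python
import random

random.seed(42)

def fitness(weights, signals, returns):
    """Net return of a bot that goes long whenever its weighted
    signal score is positive."""
    total = 0.0
    for sig, ret in zip(signals, returns):
        score = sum(w * s for w, s in zip(weights, sig))
        if score > 0:
            total += ret
    return total

def evolve(signals, returns, pop_size=30, generations=40, n_feat=3):
    """Evolve weight vectors: keep the top half, refill by crossover
    of two parents plus Gaussian mutation of one gene."""
    pop = [[random.uniform(-1, 1) for _ in range(n_feat)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, signals, returns), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(g) for g in zip(a, b)]  # crossover
            i = random.randrange(n_feat)
            child[i] += random.gauss(0, 0.1)               # mutation
            children.append(child)
        pop = parents + children
    return pop[0]  # best vector found (parents stay sorted)

# Hypothetical per-period features: [RSI signal, MACD signal, sentiment]
signals = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
returns = [random.gauss(0, 0.01) for _ in range(200)]
best = evolve(signals, returns)
print("best weights:", [round(w, 2) for w in best])
```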

Keywords: neuro-evolution, Bitcoin, trading bots, artificial neural networks, technical analysis, evolutionary algorithms

Procedia PDF Downloads 93
46 The Pioneering Model in Teaching Arabic as a Mother Tongue through Modern Innovative Strategies

Authors: Rima Abu Jaber Bransi, Rawya Jarjoura Burbara

Abstract:

This study deals with two pioneering approaches to teaching Arabic as a mother tongue: first, the computerization of literary and functional texts in the mother tongue; second, a pioneering model for teaching writing skills through computerization. The significance of the study lies in its treatment of a serious problem of the technological era: the widening gap between pupils and their mother tongue. The innovation of the study is that it introduces modern methods and tools and a pioneering instructional model that turns mother tongue teaching into an effective, meaningful, interesting and motivating experience. In view of Arabic diglossia (the coexistence of standard Arabic and spoken Arabic), which poses a serious problem for pupils in understanding unfamiliar words, and in order to bridge the gap between pupils and their mother tongue, we resorted to computerized techniques: we took texts from the pre-Islamic (Jahiliyya) period, starting with the Mu'allaqa of Imru' al-Qais, together with other selected functional texts, and computerized them for teaching in an interesting way that saves time and effort, develops higher-order thinking strategies, expands the pupils' literary taste, and gives the text added value that neither the book, the blackboard, the teacher nor the worksheets provide. On the other hand, we have developed a pioneering computerized model that aims to develop the pupil's ability to think, to provide his imagination with the elements of growth, invention and connection, to motivate him to be creative, and to raise the level of his scores and scholastic achievements. The model consists of four basic stages of teaching, in the following order: 1. the preparatory stage; 2. the reading comprehension stage; 3. the writing stage; 4. the evaluation stage. 
Our lecture will introduce a detailed description of the model, with illustrations and samples from the units we built, highlighting aspects of the uniqueness and innovation specific to this model and the different integrated tools and techniques we developed. One of the most significant conclusions of this research is that teaching languages through new computerized strategies is very likely to move Arabic-speaking pupils out of the circle of passive reception into active and serious action and interaction. The study also emphasizes the argument that the computerized model of teaching can change the role of the pupil's mind from a short-term store of knowledge into a partner in producing knowledge and storing it in a coherent way that prevents it from being forgotten and keeps it in memory for a long period of time. Consequently, the learners also become partners in evaluation by expressing their views, giving their notes and observations, and applying the method of peer teaching and learning.

Keywords: classical poetry, computerization, diglossia, writing skill

Procedia PDF Downloads 203
45 Effect of Multi-Walled Carbon Nanotubes on Fuel Cell Membrane Performance

Authors: Rabindranath Jana, Biswajit Maity, Keka Rana

Abstract:

The most promising clean energy source is the fuel cell, since it does not generate toxic gases or other hazardous compounds. The direct methanol fuel cell (DMFC) is particularly user-friendly, as it is easy to miniaturize and is suited as an energy source for automobiles as well as for domestic applications and portable devices. Unlike the hydrogen used for some fuel cells, methanol is a liquid that is easy to store and transport in conventional tanks. The most important part of a fuel cell is its membrane. To date, the overall efficiency of a methanol fuel cell is reported to be about 20-25%. The lower efficiency of the cell may be due to critical factors such as slow reaction kinetics at the anode and methanol crossover. The oxidation of methanol comprises a series of successive reactions creating formaldehyde and formic acid as intermediates, which contribute to slow reaction rates and decreased cell voltage. Currently, the investigation of new anode catalysts to improve oxidation reaction rates is an active area of research as applied to the methanol fuel cell. Surprisingly, there are very limited reports on nanostructured membranes, which are rather simple to manufacture with different tunable compositions and are expected to allow only proton permeation, and not methanol, owing to molecular sizing effects and affinity to the membrane surface. We have developed a nanostructured fuel cell membrane from polydimethylsiloxane rubber (PDMS), ethylene methyl co-acrylate (EMA) and multi-walled carbon nanotubes (MWNTs). The effect of incorporating different proportions of functionalized MWNTs (f-MWNTs) in the polymer membrane has been studied. The introduction of f-MWNTs into the polymer matrix modified the polymer structure, and therefore the properties of the device. The proton conductivity, measured by an AC impedance technique using an open-frame, two-electrode cell, and the methanol permeability of the membranes were found to depend on the f-MWNT loading. 
The proton conductivity of the membranes increases with the concentration of f-MWNTs due to the increased content of conductive material. Methanol permeabilities measured at 60 °C were likewise found to depend on the f-MWNT loading: permeability decreased from 1.5 × 10⁻⁶ cm²/s for the pure film to 0.8 × 10⁻⁷ cm²/s for a membrane containing 0.5 wt% f-MWNTs, because with an increasing proportion of f-MWNTs the matrix becomes more compact. DSC melting curves show that the polymer matrix with f-MWNTs is thermally stable. FT-IR studies show good interaction between EMA and f-MWNTs, and XRD analysis shows good crystalline behavior of the prepared membranes. Significant cost savings can be achieved by using the blended films, which contain less expensive polymers.
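A common figure of merit for DMFC membranes combines the two quantities reported here: selectivity, the ratio of proton conductivity to methanol permeability. The sketch below uses the permeabilities quoted in the abstract, but the conductivity values are purely illustrative assumptions, since no numerical conductivities are given in this work.

```python
# Membrane selectivity Phi = sigma / P (proton conductivity over methanol
# permeability). Permeabilities are taken from the abstract; the conductivity
# values are placeholder assumptions for illustration only.

def selectivity(conductivity_s_cm: float, permeability_cm2_s: float) -> float:
    """Return the selectivity Phi = sigma / P, in S*s/cm^3."""
    return conductivity_s_cm / permeability_cm2_s

pure_film = selectivity(1e-3, 1.5e-6)    # pure film, assumed sigma = 1e-3 S/cm
loaded_film = selectivity(2e-3, 0.8e-7)  # 0.5 wt% f-MWNT film, assumed sigma

print(f"selectivity ratio (loaded/pure): {loaded_film / pure_film:.1f}")
```

Even with a modest assumed gain in conductivity, the order-of-magnitude drop in permeability dominates the selectivity improvement.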

Keywords: fuel cell membrane, polydimethyl siloxane rubber, carbon nanotubes, proton conductivity, methanol permeability

Procedia PDF Downloads 389
44 Parallelization of Random Accessible Progressive Streaming of Compressed 3D Models over Web

Authors: Aayushi Somani, Siba P. Samal

Abstract:

Three-dimensional (3D) meshes are data structures that store geometric information about an object or scene, generally in the form of vertices and edges. Current laser scanning and other geometric data acquisition technologies produce high-resolution sampling, which leads to high-resolution meshes. While high-resolution meshes give better rendering quality and hence are often used, the processing and storage of 3D meshes are currently resource-intensive. At the same time, web applications for data processing have become ubiquitous owing to their accessibility. For 3D meshes, the advancement of 3D web technologies such as WebGL and WebVR has enabled high-fidelity rendering of huge meshes. However, there remains a gap in the ability to stream huge meshes to native client and browser applications due to high network latency, and there is an inherent delay in loading WebGL pages for large and complex models. The focus of our work is to identify the challenges faced when such meshes are streamed to and processed on hand-held devices with limited resources. One solution conventionally used in the graphics community to alleviate resource limitations is mesh compression. Our approach is a two-step one: random accessible progressive compression and its parallel implementation. The first step partitions the original mesh into multiple sub-meshes; we then invoke data parallelism to compress these sub-meshes. Subsequent threaded decompression logic is implemented inside the web browser engine by modifying the WebGL implementation in the open-source Chromium engine. This concept could fundamentally change the way e-commerce and virtual reality technology work on consumer electronic devices: objects can be compressed on the server, transmitted over the network, and progressively decompressed and rendered on the client device.
The multiple views currently used on e-commerce sites for viewing the same product from different angles can be replaced by a single progressive model, giving a smoother user experience. The technique can also be used in WebVR for widely used activities such as virtual reality shopping, watching movies, and playing games. Our experiments and comparison with existing techniques show encouraging results in terms of latency (compressed size is ~10-15% of the original mesh), processing time (20-22% increase over serial implementation), and quality of user experience in the web browser.
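The two-step approach above (partition, then data-parallel compression of sub-meshes) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: zlib stands in for the real progressive mesh codec, and all names are hypothetical.

```python
# Sketch of the paper's two-step approach: partition a mesh's triangle index
# list into sub-meshes, then compress each sub-mesh independently in parallel.
# zlib is a stand-in for a real progressive mesh codec.
import zlib
from concurrent.futures import ThreadPoolExecutor

def partition(triangles, n_parts):
    """Split a flat list of triangle indices into n roughly equal sub-meshes."""
    size = max(1, len(triangles) // n_parts)
    return [triangles[i:i + size] for i in range(0, len(triangles), size)]

def compress_submesh(submesh):
    """Serialize and compress one sub-mesh independently of the others."""
    payload = ",".join(map(str, submesh)).encode()
    return zlib.compress(payload)

def compress_mesh(triangles, n_parts=4):
    """Compress all sub-meshes in parallel; each chunk is independently decodable."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(compress_submesh, partition(triangles, n_parts)))
```

Because each chunk decompresses independently, a client can fetch and decode only the sub-meshes it needs, which is what makes the scheme random-accessible.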

Keywords: 3D compression, 3D mesh, 3D web, chromium, client-server architecture, e-commerce, level of details, parallelization, progressive compression, WebGL, WebVR

Procedia PDF Downloads 140
43 Excess Body Fat as a Store Toxin Affecting the Glomerular Filtration and Excretory Function of the Liver in Patients after Renal Transplantation

Authors: Magdalena B. Kaziuk, Waldemar Kosiba, Marek J. Kuzniewski

Abstract:

Introduction: Adipose tissue is a typical storage site for water-insoluble toxins in the body. It is a connective tissue whose intercellular substance consists of fat; in people with low physical activity, body fat should be 18-25% for women and 13-18% for men. Based on the distribution of fat in the body, two types of obesity are distinguished: android (visceral, abdominal) and gynoid (gluteal-femoral, peripheral). Abdominal obesity increases the risk of cardiovascular complications and of impaired renal and liver function; through its influence on metabolic disorders, lipid metabolism, diabetes and hypertension, it leads to the metabolic syndrome. Obesity will therefore particularly overload kidney function in patients after transplantation. Aim: We attempted to estimate the impact of the amount of fat tissue on transplanted kidney function and on the excretory function of the liver in patients after kidney transplantation (Ktx). Material and Methods: The study included 108 patients (50 female, 58 male; age 46.5 +/- 12.9 years) with a functioning kidney transplant more than 3 months after transplantation. Body composition was analyzed using bioelectrical impedance analysis (BIA) and anthropometric measurements. Basal metabolic rate (BMR), muscle mass, total body water content and the amount of body fat were estimated. Information about physical activity was obtained during clinical examination. Nutritional status and type of obesity were determined using the Waist to Height Ratio (WHtR) and Waist to Hip Ratio (WHR). The excretory function of the transplanted kidney was assessed by calculating the estimated glomerular filtration rate (eGFR) using the MDRD formula. Liver function was assessed by serum total bilirubin and alanine aminotransferase (ALT) concentrations. Haemolytic uremic syndrome (HUS) was excluded in our patients.
Results: 19.44% of the patients were underweight, 22.37% had normal weight, 11.11% were overweight, and the rest (49.08%) were obese. People of android build had a lower eGFR than those of gynoid build (p = 0.004). All obese patients had body fat elevated by a few to several percentage points. The higher the body fat percentage, the lower the patients' eGFR (p < 0.001). Elevated ALT levels correlated significantly with high fat content (p < 0.02). Conclusion: An increased amount of body fat, particularly in android obesity, can be a predictor of kidney and liver damage. Obese patients should therefore have more frequent diagnostic monitoring of the function of these organs, together with intensive dietary and pharmacological management and regular physical activity adapted to the patient's current physical condition after transplantation.
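The eGFR used in the study comes from the MDRD formula. A minimal sketch of the standard 4-variable version is shown below; the coefficients are those of the published MDRD equation (the published form also carries an ethnicity coefficient, omitted here for simplicity), and the inputs are illustrative, not patient data from this study.

```python
# 4-variable MDRD equation: eGFR (mL/min/1.73 m^2) from serum creatinine
# (mg/dL), age (years) and sex. Standard published coefficients; the
# ethnicity coefficient of the full formula is omitted here.
def egfr_mdrd(scr_mg_dl: float, age_years: float, female: bool) -> float:
    """Estimated glomerular filtration rate by the MDRD formula."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742  # correction factor for female patients
    return egfr
```

For the study cohort's mean age of 46.5 years and an illustrative creatinine of 1.0 mg/dL, this gives roughly 80 mL/min/1.73 m² for a male patient.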

Keywords: obesity, body fat, kidney transplantation, glomerular filtration rate, liver function

Procedia PDF Downloads 436
42 The Use of Geographic Information System Technologies for Geotechnical Monitoring of Pipeline Systems

Authors: A. G. Akhundov

Abstract:

Issues of obtaining unbiased data on the status of pipeline systems for oil and oil product transportation become especially important when laying and operating pipelines under severe natural and climatic conditions. Particular attention is paid here to researching exogenous processes and their impact on the linear facilities of the pipeline system. Reliable operation of pipelines under severe natural and climatic conditions, and timely planning and implementation of compensating measures, are only possible if the operating conditions of pipeline systems are regularly monitored and changes in permafrost soil and hydrological conditions are accounted for. One of the main causes of emergency situations is the geodynamic factor. Experience shows that emergency situations occur within areas characterized by certain environmental conditions and develop according to similar scenarios depending on the active processes. Analysis of the natural and technical systems of main pipelines at different stages of monitoring makes it possible to forecast the dynamics of change. The integration of GIS technologies, traditional means of geotechnical monitoring (in-line inspection, geodetic methods, field observations) and remote methods (aero-visual inspection, aerial photography, airborne and ground laser scanning) provides the most efficient solution to the problem. A unified geographic information system (GIS) environment is a convenient way to implement the monitoring system on main pipelines, since it provides the means to describe a complex natural and technical system, and every element thereof, with any set of parameters. Such a GIS enables convenient modelling of main pipelines (both in 2D and 3D), the analysis of situations, and the selection of recommendations to prevent negative natural or man-made processes and to mitigate their consequences.
The specifics of such systems include multi-dimensional simulation of the facilities in the pipeline system, mathematical modelling of the processes to be observed, and the use of efficient numerical algorithms and software packages for forecasting and analysis. One of the most interesting uses of the monitoring results is the generation of up-to-date 3D models of a facility and the surrounding area on the basis of airborne laser scanning, aerial photography, in-line inspection data and instrument measurements. The resulting 3D model serves as the basis of an information system that provides the means to store and process geotechnical observation data with references to the facilities of the main pipeline, to plan compensating measures, and to control their implementation. The use of GIS for geotechnical monitoring of pipeline systems is aimed at improving the reliability of their operation, reducing the probability of negative events (accidents and disasters), and mitigating their consequences should they occur.
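The core of the data model described above is the link between geotechnical observations and the pipeline facility they refer to. The sketch below is hypothetical, with illustrative names only, and is not taken from the system in the paper.

```python
# Hypothetical data model: geotechnical observations stored with a reference
# to a facility of the main pipeline, queryable as a time series for trend
# analysis. All field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Observation:
    date: str        # ISO date of the survey, e.g. "2017-06-01"
    method: str      # e.g. "in-line inspection", "ground laser scanning"
    parameter: str   # e.g. "vertical displacement", "soil temperature"
    value: float

@dataclass
class PipelineFacility:
    facility_id: str
    chainage_km: float                       # position along the pipeline axis
    observations: list = field(default_factory=list)

    def history(self, parameter: str) -> list:
        """Time-ordered series of one monitored parameter for this facility."""
        return sorted(
            (o for o in self.observations if o.parameter == parameter),
            key=lambda o: o.date,
        )
```

Keying every observation to a facility and a chainage is what lets the GIS compare surveys from different campaigns and forecast the dynamics of change.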

Keywords: databases, 3D GIS, geotechnical monitoring, pipelines, laser scanning

Procedia PDF Downloads 162
41 Direct Current Grids in Urban Planning for More Sustainable Urban Energy and Mobility

Authors: B. Casper

Abstract:

The energy transition towards renewable energies and drastically reduced carbon dioxide emissions in Germany is driving multiple sectors through a transformation process. Photovoltaic and onshore wind power feed predominantly into the low- and medium-voltage grids, which are not laid out for this increasing feed-in. Electric mobility is currently in its run-up phase in Germany and still lacks a significant number of charging stations; in most cases, the additional power demand of e-mobility cannot be supplied by the existing electric grids. Future heating and cooling demands of commercial and residential buildings are increasingly met by heat pumps. Yet the most important part of the energy transition is the storage of surplus energy generated from photovoltaic and wind power sources. Water electrolysis is one way to store surplus energy, known as power-to-gas. With vehicle-to-grid technology, the upcoming fleet of electric cars could be used as energy storage to stabilize the grid. All of these processes use direct current (DC), and the demands of bidirectional flow and higher efficiency in future grids can be met by using DC. The Flexible Electrical Networks (FEN) research campus at RWTH Aachen investigates, in an interdisciplinary way, the advantages, opportunities, and limitations of DC grids. This paper investigates the impact of DC grids, as a technological innovation, on urban form and urban life. By applying explorative scenario development, analysis of mapped open data sources on grid networks, and research-by-design as a conceptual design method, possible starting points for a transformation to DC medium-voltage grids could be found. Several fields of action have emerged in which DC technology could become a catalyst for future urban development: the energy transition in urban areas, e-mobility, and the transformation of the network infrastructure.
The investigation shows a significant potential to increase renewable energy production within cities using DC grids. The charging infrastructure for electric vehicles will predominantly use DC in the future, because fast and ultra-fast charging can only be achieved with DC. Our research shows that e-mobility, combined with autonomous driving, has the potential to change urban space and urban logistics fundamentally. Furthermore, there are possible win-win-win solutions for the municipality, the grid operator and the inhabitants: replacing overhead transmission lines with underground DC cables to open up spaces in contested urban areas is a positive example of how the energy transition can contribute to a more sustainable urban structure. The outlook makes clear that target grid planning and urban planning will increasingly need to be synchronized.

Keywords: direct current, e-mobility, energy transition, grid planning, renewable energy, urban planning

Procedia PDF Downloads 98
40 Acoustic Energy Harvesting Using Polyvinylidene Fluoride (PVDF) and PVDF-ZnO Piezoelectric Polymer

Authors: S. M. Giripunje, Mohit Kumar

Abstract:

Acoustic energy, which exists in our everyday life and environment, has been overlooked as a green energy that can be extracted, generated, and consumed without any significant negative impact on the environment. The harvested energy can be used to enable new technologies such as wireless sensor networks. Technological developments in truly autonomous MEMS devices and energy storage systems have made acoustic energy harvesting (AEH) an increasingly viable technology. AEH is the process of converting high and continuous acoustic waves from the environment into electrical energy using an acoustic transducer or resonator. AEH is not as popular as other energy harvesting methods, since sound waves have a lower energy density and such energy can only be harvested in very noisy environments. However, the energy requirements of certain applications are correspondingly low, and there is in any case a need to monitor noise in order to reduce noise pollution; the ability to reclaim acoustic energy and store it in a usable electrical form thus enables a novel means of supplying power to relatively low-power devices. A quarter-wavelength straight-tube acoustic resonator is introduced as an acoustic energy harvester, with piezoelectric cantilever beams of polyvinylidene fluoride (PVDF) and of PVDF doped with ZnO nanoparticles placed inside the resonator. When the resonator is excited by an incident acoustic wave at its first acoustic eigenfrequency, an amplified resonant standing wave develops inside the resonator. The acoustic pressure gradient of the amplified standing wave then drives the vibration of the PVDF piezoelectric beams, generating electricity through the direct piezoelectric effect. To maximize the harvested energy, each PVDF and PVDF-ZnO piezoelectric beam has been designed to have the same structural eigenfrequency as the acoustic eigenfrequency of the resonator.
With a single PVDF beam placed inside the resonator, the harvested voltage and power reach their maximum near the open inlet of the resonator tube, where the largest acoustic pressure gradient vibrates the PVDF beam. As the beam is moved towards the closed end of the tube, the voltage and power gradually decrease with the decreasing acoustic pressure gradient. Multiple PVDF and PVDF-ZnO piezoelectric beams have been placed inside the resonator in two configurations: aligned and zigzag. With the zigzag configuration, which leaves a more open path for acoustic air-particle motion, significant increases in harvested voltage and power were observed. Because the beams interrupt the acoustic air-particle motion, placing PVDF beams near the closed end of the tube was found not to be beneficial. The total output voltage of the piezoelectric beams increases linearly with the incident sound pressure. This study therefore shows that the proposed technique for harvesting sound wave energy has great potential for converting free energy into useful energy.
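The tuning rule described above follows from standard acoustics: a quarter-wavelength (closed-open) tube has its first eigenfrequency at f1 = c / (4L), and each beam is designed to match it. The sketch below uses illustrative values for tube length and speed of sound, not the authors' geometry.

```python
# First acoustic eigenfrequency of a quarter-wavelength (closed-open) tube:
# f1 = c / (4 L). The beams are tuned to this frequency; the values below
# are illustrative, not the authors' design parameters.
def quarter_wave_frequency(length_m: float, speed_of_sound: float = 343.0) -> float:
    """First acoustic eigenfrequency of a closed-open tube, in Hz."""
    return speed_of_sound / (4.0 * length_m)

f1 = quarter_wave_frequency(0.5)  # a 0.5 m tube in air resonates at 171.5 Hz
```

Note the inverse relationship: a longer tube lowers the target eigenfrequency, which in turn dictates a more compliant beam design.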

Keywords: acoustic energy, acoustic resonator, energy harvester, eigenfrequency, polyvinylidene fluoride (PVDF)

Procedia PDF Downloads 349
39 Evaluation of the Effectiveness of Crisis Management Support Bases in Tehran

Authors: Sima Hajiazizi

Abstract:

Tehran, the capital of Iran, is known as one of the world's capital cities most vulnerable to natural disasters such as earthquakes and floods. The city lies on three faults, Ray, Mosha, and the North fault; according to a JICA report from 2000, most of the casualties and destruction would result from activity on the Ray fault. In 2003, the organization for crisis prevention and management in Tehran became active under the Ministry to conduct prevention and rehabilitation in the city. Given the extent of the city and the lack of appropriate access, decentralized management was adopted for crisis management support: bases were established in each region to host crisis management headquarters at times of crisis and to implement prevention and citizen education programs; some bases were positioned in areas bordering neighboring provinces to provide help at the time of an accident; and a number of bases store the food and equipment needed in a disaster. In this study, the bases of regions one, six, nine, and eleven of Tehran are evaluated in the fields of management and training. The selected regions had experienced local accidents and disaster management practice, and their local training had faced challenges. The research used a qualitative approach based on grounded theory. Information was first obtained through document study, semi-structured interviews with administrators and training officials, and participant observation in classrooms; it was then coded line by line in two stages by comparing and questioning concepts and extracting categories according to indicators obtained from the literature, from which the central themes were derived. The main categories were identified according to the frequency and importance of each phenomenon, a paradigm diagram was drawn, and finally, by intersecting the phenomena and their causes with the indicators extracted from the literature, the approach to each phenomenon and the effectiveness of the bases were assessed.
Two phenomena emerged in management: 1. an inability to manage vast and complex crisis events and to resolve minor incidents, due to mismatches between managers; 2. weaknesses in the implementation of preventive and preparedness measures for crisis management, in terms of causal conditions, context, and intervening conditions. Several phenomena emerged in the field of training: 1. in region six, participation and interest are high; 2. in region eleven, participation in crisis management training was initially low but has since increased through drills in schools and local initiatives such as advertising and the use of aid groups; 3. in region nine, participation in crisis management training was low at the beginning, and initiatives such as school drills and community engagement have increased sensitivity and participation; 4. managers disagreed with applying the same training in all areas. Finally, for the causes underlying the main issues, recommendations are provided with the help of concepts extracted from the literature.

Keywords: crisis management, crisis management support bases, vulnerability, crisis management headquarters, prevention

Procedia PDF Downloads 150
38 UV-Cured Thiol-ene Based Polymeric Phase Change Materials for Thermal Energy Storage

Authors: M. Vezir Kahraman, Emre Basturk

Abstract:

Energy storage technology offers new ways to meet the demand for efficient and reliable energy storage materials. Thermal energy storage systems provide the potential for energy savings, which in turn decrease the environmental impact of energy usage. For this purpose, phase change materials (PCMs), which work as 'latent heat storage units' that can store or release large amounts of energy, are preferred. PCMs absorb, store and discharge thermal energy during cycles of melting and freezing, converting from one phase to another. PCMs can generally be arranged into three classes: organic materials, salt hydrates and eutectics. Many kinds of organic and inorganic PCMs and their blends have been examined as latent heat storage materials. PCMs have found application in areas such as solar energy storage and transfer, HVAC (heating, ventilating and air conditioning) systems, thermal comfort in vehicles, passive cooling, temperature-controlled distribution, industrial waste heat recovery, underfloor heating systems and modified fabrics in textiles. Ultraviolet (UV)-curing technology has many advantages that have made it applicable in many different fields: low energy consumption, high speed, room-temperature operation, low processing costs, high chemical stability, and environmental friendliness. One important advantage of UV-cured PCMs is that the cured network prevents the interior PCM from leaking: a shape-stabilized PCM is prepared by blending the PCM with a supporting material, usually a polymer. In our study, leakage is minimized by coating the fatty alcohols with a photo-cross-linked thiol-ene based polymeric system, in which the photo-cross-linked polymer acts as a matrix. The aim of this study is to introduce a novel thiol-ene based shape-stabilized PCM.
Photo-crosslinked thiol-ene based polymers containing fatty alcohols were prepared and characterized as phase change materials (PCMs). Different fatty alcohols were used in order to investigate their properties as shape-stable PCMs. The structure of the PCMs was confirmed by ATR-FTIR. The phase transition behavior and thermal stability of the prepared photo-crosslinked PCMs were investigated by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). This work was supported by Marmara University, Commission of Scientific Research Project.
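The latent-heat principle the abstract relies on can be made concrete with a simple energy balance: the heat stored across a melting transition is the sensible heat below and above the melting point plus the latent heat of fusion. The property values below are generic fatty-alcohol-like assumptions for illustration, not measurements from this work.

```python
# Heat stored across a melting transition:
#   Q = m*cp_solid*(Tm - T1) + m*L + m*cp_liquid*(T2 - Tm)
# Property values are generic assumptions, not data from this study.
def stored_energy_kj(mass_kg, cp_solid, cp_liquid, latent_kj_kg,
                     t_start, t_melt, t_end):
    """Total heat stored (kJ) when heating through the phase change."""
    sensible_solid = mass_kg * cp_solid * (t_melt - t_start)
    latent = mass_kg * latent_kj_kg
    sensible_liquid = mass_kg * cp_liquid * (t_end - t_melt)
    return sensible_solid + latent + sensible_liquid

# 1 kg heated from 20 to 60 C, melting at 40 C: the latent term dominates
q = stored_energy_kj(1.0, 2.0, 2.2, 200.0, 20.0, 40.0, 60.0)
```

With these assumed values the latent heat alone (200 kJ) contributes more than twice the combined sensible terms, which is why PCMs outperform sensible-only storage media over narrow temperature ranges.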

Keywords: differential scanning calorimetry (DSC), polymeric phase change material, thermal energy storage, UV-curing

Procedia PDF Downloads 201
37 Dynamic-Cognition of Strategic Mineral Commodities: An Empirical Assessment

Authors: Carlos Tapia Cortez, Serkan Saydam, Jeff Coulton, Claude Sammut

Abstract:

Strategic mineral commodities (SMC), both energy commodities and metals, have long been fundamental for human beings. There is a strong, long-run relation between the mineral resources industry and society's evolution, with the provision of primary raw materials becoming one of the most significant drivers of economic growth. Given the relevance of mineral resources to the entire economy and society, understanding SMC market behaviour in order to simulate price fluctuations has become crucial for governments and firms. As with any human activity, SMC price fluctuations are affected by economic, geopolitical, environmental, technological and psychological issues, in which cognition plays a major role. Cognition is defined as the capacity to store information in memory and to process it for decision making, problem solving, and adaptation. It thus has a significant role in systems that exhibit dynamic equilibrium through time, such as economic growth. Cognition not only allows understanding of past behaviours and trends in SMC markets but also supports future expectations of demand and supply levels and of prices, although speculation is unavoidable. Technological development may also be described as a cognitive system. Since the Industrial Revolution, technological developments have had a significant influence on SMC production costs and prices, and have likewise allowed co-integration between commodities and market locations. This suggests a close relation between structural breaks, technology and price evolution. SMC price forecasting has commonly been addressed by econometric and Gaussian-probabilistic models. Econometric models may incorporate the relationships between variables; however, they are static, which leads to an incomplete description of price evolution through time.
Gaussian-probabilistic models may evolve through time; however, price fluctuations are addressed by assuming random behaviour and a normal distribution, which appears far from the real behaviour of both the market and prices. Random fluctuation ignores the evolution of market events and the technical and temporal relations between variables, giving the illusion of controlled future events. The normal distribution underestimates price fluctuations by using restricted ranges, confining decision making to a pre-established space. A proper understanding of SMC price dynamics, taking into account the historical-cognitive relation between economic, technological and psychological factors over time, is fundamental for attempting to simulate prices. The aim of this paper is to discuss the SMC market cognition hypothesis and empirically demonstrate its dynamic-cognitive capacity. Three of the largest and most traded SMCs, oil, copper and gold, will be assessed to examine economic, technological and psychological cognition, respectively.
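The Gaussian-probabilistic model the abstract critiques can be sketched as geometric Brownian motion with i.i.d. normally distributed log-returns. The parameters below are illustrative, not fitted to oil, copper or gold; the point is that such a path, by construction, understates the fat-tailed jumps observed in real commodity markets.

```python
# Minimal geometric Brownian motion price path: i.i.d. normal log-returns.
# Illustrative parameters only, not fitted to any real commodity series.
import math
import random

def simulate_gbm(p0: float, mu: float, sigma: float, n_steps: int, seed: int = 42):
    """Simulate one price path; normal log-returns keep prices positive but
    confine fluctuations to the thin tails the abstract criticizes."""
    rng = random.Random(seed)
    prices = [p0]
    for _ in range(n_steps):
        r = rng.normalvariate(mu, sigma)  # daily log-return
        prices.append(prices[-1] * math.exp(r))
    return prices

path = simulate_gbm(100.0, 0.0, 0.02, 250)  # roughly one trading year
```

Under this model a five-sigma daily move is essentially impossible, whereas commodity price histories record such moves repeatedly, which is the restricted-range problem discussed above.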

Keywords: commodity price simulation, commodity price uncertainties, dynamic-cognition, dynamic systems

Procedia PDF Downloads 433
36 CRM Cloud Computing: An Efficient and Cost Effective Tool to Improve Customer Interactions

Authors: Gaurangi Saxena, Ravindra Saxena

Abstract:

Lately, cloud computing has been used to attain corporate goals more effectively and efficiently at lower cost. This computing paradigm has emerged as a powerful tool for optimum utilization of resources, and for gaining competitiveness through cost reduction and achieving business goals with greater flexibility. Realizing the importance of this technique, most of the well-known companies in the computer industry, such as Microsoft, IBM, Google and Apple, are spending millions of dollars on researching cloud computing and investigating the possibility of producing interface hardware for cloud computing systems. It is believed that, with the right middleware, a cloud computing system can execute all the programs a normal computer could run: potentially everything from the simplest generic word-processing software to highly specialized programs customized for a specific company could work successfully on a cloud computing system. A cloud is a pool of virtualized computer resources. Clouds are not limited to grid environments; they also support interactive user-facing applications such as web applications and three-tier architectures. Cloud computing is not a fundamentally new paradigm: it draws on existing technologies and approaches, such as utility computing, software-as-a-service, distributed computing, and centralized data centers. Some companies rent physical space to store servers and databases because they do not have it available on site. Cloud computing gives these companies the option of storing data on someone else's hardware, removing the need for physical space on the front end. Prominent service providers such as Amazon, Google, Sun, IBM, Oracle and Salesforce are extending computing infrastructures and platforms as a core for providing top-level services for computation, storage, databases and applications. Application services may include email, office applications, finance, video, audio and data processing.
By using a cloud computing system, a company can improve its customer relationship management. A CRM cloud computing system can deliver to a sales team a blend of unique functionalities that improve agent/customer interactions. This paper first defines cloud computing as a tool for running business activities more effectively and efficiently at a lower cost, and then distinguishes cloud computing from grid computing. Based on an exhaustive literature review, the authors discuss the application of cloud computing in different disciplines of management, especially in the field of marketing, with special reference to its use in CRM. The study concludes that a CRM cloud computing platform helps a company track data such as orders, discounts, references, competitors and much more. By using CRM cloud computing, companies can improve their customer interactions and, by serving customers more efficiently and at a lower cost, gain competitive advantage.

Keywords: cloud computing, competitive advantage, customer relationship management, grid computing

Procedia PDF Downloads 279
35 Designing Sustainable and Energy-Efficient Urban Network: A Passive Architectural Approach with Solar Integration and Urban Building Energy Modeling (UBEM) Tools

Authors: A. Maghoul, A. Rostampouryasouri, MR. Maghami

Abstract:

The joint development of urban design and power network planning has been gaining momentum in recent years. The integration of renewable energy with urban design is widely regarded as an increasingly important response to climate change and energy security. Through the use of passive strategies and solar integration with urban building energy modeling (UBEM) tools, architects and designers can create high-quality designs that meet the needs of clients and stakeholders. To determine the most effective ways of combining renewable energy with urban development, we analyze the relationship between urban form and renewable energy production. The procedures involved in this practice include passive solar gain (in building design and urban design), solar integration, location strategy, and 3D modelling, with a case study conducted in Tehran, Iran. The study emphasizes the importance of spatial and temporal considerations in the development of sector-coupling strategies for solar power deployment in arid and semi-arid regions. The substation considered in the research consists of two parallel transformers, 13 lines, and 38 connection points. Each urban load connection point is equipped with 500 kW of solar PV capacity and 1 kWh of battery energy storage (BES) to store excess solar power and inject it into the urban network during peak periods. The simulations and analyses were performed in EnergyPlus. Passive solar gain involves maximizing the amount of sunlight that enters a building to reduce the need for artificial lighting and heating. Solar integration involves integrating solar photovoltaic (PV) power into smart grids to reduce emissions and increase energy efficiency. Location strategy is crucial to maximizing the utilization of solar PV on an urban distribution feeder.
Additionally, 3D models were made in Revit; such models are a key component of decision-making in areas including climate change mitigation, urban planning, and infrastructure. We applied these strategies in this research, and the results show that it is possible to create sustainable and energy-efficient urban environments. Furthermore, demand response programs can be used in conjunction with solar integration to optimize energy usage and reduce the strain on the power grid. This study highlights the influence of ancient Persian architecture on Iran's urban planning system, as well as the potential for reducing pollutants in building construction. Additionally, the paper explores advances in eco-city planning and development and the emerging practices and strategies for integrating sustainability goals.
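The storage behaviour described for each connection point (charge the 1 kWh battery from surplus PV, discharge it to cover deficits at peak) can be sketched as a simple greedy dispatch rule. This is an illustrative sketch only; the paper's actual analysis was performed in EnergyPlus, not with this code.

```python
# Greedy dispatch for one connection point: surplus PV charges the battery
# up to its capacity, deficits are covered from the battery first, and the
# remainder is exchanged with the urban network. Illustrative only.
def dispatch(pv_kw, load_kw, soc_kwh, capacity_kwh=1.0, dt_h=1.0):
    """Return (grid_kw, new_soc_kwh); positive grid_kw means import from the
    network, negative means injection into the urban network."""
    surplus = pv_kw - load_kw
    if surplus >= 0:
        charge = min(surplus * dt_h, capacity_kwh - soc_kwh)  # store what fits
        return -(surplus - charge / dt_h), soc_kwh + charge
    deficit = -surplus
    discharge = min(deficit * dt_h, soc_kwh)                  # cover from battery
    return deficit - discharge / dt_h, soc_kwh - discharge
```

With 500 kW of PV against a 1 kWh battery, the rule makes clear that the battery can absorb only a sliver of a sunny hour's surplus; most of it is injected into the network immediately, which is why the temporal considerations stressed above matter.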

Keywords: energy-efficient urban planning, sustainable architecture, solar energy, sustainable urban design

Procedia PDF Downloads 42
34 Fuzzy Availability Analysis of a Battery Production System

Authors: Merve Uzuner Sahin, Kumru D. Atalay, Berna Dengiz

Abstract:

In today’s competitive market, there are many alternative products that can be used in a similar manner and for a similar purpose. Therefore, the utility of the product is an important issue for the preferability of the brand. This utility can be measured in terms of functionality, durability, and reliability, all of which are affected by the system's capabilities. Reliability is an important system design criterion for manufacturers seeking high availability. Availability is the probability that a system (or a component) is performing its function properly at a specific point in time or over a specified period of time. System availability provides valuable input for estimating the production rate the company needs to realize its production plan. When considering only the corrective maintenance downtime of the system, mean time between failures (MTBF) and mean time to repair (MTTR) are used to obtain system availability. MTBF and MTTR are also important measures for reliability engineers and practitioners seeking to improve system performance by adopting suitable maintenance strategies. Conventional availability analysis requires that the failure and repair time probability distributions of each component in the system be known. In general, however, companies do not have statistics or quality control departments to store such a large amount of data, so real events or situations are described deterministically rather than with the stochastic data needed for a complete description of real systems. Fuzzy set theory is an alternative framework for analyzing the uncertainty and vagueness in real systems. The aim of this study is to present a novel approach to computing system availability by representing MTBF and MTTR as fuzzy numbers. Based on experience with the system, three different spreads of MTBF and MTTR (15%, 20%, and 25%) were chosen to obtain the lower and upper limits of the fuzzy numbers. 
To the best of our knowledge, the proposed method is the first application to use fuzzy MTBF and fuzzy MTTR for fuzzy system availability estimation. The method is easy for practitioners in industry to apply to any repairable production system, and it allows reliability engineers, managers, and practitioners to analyze system performance in a more consistent and logical manner based on fuzzy availability. This paper presents a real case study of a repairable multi-stage production line in a lead-acid battery factory in Turkey. The study focuses on the wet-charging battery process, which has a higher production level than the other battery types. In this system, components can exist in only two states, working or failed, and it is assumed that when a component fails, it becomes as good as new after repair. Instead of classical methods, using fuzzy set theory to obtain intervals for these measures is very useful for system managers and practitioners analyzing system qualifications, helping them find better results for their working conditions. Thus, much more detailed information about system characteristics is obtained.
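The approach can be sketched numerically under two common assumptions not spelled out in the abstract: steady-state availability is taken as A = MTBF / (MTBF + MTTR), and each triangular fuzzy number is built symmetrically from its modal value and one of the stated spreads. All numeric inputs below are illustrative, not the factory's data.

```python
# Hypothetical sketch: fuzzy availability from triangular fuzzy MTBF/MTTR.
# Assumes the crisp formula A = MTBF / (MTBF + MTTR). Since A increases in
# MTBF and decreases in MTTR, the availability bounds follow directly from
# interval arithmetic on the fuzzy supports.

def tfn(mode, spread):
    """Triangular fuzzy number (lower, mode, upper) from a symmetric spread."""
    return (mode * (1 - spread), mode, mode * (1 + spread))

def fuzzy_availability(mtbf, mttr):
    """Triangular fuzzy availability from fuzzy MTBF and MTTR."""
    lo = mtbf[0] / (mtbf[0] + mttr[2])   # worst case: low MTBF, high MTTR
    mid = mtbf[1] / (mtbf[1] + mttr[1])  # modal (crisp) availability
    hi = mtbf[2] / (mtbf[2] + mttr[0])   # best case: high MTBF, low MTTR
    return (lo, mid, hi)

# Illustrative values in hours (not from the paper), with the 15% spread
# mentioned in the abstract.
mtbf = tfn(200.0, 0.15)
mttr = tfn(10.0, 0.15)
print(fuzzy_availability(mtbf, mttr))
```

The resulting interval around the modal availability is exactly the kind of range the abstract argues is more informative for managers than a single crisp number.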

Keywords: availability analysis, battery production system, fuzzy sets, triangular fuzzy numbers (TFNs)

Procedia PDF Downloads 193
33 Teaching Timber: The Role of the Architectural Student and Studio Course within an Interdisciplinary Research Project

Authors: Catherine Sunter, Marius Nygaard, Lars Hamran, Børre Skodvin, Ute Groba

Abstract:

Globally, the construction and operation of buildings contribute up to 30% of annual greenhouse gas emissions. In addition, the building sector is responsible for approximately a third of global waste. In this context, the utilization of renewable resources in buildings, especially materials that store carbon, will play a significant role in the growing city. These are two reasons for introducing wood as a building material of growing relevance. A third is the potential economic value in countries with a forest industry that is not currently used to capacity. In 2013, a four-year interdisciplinary research project titled “Wood Be Better” was created, with the principal goal of producing and publicising knowledge that would facilitate increased use of wood in buildings in urban areas. The research team consisted of architects, engineers, wood technologists and mycologists, from both research institutions and industrial organisations. Five structured work packages were included in the initial research proposal. Work package 2, titled “Design-based research”, proposed using architecture master courses as laboratories for systematic architectural exploration. The aim was twofold: to provide students with an interdisciplinary team of experts from consultancies and producers, as well as teachers and researchers, who could offer the latest information on wood technologies, while at the same time having the studio course test the effects of the use of wood on the functional, technical and tectonic quality of different architectural projects on an urban scale, providing results that could be fed back into the research material. The aim of this article is to examine the successes and failures of this pedagogical approach in an architecture school, as well as the opportunities for greater integration between academic research projects, industry experts and studio courses in the future. 
This will be done through a set of qualitative interviews with researchers, teaching staff and students of the studio courses held each semester since spring 2013. These will investigate the value of the various experts of the course; the different themes of each course; the response to the urban scale, architectural form and construction detail; the effect of working with the goals of a research project; and the value of the studio projects to the research. In addition, six sample projects will be presented as case studies. These will show how the projects related to the research and could be collected and further analysed, innovative solutions that were developed during the course, different architectural expressions that were enabled by timber, and how projects were used as an interdisciplinary testing ground for integrated architectural and engineering solutions between the participating institutions. The conclusion will reflect on the original intentions of the studio courses, the opportunities and challenges faced by students, researchers and teachers, the educational implications, and on the transparent and inclusive discourse between the architectural researcher, the architecture student and the interdisciplinary experts.

Keywords: architecture, interdisciplinary, research, studio, students, wood

Procedia PDF Downloads 282
32 Improving Working Memory in School Children through Chess Training

Authors: Veena Easvaradoss, Ebenezer Joseph, Sumathi Chandrasekaran, Sweta Jain, Aparna Anna Mathai, Senta Christy

Abstract:

Working memory refers to a cognitive processing space where information is received, managed, transformed, and briefly stored. It is an operational process of transforming information for the execution of cognitive tasks in different and new ways. Many classroom activities require children to remember information and mentally manipulate it. While the impact of chess training on intelligence and academic performance has been unequivocally established, its impact on working memory needs to be studied. This study, funded by the Cognitive Science Research Initiative, Department of Science & Technology, Government of India, analyzed the effect of one year of chess training on the working memory of children. A pretest–posttest design with a control group was used, with 52 children in the experimental group and 50 children in the control group. The sample was selected from children studying in school (grades 3 to 9) and included both genders. The experimental group underwent weekly chess training for one year, while the control group was involved in extracurricular activities. Working memory was measured by two subtests of WISC-IV INDIA. The Digit Span subtest involves recalling a list of numbers of increasing length presented orally in forward and in reverse order, and the Letter–Number Sequencing subtest involves rearranging jumbled letters and numbers presented orally according to a given rule. Both tasks require the child to receive and briefly store information, manipulate it, and present it in a changed format. The children were trained using the Winning Moves curriculum, audio-visual learning methods, and hands-on chess training; they recorded their games on score sheets and analyzed their mistakes, thereby increasing their meta-analytical abilities. They were also trained in opening theory, checkmating techniques, end-game theory, and tactical principles. Pre-test equivalence of means was established. 
Analysis revealed that the experimental group showed significant gains in working memory compared to the control group. The present study clearly establishes a link between chess training and working memory. The transfer of chess training to the improvement of working memory can be attributed to the fact that, while playing chess, children evaluate positions, visualize new positions in their mind, analyze the pros and cons of each move, and choose moves based on the information stored in their mind. If working memory's capacity could be expanded or made to function more efficiently, it could improve executive functions as well as the scholastic performance of the child.

Keywords: chess training, cognitive development, executive functions, school children, working memory

Procedia PDF Downloads 236
31 Gold-Mediated Modification of Apoferritin Surface with Targeting Antibodies

Authors: Simona Dostalova, Pavel Kopel, Marketa Vaculovicova, Vojtech Adam, Rene Kizek

Abstract:

The protein apoferritin seems to be a very promising structure for use as a nanocarrier. It is prepared from the intracellular protein ferritin, naturally found in most organisms, whose role is to store and transport ferrous ions. Apoferritin is the hollow protein cage without ferrous ions that can be prepared from ferritin by reduction with thioglycolic acid or dithionite. The structure of apoferritin is composed of 24 protein subunits, creating a sphere 12 nm in diameter with an inner cavity 8 nm in diameter. The drug encapsulation process is based on the response of the apoferritin structure to pH changes in the surrounding solution. At low pH, apoferritin disassembles into individual subunits and its structure is “opened”. It can then be mixed with any desired cytotoxic drug, and after adjusting the pH back to neutral, the subunits reconnect and the drug is encapsulated within the apoferritin particles. Excess drug molecules can be removed by dialysis. The receptors for apoferritin, SCARA5 and TfR1, are found in the membranes of both healthy and cancer cells. To enhance the specific targeting of the apoferritin nanocarrier, its surface can be modified with targeting moieties, such as antibodies. To ensure a sterically correct complex, we used a peptide linker based on protein G, whose N-terminus has affinity for the Fc region of antibodies. To connect the peptide to the surface of apoferritin, the C-terminus of the peptide was made of cysteine, which has affinity for gold. The surface of apoferritin with encapsulated doxorubicin (ApoDox) was coated either with gold nanoparticles (ApoDox-Nano) or with gold (III) chloride hydrate reduced with sodium borohydride (ApoDox-HAu). The applied amount of gold in the form of gold (III) chloride hydrate was 10 times higher than in the case of gold nanoparticles. 
However, after removal of the excess unbound ions by electrophoretic separation, the concentration of gold on the surface of apoferritin was only 6 times higher for ApoDox-HAu than for ApoDox-Nano. Moreover, the reduction with sodium borohydride caused a loss of doxorubicin's fluorescent properties (excitation maximum at 480 nm, emission maximum at 600 nm) and thus its biological activity. The fluorescent properties of ApoDox-Nano were similar to those of unmodified ApoDox; it was therefore better suited for the intended use. To evaluate the specificity of apoferritin modified with antibodies, we used an ELISA-like method in which the surfaces of microtitration plate wells were coated with the antigen (goat anti-human IgG antibodies). To these wells, we applied ApoDox without targeting antibodies and ApoDox-Nano modified with targeting antibodies (human IgG antibodies). The amount of unmodified ApoDox remaining on the antigen after incubation and subsequent rinsing with water was 5 times lower than that of ApoDox-Nano modified with targeting antibodies, while the gold modification itself caused no change in the targeting properties of ApoDox. It can therefore be concluded that the demonstrated procedure allows us to create a nanocarrier with enhanced targeting properties, suitable for nanomedicine.

Keywords: apoferritin, doxorubicin, nanocarrier, targeting antibodies

Procedia PDF Downloads 362
30 Customer Focus in Digital Economy: Case of Russian Companies

Authors: Maria Evnevich

Abstract:

In modern conditions, in most markets, price competition is becoming less effective. On the one hand, there is a gradual decrease in margins in the main traditional sectors of the economy, so further price reduction becomes too ‘expensive’ for the company. On the other hand, the effect of price reduction is leveled out, and the reason for this phenomenon is likely informational. As a result, even if the company reduces prices, making its products more accessible to the buyer, there is a high probability that this will not increase sales unless additional large-scale advertising and information campaigns are conducted. Indeed, a large-scale information and advertising campaign has a much greater effect by itself than price reductions. At the same time, the cost of mass informing grows every year, especially in the main information channels. The article presents a generalization, systematization, and development of theoretical approaches and best practices in customer-focused business management and in relationship marketing in the modern digital economy. The research methodology is based on the synthesis and content analysis of sociological and marketing research and on a study of the systems for handling consumer appeals and the loyalty programs of the 50 largest client-oriented companies in Russia. In addition, an analysis of internal documentation on customers’ purchases in one of the largest retail companies in Russia made it possible to identify whether buyers prefer to make complex purchases in the single retail store with the best price image for them. The cost of attracting a new client is now quite high and continues to grow, so it becomes more important to retain clients and increase their involvement through marketing tools. A huge role is played by modern digital technologies, used both in advertising (e-mailing, SEO, contextual advertising, banner advertising, SMM, etc.) and in service. 
To implement the client-oriented omnichannel service described above, it is necessary to identify the client and work with the personal data provided when filling in the loyalty program application form. The analysis of the loyalty programs of 50 companies identified the following types of cards: discount cards, bonus cards, mixed cards, coalition loyalty cards, bank loyalty programs, aviation loyalty programs, hybrid loyalty cards, and situational loyalty cards. The use of loyalty cards makes it possible not only to stimulate the customer to make untargeted purchases but also to provide individualized offers and deliver more targeted information. The development of digital technologies and modern means of communication has significantly changed not only marketing and promotion but also the economic landscape as a whole. The factors of competitiveness are companies' digital capabilities in the field of customer orientation: personalization of service, customization of advertising offers, optimization of marketing activity, and improvement of logistics.

Keywords: customer focus, digital economy, loyalty program, relationship marketing

Procedia PDF Downloads 137
29 Consumer Utility Analysis of Halal Certification on Beef Using Discrete Choice Experiment: A Case Study in the Netherlands

Authors: Rosa Amalia Safitri, Ine van der Fels-Klerx, Henk Hogeveen

Abstract:

Halal is a dietary law observed by people of the Islamic faith. It is a type of credence food quality that cannot easily be verified by consumers, even upon or after consumption. Halal certification therefore serves as a practical tool for consumers to make an informed choice, particularly in a non-Muslim-majority country such as the Netherlands. A discrete choice experiment (DCE) was employed in this study for its ability to assess the importance of attributes attached to Halal beef in the Dutch market and to investigate consumer utilities. Furthermore, willingness to pay (WTP) for the desired Halal certification was estimated. The four most relevant attributes were selected: slaughter method, traceability information, place of purchase, and Halal certification. Price was incorporated as an attribute to allow estimation of willingness to pay for Halal certification. A total of 242 Muslim respondents who regularly consume Halal beef completed the survey: Dutch consumers (53%) and non-Dutch consumers living in the Netherlands (47%). The vast majority of the respondents (95%) were between 18 and 45 years old, with the largest group being students (43%), followed by employees (30%) and housewives (12%). Most respondents (76%) had a disposable monthly income of less than €2,500, while the rest earned more. The respondents assessed themselves as having good knowledge of the studied attributes, except for traceability information, of which 62% considered themselves not knowledgeable. The findings indicated that slaughter method was the most important attribute, followed by Halal certification, place of purchase, price, and traceability information. This order of importance varied across sociodemographic variables, except for slaughter method. Both the Dutch and non-Dutch subgroups valued Halal certification as the third most important attribute. 
However, non-Dutch respondents valued it with higher importance (0.20) than their Dutch counterparts (0.16), and for non-Dutch respondents, price was more important than Halal certification. The ideal product preferred by consumers, i.e., the product yielding the highest consumer utilities, was characterized by beef obtained without pre-slaughter stunning, with traceability information, available at a Halal store, certified by an official certifier, and sold at €2.75 per 500 g. In general, an official Halal certifier was most preferred. However, consumers were not willing to pay a premium for any type of Halal certifier, as indicated by negative WTP values of -€0.73, -€0.93, and -€1.03 for small, official, and international certifiers, respectively. This finding indicates that consumers tend to lose utility when confronted with price. WTP estimates differed across sociodemographic variables, with male and non-Dutch respondents having the lowest WTP. Unfamiliarity with traceability information may have caused respondents to perceive it as the least important attribute. In the context of Halal-certified meat, adding traceability information to meat packaging can serve two functions: first, consumers can judge for themselves whether the processes comply with Halal requirements, for example, the use of pre-slaughter stunning; and second, it helps to assure the meat's safety. Integrating traceability information into meat packaging can therefore help consumers make informed decisions about both Halal status and food safety.
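The WTP figures quoted above can be illustrated with the standard choice-modelling identity: in a conditional logit model, the willingness to pay for an attribute level is the negative ratio of its utility coefficient to the price coefficient. The coefficients below are assumptions chosen only to reproduce the reported -€0.93 for the official certifier; they are not the study's estimates.

```python
# Hypothetical sketch of the standard WTP calculation in discrete choice
# experiments. Coefficient values are illustrative, not from the study.

def willingness_to_pay(beta_attribute, beta_price):
    """WTP = -beta_attribute / beta_price (price coefficient is negative)."""
    return -beta_attribute / beta_price

# Assumed coefficients: a negative utility coefficient for the certifier
# level together with the (negative) price coefficient yields a negative
# WTP, matching the abstract's finding of no premium for certification.
beta_price = -0.8
beta_official_certifier = -0.744
print(round(willingness_to_pay(beta_official_certifier, beta_price), 2))
```

A negative WTP here means a respondent would need a price discount, rather than pay a premium, to accept that certifier, which is exactly how the abstract interprets its estimates.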

Keywords: consumer utilities, discrete choice experiments, Halal certification, willingness to pay

Procedia PDF Downloads 100
28 A Smart Sensor Network Approach Using Affordable River Water Level Sensors

Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan

Abstract:

Recent developments in sensors, wireless data communication, and cloud computing have brought the sensor web to a whole new generation. The introduction of the concept of the ‘Internet of Things (IoT)’ has brought sensor research to a new level, which involves developing long-lasting, low-cost, environmentally friendly, smart sensors; new wireless data communication technologies; big data analytics algorithms; and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor network consists of several layers: the physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on a field gateway; the data transmission layer, where data and instructions are exchanged; and the data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; to summarize them all, a smart sensor must be intelligent and adaptable. In future large-scale sensor networks, the collected data will be far too large for traditional applications to send, store, or process, so the sensor unit must be intelligent enough to pre-process collected data locally on board (this may instead occur on the field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model, and the machine-learning-based MoPBAS method, are introduced, and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is a key component of the next generation of smart sensor networks. 
For example, in a water level monitoring system, a weather forecast can be retrieved from external sources, and if heavy rainfall is expected, the server can send instructions to the sensor nodes to, for instance, increase the sampling rate or, conversely, switch on sleep mode. In this paper, we describe the deployment of 11 affordable water level sensors in the Dodder catchment in Dublin, Ireland. The objective of this paper is to use this deployed river level sensor network as a case study to give a vision of the next generation of smart sensor networks for flood monitoring, assisting agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Using the 11 deployed river level sensors as a networked case study, a vision of the next generation of smart sensor networks is proposed. Each key component of the smart sensor network is discussed, which we hope will inspire researchers working in the sensor research domain.
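The simplest of the three on-board methods named in the abstract, plain thresholding, can be sketched as follows. This is a minimal illustration of the idea that pre-processing on the node keeps raw data local and flags only alert-worthy samples for transmission; the threshold value, level data, and function names are all assumptions, not details from the deployed system.

```python
# Hypothetical sketch of on-board simple thresholding for a river level
# sensor: readings are screened locally, and only samples exceeding a flood
# threshold would be reported upstream. Values are illustrative.

FLOOD_THRESHOLD_M = 1.5  # assumed alert level in metres

def preprocess(readings, threshold=FLOOD_THRESHOLD_M):
    """Return (sample index, level) pairs that exceed the threshold."""
    return [(i, level) for i, level in enumerate(readings) if level > threshold]

levels = [0.4, 0.5, 1.6, 1.8, 0.9]  # simulated river levels (m)
alerts = preprocess(levels)
print(alerts)  # [(2, 1.6), (3, 1.8)]
```

The statistical and MoPBAS variants discussed in the paper replace the fixed threshold with an adaptive one, but the node-side filtering pattern, and hence the reduction in transmitted data, is the same.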

Keywords: smart sensing, internet of things, water level sensor, flooding

Procedia PDF Downloads 350
27 The Potential Impact of Big Data Analytics on Pharmaceutical Supply Chain Management

Authors: Maryam Ziaee, Himanshu Shee, Amrik Sohal

Abstract:

Big Data Analytics (BDA) in supply chain management has recently drawn the attention of academics and practitioners. Big data refers to a massive amount of data from different sources, in different formats, generated at high speed through transactions in business environments and supply chain networks. Traditional statistical tools and techniques find it difficult to analyse such massive data. BDA can assist organisations in capturing, storing, and analysing data, specifically in the field of supply chain. Currently, there is a paucity of research on BDA in the pharmaceutical supply chain context. In this research, the Australian pharmaceutical supply chain was selected as the case study. This industry is highly significant, since the right medicine must reach the right patients, at the right time, in the right quantity, in good condition, and at the right price to save lives. However, drug shortages remain a substantial problem for hospitals across Australia, with implications for patient care, staff resourcing, and expenditure. Furthermore, a massive volume and variety of data is generated at high speed from multiple sources in the pharmaceutical supply chain, and it needs to be captured and analysed to benefit operational decisions at every stage of supply chain processes. As the pharmaceutical industry lags behind other industries in using BDA, this raises the question of whether BDA can improve transparency across the pharmaceutical supply chain by enabling the partners to make informed decisions in their operational activities. This presentation explores the impacts of BDA on supply chain management. An exploratory qualitative approach was adopted to analyse data collected through interviews. This study also explores the potential of BDA across the whole pharmaceutical supply chain rather than focusing on a single entity. 
Twenty semi-structured interviews were undertaken with top managers in fifteen organisations (five pharmaceutical manufacturers, five wholesalers/distributors, and five public hospital pharmacies) to investigate their views on the use of BDA. The findings revealed that BDA can give pharmaceutical entities improved visibility over the whole supply chain and the market; it enables entities, especially manufacturers, to monitor consumption and the demand rate in real time and to make accurate demand forecasts, which reduces drug shortages. Timely and precise decision-making allows the entities to source and manage their stocks more effectively, which can address drug demand at hospitals and support responses to unanticipated issues such as drug shortages. Earlier studies explored BDA in the context of clinical healthcare; this presentation, by contrast, investigates the benefits of BDA in the Australian pharmaceutical supply chain. Furthermore, this research enhances managers’ insight into the potential of BDA at every stage of supply chain processes and helps to improve decision-making in their supply chain operations. The findings will turn the rhetoric of data-driven decision-making into a reality in which managers opt for analytics for improved decision-making in supply chain processes.

Keywords: big data analytics, data-driven decision, pharmaceutical industry, supply chain management

Procedia PDF Downloads 79
26 Development of a Social Assistive Robot for Elderly Care

Authors: Edwin Foo, Woei Wen, Lui, Meijun Zhao, Shigeru Kuchii, Chin Sai Wong, Chung Sern Goh, Yi Hao He

Abstract:

This presentation describes the development of an elderly care and assistive social robot. We named this robot JOS; he is restricted to table-top operation. JOS is designed to have a maximum volume of 3600 cm3 with a base restricted to 250 mm, and his mission is to provide companionship and assistance to the elderly. To accomplish this mission, he is equipped with perception, reaction, and cognition capabilities. His appearance is not human-like but rather cute and approachable, and the robot is designed to be gender-neutral. However, the robot still has eyes, eyelids, and a mouth. The eyes and eyelids are built entirely with Robotis Dynamixel AX18 motors. To realize this complex task, JOS is also equipped with a microphone array, a vision camera, and an Intel i5 NUC computer, and is powered by a self-charging 12 V lithium battery. His face is constructed using one motor for each eyelid, two motors for the eyeballs, three motors for the neck mechanism, and one motor for lip movement. The vision sensor is housed on JOS's forehead, and the microphone array sits below the mouth. For the vision system, Omron's latest OKAO vision sensor is used. It is a compact and versatile sensor, only 60 mm by 40 mm in size, that operates from a 5 V supply. In addition, the OKAO vision sensor is capable of identifying the user and recognizing the user's expression. With these functions, JOS is able to track and identify the user. If he cannot recognize the user, JOS will ask whether the user wants to be remembered. If yes, JOS stores the user information together with the captured face image in a database, allowing JOS to recognize the user the next time they meet. In addition, JOS is able to interpret the mood of the user through the user's facial expression, allowing the robot to understand the user's mood and behavior and react accordingly. 
Machine learning will later be incorporated so that the robot can learn the behavior of the user and better understand the user's mood and requirements. For the speech system, the Microsoft speech and grammar engine is used for speech recognition. To use the speech engine, we need to build a speech grammar database that captures the words commonly used by the elderly. This database was built from research journals and literature on elderly speech and from interviews with elderly people about what they want the robot to assist them with. Using the results from the interviews and the journal research, we derived a set of common words the elderly frequently use when requesting help, and from this set we built up our grammar database. In situations where there is more than one person near JOS, he is able to identify the person who is talking to him through an in-house developed microphone array structure. To make the robot more interactive, we have also included the capability for the robot to express his emotions to the user through facial expressions, by changing the position and movement of the eyelids and mouth. All robot emotions will be in response to the user's mood and requests. Lastly, we expect to complete this phase of the project and test it with elderly users and also delirium patients by February 2015.

Keywords: social robot, vision, elderly care, machine learning

Procedia PDF Downloads 417