Search results for: building energy optimization

1222 Behavioral Patterns of Adopting Digitalized Services (E-Sport versus Sports Spectating) Using Agent-Based Modeling

Authors: Justyna P. Majewska, Szymon M. Truskolaski

Abstract:

The growing importance of digitalized services in the so-called new economy, including the e-sports industry, has become apparent in recent years. Various demographic and technological changes lead consumers to modify their needs, not with respect to the services themselves but to the method of their application (attracting customers, forms of payment, new content, etc.). In the case of leisure related to competitive spectating, there is a growing need to participate in events whose content is not a sports competition but a computer game challenge – e-sport. The literature in this area has so far focused on estimating the number of e-sport fans, accompanied by simple statistical description (mainly of demographic characteristics such as age, gender, and place of residence). Meanwhile, the development of the industry is shaped by a combination of many intertwined demographic, personality, and psychosocial characteristics of customers, as well as the characteristics of their environment. There is therefore a need for a deeper recognition of the determinants of the behavioral patterns underlying customers' selection of digitalized services, which, in the absence of large available data sets, can be achieved with econometric simulations – multi-agent modeling. The cognitive aim of the study is to reveal internal and external determinants of customers' behavioral patterns under various variants of economic development (the pace of digitization and technological development, socio-demographic changes, etc.). In the paper, an agent-based model with heterogeneous agents (characteristics of the customers themselves and of their environment) was developed, which allowed a three-stage development scenario to be identified: i) initial interest, ii) standardization, and iii) full professionalization. The transition probabilities were estimated using the Method of Simulated Moments. The estimation of the agent-based model parameters and the sensitivity analysis reveal the crucial factors that have driven the rising trend in e-sport spectating and, in a wider perspective, the development of digitalized services. Among the psychosocial characteristics of customers, these are the level of familiarity with the rules of games and sports disciplines, active and passive participation history, and individual perception of challenging activities. Environmental factors include the general reception of games, the number and level of recognition of community builders, and the level of technological development of streaming and community-building platforms. The crucial factor underlying the good predictive power of the model, however, is the level of professionalization. In the initial interest phase, the entry barriers for new customers are high; they decrease during the standardization phase and increase again in the full professionalization phase, when new customers perceive the required participation history as inaccessible. In this case, they are prone to switch to new methods of service application – in the case of e-sport versus sports, to new content and more modern methods of its delivery. In a wider context, the findings of the paper support the idea of a life cycle of services with respect to the methods of their application, from "traditional" to digitalized.
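
To make the staged-adoption mechanism concrete, the following minimal agent-based sketch in Python simulates heterogeneous consumers moving through the three phases named above, with entry barriers that fall during standardization and rise again under full professionalization. All numerical parameters, attribute weights, and transition rules are hypothetical placeholders; the paper's actual model specification and its Method of Simulated Moments estimation are not reproduced here.

```python
import random

# Minimal agent-based sketch of staged adoption of a digitalized service
# (e-sport spectating). Stage order follows the abstract; all numbers are
# illustrative placeholders, not estimated values from the paper.

STAGES = ["initial_interest", "standardization", "full_professionalization"]

class Consumer:
    def __init__(self):
        # Heterogeneous agent traits drawn at random (illustrative only)
        self.familiarity = random.random()            # familiarity with game/sport rules
        self.participation_history = random.random()  # active/passive participation history
        self.adopted = False

def adoption_probability(agent, stage, tech_level):
    """Entry barriers are high at first, drop during standardization,
    then rise again as professionalization makes history hard to acquire."""
    barrier = {"initial_interest": 0.7,
               "standardization": 0.3,
               "full_professionalization": 0.6}[stage]
    appeal = 0.5 * agent.familiarity + 0.3 * agent.participation_history + 0.2 * tech_level
    return max(0.0, appeal - barrier)

def simulate(n_agents=1000, steps_per_stage=20, tech_level=0.8, seed=1):
    random.seed(seed)
    agents = [Consumer() for _ in range(n_agents)]
    history = []
    for stage in STAGES:
        for _ in range(steps_per_stage):
            for a in agents:
                if not a.adopted and random.random() < adoption_probability(a, stage, tech_level):
                    a.adopted = True
        history.append((stage, sum(a.adopted for a in agents) / n_agents))
    return history

if __name__ == "__main__":
    for stage, share in simulate():
        print(f"{stage}: {share:.2%} adopters")
```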

Keywords: agent-based modeling, digitalized services, e-sport, spectators motives

Procedia PDF Downloads 162
1221 GIS and Remote Sensing Approach in Earthquake Hazard Assessment and Monitoring: A Case Study in the Momase Region of Papua New Guinea

Authors: Tingneyuc Sekac, Sujoy Kumar Jana, Indrajit Pal, Dilip Kumar Pal

Abstract:

Tectonism-induced tsunami, landslides, ground shaking leading to liquefaction, infrastructure collapse, and conflagration are common earthquake hazards experienced worldwide. Apart from human casualties, damage to built infrastructure such as roads, bridges, buildings, and other property is a collateral consequence. Appropriate planning, based on proper evaluation and assessment of the potential level of earthquake hazard at a site, must therefore precede development in order to safeguard people's welfare, infrastructure, and other property. The resulting information can be used as a tool to help minimize earthquake risk and to foster appropriate construction design and the formulation of building codes for a particular site. Different disciplines adopt different approaches to assessing and monitoring earthquake hazard throughout the world. For the present study, the potential of GIS and Remote Sensing was utilized to evaluate and assess the earthquake hazard of the study region. Subsurface geology and geomorphology were the common factors assessed and integrated within a GIS environment, coupled with seismicity data layers such as Peak Ground Acceleration (PGA), historical earthquake magnitude, and earthquake depth, to evaluate and prepare liquefaction potential zones (LPZ) culminating in an earthquake hazard zonation of the study sites. Liquefaction can eventuate in the aftermath of severe ground shaking where the site soil conditions, geology, and geomorphology are conducive. These site conditions, the wave propagation media, were assessed to identify the potential zones. The premise is that during any earthquake event, seismic waves are generated at the earthquake focus and propagate to the surface. As they propagate, they pass through certain geological or geomorphological and specific soil features, which, according to their strength, stiffness, and moisture content, amplify or attenuate the strength of the wave propagation to the surface. Accordingly, the resulting intensity of shaking may or may not culminate in the collapse of built infrastructure. For the earthquake hazard zonation, the overall assessment was carried out by integrating the seismicity data layers with the LPZ. Multi-Criteria Evaluation (MCE) with Saaty's Analytic Hierarchy Process (AHP) was adopted for this study. It is a GIS technique that involves the integration of several factors (thematic layers) that can potentially contribute to earthquake-triggered liquefaction. The factors are weighted and ranked in the order of their contribution to earthquake-induced liquefaction, and the weights and ranks assigned to each factor are normalized with the AHP technique. The spatial analysis tools in ArcGIS 10 (raster calculator, reclassify, and overlay analysis) were mainly employed in the study. The final LPZ and earthquake hazard outputs were reclassified into 'Very High', 'High', 'Moderate', 'Low', and 'Very Low' to indicate the level of hazard within the study region.
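
As an illustration of the AHP weighting and weighted-overlay steps described above, the short Python sketch below derives normalized factor weights from a pairwise comparison matrix and combines a few tiny stand-in rasters into a hazard score. The comparison judgements, factor list, and raster values are invented for the example and do not come from the study.

```python
import numpy as np

def ahp_weights(pairwise):
    """Normalize a Saaty pairwise comparison matrix to priority weights
    (principal eigenvector) and report the consistency ratio."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)           # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)  # Saaty's random index
    return w, ci / ri

# Hypothetical judgements for three factors: geology, geomorphology, PGA
pairwise = [[1,   2,   4],
            [1/2, 1,   3],
            [1/4, 1/3, 1]]
weights, cr = ahp_weights(pairwise)

# Tiny stand-in rasters, reclassified 1 (low susceptibility) to 5 (very high)
layers = np.array([[[3, 4], [2, 5]],   # geology
                   [[2, 3], [1, 4]],   # geomorphology
                   [[4, 5], [3, 5]]])  # PGA class

hazard = np.tensordot(weights, layers, axes=1)  # weighted overlay (raster calculator step)
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
print("hazard score raster:\n", np.round(hazard, 2))
```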

Keywords: hazard micro-zonation, liquefaction, multi criteria evaluation, tectonism

Procedia PDF Downloads 259
1220 Evaluation of Redundancy Architectures Based on System on Chip Internal Interfaces for Future Unmanned Aerial Vehicles Flight Control Computer

Authors: Sebastian Hiergeist

Abstract:

It is a common view that Unmanned Aerial Vehicles (UAV) will increasingly migrate into civil airspace. This trend challenges UAV manufacturers in many ways, as it brings a host of new requirements and functional aspects. At the higher application levels, this might be collision detection and avoidance and similar features, whereas all these functions only act as input for the flight control components of the aircraft. The flight control computer (FCC) is the central component when it comes to ensuring continued safe flight and landing. As these systems are flight critical, they have to be built redundantly to provide fail-operational behavior. Recent architectural approaches for FCCs used in UAV systems are often based on very simple microprocessors in combination with proprietary Application-Specific Integrated Circuit (ASIC) or Field Programmable Gate Array (FPGA) extensions implementing the whole redundancy functionality. In the future, such simple microprocessors may no longer be available, as they are increasingly replaced by more sophisticated Systems on Chip (SoC). As the avionics industry cannot provide enough market power to significantly influence the development of new semiconductor products, the use of solutions from other markets is almost inevitable. Products from the industrial market developed according to IEC 61508, or automotive SoCs developed according to ISO 26262, can be seen as candidates, as they have been developed for similar environments. Currently available SoCs from the industrial or automotive sector provide quite a broad selection of interfaces, e.g., Ethernet, SPI, or FlexRay, that might be considered for the implementation of a redundancy network. In this context, possible network architectures that could be established using the interfaces stated above shall be investigated. Of importance here is the avoidance of any single point of failure, as well as a proper segregation into distinct fault containment regions. The analysis is supported by guidelines on the reliability of data networks published by the aviation authorities (FAA and EASA). The main focus clearly lies on the achievable level of safety, but other aspects such as performance and determinism also play an important role and are considered in the research. Due to the further increase in design complexity of recent and future SoCs, the risk of design errors, which might lead to common mode faults, also increases. Thus, in the context of this work, the aspect of dissimilarity will also be considered in order to limit the effect of design errors. To achieve this, the work is limited to broadly available interfaces found in products from the most common silicon manufacturers. The resulting work shall support the design of future UAV FCCs by giving a guideline for building up a redundancy network between SoCs using only on-board interfaces. The author will therefore provide a detailed usability analysis of the interfaces provided by recent SoC solutions, suggestions for possible redundancy architectures based on these interfaces, and an assessment of the most relevant characteristics of the suggested network architectures, such as safety and performance.

Keywords: redundancy, System-on-Chip, UAV, flight control computer (FCC)

Procedia PDF Downloads 209
1219 The Diverse and Flexible Coping Strategies Simulation for Maanshan Nuclear Power Plant

Authors: Chin-Hsien Yeh, Shao-Wen Chen, Wen-Shu Huang, Chun-Fu Huang, Jong-Rong Wang, Jung-Hua Yang, Yuh-Ming Ferng, Chunkuan Shih

Abstract:

In this research, Fukushima-like conditions are simulated with TRACE and RELAP5. The Fukushima Daiichi Nuclear Power Plant (NPP) suffered a disaster caused by the earthquake and tsunami. This disaster caused an extended loss of all AC power (ELAP) and, finally, a loss of ultimate heat sink (LUHS). In order to handle Fukushima-like conditions, the Taiwan Atomic Energy Council (AEC) required that the Taiwan Power Company propose strategies to ensure nuclear power plant safety. One of the diverse and flexible coping strategies (FLEX) is an alternative water injection strategy, which can execute core injection at 20 kg/cm² without depressurization. In this study, TRACE and RELAP5 were used to simulate the Maanshan nuclear power plant, a three-loop PWR in Taiwan, under Fukushima-like conditions and to verify the success criteria of FLEX. Core cooling capability is reduced due to the failure of the emergency core cooling system (ECCS) in an extended loss of all AC power situation. The core water level continues to decline because of seal leakage, and FLEX is then used to recover the core water level and keep the fuel rods covered by water. The results show that this mitigation strategy can cool the reactor pressure vessel (RPV) as soon as possible under Fukushima-like conditions and keep the core water level above the Top of Active Fuel (TAF). FLEX can keep the peak cladding temperature (PCT) below the criterion of 1088.7 K. Finally, FLEX can provide protection for the nuclear power plant and maintain plant safety.

Keywords: TRACE, RELAP5/MOD3.3, ELAP, FLEX

Procedia PDF Downloads 241
1218 The Negative Use of the Concept of Agape Love in the New Testament

Authors: Marny S. Menkes Lemmel

Abstract:

Upon hearing or reading the term agape love in a Christian context, one typically thinks of God's love for people and the type of love people should have for God and others. While C.S. Lewis, a significant propagator of this view, and others of similar opinion are correct about the meaning of agape in most of its New Testament occurrences, there are nonetheless examples of this term in the New Testament with quite a different sense. The New World Encyclopedia, regarding the verb form of agape, 'agapao,' comments that it is occasionally used in a negative sense as well, but here and elsewhere there is no elaboration on the significance of these negative instances. If intensity and sacrifice are the crucial constituents of God's agape love and that of his followers, who are commanded to love as God does, the negative instances of this term in the New Testament conceivably indicate that a person's love for improper recipients is likewise intense and sacrificial. This is significant because one who has chosen to direct such love neither to God nor his "neighbors," but to inanimate things or status, clearly shows his priorities, having decided to put all his energy and resources into them while demeaning those for whom God has required such love, including God himself. It is not merely a matter of a person dividing his agape love among several proper objects of that love, but of directing it toward improper targets. Not to heed God's commands regarding whom to love is to break God's entire law, and not to love whom one should, but to love what one should not, is not merely a matter of indifference, but is disloyalty and loathing. An example of such use of the term agape occurs in Luke 11:43, where the Pharisees do not and cannot love God at the same time as loving a place of honor in the synagogues and greetings in the public arena. The exclamation of their dire peril because of their love for the latter reveals that these love objects are not in God's gamut of proper recipients. Furthermore, it appears to be a logical conclusion that since the Pharisees love the latter, they likewise despise God and those whom God requires his people to love. Conversely, the objects of the Pharisees' love in this verse should be what followers of God ought to despise and avoid. In short, appearances of the verb agapao in a negative context are blatant antitheses to what God expects and should alert the reader or listener to take notice. These negative uses are worthy of further discussion than a brief aside by scholars noting their existence without additional comment.

Keywords: agape love, divine commands, focus, new testament context, sacrificial

Procedia PDF Downloads 216
1217 Calculation of Secondary Neutron Dose Equivalent in Proton Therapy of Thyroid Gland Using FLUKA Code

Authors: M. R. Akbari, M. Sadeghi, R. Faghihi, M. A. Mosleh-Shirazi, A. R. Khorrami-Moghadam

Abstract:

Proton radiotherapy (PRT) is becoming an established treatment modality for cancer. Localized tumors, such as undifferentiated thyroid tumors, are insufficiently handled by conventional radiotherapy, whereas protons offer the prospect of increasing the tumor dose without exceeding the tolerance of the surrounding healthy tissues. In spite of the relatively high advantage of delivering a localized radiation dose to the tumor region, secondary neutron production in proton therapy can contribute significantly to the integral dose and lessen the advantages of this modality compared with conventional radiotherapy techniques. Furthermore, neutrons have a high quality factor; therefore, even a small physical dose can cause considerable biological effects. Measuring this neutron dose is a very critical step in predicting secondary cancer incidence. FLUKA Monte Carlo simulations have previously been used to evaluate the dose due to secondaries in proton therapy. In this study, after first validating the simulated proton beam range in a water phantom against the CSDA range from NIST for the studied proton energy range (34-54 MeV), proton therapy of a thyroid gland cancer was simulated using the FLUKA code. The secondary neutron dose equivalent to several organs and tissues beyond the target volume caused by 34 and 54 MeV proton interactions was calculated in order to evaluate secondary cancer incidence. A multilayer cylindrical neck phantom comprising all the layers of neck tissue and a proton beam impinging normally on the phantom were also simulated. The trachea (together with the larynx) had the greatest dose equivalent (1.24×10⁻¹ and 1.45 pSv per primary 34 and 54 MeV proton, respectively) among the simulated tissues beyond the target volume in the neck region.
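
For readers unfamiliar with the quantity reported above, dose equivalent is simply absorbed dose weighted by a radiation quality factor, H = Σ Dᵢ·Qᵢ. The short sketch below applies that arithmetic to placeholder numbers; the absorbed doses and quality factors are illustrative assumptions, not values from the FLUKA runs in this study.

```python
# Illustrative arithmetic only: dose equivalent H = absorbed dose D * quality factor Q,
# summed per scored region. All values are placeholders, not results from the paper.

# (absorbed dose per primary proton in pGy, assumed neutron quality factor)
regions = {
    "trachea_larynx": (1.3e-2, 10.0),
    "esophagus":      (4.0e-3, 10.0),
}

for name, (dose_pGy, Q) in regions.items():
    H_pSv = dose_pGy * Q   # dose equivalent per primary proton, pSv
    print(f"{name}: H = {H_pSv:.2e} pSv per primary proton")
```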

Keywords: FLUKA code, neutron dose equivalent, proton therapy, thyroid gland

Procedia PDF Downloads 416
1216 Characterization of Single-Walled Carbon Nano Tubes Forest Decorated with Chromium

Authors: Ana Paula Mousinho, Ronaldo D. Mansano, Nelson Ordonez

Abstract:

Carbon nanotubes are one of the main elements in nanotechnology; their applications include microelectronics, nanoelectronic devices (photonics, spintronics), chemical sensors, structural materials, and, currently, clean energy devices (supercapacitors and fuel cells). The use of carbon nanotubes decorated with magnetic particles extends these applications to magnetic devices, magnetic memory, and magnetically guided drug delivery. In this work, single-walled carbon nanotube (CNT) forests decorated with chromium were deposited at room temperature by a high-density plasma chemical vapor deposition (HDPCVD) system. The CNT forests were obtained using pure methane plasmas, with chromium serving both as the precursor (seed) material and for decorating the CNTs. The chromium was deposited on silicon wafers by magnetron sputtering before the growth of the CNTs. The single-walled CNT forests decorated with chromium were characterized by scanning electron microscopy, atomic force microscopy, micro-Raman spectroscopy, and X-ray diffraction. In general, CNT spectra show a single emission band, but, due to the presence of the chromium, the spectra obtained in this work showed many bands related to CNTs with different diameters. The CNTs obtained by the HDPCVD system are highly aligned and show metallic features, and they can be used as a photonic material due to their unique structural and electrical properties. The results of this work demonstrate the possibility of controlled deposition of aligned single-walled CNT forest films decorated with chromium by a high-density plasma chemical vapor deposition system.

Keywords: CNTs forest, high density plasma deposition, high-aligned CNTs, nanomaterials

Procedia PDF Downloads 111
1215 Kinetics and Mechanism Study of Photocatalytic Degradation Using Heterojunction Semiconductors

Authors: Ksenija Milošević, Davor Lončarević, Tihana Mudrinić, Jasmina Dostanić

Abstract:

Heterogeneous photocatalytic processes have gained growing interest as an efficient method of generating hydrogen from clean energy sources and of degrading various organic pollutants. The main obstacles that restrict efficient photoactivity are a narrow light-response range and high rates of charge carrier recombination. The formation of a heterojunction by combining a semiconductor with a low VB and a semiconductor with a high CB and a suitable band gap was found to be an efficient way to prepare more responsive materials with improved charge separation, appropriate oxidation and reduction ability, and enhanced visible-light harvesting. In our research, various binary heterojunction systems based on wide-bandgap (TiO₂) and narrow-bandgap (g-C₃N₄, CuO, and Co₂O₃) photocatalysts were studied. The morphology and the optical and electrochemical properties of the photocatalysts were analyzed by X-ray diffraction (XRD), scanning electron microscopy (FE-SEM), N₂ physisorption, diffuse reflectance measurements (DRS), and Mott-Schottky analysis. The photocatalytic performance of the synthesized catalysts was tested in single and simultaneous pollutant systems. The synthesized photocatalysts displayed good adsorption capacity and enhanced visible-light photocatalytic performance. The mutual influence of the pollutants on their adsorption and degradation efficiency was investigated. The interfacial connection between the photocatalyst constituents and the mechanism of the transport pathway of the photogenerated charge species are discussed. A radical scavenger study revealed the interaction mechanisms of the photocatalyst constituents in single and multiple pollutant systems under solar and visible light irradiation, indicating the type of heterojunction system (Z-scheme or type II).
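
Degradation kinetics in studies of this kind are commonly summarized with a pseudo-first-order (Langmuir-Hinshelwood, dilute-limit) fit, ln(C₀/C) = k_app·t. The short Python sketch below shows that fit on invented concentration-time data; the numbers are placeholders, not measurements from this work.

```python
import numpy as np

# Pseudo-first-order fit of photocatalytic degradation data: ln(C0/C) = k_app * t.
# Time/concentration values are invented for illustration only.

t = np.array([0, 15, 30, 45, 60, 90], dtype=float)   # irradiation time, min
C = np.array([20.0, 15.1, 11.4, 8.7, 6.5, 3.8])      # pollutant concentration, mg/L

y = np.log(C[0] / C)
k_app, intercept = np.polyfit(t, y, 1)                # slope = apparent rate constant
half_life = np.log(2) / k_app

print(f"k_app = {k_app:.4f} 1/min, t_1/2 = {half_life:.1f} min")
```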

Keywords: bandgap alignment, heterojunction, photocatalysis, reaction mechanism

Procedia PDF Downloads 94
1214 Modeling and Design of E-mode GaN High Electron Mobility Transistors

Authors: Samson Mil'shtein, Dhawal Asthana, Benjamin Sullivan

Abstract:

The wide energy gap of GaN is the major parameter justifying the design and fabrication of high-power electronic components made of this material. However, the existence of a piezoelectric sheet charge at the AlGaN/GaN interface complicates the control of carrier injection into the intrinsic channel of GaN HEMTs (High Electron Mobility Transistors). As a result, most of the transistors created as R&D prototypes, and all of the designs used for mass production, are D-mode devices, which introduces challenges in the design of integrated circuits. This research presents the design and modeling of an E-mode GaN HEMT with a very low turn-on voltage. The proposed device includes two critical elements allowing the transistor to achieve zero conductance across the channel when Vg = 0V. The first is an extremely thin, 2.5nm intrinsic Ga₀.₇₄Al₀.₂₆N spacer layer. The added spacer layer does not create piezoelectric strain but rather elastically follows the variations of the crystal structure of the adjacent GaN channel. The second important factor is the design of a gate metal with a high work function. The use of a gate metal with a work function greater than 5.3eV (Ni in this research) positioned on top of n-type doped (Nd=10¹⁷cm⁻³) Ga₀.₇₄Al₀.₂₆N creates the necessary built-in potential, which controls the injection of electrons into the intrinsic channel as the gate voltage is increased. The 5µm long transistor, with a 0.18µm long gate and a channel width of 30µm, operates at Vd=10V. At Vg=1V, the device reaches a maximum drain current of 0.6mA, which indicates a high current density. The presented device is operational at frequencies greater than 10GHz and exhibits a stable transconductance over the full range of operational gate voltages.
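
Drain current in HEMTs is conventionally reported normalized by gate width; the sketch below applies that normalization to the values quoted in the abstract. The conversion is standard reporting practice, not a result of the paper.

```python
# Width-normalized drain current from the figures quoted above:
# Id = 0.6 mA at Vg = 1 V, gate (channel) width = 30 um.

I_d_mA = 0.6          # maximum drain current, mA
W_gate_mm = 30e-3     # gate width: 30 um = 0.03 mm

J_mA_per_mm = I_d_mA / W_gate_mm
print(f"Width-normalized drain current: {J_mA_per_mm:.0f} mA/mm")
```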

Keywords: compound semiconductors, device modeling, enhancement mode HEMT, gallium nitride

Procedia PDF Downloads 250
1213 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(vi) Solutions: On the Lookout for a More Sustainable Process Radioanalytical Chemistry through Titration-On-A-Chip

Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas

Abstract:

A miniaturized and automated approach to the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV) or Pu(IV) without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation with an essential role in the control of the nuclear fuel recycling process. The main objectives behind the technical optimization of the current 'beaker' method were to reduce the amount of radioactive substance handled by laboratory personnel, to ease instrument adjustability within a glove-box environment, and to allow high-throughput analysis for more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion, used to create a linear concentration gradient inside a 200 μm x 5 cm circular cylindrical micro-channel in less than a second. The proposed analytical methodology relies on actinide complexation using a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500nm-600nm thanks to the addition of a pH-sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and ease of use of the micro-device, as it does not need a complex micro-channel network or passive mixers to generate the chemical gradient. Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, it can be generated in well under a second, making it a more time-efficient gradient generation process than other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform, for the first time, a volumetric titration on a chip, where the amount of reagents used is fixed by the total volume of the micro-channel, avoiding the substantial waste generation of other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5M of actinide ion and nitric acid in a concentration range of 0.5M to 3M. In addition to automation, the developed analytical methodology and technique greatly improve on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device therefore represents a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.
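
The Taylor-Aris mechanism invoked above has a standard closed form for a circular channel: the effective axial dispersion coefficient is D_eff = D_m + a²u²/(48 D_m). The sketch below evaluates it for the 200 μm channel diameter quoted in the abstract, with an assumed mean velocity and molecular diffusivity chosen only for illustration.

```python
# Standard Taylor-Aris result for pressure-driven flow in a circular channel:
# D_eff = D_m + (a^2 * u^2) / (48 * D_m). Radius matches the 200 um diameter
# quoted above; velocity and molecular diffusivity are assumed values.

a   = 100e-6   # channel radius, m (200 um diameter)
u   = 5e-3     # mean flow velocity, m/s (assumed)
D_m = 1e-9     # molecular diffusivity, m^2/s (typical small ion, assumed)

D_eff = D_m + (a**2 * u**2) / (48 * D_m)
print(f"Taylor-Aris effective dispersion: {D_eff:.2e} m^2/s "
      f"(vs molecular {D_m:.1e} m^2/s)")
```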

Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration

Procedia PDF Downloads 380
1212 Permeable Bio-Reactive Barriers to Tackle Petroleum Hydrocarbon Contamination in the Sub-Antarctic

Authors: Benjamin L. Freidman, Sally L. Gras, Ian Snape, Geoff W. Stevens, Kathryn A. Mumford

Abstract:

Increasing transportation and storage of petroleum hydrocarbons in Antarctic and sub-Antarctic regions have resulted in frequent accidental spills. Migrating petroleum hydrocarbon spills can have a significant impact on terrestrial and marine ecosystems in cold regions, as harsh environmental conditions result in heightened sensitivity to pollution. This migration of contaminants has led to the development of Permeable Reactive Barriers (PRB) for application in cold regions. PRBs are one of the most practical technologies for on-site or in-situ groundwater remediation in cold regions due to their minimal energy, monitoring, and maintenance requirements. The Main Power House site has been used as a fuel storage and power generation area for the Macquarie Island research station since at least 1960. Soil analysis at the site has revealed Total Petroleum Hydrocarbon (TPH) (C9-C28) concentrations as high as 19,000 mg/kg soil. Groundwater TPH concentrations at this site can exceed 350 mg/L. Ongoing migration of petroleum hydrocarbons into the neighbouring marine ecosystem resulted in the installation of a 'funnel and gate' PRB in November 2014. The 'funnel and gate' design successfully intercepted contaminated groundwater, and analysis of TPH retention and biodegradation on the PRB media is currently underway. Installation of the PRB facilitates research aimed at better understanding the contribution of particle-attached biofilms to the remediation of groundwater systems. Bench-scale PRB system analysis at The University of Melbourne is currently examining the role biofilms play in petroleum hydrocarbon degradation, and how controlled-release nutrient media can heighten the metabolic activity of biofilms in cold regions in the presence of low temperatures and low-nutrient groundwater.

Keywords: groundwater, petroleum, Macquarie island, funnel and gate

Procedia PDF Downloads 352
1211 Fort Conger: A Virtual Museum and Virtual Interactive World for Exploring Science in the 19th Century

Authors: Richard Levy, Peter Dawson

Abstract:

Ft. Conger, located in the Canadian Arctic, was one of the most remote 19th-century scientific stations. Established in 1881 on Ellesmere Island, the wood-framed structure provided a permanent base from which to conduct scientific research. Under the charge of Lt. Greely, Ft. Conger hosted one of 14 expeditions conducted during the First International Polar Year (FIPY). Our research project, "From Science to Survival: Using Virtual Exhibits to Communicate the Significance of Polar Heritage Sites in the Canadian Arctic," focused on the creation of a virtual museum website dedicated to one of the most important polar heritage sites in the Canadian Arctic. This website was developed under a grant from the Virtual Museum of Canada and enables visitors to explore the fort's site from 1875 to the present, http://fortconger.org. Heritage sites are often viewed as static places. A goal of this project was to present the change that occurred over time as each new group of explorers adapted the site to their needs. The site was first visited by the British explorer George Nares in 1875 – 76. Only later did the United States government select this site for the Lady Franklin Bay Expedition (1881-84), with research to be conducted under the FIPY (1882 – 83). Still later, Robert Peary and Matthew Henson attempted to reach the North Pole from Ft. Conger in 1899, 1905 and 1908. A central focus of this research is the virtual reconstruction of Ft. Conger. In the summer of 2010, a Zoller+Fröhlich Imager 5006i and a Minolta Vivid 910 laser scanner were used to scan the terrain and artifacts. Once the scanning was completed, the point clouds were registered and edited to form the basis of a virtual reconstruction. A goal of this project has been to allow visitors to step back in time and explore the interior of these buildings with all of their artifacts. Links to text, historic documents, animations, panorama images, computer games and virtual labs provide explanations of how science was conducted during the 19th century. A major feature of this virtual world is the timeline. Visitors to the website can begin to explore the site when George Nares, in his ship HMS Discovery, appeared in the harbor in 1875. With the arrival of Lt. Greely's expedition in 1881, we can track the progress made in establishing a scientific outpost. Still later, in 1901, with Peary's presence, the site is transformed again, with huts built from materials salvaged from Greely's main building. Still later, in 2010, we can visit the site in its present state of deterioration and learn about the laser scanning technology that was used to document the site. The Science and Survival at Fort Conger project represents one of the first attempts to use virtual worlds to communicate the historical and scientific significance of polar heritage sites where first-hand visitor experiences are not possible because of the remote location.

Keywords: 3D imaging, multimedia, virtual reality, arctic

Procedia PDF Downloads 406
1210 i2kit: A Tool for Immutable Infrastructure Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservice architectures are increasingly common in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency, and time to market for business logic. On the other hand, these architectures also introduce some challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution, and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing, or data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos, or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, which would affect running applications, specific expertise is required to perform ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers. Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set in other microservices using an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing, and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer entails more significant disadvantages. Resource allocation is greatly improved by using linuxkit, which has a very small footprint (around 35MB). The system is also more secure, since linuxkit installs the minimum set of dependencies needed to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
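
The deployment flow described above can be illustrated with the following hypothetical Python sketch: a declarative microservice definition (a pod of containers) is turned into a machine image, placed behind a load balancer, and the endpoint is handed to dependent services via an environment variable. The field names, functions, and identifiers here are invented for the sketch; they are not the real i2kit input format or API.

```python
# Hypothetical sketch of an immutable-infrastructure deployment flow in the style
# described above. Every rollout builds a fresh image and fresh VMs; nothing is
# patched in place. None of these names are real i2kit or AWS identifiers.

service_definition = {
    "name": "orders",
    "replicas": 2,
    "containers": [
        {"image": "registry.example.com/orders-api:1.4.2", "ports": [8080]},
        {"image": "registry.example.com/orders-sidecar:0.9.0"},
    ],
}

def deploy(definition, build_image, create_vms, create_load_balancer):
    """Build a machine image for the pod, boot replica VMs from it, and expose
    them through a load balancer whose endpoint is returned as an env variable."""
    image_id = build_image(definition["containers"])           # e.g. a linuxkit-style build
    vm_ids = create_vms(image_id, count=definition["replicas"])
    endpoint = create_load_balancer(vm_ids)                    # cloud vendor load balancer
    return {definition["name"].upper() + "_ENDPOINT": endpoint}  # service discovery via env var

# Example wiring with stub cloud functions standing in for real provisioning calls:
env = deploy(
    service_definition,
    build_image=lambda containers: "ami-0123456789abcdef0",
    create_vms=lambda ami, count: [f"i-{n:04d}" for n in range(count)],
    create_load_balancer=lambda vms: "orders-lb.example.elb.amazonaws.com",
)
print(env)
```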

Keywords: container, deployment, immutable infrastructure, microservice

Procedia PDF Downloads 169
1209 Benzenepropanamine Analogues as Non-detergent Microbicidal Spermicide for Effective Pre-exposure Prophylaxis

Authors: Veenu Bala, Yashpal S. Chhonker, Bhavana Kushwaha, Rabi S. Bhatta, Gopal Gupta, Vishnu L. Sharma

Abstract:

According to a UNAIDS 2013 estimate, nearly 52% of all individuals living with HIV are now women of reproductive age (15-44 years). Seventy-five percent of HIV acquisitions occur through heterosexual contact and sexually transmitted infections (STIs), attributable to unsafe sexual behaviour. Each year, an estimated 500 million people acquire at least one of four STIs: chlamydia, gonorrhoea, syphilis and trichomoniasis. Trichomonas vaginalis (TV) is exclusively sexually transmitted in adults, accounting for 30% of STI cases and associated with pelvic inflammatory disease (PID), vaginitis and pregnancy complications in women. TV infection results in an impaired vaginal milieu, eventually favoring HIV transmission. In the absence of an effective prophylactic HIV vaccine, prevention of new infections has become a priority. It was therefore thought worthwhile to integrate HIV prevention and reproductive health services, including protection from unintended pregnancy, for women, as both are related to unprotected sex. Initially, nonoxynol-9 (N-9) had been proposed as a spermicidal agent with microbicidal activity, but, on the contrary, it increased HIV susceptibility due to its surfactant action. Thus, to meet the urgent need for novel, woman-controlled, non-detergent microbicidal spermicides, benzenepropanamine analogues have been synthesized. At first, five benzenepropanamine-dithiocarbamate hybrids were synthesized and evaluated for their spermicidal, anti-Trichomonas and antifungal activities, along with safety profiling against cervicovaginal cells. In order to further enhance the scope of the above study, benzenepropanamine was hybridized with thiourea so as to introduce anti-HIV potential. The synthesized hybrid molecules were evaluated for their reverse transcriptase (RT) inhibition, spermicidal, anti-Trichomonas and antimicrobial activities, as well as their safety against vaginal flora and cervical cells. Simulated vaginal fluid (SVF) stability and the pharmacokinetics of the most potent compound versus N-9 were examined in female New Zealand (NZ) rabbits to observe its absorption into systemic circulation and subsequent exposure in blood plasma through the vaginal wall. The study resulted in the most promising compound, N-butyl-4-(3-oxo-3-phenylpropyl) piperazin-1-carbothioamide (29), exhibiting a better activity profile than N-9, as it showed RT inhibition (72.30%), anti-Trichomonas activity (MIC, 46.72 µM against the MTZ-susceptible strain and MIC, 187.68 µM against the resistant strain), spermicidal activity (MEC, 0.01%) and antifungal activity (MIC, 3.12-50 µg/mL) against four fungal strains. Its high safety towards the vaginal epithelium (HeLa cells), compatibility with vaginal flora (lactobacillus), SVF stability and low vaginal absorption support its suitability for topical vaginal application. A docking study was performed to gain insight into the binding mode and interactions of the most promising compound, N-butyl-4-(3-oxo-3-phenylpropyl) piperazin-1-carbothioamide (29), with HIV-1 Reverse Transcriptase. The docking study revealed that compound (29) interacted with HIV-1 RT similarly to the standard drug Nevirapine. It may be concluded that hybridization of the benzenepropanamine and thiourea moieties resulted in a novel lead with multiple activities, including RT inhibition. Further lead optimization may result in effective vaginal microbicides having spermicidal, anti-Trichomonas, antifungal and anti-HIV potential altogether, with enhanced safety to cervicovaginal cells in comparison to nonoxynol-9.

Keywords: microbicidal, nonoxynol-9, reverse transcriptase, spermicide

Procedia PDF Downloads 337
1208 One Species into Five: Nucleo-Mito Barcoding Reveals Cryptic Species in 'Frankliniella Schultzei Complex': Vector for Tospoviruses

Authors: Vikas Kumar, Kailash Chandra, Kaomud Tyagi

Abstract:

The insect order Thysanoptera includes small insects commonly called thrips. Among insect vectors, only thrips are capable of transmitting tospoviruses (genus Tospovirus, family Bunyaviridae), which affect various crops. Currently, fifteen species of the subfamily Thripinae (Thripidae) have been reported as vectors for tospoviruses. Frankliniella schultzei, which is reported to act as a vector for at least five tospoviruses, has been suspected to be a species complex comprising more than one species. It is one of the historical unresolved issues in which two species, namely F. schultzei Trybom and F. sulphurea Schmutz, were erected from South Africa and Sri Lanka, respectively. These two species were considered valid until 1968, when sulphurea was treated as a colour morph (pale form) and synonymised under schultzei (dark form). However, the two have been considered valid species by some thrips workers. Parallel studies have indicated that the brown form of schultzei is a vector for tospoviruses while the yellow form is a non-vector; however, recent studies have shown that yellow populations have also been documented as vectors. In view of all these facts, it is highly important to have a clear understanding of whether these colour forms represent true species or merely different populations with different vector-carrying capacities, and whether there is hidden diversity within the 'Frankliniella schultzei species complex'. In this study, we aim to examine the 'Frankliniella schultzei species complex' through a molecular lens, with DNA data from India, Australia and Africa. A total of fifty-five specimens were collected from diverse locations in India and Australia. We generated molecular data using partial fragments of the mitochondrial cytochrome c oxidase I gene (mtCOI) and the 28S rRNA gene. For the COI dataset, there were seventy-four sequences, of which fifty-five were generated in the current study and the others were retrieved from NCBI. All four tree construction methods, neighbor-joining, maximum parsimony, maximum likelihood and Bayesian analysis, yielded the same tree topology and recovered five cryptic species with high genetic divergence. For rDNA, there were forty-five sequences, of which thirty-nine were generated in the current study and the others were retrieved from NCBI. The four tree-building methods yielded four cryptic species with high bootstrap support values/posterior probabilities. Here we could not recover one cryptic species from South Africa, as we could not generate rDNA data from South Africa and rDNA sequences from the African region were not available in the database. The results of multiple species delimitation methods (barcode index numbers, automatic barcode gap discovery, general mixed Yule-coalescent, and Poisson tree processes) also supported the phylogenetic data and produced 5 and 4 Molecular Operational Taxonomic Units (MOTUs) for the mtCOI and 28S datasets, respectively. These results indicate the likelihood that F. sulphurea may be a valid species; however, more morphological and molecular data are required on specimens from the type localities of these two species, together with comparison with type specimens.
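
One of the tree-building steps mentioned above, neighbor-joining from a COI alignment, can be sketched in a few lines with Biopython. The input file name is hypothetical, and the simple 'identity' distance model is a stand-in for the substitution models normally used in barcoding work; this is not the authors' actual pipeline.

```python
# Minimal neighbor-joining sketch with Biopython. The aligned FASTA file name is
# hypothetical; 'identity' distances are a simplification of barcoding practice.

from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("frankliniella_coi_aligned.fasta", "fasta")  # hypothetical file

calculator = DistanceCalculator("identity")       # pairwise distance matrix
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)         # neighbor-joining topology

Phylo.draw_ascii(nj_tree)                         # quick look at putative clusters/MOTUs
```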

Keywords: DNA barcoding, species complex, thrips, species delimitation

Procedia PDF Downloads 119
1207 Plastic Behavior of Steel Frames Using Different Concentric Bracing Configurations

Authors: Madan Chandra Maurya, A. R. Dar

Abstract:

Among all natural calamities, earthquakes are the most devastating. Even if the losses due to all other calamities were added together, they would still be far less than the losses due to earthquakes. This means we must be ready to face such situations, which is only possible if we make our structures earthquake resistant. A review of structural damage to braced frame systems after several major earthquakes, including recent ones, has identified both anticipated and unanticipated damage. This damage has prompted many engineers and researchers around the world to consider new approaches to improve the behavior of braced frame systems. Extensive experimental studies over the last forty years on conventional buckling brace components and several braced frame specimens are briefly reviewed, highlighting that the number of studies on full-scale concentrically braced frames is still limited. The study therefore centres on the plastic behavior of steel braced frame systems. Two different analytical approaches have been used to predict the behavior and strength of an un-braced frame. The first is referred to as incremental elasto-plastic analysis, a plastic approach. This method gives the complete load-deflection history of the structure until collapse. It is based on the plastic hinge concept for fully plastic cross sections in a structure under increasing proportional loading. The incremental elasto-plastic (hinge-by-hinge) method is used in this study because of its simplicity and because it yields the complete load-deformation history of the two-storey un-braced scaled model. Experiments were then conducted on a two-storey scaled building model, with and without a bracing system, to obtain the true, experimental load-deformation curves of the scaled model. The only way forward is to understand and analyze these techniques and adopt them in our structures. The study, titled Plastic Behavior of Steel Frames using Different Concentric Bracing Configurations, deals with all of this. It aimed at improving the already practiced traditional systems and at checking the behavior and usefulness of a new configuration with respect to the X-braced system as the reference model, i.e., how its plastic behavior differs from the X-braced frame. Laboratory tests involved determining the plastic behavior of these models (with and without bracing) in terms of their load-deformation curves. Thus, the aim of this study is to improve lateral displacement resistance capacity by using a new configuration of brace members arranged concentrically, different from the conventional concentric brace. Once the experimental and manual results (using the plastic approach) were compared, the results from both approaches were also compared with a nonlinear static (pushover) analysis using ETABS, i.e., how closely both of the previous results depict the behavior shown by the pushover curve, and up to what limit. Test results show that all three approaches behave in a similar manner up to the yield point, and demonstrate the applicability of elasto-plastic (hinge-by-hinge) analysis for determining plastic behavior. Finally, the outcome from the three approaches shows that the new configuration chosen for study behaves in between the plane frame (without bracing, the reference frame) and the conventional X-braced frame.
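
To show what the hinge-by-hinge (incremental elasto-plastic) method means in practice, the sketch below works through the textbook case of a propped cantilever with a midspan point load: an elastic step up to the first hinge, then a second step on the released structure until a mechanism forms. This is a generic worked example with assumed values of Mp and L, not an analysis of the two-storey frame tested in the study.

```python
# Hinge-by-hinge illustration for a propped cantilever with a midspan point load.
# Mp and L are arbitrary assumed values; the frame in the paper is not modeled here.

Mp = 100.0   # plastic moment capacity, kN*m (assumed)
L = 4.0      # span, m (assumed)

# Stage 1: elastic analysis. The largest elastic moment is at the fixed end,
# M_A = 3PL/16, so the first hinge forms there when M_A = Mp.
P1 = 16 * Mp / (3 * L)
M_mid_at_P1 = 5 * P1 * L / 32          # midspan moment (5PL/32) when the first hinge forms

# Stage 2: with a hinge at the fixed end, the beam behaves as simply supported,
# so extra load raises the midspan moment by dP*L/4 until it also reaches Mp.
dP = (Mp - M_mid_at_P1) / (L / 4)
P_collapse = P1 + dP                   # mechanism: hinges at the fixed end and midspan

print(f"First-hinge load P1 = {P1:.1f} kN")
print(f"Collapse load    Pc = {P_collapse:.1f} kN "
      f"(plastic theory gives 6*Mp/L = {6 * Mp / L:.1f} kN)")
```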

Keywords: elasto-plastic analysis, concentric steel braced frame, pushover analysis, ETABS

Procedia PDF Downloads 218
1206 FlameCens: Visualization of Expressive Deviations in Music Performance

Authors: Y. Trantafyllou, C. Alexandraki

Abstract:

Music interpretation refers to the way musicians shape their performance by deliberately deviating from the composer's intentions, which are commonly communicated via some form of music transcription, such as a music score. For transcribed and non-improvised music, musical expression is manifested by introducing subtle deviations in tempo, dynamics and articulation during the evolution of a performance. This paper presents an application, named FlameCens, which, given two recordings of the same piece of music, presumably performed by different musicians, allows deviations in tempo and dynamics to be visualised during playback. The application may also compare a certain performance to the music score of that piece (i.e. a MIDI file), which may be thought of as an expression-neutral representation of the piece, hence depicting the expressive cues employed by certain performers. FlameCens uses the Dynamic Time Warping algorithm to compare two audio sequences, based on CENS (Chroma Energy distribution Normalized Statistics) audio features. Expressive deviations are illustrated in a moving flame, which is generated by an animation of particles. The length of the flame is mapped to deviations in dynamics, while the slope of the flame is mapped to tempo deviations, so that faster tempo tilts the slope to the right and slower tempo tilts the slope to the left. A constant slope signifies no tempo deviation. The detected deviations in tempo and dynamics can additionally be recorded in a text file, which allows for offline investigation. Moreover, in the case of monophonic music, the color of the particles is used to convey the pitch of the notes during performance. FlameCens has been implemented in Python and is openly available via GitHub. The application has been experimentally validated for different music genres, including classical, contemporary, jazz and popular music. These experiments revealed that FlameCens can be a valuable tool for music specialists (i.e. musicians or musicologists) to investigate the expressive performance strategies employed by different musicians, as well as for music audiences to enhance their listening experience.
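
The alignment step described above can be reproduced in outline with the librosa library: CENS chroma features from two recordings are aligned by dynamic time warping, and the warping path then yields local tempo deviations (path slope) and dynamics deviations (RMS energy ratio). The file names are placeholders and the mapping is a simplified reconstruction of the idea, not the actual FlameCens code.

```python
import numpy as np
import librosa

# Align two recordings of the same piece via CENS chroma + DTW, then read off
# tempo and dynamics deviations along the warping path. File names are placeholders.

y1, sr1 = librosa.load("performance_a.wav")
y2, sr2 = librosa.load("performance_b.wav")

C1 = librosa.feature.chroma_cens(y=y1, sr=sr1)
C2 = librosa.feature.chroma_cens(y=y2, sr=sr2)

D, wp = librosa.sequence.dtw(X=C1, Y=C2, metric="cosine")   # wp: warping path (frame pairs)
wp = wp[::-1]                                               # chronological order

# Local tempo deviation: slope of the warping path (frames of B per frame of A)
slopes = np.diff(wp[:, 1]) / np.maximum(np.diff(wp[:, 0]), 1)

# Dynamics deviation: RMS energy ratio along the aligned frames
rms1 = librosa.feature.rms(y=y1)[0]
rms2 = librosa.feature.rms(y=y2)[0]
idx = wp[:-1]
dyn_ratio = (rms2[np.minimum(idx[:, 1], len(rms2) - 1)]
             / (rms1[np.minimum(idx[:, 0], len(rms1) - 1)] + 1e-8))

print("mean path slope (tempo):", float(np.mean(slopes)))
print("mean RMS ratio (dynamics):", float(np.mean(dyn_ratio)))
```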

Keywords: audio synchronization, computational music analysis, expressive music performance, information visualization

Procedia PDF Downloads 119
1205 Single Cell and Spatial Transcriptomics: A Beginners Viewpoint from the Conceptual Pipeline

Authors: Leo Nnamdi Ozurumba-Dwight

Abstract:

Messenger ribonucleic acid (mRNA) molecules encode proteins. These protein-encoding mRNA molecules (which collectively constitute the transcriptome), when analyzed by RNA sequencing (RNAseq), reveal the nature of gene expression. The resulting gene expression profile provides clues to cellular traits and their dynamics, which can be studied in relation to function and response. RNAseq is a practical concept in genomics, as it enables the detection and quantitative analysis of mRNA molecules. Single-cell and spatial transcriptomics both present avenues for exposing the genomic characteristics of single cells and pooled cells in disease conditions such as cancer, autoimmune diseases and hematopoietic diseases, among others, from investigated biological tissue samples. Single-cell transcriptomics enables a direct assessment of each building unit of a tissue (the cell) during diagnosis and molecular gene expression studies. A typical technique to achieve this is single-cell RNA sequencing (scRNAseq), which supports high-throughput gene expression studies. However, this technique generates expression data for many cells without information on the cells' positional coordinates within the tissue. As science develops, the use of complementary, pre-established tissue reference maps built with molecular and bioinformatics techniques has emerged and is now used to resolve this setback, producing both levels of data in one shot of scRNAseq analysis. This is an emerging conceptual approach for integrative and increasingly dependable transcriptomics analysis. It can support in-situ analysis for a better understanding of tissue functional organization, unveil new biomarkers for early-stage detection of diseases and for therapeutic targets in drug development, and expose the nature of cell-to-cell interactions. These are vital genomic signatures and characterizations for clinical applications. Over the past decades, RNAseq has generated a wide array of information that is igniting breakthroughs and innovations in biomedicine. Spatial transcriptomics, on the other hand, is tissue-level based and is utilized to study biological specimens with heterogeneous features. It reveals the gross identity of the investigated mammalian tissues, which can then be used to study cell differentiation, track cell-lineage trajectory patterns and behavior, and examine regulatory homeostasis in disease states. It also requires referenced positional analysis to build up the genomic signatures that will be assessed from the single cells in the tissue sample. Together, these two approaches to RNA transcriptomics, applied at different scales and with appropriate resolution, have made the study of gene expression from mRNA molecules interesting, progressive and developmental, and are helping to tackle health challenges head-on.
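
A generic example of the kind of scRNAseq analysis the paragraph above refers to is the standard clustering workflow offered by the scanpy library, sketched below. The input path is hypothetical and the parameter choices are common defaults, not recommendations from this paper.

```python
import scanpy as sc

# Generic scRNA-seq clustering workflow (filter, normalize, reduce, cluster).
# The 10x output folder is a hypothetical path; parameters are common defaults.

adata = sc.read_10x_mtx("filtered_feature_bc_matrix/")

sc.pp.filter_cells(adata, min_genes=200)       # basic quality filtering
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)   # depth normalization
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable].copy()

sc.tl.pca(adata)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)                            # cluster cells by expression profile
sc.tl.umap(adata)
sc.pl.umap(adata, color="leiden")              # visualize putative cell populations
```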

Keywords: transcriptomics, RNA sequencing, single cell, spatial, gene expression.

Procedia PDF Downloads 114
1204 A Study of Industrial Symbiosis and Implementation of Indigenous Circular Economy Technique on an Indian Industrial Area

Authors: A. Gokulram

Abstract:

Industrial waste is often categorized by market value as commercial or non-commercial waste. In many Indian industries, as in other industrialized countries, waste with commercial value is capitalized on, while non-commercial waste is dumped in landfill. A lack of adequate research on industrial waste leads to the failure of effective resource management, with non-commercial waste being regarded as commercially non-viable residue. The term industrial symbiosis refers to the direct inter-firm reuse or exchange of material and energy resources. In our research area, the resource efficiency of commercial waste is mainly achieved through informal symbiosis. Some industrial residues are reused within the facility where they are generated, others are reused directly by nearby industrial facilities, and some are recycled via the formal and informal market. The act of using industrial waste as a resource for another product faces challenges in India. This research study has observed a major lack of trust and communication among the several bodies needed to implement an effective circular economy in India. The study applies an interview process across researchers, government bodies, industrialists and designers to understand the challenges of the circular economy in India. The study area encompasses an industrial estate in Ahmedabad, in the state of Gujarat, which comprises 1200 industries. The research primarily focuses on making industrial waste a commercially ready resource and on implementing indigenous sustainable practice in a modern context to improve resource efficiency. The study attempted to initiate a waste exchange platform among several industrialists and used varied methodologies, from mail questionnaires to telephone surveys. It makes key suggestions on policy change and sustainable finance to improve the circular economy in India.

Keywords: effective resource management, environmental policy, indigenous technique, industrial symbiosis, sustainable finance

Procedia PDF Downloads 125
1203 Study of the Physicochemical Characteristics of Liquid Effluents from the El Jadida Wastewater Treatment Plant

Authors: Aicha Assal, El Mostapha Lotfi

Abstract:

Rapid industrialization and population growth are currently the main causes of the energy and environmental problems associated with wastewater treatment. Wastewater treatment plants (WWTPs) aim to treat wastewater before discharging it into the environment, but they are not yet capable of treating non-biodegradable contaminants such as heavy metals. Toxic heavy metals can disrupt biological processes in WWTPs. Consequently, it is crucial to combine additional physico-chemical treatments with WWTPs to ensure effective wastewater treatment. In this study, the authors examined the treatment of urban wastewater at the El Jadida WWTP in order to assess its treatment efficiency. Various physicochemical and spatiotemporal parameters of the WWTP's raw and treated water were studied, including temperature, pH, conductivity, biochemical oxygen demand (BOD5), chemical oxygen demand (COD), suspended solids (SS), total nitrogen, and total phosphorus. The results showed an improvement in treatment yields, with measured removal efficiencies of 77% for BOD5, 63% for COD, and 66% for SS. However, spectroscopic analyses revealed persistent coloration in wastewater samples leaving the WWTP, as well as the presence of heavy metals such as zinc, cadmium, chromium, and cobalt, detected by inductively coupled plasma optical emission spectroscopy (ICP-OES). To remedy these coloration problems and reduce the presence of heavy metals, a new low-cost, environmentally friendly eggshell-based solution was proposed. This method eliminated most heavy metals, such as cobalt, beryllium, silver, and copper, and significantly reduced the amounts of cadmium, lead, chromium, manganese, aluminium, and zinc. In addition, the bioadsorbent was able to decolorize the wastewater by up to 84%. This adsorption process is therefore of great interest for ensuring the quality of wastewater and promoting its reuse in irrigation.
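
The treatment yields quoted above follow the standard removal-efficiency formula R = (C_in - C_out) / C_in × 100. The sketch below applies it to influent and effluent concentrations that are illustrative placeholders chosen to reproduce yields of the same order, not the plant's measured values.

```python
# Removal efficiency R = (C_in - C_out) / C_in * 100. Concentrations are invented
# placeholders chosen so the yields come out near the figures quoted above.

def removal_efficiency(c_in, c_out):
    return (c_in - c_out) / c_in * 100.0

samples = {                 # parameter: (influent, effluent) in mg/L (assumed)
    "BOD5": (300.0, 69.0),
    "COD":  (700.0, 259.0),
    "SS":   (350.0, 119.0),
}

for name, (cin, cout) in samples.items():
    print(f"{name}: {removal_efficiency(cin, cout):.0f}% removal")
```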

Keywords: WWTP, wastewater, heavy metals, decoloration, depollution, COD, BOD5

Procedia PDF Downloads 53
1202 Decarboxylation of Waste Coconut Oil and Comparison of Acid Values

Authors: Pabasara H. Gamage, Sisira K. Weliwegamage, Sameera R. Gunatilake, Hondamuni I. C De Silva, Parakrama Karunaratne

Abstract:

Green diesel is an emerging category of biofuels with more practical advantages than biodiesel. Producing green diesel involves generating hydrocarbons from various fatty acid sources. Though green diesel is chemically similar to fossil-fuel hydrocarbons, it is more environmentally friendly. Decarboxylation of fatty acid sources is one of the green diesel production routes and is less expensive and more energy efficient than hydrodeoxygenation. Free fatty acids (FFAs) undergo decarboxylation more readily than triglycerides. Waste coconut oil, which is a rich source of FFAs, can therefore be decarboxylated more easily than oils with lower FFA contents. These free fatty acids can be converted to hydrocarbons by decarboxylation. Experiments were conducted to decarboxylate waste coconut oil in a high-pressure Hastelloy reactor (Toption Group Ltd.) in the presence of soda lime and mixtures of soda lime and alumina. The acid value (AV) correlates with the amount of FFA available in an oil sample, so a decrease in AV indicates that FFAs have been converted to hydrocarbons. First, waste coconut oil was reacted with soda lime alone at 150 °C, 200 °C, and 250 °C and 1.2 MPa pressure for 2 hours, and the AVs of the products at the different temperatures were compared. The AV of the products decreased with increasing temperature. Thereafter, different mixtures of soda lime and alumina (100% soda lime, 1:1 soda lime:alumina, and 100% alumina) were employed at 150 °C, 200 °C, and 250 °C and 1.2 MPa pressure. The lowest AV of 2.99±0.03 was obtained when the 1:1 soda lime:alumina mixture was employed at 250 °C. It can be concluded from the AV that the amount of FFA decreased as the decarboxylation temperature was increased, and that the 1:1 soda lime:alumina mixture showed the lowest AV among the compositions studied. These findings support a method to synthesize hydrocarbons by decarboxylating waste coconut oil in the presence of soda lime and alumina (1:1) at elevated temperatures such as 250 °C.
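For context, the acid value is conventionally obtained from a KOH titration as AV = 56.1 × V_KOH × N_KOH / m_sample, expressed in mg KOH per gram of oil. The sketch below is a minimal illustration of that calculation; the titration figures are assumed for illustration and are not the study's data.

```python
# Minimal sketch of the standard acid value calculation from a KOH titration,
# AV = (56.1 * V_KOH * N_KOH) / m_sample  [mg KOH per g of oil].
# The titration figures below are hypothetical, not the paper's measurements.

MW_KOH = 56.1  # molar mass of KOH, g/mol

def acid_value(v_koh_ml: float, n_koh: float, sample_mass_g: float) -> float:
    """Acid value in mg KOH per gram of oil."""
    return (MW_KOH * v_koh_ml * n_koh) / sample_mass_g

# Assumed example: 2.7 mL of 0.1 N KOH neutralises a 5.0 g product sample
av = acid_value(v_koh_ml=2.7, n_koh=0.1, sample_mass_g=5.0)
print(f"Acid value: {av:.2f} mg KOH/g")  # ~3.0, same order as the reported 2.99
```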

Keywords: acid value, free fatty acids, green diesel, high pressure reactor, waste coconut oil

Procedia PDF Downloads 292
1201 Bio-Mimetic Foam Fractionation Technology for the Treatment of Per- and PolyFluoroAlkyl Substances (PFAS) in Contaminated Water

Authors: Hugo Carronnier, Wassim Almouallem, Eric Branquet

Abstract:

Per- and polyfluoroalkyl substances (PFAS) are a group of man-made refractory compounds that have been widely used in a variety of industrial and commercial products since the 1940s, leading to the contamination of groundwater and surface water systems. They are persistent, bioaccumulative, and toxic chemicals. Foam fractionation is a potential remedial technique for treating PFAS-contaminated water, taking advantage of their high surface activity to remove them from solution by adsorption onto the surface of air bubbles. Nevertheless, traditional foam fractionation technology developed for PFAS is challenging and has been found ineffective in treating the less surface-active compounds. Various chemicals have been investigated as amendments to achieve better removal; however, most amendments are toxic, expensive, and complicated to use. A patent-pending technology overcomes these challenges by using biological amendments instead. The first laboratory trial showed remarkable results using a simple and cheap BioFoam Fractionation (BioFF) process based on biomimetics. The study showed that the BioFF process removes more than 99% of PFOA (C8), PFOS (C8), PFHpS (C7), and PFHxS (C6) from PFAS-contaminated water. For other PFAS such as PFDA (C10) and 6:2 FTAB, a slightly less stable removal of between 94% and 96% was achieved, while removal efficiencies of 34% to 73% were observed for PFBA (C4), PFBS (C4), PFHxA (C6), and Gen-X. In sum, the advantages of BioFF (low waste production, cost- and energy-efficient operation, and the use of a biodegradable amendment requiring no separation step after treatment), coupled with these first findings, suggest that the BioFF process is a highly applicable treatment technology for PFAS-contaminated water. Additional investigations are currently being carried out to optimize the process and establish a promising strategy for on-site PFAS remediation.

Keywords: PFAS, treatment, foam fractionation, contaminated amendments

Procedia PDF Downloads 66
1200 Human Behavioral Assessment to Derive Land-Use for Sustenance of River in India

Authors: Juhi Sah

Abstract:

A habitat is characterized by the interdependency of its environmental elements. The anthropocentric development approach is increasing our vulnerability to natural hazards; hence, man-made interventions should show a higher level of sensitivity towards natural settings. Sensitivity towards the environment can be assessed through the behavior of the stakeholders involved. This led to the hypothesis that a legitimate relationship exists between the behavioral sciences, land use evolution, and environmental conservation in the planning process. An attempt has been made to establish this relationship by reviewing the existing body of knowledge and case examples pertaining to the three disciplines under inquiry. Recognizing the scarce and deteriorating nature of the earth's freshwater reserves, and to test the above concept, a case study of a growing urban center's river floodplain in a developing economy, India, was selected. Cases of urban flooding in Chennai, Delhi, and other megacities of India impose a high risk on the unauthorized settlements on the floodplains of their rivers. The issue addressed here is the encroachment of floodplains, approached through psychological enlightenment and behavioral modification through knowledge building. The reaction of an individual or a society can be compared to a cognitive process. This study documents all stakeholders' behavior and perceptions of their immediate natural environment (the water body) and produces various land uses suitable along a river in an urban settlement according to different stakeholders' perceptions. To assess and induce morally responsible behavior in a community (small or large scale), tools of psychological inquiry are used for qualitative analysis. The analysis deals with varied data sets from two sectors, namely the river and its geology, and land use planning and regulation. Distinctive patterns in built-up growth, river ecology degradation, and human behavior were identified by handling large quantities of data from diverse sectors, and comments are made on the availability of relevant data and its implications. Along the whole river stretch, the condition and usage of the banks vary; hence, stakeholder-specific survey questionnaires were prepared to accurately map the responses and habits of the rational inhabitants. A conceptual framework was designed to guide the empirical analysis. The classical principle of virtues holds that the virtue of a human depends on their character, but another concept holds that behavior is a derivative of situations, and that to bring about behavioral change one needs to introduce a disruption in the situation or environment. Given present trends, blindly following the results of data analytics and using them to construct policy is not proving to favor planned development and natural resource conservation. Thus, behavioral assessment of the rational inhabitants of the planet is also required, as their activities and interests have a large impact on the earth's pre-set systems and their sustenance.

Keywords: behavioral assessment, flood plain encroachment, land use planning, river sustenance

Procedia PDF Downloads 107
1199 Oxidation and Reduction Kinetics of Ni-Based Oxygen Carrier for Chemical Looping Combustion

Authors: J. H. Park, R. H. Hwang, K. B. Yi

Abstract:

Carbon capture and storage (CCS) is one of the important technologies for reducing CO₂ emissions from large stationary sources such as power plants. Among the carbon capture technologies for power plants, chemical looping combustion (CLC) has attracted much attention due to its higher thermal efficiency and lower cost of electricity. A CLC process consists of a fuel reactor and an air reactor, which are interconnected fluidized bed reactors. In the fuel reactor, an oxygen carrier (OC) is reduced by a fuel gas such as CH₄, H₂, or CO. The OC is then sent to the air reactor and oxidized by air or O₂. The oxidation and reduction reactions of the OC occur repeatedly as it circulates between the two reactors. In a CLC system, a high concentration of CO₂ can easily be obtained from the fuel reactor by steam condensation alone. It is very important to understand the oxidation and reduction characteristics of the oxygen carrier in the CLC system in order to determine the solids circulation rate between the air and fuel reactors and the amount of solid bed material. In this study, we conducted experiments and interpreted the oxidation and reduction reaction characteristics by observing the weight change of a Ni-based oxygen carrier in a TGA while varying the gas concentration and temperature. The oxygen carrier was characterized by BET and SEM. The reaction rate increased with increasing temperature and inlet gas concentration. We also compared the experimental results with an adapted basic kinetic model, the Johnson-Mehl-Avrami (JMA) model. The JMA model is a nucleation and nuclei growth model that can account for the delay time in the early part of the reaction. The model and experimental data agree over the examined conversion and time ranges, with an overall coefficient of determination (R²) greater than 98%. We also calculated the activation energy, pre-exponential factor, and reaction order from the Arrhenius plot and compared them with those of previously reported Ni-based oxygen carriers.
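For readers unfamiliar with the JMA model, the conversion is commonly written as X(t) = 1 − exp(−(kt)^n), with the rate constant k following the Arrhenius law k = A·exp(−Ea/RT). The sketch below shows, on synthetic data, how such a fit and the subsequent Arrhenius regression might be set up; it is not the authors' code, and all numerical values are placeholders.

```python
# Hedged sketch: fitting TGA conversion-time data with the JMA (Avrami) model
# X(t) = 1 - exp(-(k*t)**n), then extracting Arrhenius parameters from rate
# constants fitted at several temperatures. All data below are synthetic
# placeholders, not the authors' TGA measurements.
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # gas constant, J/(mol*K)

def jma(t, k, n):
    """JMA/Avrami conversion as a function of time."""
    return 1.0 - np.exp(-(k * t) ** n)

# Synthetic conversion-time data at one temperature
rng = np.random.default_rng(0)
t = np.linspace(0.1, 20.0, 40)                      # min
x = jma(t, 0.3, 2.0) + 0.01 * rng.standard_normal(t.size)

(k_fit, n_fit), _ = curve_fit(jma, t, x, p0=[0.1, 1.0])
print(f"k = {k_fit:.3f} 1/min, n = {n_fit:.2f}")

# Arrhenius regression: ln(k) vs 1/T has slope -Ea/R and intercept ln(A)
T = np.array([873.0, 973.0, 1073.0])                # K (assumed)
k_T = np.array([0.12, 0.30, 0.62])                  # 1/min (assumed fitted values)
slope, intercept = np.polyfit(1.0 / T, np.log(k_T), 1)
print(f"Ea = {-slope * R / 1e3:.0f} kJ/mol, A = {np.exp(intercept):.1f} 1/min")
```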

Keywords: chemical looping combustion, kinetic, nickel-based, oxygen carrier, spray drying method

Procedia PDF Downloads 197
1198 Performance Monitoring and Environmental Impact Analysis of a Photovoltaic Power Plant: A Numerical Modeling Approach

Authors: Zahzouh Zoubir

Abstract:

The widespread adoption of photovoltaic panel systems for global electricity generation is a prominent trend. Algeria, demonstrating a steadfast commitment to strategic development and innovative projects for harnessing solar energy, emerges as a pioneering force in the field. Heat and radiation, being fundamental factors in any solar system, are currently the subject of comprehensive studies aiming to discern their genuine impact on crucial elements within photovoltaic systems. This endeavor is particularly pertinent given that solar module performance is exclusively assessed under meticulously defined Standard Test Conditions (STC). Nevertheless, when deployed outdoors, solar modules exhibit efficiencies distinct from those observed under STC due to the influence of diverse environmental factors. This discrepancy introduces ambiguity into performance determination, especially when operating conditions depart from the test conditions. This article centers on the performance monitoring of an Algerian photovoltaic project, specifically the Oued El Keberite Power (OKP) plant with a 15-megawatt capacity, situated in the town of Souk Ahras in eastern Algeria. The study elucidates the behavior of a subfield within this facility throughout the year, encompassing various conditions beyond the STC framework. To assess the efficiency of the solar panels realistically, the study integrates the crucial environmental factors, drawing on an authentic technical sheet from the measurement station of the OKP photovoltaic plant. Numerical modeling and simulation of a subfield of the photovoltaic station were conducted in MATLAB Simulink. The findings underscore how radiation intensity and temperature, whether low or high, affect the short-circuit current, open-circuit voltage, fill factor, and overall efficiency of the photovoltaic system.
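As a rough companion to the Simulink study, the sketch below applies first-order irradiance and temperature corrections to a module's short-circuit current, open-circuit voltage, and maximum power, of the kind typically embedded in such models. The module ratings and temperature coefficients are generic crystalline-silicon assumptions, not parameters of the OKP plant.

```python
# Minimal sketch (not the paper's Simulink model): first-order corrections of a
# PV module's short-circuit current, open-circuit voltage, and power for
# irradiance G and cell temperature Tc away from STC (1000 W/m^2, 25 degC).
# Ratings and coefficients are generic crystalline-silicon assumptions.

G_STC, T_STC = 1000.0, 25.0      # W/m^2, degC

def pv_outputs(G, Tc, Isc_stc=8.5, Voc_stc=37.0, Pmax_stc=250.0,
               alpha_isc=0.0005, beta_voc=-0.0032, gamma_p=-0.004):
    """Return (Isc, Voc, Pmax) corrected for irradiance and cell temperature."""
    isc = Isc_stc * (G / G_STC) * (1 + alpha_isc * (Tc - T_STC))
    voc = Voc_stc * (1 + beta_voc * (Tc - T_STC))
    pmax = Pmax_stc * (G / G_STC) * (1 + gamma_p * (Tc - T_STC))
    return isc, voc, pmax

# Example: STC vs. a hot, partly hazy afternoon
for G, Tc in [(1000.0, 25.0), (800.0, 55.0)]:
    isc, voc, pmax = pv_outputs(G, Tc)
    print(f"G={G:.0f} W/m^2, Tc={Tc:.0f} C -> Isc={isc:.2f} A, "
          f"Voc={voc:.1f} V, Pmax={pmax:.0f} W")
```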

Keywords: performance monitoring, photovoltaic system, numerical modeling, radiation intensity

Procedia PDF Downloads 54
1197 Low Cost LiDAR-GNSS-UAV Technology Development for PT Garam’s Three Dimensional Stockpile Modeling Needs

Authors: Mohkammad Nur Cahyadi, Imam Wahyu Farid, Ronny Mardianto, Agung Budi Cahyono, Eko Yuli Handoko, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan

Abstract:

Unmanned aerial vehicle (UAV) technology offers cost efficiency and data retrieval time advantages. Technologies such as UAV, GNSS, and LiDAR can be combined into a single system in which each covers the others' deficiencies. This integrated system aims to increase the accuracy of calculating the volume of the land stockpiles of PT. Garam (a salt company). UAV imagery is used to obtain geometric data and capture textures that characterize the structure of objects. This study uses a Tarot 650 Iron Man drone with four propellers, which can fly for 15 minutes. Objects can be classified from the image acquisitions processed in the software, utilizing photogrammetry and Structure-from-Motion point cloud principles. LiDAR acquisition enables the creation of point clouds, three-dimensional models, digital surface models, contours, and orthomosaics with high accuracy. A drawback of LiDAR is that its coordinate data are referenced to a local system. Therefore, the researchers use GNSS, LiDAR, and drone multi-sensor technology to map the salt stockpiles on open land and in warehouses, which PT. Garam carries out twice a year; the previous process used terrestrial methods and manual calculations with sacks. LiDAR needs to be combined with a UAV to overcome its acquisition limitations, because a ground-based scan only covers the right and left sides of an object, particularly when applied to a salt stockpile. The UAV is flown to provide wide coverage, with the 200-gram LiDAR system integrated so that an optimal viewing angle can be maintained during the flight. Using LiDAR for low-cost mapping surveys makes it easier for surveyors and academics to obtain fairly accurate data at a more economical price. As a survey tool, LiDAR is available at a low price, around 999 USD, and the device can produce detailed data. Therefore, to minimize operational costs, surveyors can use the low-cost LiDAR, GNSS, and UAV combination at a price of around 638 USD. The data generated by this sensor take the form of a three-dimensional visualization of an object's shape. This study aims to combine low-cost GPS measurements with low-cost LiDAR, processed using free software. The low-cost GPS generates latitude and longitude coordinates, yielding X, Y, and Z values that help georeference the detected object. The LiDAR detects objects, including the height of the entire environment at the location, and the resulting data are calibrated with pitch, roll, and yaw to obtain the vertical heights of the existing contours. An experiment was conducted on the roof of a building with a radius of approximately 30 meters.
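A common way to turn a georeferenced point cloud into a stockpile volume is to rasterize it into a digital surface model and integrate the height above a base elevation cell by cell. The sketch below illustrates this grid-based approach on a synthetic cone-shaped pile; it is an assumption-laden illustration, not the workflow actually used for the PT. Garam surveys.

```python
# Illustrative sketch of a grid-based stockpile volume estimate from a
# georeferenced point cloud: rasterize points into a simple DSM and integrate
# the height above an assumed flat base elevation. Points below are synthetic.
import numpy as np

def stockpile_volume(points: np.ndarray, cell: float, base_z: float) -> float:
    """points: (N, 3) array of X, Y, Z in metres; cell: grid size in metres."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ix = ((x - x.min()) // cell).astype(int)
    iy = ((y - y.min()) // cell).astype(int)
    dsm = np.full((ix.max() + 1, iy.max() + 1), np.nan)
    # keep the highest return per cell (simple DSM)
    for i, j, zz in zip(ix, iy, z):
        if np.isnan(dsm[i, j]) or zz > dsm[i, j]:
            dsm[i, j] = zz
    heights = np.clip(np.nan_to_num(dsm - base_z, nan=0.0), 0.0, None)
    return float(heights.sum() * cell * cell)

# Synthetic cone-shaped pile, ~30 m radius and 10 m high, as a sanity check
rng = np.random.default_rng(0)
xy = rng.uniform(-30, 30, size=(50000, 2))
r = np.hypot(xy[:, 0], xy[:, 1])
z = np.clip(10.0 * (1 - r / 30.0), 0.0, None)
pts = np.column_stack([xy, z])
print(f"Estimated volume: {stockpile_volume(pts, cell=0.5, base_z=0.0):.0f} m^3")
```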

Keywords: LiDAR, unmanned aerial vehicle, low-cost GNSS, contour

Procedia PDF Downloads 76
1196 Modelling of Damage as Hinges in Segmented Tunnels

Authors: Gelacio Juárez-Luna, Daniel Enrique González-Ramírez, Enrique Tenorio-Montero

Abstract:

Frame elements coupled with spring elements are used to model the development of hinges in segmented tunnels; the spring elements represent rotational, transversal, and axial failure. These spring elements are equipped with constitutive models that independently account for the moment, shear force, and axial force, respectively. The constitutive models are formulated based on damage mechanics and on experimental tests reported in the literature. The meshes of the segmented tunnels were discretized in the software GiD, and the nonlinear analyses were carried out in the finite element software ANSYS. These analyses provide the capacity curves of the primary and secondary linings of a segmented tunnel. Two numerical examples of segmented tunnels show the capability of the spring elements to release energy through the development of hinges. The first example is a segmental concrete lining discretized with frame elements and loaded until hinges occurred in the lining. The second example is a tunnel with primary and secondary linings, discretized with a double-ring frame model: the outer ring simulates the segmental concrete lining and the inner ring simulates the secondary cast-in-place concrete lining. Spring elements also model the joints between the segments in the circumferential direction and the ring joints, which connect parallel adjacent rings. The computed load versus displacement curves are congruent with numerical and experimental results reported in the literature. It is shown that modelling a tunnel with primary and secondary linings using frame elements and springs provides reasonable results and saves computational cost compared with 2D or 3D models equipped with smeared crack models.
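To make the spring constitutive idea concrete, the sketch below implements a one-dimensional damage-type moment-rotation law for a rotational spring: linear elastic up to a threshold moment, then exponential softening, with the damage variable defined from the secant stiffness. The stiffness, strength, and softening parameters are illustrative assumptions, not the calibration used in the paper.

```python
# Hedged sketch of a 1D damage-type moment-rotation law for a rotational
# spring (hinge): linear elastic up to a threshold moment, then exponential
# softening, with the damage variable d defined so that M = (1 - d)*K0*theta.
# Parameter values are illustrative, not the paper's calibration.
import math

K0 = 5.0e4         # initial rotational stiffness, kN*m/rad (assumed)
M_Y = 150.0        # moment at which damage starts, kN*m (assumed)
THETA_SOFT = 0.01  # characteristic softening rotation, rad (assumed)

def spring_moment(theta: float) -> tuple[float, float]:
    """Return (moment, damage) for a monotonically applied rotation theta."""
    theta_y = M_Y / K0
    if theta <= theta_y:
        return K0 * theta, 0.0
    m = M_Y * math.exp(-(theta - theta_y) / THETA_SOFT)  # softening branch
    d = 1.0 - m / (K0 * theta)                           # secant damage
    return m, d

for theta in [0.001, 0.003, 0.01, 0.03]:
    m, d = spring_moment(theta)
    print(f"theta={theta:.3f} rad -> M={m:.1f} kN*m, damage={d:.2f}")
```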

Keywords: damage, hinges, lining, tunnel

Procedia PDF Downloads 378
1195 Water Droplet Impact on Vibrating Rigid Superhydrophobic Surfaces

Authors: Jingcheng Ma, Patricia B. Weisensee, Young H. Shin, Yujin Chang, Junjiao Tian, William P. King, Nenad Miljkovic

Abstract:

Water droplet impact on surfaces is a ubiquitous phenomenon in both nature and industry. The transfer of mass, momentum, and energy can be influenced by the time of contact between droplet and surface. In order to reduce the contact time, we study the influence of substrate motion prior to impact on the dynamics of droplet recoil. Using high-speed optical imaging, we investigated the impact dynamics of macroscopic water droplets (~2 mm) on rigid nanostructured superhydrophobic surfaces vibrating at 60–300 Hz with amplitudes of 0–3 mm. In addition, we studied the influence of the phase of the substrate at the moment of impact on the total contact time. We demonstrate that substrate vibration can alter droplet dynamics and decrease the total contact time by as much as 50% compared to impact on stationary rigid superhydrophobic surfaces. Impact analysis revealed that the vibration frequency mainly affected the maximum contact time, while the amplitude of vibration had little direct effect on the contact time. Through mathematical modeling, we show that the oscillation amplitude influences the probability density function of droplet impact at a given phase and thus indirectly influences the average contact time. We also observed more vigorous droplet splashing and breakup during impact at larger amplitudes. Through semi-empirical mathematical modeling, we describe the relationship between the contact time and the vibration frequency, phase, and amplitude of the substrate. We also show that the maximum acceleration during the impact process is better suited as a threshold parameter for the onset of splashing than a Weber-number criterion. This study not only provides new insights into droplet impact physics on vibrating surfaces but also develops guidelines for the rational design of surfaces to achieve controllable droplet wetting in applications utilizing vibration.
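As background for the contact-time and splashing discussion, the impact Weber number We = ρDV²/σ and the inertial-capillary time scale τ ≈ C·sqrt(ρR³/σ) (with C of order 2.2 to 2.6 in the literature) are the usual reference quantities for bouncing droplets on stationary superhydrophobic surfaces. The sketch below evaluates both for an assumed ~2 mm droplet; the impact velocity and prefactor are illustrative assumptions, not the paper's measurements.

```python
# Hedged sketch: Weber number and the inertial-capillary time scale commonly
# used as a reference contact time for bouncing droplets on stationary
# superhydrophobic surfaces. The prefactor (~2.2-2.6 in the literature) and
# the impact parameters below are assumptions, not the paper's data.
import math

RHO = 998.0      # water density, kg/m^3
SIGMA = 0.072    # water surface tension, N/m

def weber_number(diameter_m: float, velocity_m_s: float) -> float:
    return RHO * diameter_m * velocity_m_s ** 2 / SIGMA

def inertial_capillary_contact_time(radius_m: float, prefactor: float = 2.6) -> float:
    return prefactor * math.sqrt(RHO * radius_m ** 3 / SIGMA)

# Example: ~2 mm diameter droplet impacting at 1 m/s (assumed)
D = 2.0e-3
print(f"We = {weber_number(D, 1.0):.1f}")
print(f"tau ~ {inertial_capillary_contact_time(D / 2) * 1e3:.1f} ms")
```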

Keywords: contact time, impact dynamics, oscillation, pear-shape droplet

Procedia PDF Downloads 446
1194 The Significance of Cultural Risks for Western Consultants Executing Gulf Cooperation Council Megaprojects

Authors: Alan Walsh, Peter Walker

Abstract:

Differences in commercial, professional, and personal cultural traditions between western consultants and project sponsors in the Gulf Cooperation Council (GCC) region are potentially significant in the workplace, and this can impact project outcomes. These cultural differences can, for example, result in conflict amongst senior managers, which can negatively impact the megaproject. New entrants to the GCC often experience ‘culture shock’ as they attempt to integrate into their unfamiliar environments. Megaprojects are unique ventures with individual project characteristics, which need to be considered when managing their associated risks. Megaproject research to date has mostly ignored the significance of the absence of cultural congruence in the GCC, which is surprising considering the large volume of megaprojects in various stages of construction in the GCC. An initial step in dealing with cultural issues is to acknowledge culture as a significant risk factor (SRF). This paper seeks to understand how critical it is for western consultants to address these risks. It considers the cultural barriers that exist between GCC sponsors and western consultants and examines the cultural distance between the key actors. Initial findings suggest the presence, to a certain extent, of ethnocentricity. Other cultural clashes arise from a lack of appreciation of the customs, practices, and traditions of ‘the Other’, such as the need to avoid public humiliation and the significance of hierarchical rankings. The concept and significance of culture shock as part of the integration process for new arrivals are considered. Culture shock describes the state of anxiety and frustration resulting from immersion in a culture distinctly different from one's own. There are potentially substantial project risks associated with underestimating the process of cultural integration. This paper examines two distinct but intertwined issues: the societal and professional culture differences associated with expatriate assignments. A case study examines the cultural congruences between GCC sponsors and American, British, and German consultants over a ten-year cycle. This provides indicators as to which nationalities encountered the most profound cultural issues and the nature of these issues. GCC megaprojects are typically intensive, fast-track, demanding ventures in which consultant turnover is high. The study finds that building trust-filled relationships is key to successful project team integration and therefore to successful megaproject execution. Findings indicate that both professional and social inclusion processes have steep learning curves. Traditional risk management practice is to approach any uncertainty in a structured way to mitigate the potential impact on project outcomes. This research highlights cultural risk as a significant factor in the management of GCC megaprojects. The risks arising from high staff turnover typically include loss of project knowledge, delays to the project, and the cost and disruption of replacing staff. This paper calls for cultural risk to be recognised as an SRF, as the first step towards developing risk management strategies and reducing staff turnover for western consultants in GCC megaprojects.

Keywords: western consultants in megaprojects, national culture impacts on GCC megaprojects, significant risk factors in megaprojects, professional culture in megaprojects

Procedia PDF Downloads 125
1193 Simulation and Performance Evaluation of Transmission Lines with Shield Wire Segmentation against Atmospheric Discharges Using ATPDraw

Authors: Marcio S. da Silva, Jose Mauricio de B. Bezerra, Antonio E. de A. Nogueira

Abstract:

This paper presents a performance analysis of transmission line shield wires against atmospheric discharges when the shield wire is segmented, and verifies whether the change is tolerable. The goal of this work was to build a complete model of a transmission line in the ATPDraw program, with the shield wire grounded at all towers and, alternatively, at only some towers. The methodology adopted for the proposed evaluation was to choose an actual transmission line to serve as a case study. After selecting the transmission line and verifying its full topology and materials, a complete model of the line was built in the ATPDraw software. Several atmospheric discharges were then simulated by striking the grounded shield wires at each tower. These simulations served to identify the behavior of the existing line under atmospheric discharges. After this first analysis, the same line was reconsidered with shield wire segmentation. The shield wire segmentation technique aims to reduce the induced losses in shield wires and is adopted in some transmission lines in Brazil. Under the same atmospheric discharge conditions, the transmission line, this time with shield wire segmentation, was evaluated again. The results showed that it is possible to obtain similar performance against atmospheric discharges between a line with the shield wire grounded at multiple towers and the same line with shield wire segmentation, provided that some precautions are adopted, such as verification of the grounding resistance of the segmented shield wire, adequacy of the maximum length of the segmented gap, and evaluation of the separation length of the insulator spark gap electrodes, among others. In conclusion, provided that the correct assessment is made and the correct adjustment criteria are adopted, a transmission line with shield wire segmentation can perform very similarly to the traditional arrangement with multiple grounding points. This solution contributes in a very important way to the reduction of energy losses in transmission lines.
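As a back-of-envelope companion to the ATPDraw simulations, a first-order estimate of the tower potential rise during a direct stroke to a grounded shield wire can be obtained by treating the discharge path as the tower-footing resistance in series with an equivalent tower inductance, v(t) ≈ R·i(t) + L·di/dt, for a ramp-front stroke current. The sketch below evaluates this simplified expression; it neglects current sharing with adjacent spans and travelling-wave reflections, and all parameter values are assumptions rather than data from the studied line.

```python
# Back-of-envelope sketch (not the ATPDraw model): first-order estimate of the
# tower potential rise for a direct stroke to a grounded shield wire, modelling
# the discharge path as footing resistance R in series with tower inductance L
# and a linear ramp-front stroke current, v(t) ~ R*i(t) + L*di/dt.
# All parameter values are assumptions for illustration only.
import numpy as np

R_FOOT = 20.0      # tower-footing resistance, ohm (assumed)
L_TOWER = 20e-6    # equivalent tower inductance, H (assumed)
I_PEAK = 30e3      # stroke crest current, A (assumed)
T_FRONT = 2e-6     # current front time, s (assumed)

t = np.linspace(0.0, 10e-6, 1000)
i = np.where(t < T_FRONT, I_PEAK * t / T_FRONT, I_PEAK)   # ramp, then flat
di_dt = np.gradient(i, t)
v = R_FOOT * i + L_TOWER * di_dt

print(f"Peak potential rise ~ {v.max() / 1e3:.0f} kV")
print(f"Resistive part at crest ~ {R_FOOT * I_PEAK / 1e3:.0f} kV")
```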

Keywords: atmospheric discharges, ATPDraw, shield wire, transmission lines

Procedia PDF Downloads 157