Search results for: exponential interpolation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 535

115 Metabolic Manipulation as a Strategy for Optimization of Biomass Productivity and Oil Content in the Microalgae Desmodesmus Sp.

Authors: Ivan A. Sandoval Salazar, Silvia F. Valderrama

Abstract:

Microalgal oil is emerging as a promising raw material for many industrial applications. This study therefore focused on laboratory-scale cultivation of the microalga Desmodesmus sp. with a view to maximizing biomass production and the triglyceride content of the lipid fraction. Initially, culture conditions were selected to optimize biomass production; the culture was subsequently subjected to nutritional stress, by varying nitrate and phosphate concentrations, in order to increase the content and productivity of fatty acids. The BOLD 3N culture medium, nitrate and phosphate concentrations, light intensities of 250, 500 and 1000 μmol photons·m⁻²·s⁻¹, and a 12:12 photoperiod were evaluated. Under the best test conditions, a maximum cell division rate of 1.13 div·day⁻¹ was obtained on the sixth day of culture, at the beginning of the exponential phase, along with a maximum concentration of 8.42×10⁷ cells·mL⁻¹ and a dry biomass of 3.49 g·L⁻¹ on the 20th day, in the stationary phase. The lipid content in the first stage of culture was approximately 8% after 12 days and, at the end of the culture in the stationary phase (20 days), ranged from 12% to 16%. In the microalgae grown at 250 μmol photons·m⁻²·s⁻¹, the fatty acid profile was mostly polyunsaturated (52%). The total unsaturated fatty acids identified in this microalga species reached values between 70% and 75%, qualifying the oil for use in the food and pharmaceutical industries. In addition, this study showed that the cultivation conditions mainly influenced the production of polyunsaturated fatty acids, with a predominance of γ-linolenic acid. However, in the cultures subjected to the highest light intensity (1000 μmol photons·m⁻²·s⁻¹) and low concentrations of nitrate and phosphate, mostly saturated and monounsaturated fatty acids (60 to 70%), which present greater oxidative stability, were identified, qualifying the oil for biodiesel production and oleochemistry.
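The cell division rate quoted above (div·day⁻¹) is the base-2 analogue of the specific growth rate under exponential growth. As a hedged illustration, it can be computed from two cell counts; the counts below are hypothetical, not values from the study:

```python
import math

def divisions_per_day(n_start, n_end, days):
    """Cell division rate k [div/day] from two cell counts under
    exponential growth: k = log2(N_end / N_start) / dt."""
    return math.log2(n_end / n_start) / days

# Hypothetical counts: a culture growing at 1.13 div/day for 2 days
rate = divisions_per_day(1.0e6, 1.0e6 * 2 ** (1.13 * 2), 2.0)
```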

Keywords: microalgae, Desmodesmus sp., fatty acids, biodiesel

Procedia PDF Downloads 115
114 Numerical Investigation of Turbulent Inflow Strategy in Wind Energy Applications

Authors: Arijit Saha, Hassan Kassem, Leo Hoening

Abstract:

Ongoing climate change demands the increasing use of renewable energies. Wind energy plays an important role in this context, since it can be harnessed almost everywhere in the world. To reduce the cost of wind turbines and make them more competitive, simulations are essential, since experiments are often too costly, if feasible at all. A wind turbine on a vast open area experiences turbulence generated by the atmosphere, so it was of central interest in this research to generate that turbulence in the computational domain through inlet turbulence generation methods such as the precursor cyclic method and Kaimal Spectrum Exponential Coherence (KSEC). To validate computational fluid dynamics simulations of wind turbines against experimental data, it is crucial to set up the simulation conditions as close to reality as possible. The present work therefore aims to investigate the turbulent inflow strategy and boundary conditions of KSEC and to provide a comparative analysis against the precursor cyclic method for Large Eddy Simulation in the context of wind energy applications. To generate the turbulent box with the KSEC method, constrained data were first collected from an auxiliary channel flow and then processed with the open-source tool PyconTurb, whereas for the precursor cyclic method, the data from the auxiliary channel alone were sufficient. The functionality of the two methods was studied through statistical properties such as variance and turbulence intensity at different bulk Reynolds numbers, and a conclusion was drawn on the feasibility of the KSEC method. Furthermore, verification of the obtained data against a DNS case setup was found necessary before applying it to real-field CFD simulations.
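The Kaimal spectrum underlying the KSEC method can be sketched generically; the block below uses the standard IEC-style one-point form, which is an assumption on our part rather than the exact normalization used in the paper:

```python
def kaimal_spectrum(f, sigma, length_scale, mean_speed):
    """One-sided Kaimal velocity spectrum,
    S(f) = 4 * sigma^2 * (L/U) / (1 + 6 * f * L/U)^(5/3)."""
    fr = f * length_scale / mean_speed  # reduced frequency f*L/U
    return (4.0 * sigma ** 2 * (length_scale / mean_speed)
            / (1.0 + 6.0 * fr) ** (5.0 / 3.0))

# Spectral energy decays with frequency (f^-5/3 inertial range)
s_low = kaimal_spectrum(0.01, sigma=1.5, length_scale=340.2, mean_speed=10.0)
s_high = kaimal_spectrum(1.0, sigma=1.5, length_scale=340.2, mean_speed=10.0)
```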

Keywords: inlet turbulence generation, CFD, precursor cyclic, KSEC, large eddy simulation, PyconTurb

Procedia PDF Downloads 58
113 Improved Regression Relations Between Different Magnitude Types and the Moment Magnitude in the Western Balkan Earthquake Catalogue

Authors: Anila Xhahysa, Migena Ceyhan, Neki Kuka, Klajdi Qoshi, Damiano Koxhaj

Abstract:

The seismic event catalogue has been updated in the framework of a bilateral project supported by the Central European Investment Fund, with the extensive support of the Global Earthquake Model Foundation, to update Albania's national seismic hazard model. The earthquake catalogue prepared within this project covers the Western Balkan area bounded by 38.0° - 48.0°N, 12.5° - 24.5°E and includes 41,806 earthquakes that occurred in the region between 510 BC and 2022. Since the moment magnitude characterizes earthquake size accurately, and the ground motion prediction equations selected for the seismic hazard assessment employ this scale, it was chosen as the uniform magnitude scale for the catalogue. Proxy values of moment magnitude therefore had to be obtained through new magnitude conversion equations from the local and other magnitude types to this unified scale. The Global Centroid Moment Tensor Catalogue was considered the most authoritative source of moment magnitude reports for moderate to large earthquakes; hence it was used as a reference for calibrating other sources. The best fit was observed in comparisons with some regional agencies, whereas differences were observed across all magnitude ranges against moment magnitudes reported from Italy, Greece and Turkey. For teleseismic magnitudes, to account for the non-linearity of the relationships, we used an exponential model to derive the regression equations. The obtained regressions for the surface-wave magnitude and short-period body-wave magnitude show considerable differences from the Global Earthquake Model regression curves, especially in the low magnitude ranges. Moreover, a conversion relation was obtained between the local magnitude of Albania and the corresponding moment magnitude as reported by the global and regional agencies. As errors were present in both variables, Deming regression was used.
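The Deming regression mentioned above accounts for errors in both variables. As a hedged sketch, the block below implements the standard textbook closed form with an assumed error-variance ratio delta; it is not the authors' exact calibration procedure:

```python
def deming_fit(x, y, delta=1.0):
    """Deming regression (errors in both variables).

    delta is the assumed ratio of y-error variance to x-error variance;
    delta = 1 gives orthogonal regression. Returns (slope, intercept).
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    slope = ((syy - delta * sxx
              + ((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2) ** 0.5)
             / (2.0 * sxy))
    return slope, my - slope * mx

# Exact-line sanity check: points on y = 2x + 1 recover slope 2, intercept 1
slope, intercept = deming_fit([1.0, 2.0, 3.0, 4.0, 5.0],
                              [3.0, 5.0, 7.0, 9.0, 11.0])
```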

Keywords: regression, seismic catalogue, local magnitude, tele-seismic magnitude, moment magnitude

Procedia PDF Downloads 34
112 Oxidation and Reduction Kinetics of Ni-Based Oxygen Carrier for Chemical Looping Combustion

Authors: J. H. Park, R. H. Hwang, K. B. Yi

Abstract:

Carbon capture and storage (CCS) is one of the important technologies for reducing CO₂ emissions from large stationary sources such as power plants. Among the carbon capture technologies for power plants, chemical looping combustion (CLC) has attracted much attention due to its higher thermal efficiency and lower cost of electricity. A CLC process consists of a fuel reactor and an air reactor, which are interconnected fluidized bed reactors. In the fuel reactor, an oxygen carrier (OC) is reduced by a fuel gas such as CH₄, H₂ or CO. The OC is then sent to the air reactor and oxidized by air or O₂ gas. The oxidation and reduction of the OC occur repeatedly between the two reactors. In a CLC system, a high concentration of CO₂ can easily be obtained from the fuel reactor by steam condensation alone. It is very important to understand the oxidation and reduction characteristics of the oxygen carrier in order to determine the solids circulation rate between the air and fuel reactors and the amount of solid bed material. In this study, we conducted experiments and interpreted the oxidation and reduction characteristics by observing the weight change of a Ni-based oxygen carrier in a TGA while varying the gas concentration and temperature. The oxygen carrier was characterized by BET and SEM. The reaction rate increased with increasing temperature and increasing inlet gas concentration. We also compared the experimental results with a basic reaction kinetic model, the Johnson-Mehl-Avrami (JMA) model. The JMA model is a nucleation and nuclei growth model that can explain the delay time in the early part of the reaction. The model and experimental data agree over the examined conversion and time ranges, with an overall variance (R²) greater than 98%. We also calculated the activation energy, pre-exponential factor and reaction order from the Arrhenius plot and compared them with previous Ni-based oxygen carriers.
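The JMA conversion curve and the Arrhenius rate constant discussed above take standard textbook forms, sketched below; all numerical values are illustrative assumptions, not the study's fitted parameters:

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol*K)

def avrami_conversion(t, k, n):
    """JMA solid conversion X(t) = 1 - exp(-(k*t)^n); the exponent n
    reproduces the delay (induction) time early in the reaction."""
    return 1.0 - math.exp(-((k * t) ** n))

def arrhenius_k(pre_exp, activation_energy, temperature):
    """Rate constant k = A * exp(-Ea / (R*T))."""
    return pre_exp * math.exp(-activation_energy / (R_GAS * temperature))

# Illustrative values: half conversion occurs when (k*t)^n = ln 2
k = arrhenius_k(pre_exp=1.0e4, activation_energy=80e3, temperature=1123.0)
t_half = math.log(2.0) ** (1.0 / 2.0) / k  # for n = 2
```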

Keywords: chemical looping combustion, kinetic, nickel-based, oxygen carrier, spray drying method

Procedia PDF Downloads 170
111 Using Photogrammetric Techniques to Map the Mars Surface

Authors: Ahmed Elaksher, Islam Omar

Abstract:

For many years, the surface of Mars was a mystery for scientists. Lately, with the help of geospatial data and photogrammetric procedures, researchers have been able to gain insights about this planet. Two of the most important data sources for exploring Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. The MOLA sensor is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images in order to generate a more accurate and trustworthy surface of Mars. The MOLA data were interpolated using the kriging technique. Corresponding tie points were digitized from both datasets and employed to co-register them using GIS analysis tools. We employed three different 3D-to-2D transformation models: the parallel projection (3D affine) transformation model, the extended parallel projection transformation model, and the Direct Linear Transformation (DLT) model. The digitized tie points were split into two sets: Ground Control Points (GCPs), used to estimate the transformation parameters by least squares adjustment, and check points (ChkPs), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models with the computed parameters. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. For the 2D transformation models, average RMSEs were in the range of five meters. Increasing the number of GCPs from six to ten improved the accuracy of the results by about two and a half meters; further increasing the number of GCPs did not improve the results significantly. The 3D-to-2D transformation models provided two- to three-meter accuracy, with the best results obtained using the DLT model; here, too, increasing the number of GCPs had no substantial effect. The results support the use of the DLT model, as it provides the accuracy required by the ASPRS large-scale mapping standards; however, well-distributed sets of GCPs are key to achieving such accuracy. The model is simple to apply and does not require substantial computation.
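The simplest of the three models, the 8-parameter parallel projection (3D affine) fit from GCPs, together with the RMSE check used for evaluation, can be sketched as follows; the coordinates below are synthetic, not the project's data:

```python
import numpy as np

def fit_parallel_projection(gcp_xyz, gcp_xy):
    """Least-squares fit of the parallel projection (3D affine) model
    x = a1*X + a2*Y + a3*Z + a4,  y = b1*X + b2*Y + b3*Z + b4
    from ground control points. Returns a (4, 2) coefficient matrix."""
    A = np.hstack([gcp_xyz, np.ones((len(gcp_xyz), 1))])
    coeffs, *_ = np.linalg.lstsq(A, gcp_xy, rcond=None)
    return coeffs

def rmse(predicted, observed):
    """Root mean square error between predicted and observed coordinates."""
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

# Synthetic check: project 3D points with known coefficients, then refit
rng = np.random.default_rng(0)
xyz = rng.uniform(0.0, 1000.0, size=(10, 3))
true = np.array([[0.5, -0.1], [0.2, 0.9], [0.05, 0.02], [100.0, -40.0]])
xy = np.hstack([xyz, np.ones((10, 1))]) @ true
coeffs = fit_parallel_projection(xyz, xy)
residual = rmse(np.hstack([xyz, np.ones((10, 1))]) @ coeffs, xy)
```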

Keywords: mars, photogrammetry, MOLA, HiRISE

Procedia PDF Downloads 30
110 Cosmetic Dermatology Procedures: Survey Results of American Society for Dermatologic Surgery

Authors: Marina S. Basta, Kirollos S. Basta

Abstract:

Cosmetic dermatology procedures have witnessed exponential growth and diversification over the last 10 years. The purpose of this study was therefore to collect data on the latest trends in cosmetic procedures reported by dermatologists during the year 2018. The study was performed by the American Society for Dermatologic Surgery (ASDS) in 2018 by sending survey invitations to 3,358 practicing dermatologists in the U.S., containing streamlined questions as well as statistical questions targeted at specific analyses of cosmetic dermatology trends. Of the targeted physicians, only 596 dermatologists replied to the survey invitation (a 15% overall response rate). The data collected from the survey were generalized to represent all ASDS members. Results show an increase in cosmetic dermatology procedures: 12.5 million procedures were reported for 2018, compared to only 7.8 million for 2012. Injectable neuromodulators and soft tissue fillers topped the list, with a count of 3.7 million procedures. Body sculpting, chemical peeling, hair transplantation, and microneedling procedures were reported to be 1.57 million cases combined. The top two laser procedures were wrinkle treatment and sun damage correction, while the two least common uses of lasers were treatments of tattoos and birthmarks. Cryolipolysis led body sculpting procedures with 287,435 cases, while tumescent liposuction was the least performed body sculpting procedure (18,286 cases). In conclusion, comparing the procedural trends over the last 7 years indicates a 78% increase in soft tissue filler treatments compared to 2012. Laser procedures increased by 74% over the same period, while body contouring procedures saw roughly a fourfold increase compared to 2012.

Keywords: cosmetic dermatology, ASDS procedure survey, laser, body sculpting

Procedia PDF Downloads 90
109 An Overview of Domain Models of Urban Quantitative Analysis

Authors: Mohan Li

Abstract:

Nowadays, intelligent research technologies are increasingly more important than traditional research methods in urban research, and their share will grow greatly in the next few decades. Such analysis often cannot be carried out without some software engineering knowledge, and domain models of urban research become necessary when applying that knowledge to urban work. In many urban planning practice projects, building rational models, feeding them reliable data, and providing enough computation all provide indispensable assistance in producing good urban planning; throughout the work process, domain models can optimize workflow design. Humanity has now entered the era of big data: the amount of digital data generated by cities every day increases at an exponential rate, and new data forms are constantly emerging. Selecting a suitable data set from this massive amount of data, then managing and processing it, has become an ability that more and more planners and urban researchers need to possess. This paper summarizes, and makes predictions about, the technologies and technological iterations that may affect urban research in the future, helping researchers discover urban problems and implement targeted sustainable urban strategies. These are organized into seven domain models: the urban and rural regional domain model, the urban ecological domain model, the urban industry domain model, the development dynamic domain model, the urban social and cultural domain model, the urban traffic domain model, and the urban space domain model. These seven domain models can guide the construction of systematic urban research topics and help researchers organize a series of intelligent analytical tools, such as Python, R, and GIS. They make full use of quantitative spatial analysis, machine learning, and other technologies to achieve higher efficiency and accuracy in urban research, assisting people in making reasonable decisions.

Keywords: big data, domain model, urban planning, urban quantitative analysis, machine learning, workflow design

Procedia PDF Downloads 142
108 The LNG Paradox: The Role of Gas in the Energy Transition

Authors: Ira Joseph

Abstract:

The LNG paradox addresses how the most expensive form of gas supply, LNG, can grow in the end-user market where demand is most competitive: power generation. LNG demand growth is under siege from two entirely different directions. At one end is price: it will be extremely difficult for gas to replace coal in Asia due to the low price of coal and the age of the generation fleet. Asia's coal fleet is, on average, less than two decades old and will need significant financial incentives to retire before its stated lifespan. While gas would cut emissions roughly in half relative to coal, it would also more than double the price of the fuel source for power generation, which puts it in a precarious position. In most Asian countries other than China, this cost increase, particularly for imports, is simply not realistic when economic growth and social welfare must also remain priorities. At the other end, renewables are growing at an exponential rate for three reasons: prices are dropping; policy incentives are driving deployment; and China is pushing renewables infrastructure into the market to take a political seat at the global energy table alongside Saudi Arabia, the US, and Russia. Moreover, more renewables will slow China's import growth of oil and gas, if not end it altogether. Renewables are the predator at the gate of gas demand in power generation, and with every year that passes, they cut into demand growth projections for gas, in particular its most expensive form, LNG. Gas does have a role in the future, particularly within domestic markets; once it crosses borders as LNG or even pipeline gas, it quickly becomes a premium fuel and must be marketed and used as such. Our research shows that gas will be able to compete with batteries as an intermittency and storage tool, and it offers a way to harmonize with renewables as part of the energy transition. As a baseload fuel, however, the role of gas will be limited by cost once it needs to cross a border. Gas converted into blue or green hydrogen or ammonia is also an option for storage, depending on the location. While this role is much reduced from the primary baseload role to which gas once aspired, it still offers a credible option for decades to come.

Keywords: natural gas, LNG, demand, price, intermittency, storage, renewables

Procedia PDF Downloads 27
107 Causal Inference Engine between Continuous Emission Monitoring System Combined with Air Pollution Forecast Modeling

Authors: Yu-Wen Chen, Szu-Wei Huang, Chung-Hsiang Mu, Kelvin Cheng

Abstract:

This paper develops a data-driven model of the causality between the Continuous Emission Monitoring System (CEMS, operated by the Environmental Protection Administration, Taiwan) in industrial factories and the surrounding air quality. Compared to the heavy computational burden of traditional numerical models for regional weather and air pollution simulation, the lightweight burden of the proposed model allows hourly forecasting from current observations of weather, air pollution, and factory emissions. The observations include wind speed, wind direction, relative humidity, temperature, and other variables, and can be collected in real time from the Open APIs of Civil IoT Taiwan, which draw on 439 weather stations, 10,193 qualitative air stations, 77 national quantitative stations, and 140 CEMS-equipped quantitative industrial factories. This study completed a causal inference engine that produces an air pollution forecast for the next 12 hours related to local industrial factories. The pollution forecasts are produced hourly at a grid resolution of 1 km x 1 km on the IIoTC (Industrial Internet of Things Cloud) and saved in netCDF4 format. The procedures used to generate the forecasts comprise data recalibration, outlier elimination, kriging interpolation, and particle tracking with random walk techniques for the mechanisms of diffusion and advection. The solution of these equations reveals the causality between factory emissions and the associated air pollution. Further, with the aid of installed real-time flue emission (Total Suspended Particulate, TSP) sensors and the forecasted air pollution map, this study also discloses the conversion mechanism between TSP and PM2.5/PM10 for different regions and industrial characteristics, based on long-term data observation and calibration. These time-series qualitative and quantitative data were combined to achieve a practicable causal inference engine in the cloud for factory management control. Once the forecasted air quality for a region is marked as harmful, the correlated factories are notified and asked to curtail their operations and reduce emissions in advance.
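The particle tracking with random walk step mentioned above can be sketched generically as follows; this is a minimal 1-D version with constant wind and diffusivity, since the engine's actual parameters and grid handling are not specified in the abstract:

```python
import math
import random

def random_walk_plume(n_particles, steps, dt, wind, diffusivity, seed=0):
    """1-D particle tracking for advection-diffusion: each step adds
    deterministic advection wind*dt plus a Gaussian diffusion kick
    with standard deviation sqrt(2*D*dt)."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * diffusivity * dt)
    xs = [0.0] * n_particles
    for _ in range(steps):
        xs = [x + wind * dt + rng.gauss(0.0, sigma) for x in xs]
    return xs

# After time t: mean displacement ~ wind*t, variance ~ 2*D*t
xs = random_walk_plume(n_particles=20000, steps=50, dt=1.0,
                       wind=1.0, diffusivity=0.5)
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
```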

Keywords: continuous emission monitoring system, total suspended particulates, causal inference, air pollution forecast, IoT

Procedia PDF Downloads 51
106 Leveraging on Application of Customer Relationship Management Strategy as Business Driving Force: A Case Study of Major Industries

Authors: Odunayo S. Faluse, Roger Telfer

Abstract:

Customer relationship management (CRM) is a business strategy centred on the idea that the customer is the driving force of any business, i.e., the customer is placed in a central position. However, this belief, coupled with the advancement of information technology over the past twenty years, has undergone a change. In any form of business today, customers are the modern dictators to whom industry constantly adjusts its operations, owing to the increased availability of information, intense market competition, and the ever-growing negotiating power of customers in the process of buying and selling. The most vital role of any organization is to satisfy or meet customers' needs and demands, which ultimately determines a customer's long-term value to the business. This paper therefore analyses and describes the application of operational CRM strategies in some major industries. Both established and up-and-coming companies nowadays value the quality of customer service and client loyalty; they recognize customers who are not very price-sensitive, and they realize that attracting new customers is more demanding and expensive than retaining existing ones. Research shows that several factors have contributed to the recent rise in the adoption of CRM strategies in the marketplace, such as a shift of attention in some organizations towards retaining existing customers rather than attracting new ones, the gathering of customer data through internal database systems and the acquisition of external syndicated data, and the exponential increase in technological intelligence. Despite these developments in business operations, CRM research in academia remains nascent; hence this paper gives a detailed critical analysis of recent advances in the use of CRM and of key research opportunities for using CRM implementation as a determinant of successful business optimization.

Keywords: agriculture, banking, business strategies, CRM, education, healthcare

Procedia PDF Downloads 197
105 Preliminary WRF SFIRE Simulations over Croatia during the Split Wildfire in July 2017

Authors: Ivana Čavlina Tomašević, Višnjica Vučetić, Maja Telišman Prtenjak, Barbara Malečić

Abstract:

The Split wildfire on the mid-Adriatic coast in July 2017 was one of the most severe wildfires in Croatian history, given its size and unexpected fire behavior, and it is used in this research as a case study for running the Weather Research and Forecasting Spread Fire (WRF SFIRE) model. This coupled fire-atmosphere model was successfully run for the first time for a Croatian wildfire. A detailed reconstruction of the Split wildfire made verification of the coupled simulations possible: precise information on ignition time and location, together with mapped fire progressions and spotting within the first 30 hours of the wildfire, was used both to initialize the simulations and to evaluate the model's ability to simulate the fire's propagation and final fire scar. The preliminary simulations were obtained using high-resolution vegetation and topography data for the fire area, additionally interpolated to the 33.3 m fire grid spacing. The results demonstrated that the WRF SFIRE model can work with real data from Croatia and produce adequate forecasts of fire spread. Since the model setup allows the energy fluxes between the fire and the atmosphere to be included or excluded, this capability was used to investigate possible fire-atmosphere interactions during the Split wildfire. Finally, the successfully coupled simulations provided the first numerical evidence that a wildfire in the Adriatic coastal region can modify the dynamical structure of the surrounding atmosphere, in agreement with observations from fire grounds. This study demonstrated that the WRF SFIRE model has potential for operational application in Croatia, with more accurate fire predictions achievable in the future by feeding higher-resolution input data into the model without interpolation. Possible uses for fire management in Croatia include predicting fire spread and intensity as weather conditions, available fuels and topography change; planning the effective and safe deployment of ground and aerial firefighting forces; preventing wildland-urban interface fires; and planning evacuation routes effectively. In addition, the results of this research demonstrate that the WRF SFIRE model is valuable for fire weather research and education, helping to better understand this hazardous phenomenon in Croatia.

Keywords: meteorology, agrometeorology, fire weather, wildfires, coupled fire-atmosphere model

Procedia PDF Downloads 43
104 An Overview of the Islamic Banking Development in the United Kingdom, Malaysia, Saudi Arabia, Iran, Nigeria, Kenya and Uganda

Authors: Pradeep Kulshrestha, Maulana Ayoub Ali

Abstract:

The penetration of Islamic banking products and services has grown at an exponential rate in many parts of the world. Many factors have contributed to this growth, including, but not limited to, the rapid growth in the number of Muslims who are uncomfortable with conventional ways of banking, the interest and high interest rates charged by conventional banks and financial institutions, and the financial inclusion campaigns conducted in many countries. The system faces legal challenges, which opens a research door for practitioners and academics seeking solutions to those challenges. This paper investigates the development of the Islamic banking system in the United Kingdom (UK), Saudi Arabia, Malaysia, Iran, Kenya, Nigeria and Uganda in order to understand the modalities that have been employed to run an Islamic banking system in these countries. The methodology employed in this research is doctrinal: legislation, policies and other legal instruments were carefully studied and analysed, and papers from academic journals, books and financial reports were examined in depth to enrich the paper and arrive at tangible results. The paper found that, in Asia, Malaysia has created the smoothest legal platform for an Islamic banking system to work properly. The United Kingdom has tried hard to accommodate the system without affecting conventional banking methods and without favouring the operations of Islamic banks; it also strives to make the UK an Islamic banking and finance hub in Europe. The entire banking system in Iran is Islamic, while Nigeria has undergone several legal reforms to suit the Islamic banking system. Kenya and Uganda are at different stages in making the Islamic banking system work alongside conventional banking.

Keywords: shariah, Islamic banking, law, alternative banking

Procedia PDF Downloads 110
103 A Levelized Cost Analysis for Solar Energy Powered Sea Water Desalination in the Arabian Gulf Region

Authors: Abdullah Kaya, Muammer Koc

Abstract:

A levelized cost analysis of solar-powered seawater desalination in the Emirate of Abu Dhabi is conducted to show that clean and renewable desalination is economically viable. The Emirate relies heavily on seawater desalination for its freshwater needs due to its limited freshwater resources. This reliance is expected to increase further due to the growing population and economic activity, the rapid decline of limited freshwater reserves, and the aggravating effects of climate change. Seawater desalination in Abu Dhabi is currently performed through thermal desalination technologies such as multi-stage flash (MSF) and multi-effect distillation (MED), which are coupled with thermal power plants in a configuration known as co-generation. Our analysis indicates that these thermal desalination methods are energy-inefficient and harmful to the environment due to CO₂ emissions and other dangerous byproducts. Utilization of clean and renewable desalination options has therefore become a must for the Emirate's transition to a sustainable future. The rapid decline in the cost of solar PV systems for energy production and of RO technology for desalination makes the combination of these two an ideal option for sustainable desalination in the Emirate of Abu Dhabi. A levelized cost analysis for water produced by a solar PV + RO system indicates that Abu Dhabi is well positioned to adopt this technological combination for cheap and clean desalination in the coming years. The capital cost of a solar-PV-powered RO system has the potential to go as low as 101 million US$ (1,111 $/m³) in the best case, considering recent technological developments. The levelized cost of water (LCW) ranges from 0.34 $/m³ in the baseline case to 0.27 $/m³ in the best case. Even the highly conservative case yields an LCW lower than those of all thermal desalination methods currently employed in the Emirate. Exponential cost decreases in both the solar PV and RO sectors, along with increasing economies of scale globally, signal that cheap and clean desalination can be achieved by combining these technologies.
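The levelized cost of water behind figures like these can be sketched with the standard capital-recovery formula; the discount rate, lifetime, O&M cost, and plant capacity below are illustrative assumptions, not the paper's inputs:

```python
def capital_recovery_factor(rate, years):
    """CRF = r(1+r)^n / ((1+r)^n - 1): annualizes an upfront capex."""
    growth = (1.0 + rate) ** years
    return rate * growth / (growth - 1.0)

def levelized_cost_of_water(capex, annual_opex, annual_m3, rate, years):
    """LCW [$/m3] = (annualized capex + annual O&M) / annual production."""
    return (capital_recovery_factor(rate, years) * capex
            + annual_opex) / annual_m3

# Illustrative: 101 M$ plant, 3 M$/yr O&M, 90,000 m3/day, 6%, 25 years
lcw = levelized_cost_of_water(capex=101e6, annual_opex=3e6,
                              annual_m3=90000 * 365, rate=0.06, years=25)
```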

Keywords: solar PV, RO desalination, sustainable desalination, levelized cost analysis, Emirate of Abu Dhabi

Procedia PDF Downloads 128
102 Zinc Oxide Nanorods Decorated Nanofibers Based Flexible Electrodes for Capacitive Energy Storage Applications

Authors: Syed Kamran Sami, Saqib Siddiqui

Abstract:

In recent times, flexible supercapacitors that retain high electrochemical performance and stability along with mechanical endurance have become a focus of attention, driven by the exponential progress and innovation in energy storage devices. To meet the rapidly increasing demand for energy storage devices with small form factors, a unique, low-cost, high-performance supercapacitor with considerably higher capacitance and mechanical robustness is required to realize real-life applications. Here, we report a synthesis route for electrode materials with low rigidity and high charge-storage performance based on a 1D-1D hybrid structure of zinc oxide (ZnO) nanorods and conductive-polymer-coated poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)) electrospun nanofibers. The ZnO nanorods were uniformly grown by hydrothermal growth on poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS)-coated P(VDF-TrFE) nanofibers to fabricate lightweight, permeable supercapacitor electrodes. The PEDOT:PSS-coated P(VDF-TrFE) porous nanofiber web acts as a high-surface-area framework, and the incorporation of ZnO nanorods further boosts the specific capacitance by 59%. A symmetric device using the fabricated 1D-1D hybrid electrodes shows a fairly high areal capacitance of 1.22 mF/cm² at a current density of 0.1 mA/cm², with a power density of more than 1600 W/kg. Moreover, the fabricated electrodes show exceptional flexibility and high endurance, with 90% and 76% specific capacitance retention after 1,000 and 5,000 cycles, respectively, signifying remarkable mechanical durability and long-term stability. These properties make the fabricated electrode well suited for flexible, low-form-factor energy storage devices.

Keywords: ZnO nanorods, electrospinning, mechanical endurance, flexible supercapacitor

Procedia PDF Downloads 245
101 A Framework of Virtualized Software Controller for Smart Manufacturing

Authors: Pin Xiu Chen, Shang Liang Chen

Abstract:

A virtualized software controller is developed in this research to replace traditional hardware control units. This virtualized software controller transfers motion interpolation calculations from the motion control units of end devices to edge computing platforms, thereby reducing the end devices' computational load and hardware requirements and making maintenance and updates easier. The study also applies the concept of microservices, dividing the control system into several small functional modules that are then deployed on a cloud data server. This reduces the interdependency among modules and enhances the overall system's flexibility and scalability. Finally, with containerization technology, the system can be deployed and started in a matter of seconds, which is more efficient than traditional virtual machine deployment methods. Furthermore, this virtualized software controller communicates with end control devices via wireless networks, making the placement of production equipment or the redesign of processes more flexible and no longer limited by physical wiring. To handle the large data flow and maintain low-latency transmission, this study integrates 5G technology, fully utilizing its high speed, wide bandwidth, and low latency to achieve rapid and stable remote machine control. An experimental setup is designed to verify the feasibility and test the performance of this framework. This study designs a smart manufacturing site with a 5G communication architecture, serving as a field for experimental data collection and performance testing. The smart manufacturing site includes one robotic arm, three Computer Numerical Control machine tools, several Input/Output ports, and an edge computing architecture. All machinery information is uploaded to edge computing servers and cloud servers via 5G communication and the Internet of Things framework. After analysis and computation, this information is converted into motion control commands, which are transmitted back to the relevant machinery for motion control through 5G communication. The communication time intervals at each stage are calculated using the C++ chrono library to measure the time difference for each command transmission. The relevant test results are organized and presented in the full text.
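The per-stage latency measurement can be sketched as follows. The study uses the C++ chrono library; this Python analogue with `time.perf_counter` (the function and stage names are illustrative, not from the study) shows the same idea of timestamping each step of the command pipeline:

```python
import time

def timed_stage(fn, *args):
    """Run one pipeline stage and return (result, elapsed seconds),
    analogous to differencing two chrono time points in C++."""
    t0 = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - t0

def interpolate_motion(target):
    """Placeholder for edge-side motion interpolation: split one target
    position into ten intermediate setpoints."""
    return [target * k / 10 for k in range(1, 11)]

setpoints, elapsed = timed_stage(interpolate_motion, 5.0)
```

Summing such per-stage intervals over upload, computation, and command return would reproduce the kind of end-to-end latency figure the experiment measures.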

Keywords: 5G, MEC, microservices, virtualized software controller, smart manufacturing

Procedia PDF Downloads 24
100 Attracting the North Holidaymaker to Ireland Using Social Media Channels: An Irish Marketing Strategy

Authors: Colm Barcoe, Garvan Whelan

Abstract:

In tourism, engagement has been found to boost awareness of a destination and subsequently increase visits. Customer engagement in this industry is now facilitated by social media. This phenomenon is not well researched in relation to Ireland and the North American tourism market. The objective of this paper is to present research findings on two related topics. The first is an investigation into the effectiveness of social media channels as components of a digital marketing campaign when promoting Ireland as a brand in North America. Secondly, this study reveals how Irish marketers have embraced social media platforms and channels with an innovative strategy that has successfully attracted growing numbers of US and Canadian holidaymakers to Ireland. A range of methodological approaches was applied in order to achieve the study's objective. The methods used were both quantitative and qualitative, and the data were obtained from both Irish marketers and North American holidaymakers. Surveys of these holidaymakers in the pre-trip, during-trip, and post-trip phases revealed their attitudes towards social media and Ireland as a destination. Semi-structured interviews with those responsible for implementing relationship marketing strategies for this segment provide insight into the effectiveness of social media when used to capitalise on the cultural link between Ireland and North America. Further analysis involved using NVivo 11+ software to investigate the activities of the Irish destination marketing organisation (DMO) and the engagement of the US and Canadian audiences through a detailed study of social media platform content. The findings from this investigation extend an under-researched body of literature pertaining to Ireland as a destination and the successful digital marketing campaigns that have achieved exponential growth in this sector over the past five years. The empirical evidence presented also illustrates how the innovative use of social media has assisted the DMO in engaging with the North American holidaymaker as part of an effective digital marketing strategy.

Keywords: channels, digital, engagement, marketing, strategies

Procedia PDF Downloads 119
99 Spatial Mapping of Variations in Groundwater of Taluka Islamkot Thar Using GIS and Field Data

Authors: Imran Aziz Tunio

Abstract:

Islamkot is an underdeveloped sub-district (taluka) in the Tharparkar district of Sindh province, Pakistan, located between latitude 24°25'19.79"N to 24°47'59.92"N and longitude 70°1'13.95"E to 70°32'15.11"E. Islamkot has an arid desert climate, and the region is generally devoid of perennial rivers, canals, and streams. It is highly dependent on rainfall, which is not considered a reliable surface water source, so groundwater has been the only key source of water for many centuries. To assess groundwater potential, an electrical resistivity survey (ERS) was conducted in Islamkot Taluka. Groundwater investigations comprising 128 Vertical Electrical Soundings (VES) were collected to determine the groundwater potential and obtain qualitative and quantitative layered resistivity parameters. A PASI Model 16 GL-N Resistivity Meter was used with a Schlumberger electrode configuration, with half current electrode spacing (AB/2) ranging from 1.5 to 100 m and potential electrode spacing (MN/2) from 0.5 to 10 m. The data were acquired with a maximum current electrode spacing of 200 m. Data processing for the delineation of dune sand aquifers involved data inversion, and interpretation of the inversion results was aided by forward modeling. The measured geo-electrical parameters were examined with Interpex IX1D software, and apparent resistivity curves and synthetic model layered parameters were mapped in the ArcGIS environment using the Inverse Distance Weighting (IDW) interpolation technique. Qualitative interpretation of the VES data shows that the number of geo-electrical layers in the area varies from three to four, with different resistivity values detected. Of the 128 VES model curves, 42 are three-layered and 86 are four-layered. The resistivity of the first subsurface layer (loose surface sand) varied from 16.13 Ωm to 3353.3 Ωm and its thickness from 0.046 m to 17.52 m. The resistivity of the second subsurface layer (semi-consolidated sand) varied from 1.10 Ωm to 7442.8 Ωm and its thickness from 0.30 m to 56.27 m. The resistivity of the third subsurface layer (consolidated sand) varied from 0.00001 Ωm to 3190.8 Ωm and its thickness from 3.26 m to 86.66 m. The resistivity of the fourth subsurface layer (silt and clay) varied from 0.0013 Ωm to 16264 Ωm and its thickness from 13.50 m to 87.68 m. The Dar Zarrouk parameters were: longitudinal unit conductance S from 0.00024 to 19.91 mho; transverse unit resistance T from 7.34 to 40080.63 Ωm²; longitudinal resistivity RS from 1.22 to 3137.10 Ωm; and transverse resistivity RT from 5.84 to 3138.54 Ωm. The ERS data and Dar Zarrouk parameters were mapped, revealing that the study area has groundwater potential in the subsurface.
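The Dar Zarrouk parameters quoted above follow directly from each layer's resistivity and thickness; a minimal sketch of the standard definitions (the three-layer values below are illustrative, not taken from the survey):

```python
def dar_zarrouk(resistivities, thicknesses):
    """Compute Dar Zarrouk parameters for a layered earth model.

    S  = sum(h_i / rho_i)   longitudinal unit conductance (mho)
    T  = sum(h_i * rho_i)   transverse unit resistance (ohm·m^2)
    RS = H / S              longitudinal resistivity (ohm·m)
    RT = T / H              transverse resistivity (ohm·m)
    """
    H = sum(thicknesses)
    S = sum(h / r for h, r in zip(thicknesses, resistivities))
    T = sum(h * r for h, r in zip(thicknesses, resistivities))
    return {"S": S, "T": T, "RS": H / S, "RT": T / H}

# Illustrative three-layer model: resistivities in ohm·m, thicknesses in m
p = dar_zarrouk([100.0, 20.0, 5.0], [2.0, 10.0, 30.0])
```

Note that RT is never smaller than RS; their ratio is a measure of the anisotropy of the layered section.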

Keywords: electrical resistivity survey, GIS & RS, groundwater potential, environmental assessment, VES

Procedia PDF Downloads 57
98 Synthesis and Characterization of Anti-Psychotic Drugs Based DNA Aptamers

Authors: Shringika Soni, Utkarsh Jain, Nidhi Chauhan

Abstract:

Aptamers are recently developed artificial oligonucleotides, typically ~80-100 bp long, that have demonstrated applications not only in therapeutics but also, extensively, in diagnostics and sensing to detect different biomarkers and drugs. Synthesizing aptamers against proteins or genomic templates is comparatively feasible in the laboratory, but aptamers against drugs or other small-molecule targets require careful specification, optimization, and validation. All selection, amplification, and characterization steps of the end product must be optimized, which is extremely time-consuming. Therefore, we performed asymmetric PCR (polymerase chain reaction) to synthesize a random oligonucleotide pool and then used it in systematic evolution of ligands by exponential enrichment (SELEX) to generate aptamers against anti-psychotic drugs. Anti-psychotic drugs are major tranquilizers used to control psychosis and support proper cognitive function. Despite their legitimate medical use, their misuse may lead to severe conditions such as addiction and can have harmful social and economic consequences, including crime. In this work, we applied the in-vitro SELEX method for ssDNA synthesis to generate aptamers against anti-psychotic drugs (here, the 'target'). The study was performed in three stages. The first stage comprised synthesis of the random oligonucleotide pool via asymmetric PCR, the product of which was analyzed by electrophoresis and purified for subsequent stages. The purified oligonucleotide pool was incubated in SELEX buffer, and partitioning was performed in the next stage to obtain target-specific aptamers. The isolated oligonucleotides were characterized and quantified after each round of partitioning, and significant results were obtained. After repetitive partitioning and amplification of the target-specific oligonucleotides, the final stage comprised sequencing of the end product. We can thus confirm the specific sequences for anti-psychotic drugs, which will subsequently be used in diagnostic applications in clinical and forensic settings.

Keywords: anti-psychotic drugs, aptamer, biosensor, ssDNA, SELEX

Procedia PDF Downloads 100
97 Control for Fluid Flow Behaviours of Viscous Fluids and Heat Transfer in Mini-Channel: A Case Study Using Numerical Simulation Method

Authors: Emmanuel Ophel Gilbert, Williams Speret

Abstract:

The control of fluid flow behaviour of viscous fluids and of heat transfer within a heated mini-channel is considered. Heat transfer and flow characteristics of different viscous liquids, such as engine oil, automatic transmission fluid, one-half ethylene glycol, and deionized water, were numerically analyzed. Mathematical tools such as Fourier series and Laplace Z-transforms were employed to ascertain the wave-like behaviour of each of these viscous fluids. The steady, laminar flow and heat transfer equations are solved with the aid of a numerical simulation technique. This numerical simulation technique is further validated against the available experimental values for the local thermal resistances. The roughness of the mini-channel, one of its physical limitations, was also predicted in this study; it affects the friction factor. When an additive such as tetracycline was introduced into the fluid, the heat input was lowered, and this had a proportional effect on the minor and major frictional losses, mostly at very small Reynolds numbers of about 60-80. At these low Reynolds numbers, the viscosity and the minor frictional losses decrease as the temperature of the viscous liquids increases. Three equations and models were identified that support the numerical simulation, via interpolation and integration of the variables extended to the walls of the mini-channel, and that offer strong reliability for engineering and technology calculations for turbulence-impacting jets in the near future. In seeking a governing equation that could support this control of the fluid flow, the Navier-Stokes equations were found to be pertinent to this finding, although other physical factors related to these equations must be checked to avoid unexpected turbulence of the fluid flow. This paradox is resolved within the framework of continuum mechanics using the classical slip condition and an iteration scheme, via a numerical simulation method that takes into account certain terms in the full Navier-Stokes equations; however, this entailed dropping certain assumptions from the approximation. Concrete questions raised in the main body of the work are examined further in the appendices.
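Since the key regime here is laminar flow at Reynolds numbers of about 60-80, a brief sketch of the quantities involved may help. The fluid properties below are illustrative (water-like), not the study's values, and f = 64/Re is the standard Darcy friction factor for fully developed laminar pipe flow:

```python
def reynolds(rho, v, d_h, mu):
    """Reynolds number Re = rho*v*d_h/mu for a channel of hydraulic
    diameter d_h (SI units throughout)."""
    return rho * v * d_h / mu

def darcy_friction_laminar(re):
    """Darcy friction factor for fully developed laminar flow, f = 64/Re,
    valid only while Re stays well below the transition range."""
    return 64.0 / re

# Illustrative water-like properties in a 1 mm mini-channel
re = reynolds(rho=998.0, v=0.07, d_h=1e-3, mu=1e-3)  # falls in the 60-80 range
f = darcy_friction_laminar(re)
```

The inverse dependence of f on Re makes visible why a temperature rise, by lowering viscosity and raising Re, reduces the frictional losses noted in the abstract.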

Keywords: frictional losses, heat transfer, laminar flow, mini-channel, numerical simulation, Reynolds number, turbulence, viscous fluids

Procedia PDF Downloads 141
96 Spatial Data Science for Data Driven Urban Planning: The Youth Economic Discomfort Index for Rome

Authors: Iacopo Testi, Diego Pajarito, Nicoletta Roberto, Carmen Greco

Abstract:

Today, a consistent segment of the world's population lives in urban areas, and this proportion will vastly increase in the coming decades. Therefore, understanding the key trends in urbanization likely to unfold over the coming years is crucial to the implementation of sustainable urban strategies. In parallel, the daily amount of digital data produced will expand at an exponential rate over the following years. The analysis of various types of data sets and their derived applications has incredible potential across crucial sectors such as healthcare, housing, transportation, energy, and education. Nevertheless, in city development, architects and urban planners appear to rely mostly on traditional and analogue techniques of data collection. This paper investigates the potential of the data science field, which appears to be a formidable resource for assisting city managers in identifying strategies to enhance the social, economic, and environmental sustainability of our urban areas. The collection of new layers of information would greatly enhance planners' capability to comprehend urban phenomena in more depth, such as gentrification, land use definition, mobility, or critical infrastructural issues. Specifically, the research correlates economic, commercial, demographic, and housing data with the purpose of defining a youth economic discomfort index. This statistical composite index provides insights into the economic disadvantage of citizens aged between 18 and 29 years, and the results clearly show that central urban zones are more disadvantaged than peripheral ones. The experimental setup selected the city of Rome as the testing ground of the whole investigation. The methodology applies statistical and spatial analysis to construct a composite index supporting informed data-driven decisions for urban planning.
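One common recipe for a composite index of this kind is to z-score each indicator across zones and average the standardized values. A minimal sketch with equal weights and made-up indicators (the real index's variables, weights, and zone values are not specified here):

```python
from statistics import mean, pstdev

def composite_index(zones):
    """Standardize each indicator (z-score) across zones, then average
    into one composite score per zone (equal weights assumed)."""
    names = list(zones)
    indicators = list(zip(*zones.values()))  # one tuple per indicator
    z_cols = []
    for col in indicators:
        mu, sd = mean(col), pstdev(col)
        z_cols.append([(x - mu) / sd for x in col])
    return {n: mean(z[i] for z in z_cols) for i, n in enumerate(names)}

# Illustrative zones, three indicators each (e.g. youth unemployment rate,
# rent burden, income gap) -- higher score = more discomfort
scores = composite_index({"centre":    (0.30, 0.45, 0.5),
                          "semi":      (0.20, 0.30, 0.3),
                          "periphery": (0.10, 0.20, 0.2)})
```

By construction the scores average to zero across zones, so the sign of a zone's score directly marks above- or below-average discomfort.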

Keywords: data science, spatial analysis, composite index, Rome, urban planning, youth economic discomfort index

Procedia PDF Downloads 100
95 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation, and open pit mine designs. However, design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a 'reasonable' PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure, and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgement, interpretation, and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model that can then be analysed. Most factors are estimated 'approximately' or with allowances for some variability rather than 'exactly'. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user's discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods, yielding markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
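The Monte-Carlo approach mentioned above can be illustrated in a few lines: sample the uncertain inputs from assumed distributions, evaluate a factor-of-safety model for each realization, and report the fraction below 1. The FS model and input distributions below are deliberately toy stand-ins, not the case study's geotechnical model:

```python
import random

def probability_of_failure(fs_model, n=20000, seed=1):
    """Monte-Carlo PF: sample uncertain inputs, compute FS for each
    realization, and return the fraction of realizations with FS < 1."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        # Illustrative input distributions (not from the case study):
        cohesion = rng.gauss(50.0, 10.0)   # kPa
        friction = rng.gauss(35.0, 3.0)    # degrees
        failures += fs_model(cohesion, friction) < 1.0
    return failures / n

def fs_model(c, phi):
    """Toy linear FS stand-in; a real analysis would call a
    limit-equilibrium or numerical slope model here."""
    return 0.012 * c + 0.012 * phi

pf = probability_of_failure(fs_model)
```

The same mean FS can yield very different PF values depending on the spread assigned to the inputs, which is precisely why the choice of deterministic versus probabilistic treatment matters.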

Keywords: probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability

Procedia PDF Downloads 182
94 Nonstationary Modeling of Extreme Precipitation in the Wei River Basin, China

Authors: Yiyuan Tao

Abstract:

Under the impact of global warming together with the intensification of human activities, hydrological regimes may be altered, and the traditional assumption of stationarity is no longer satisfied. However, most current design standards for water infrastructure are still based on the hypothesis of stationarity, which may inevitably result in severe biases. Many critical impacts of climate on ecosystems, society, and the economy are controlled by extreme events rather than mean values. Therefore, it is of great significance to identify non-stationarity in precipitation extremes and to model them in a nonstationary framework. The Wei River Basin (WRB), located in a continental monsoon climate zone in China, is selected as a case study. Six extreme precipitation indices were employed to investigate the changing patterns and stationarity of precipitation extremes in the WRB. To identify whether the precipitation extremes are stationary, the Mann-Kendall trend test and the Pettitt test, which examines the occurrence of abrupt changes, are adopted. The extreme precipitation index series are fitted with non-stationary distributions selected from six widely used distribution functions: the Gumbel, lognormal, Weibull, gamma, generalized gamma, and exponential distributions, by means of the time-varying-moments framework of generalized additive models for location, scale and shape (GAMLSS), in which the distribution parameters are defined as functions of time. The results indicate that: (1) the trends were not significant for the whole WRB, but significant positive/negative trends were still observed at some stations; abrupt changes for consecutive wet days (CWD) mainly occurred in 1985; and the assumption of stationarity is invalid for some stations; (2) for the nonstationary extreme precipitation index series with significant positive/negative trends, the GAMLSS models capture the temporal variations of the indices well and perform better than the stationary model. Finally, the differences between the quantiles of the nonstationary and stationary models are analyzed, which highlights the importance of nonstationary modeling of precipitation extremes in the WRB.
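The essence of the time-varying-moments approach can be sketched by letting a Gumbel location parameter depend linearly on time and comparing the fit against a stationary model by AIC. This is a simplified stand-in for GAMLSS; the synthetic data, Nelder-Mead optimizer, and linear form of the trend are all assumptions of the sketch:

```python
import numpy as np
from scipy.optimize import minimize

def gumbel_nll(params, x, t, nonstationary):
    """Negative log-likelihood of a Gumbel (max) distribution whose
    location may vary linearly with time."""
    if nonstationary:
        a, b, log_beta = params
        mu = a + b * t
    else:
        a, log_beta = params
        mu = a
    beta = np.exp(log_beta)          # log-parameterized to keep beta > 0
    z = (x - mu) / beta
    return np.sum(log_beta + z + np.exp(-z))

rng = np.random.default_rng(0)
t = np.arange(50.0)
x = rng.gumbel(loc=20 + 0.3 * t, scale=5.0)   # synthetic trending extremes

opts = {"maxiter": 2000, "xatol": 1e-6, "fatol": 1e-6}
stat = minimize(gumbel_nll, [25.0, 1.0], args=(x, t, False),
                method="Nelder-Mead", options=opts)
nonstat = minimize(gumbel_nll, [20.0, 0.0, 1.0], args=(x, t, True),
                   method="Nelder-Mead", options=opts)
aic_stat = 2 * 2 + 2 * stat.fun       # 2 parameters
aic_nonstat = 2 * 3 + 2 * nonstat.fun # 3 parameters
```

With a genuine trend in the data, the nonstationary model's lower AIC justifies the extra parameter, mirroring the paper's finding that GAMLSS outperforms the stationary fit at trending stations.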

Keywords: extreme precipitation, GAMLSS, non-stationary, Wei River Basin

Procedia PDF Downloads 90
93 Real Estate Trend Prediction with Artificial Intelligence Techniques

Authors: Sophia Liang Zhou

Abstract:

For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategy. Previous studies are contradictory about the macroeconomic determinants of housing prices and have largely focused on one or two areas using point prediction. This study aims to develop data-driven models to accurately predict future housing market trends in different markets. This work studied five metropolitan areas representing different market trends and compared three time-lag settings: no lag, 6-month lag, and 12-month lag. Linear regression (LR), random forest (RF), and artificial neural network (ANN) models were employed to model real estate prices using datasets comprising the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, and personal income, for five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, the FBI, and Freddie Mac. In the original data, some factors are reported monthly, some quarterly, and some yearly. Thus, two methods for imputing missing values, backfill and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF's inherent limitations. Both the ANN and LR methods generated predictive models with high accuracy (> 95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It was also found that the technique used to impute missing values and the implementation of a time lag can significantly influence model performance and warrant further investigation. The best performing models varied for each area, but the backfilled 12-month-lag LR models and the interpolated no-lag ANN models showed the most stable performance overall, with accuracies > 95% for each city. This study reveals the influence of input variables in different markets. It also provides evidence to support future studies in identifying the optimal time lag and data imputation methods for establishing accurate predictive models.
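The two imputation choices compared in the study behave quite differently on, say, a quarterly series used at monthly resolution. A small pandas illustration with toy numbers (not the study's data):

```python
import pandas as pd

# Quarterly GDP reported only every third month; two ways to fill the gaps
# before feeding monthly models:
gdp = pd.Series([100.0, None, None, 103.0, None, None, 106.0])

backfilled = gdp.bfill()           # each gap takes the next reported value
interpolated = gdp.interpolate()   # linear ramp between reported values
```

Backfill effectively shifts future information into earlier months, while interpolation assumes a smooth transition; either assumption can help or hurt depending on how the series actually evolves, which is why the paper finds the choice interacts with the time lag.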

Keywords: linear regression, random forest, artificial neural network, real estate price prediction

Procedia PDF Downloads 67
92 Bacteriophage Lysis Of Physiologically Stressed Listeria Monocytogenes In A Simulated Seafood Processing Environment

Authors: Geevika J. Ganegama Arachchi, Steve H. Flint, Lynn McIntyre, Cristina D. Cruz, Beatrice M. Dias-Wanigasekera, Craig Billington, J. Andrew Hudson, Anthony N. Mutukumira

Abstract:

In seafood processing plants, Listeria monocytogenes (L. monocytogenes) likely exists in a metabolically stressed state due to the nutrient-deficient environment; processing treatments such as heating, curing, drying, and freezing; and exposure to detergents and disinfectants. Stressed L. monocytogenes cells have been shown to be as pathogenic as unstressed cells. This study investigated the lytic efficacy of three phages (LiMN4L, LiMN4p, and LiMN17), previously characterized as virulent, against physiologically stressed cells of three seafood-borne L. monocytogenes strains (19CO9, 19DO3, and 19EO3). Physiologically compromised cells of the L. monocytogenes strains were prepared by aging cultures in Trypticase Soy Broth at 15±1°C for 72 h; heat-injuring cultures at 54±1-55±1°C for 40-60 min; starving cultures in Milli-Q water at 25±1°C in darkness for three weeks; and salt-stressing cultures in 9% (w/v) NaCl at 15±1°C for 72 h. Low concentrations of the physiologically compromised cells of the three L. monocytogenes strains were challenged in vitro with high titres of the three phages in separate experiments using Fish Broth medium (aqueous fish extract) at 15°C in order to mimic the environment of a seafood processing plant. Each phage, when present at ≈9 log10 PFU/ml, reduced late-exponential-phase cells of L. monocytogenes suspended in fish protein broth at ≈2-3 log10 CFU/ml to a non-detectable level (< 10 CFU/ml). Each phage, when present at ≈8.5 log10 PFU/ml, reduced both heat-injured cells present at 2.5-3.6 log10 CFU/ml and starved cells, which showed a coccoid shape, present at ≈2-3 log10 CFU/ml to < 10 CFU/ml after 30 min. The phages also reduced salt-stressed cells present at ≈3 log10 CFU/ml by > 2 log10. L. monocytogenes (≈8 log10 CFU/ml) was reduced to below the detection limit (1 CFU/ml) by three successive phage infections over 16 h, indicating that the emergence of spontaneous phage resistance was infrequent. The three virulent phages showed high decontamination potential for physiologically stressed L. monocytogenes strains in seafood processing environments.

Keywords: physiologically stressed L. monocytogenes, heat injured, seafood processing environment, virulent phage

Procedia PDF Downloads 104
91 AS-Geo: Arbitrary-Sized Image Geolocalization with Learnable Geometric Enhancement Resizer

Authors: Huayuan Lu, Chunfang Yang, Ma Zhu, Baojun Qi, Yaqiong Qiao, Jiangqian Xu

Abstract:

Image geolocalization has great application prospects in fields such as autonomous driving and virtual/augmented reality. In practical application scenarios, the size of the image to be localized is not fixed, and it is impractical to train different networks for all possible sizes. When the image size does not match the input size of the descriptor extraction model, existing image geolocalization methods usually directly scale or crop the image in common ways. This results in the loss of information important to the geolocalization task, thus degrading performance. For example, excessive down-sampling can blur building contours, and inappropriate cropping can discard key semantic elements, leading to incorrect geolocation results. To address this problem, this paper designs a learnable image resizer and proposes an arbitrary-sized image geolocalization method. (1) The designed learnable image resizer employs the self-attention mechanism to enhance the geometric features of the resized image. First, it applies bilinear interpolation to the input image and its feature maps to obtain the initial resized image and the resized feature maps. Then, SKNet (selective kernel network) is used to approximate the best receptive field, thereby preserving the geometric shapes of the original image, and SENet (squeeze-and-excitation network) is used to automatically select the feature maps with strong contour information, enhancing the geometric features. Finally, the enhanced geometric features are fused with the initial resized image to obtain the final resized image. (2) The proposed image geolocalization method embeds the above image resizer as a front layer of the descriptor extraction network. It not only enables the network to be compatible with arbitrary-sized input images but also enhances the geometric features that are crucial to the image geolocalization task. Moreover, a triplet attention mechanism is added after the first convolutional layer of the backbone network to optimize the use of the geometric elements extracted there. Finally, the local features extracted by the backbone network are aggregated to form image descriptors for geolocalization. The proposed method was evaluated on several mainstream datasets, such as Pittsburgh30K, Tokyo24/7, and Places365. The results show that the proposed method has excellent size compatibility and compares favorably with recent mainstream geolocalization methods.
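Bilinear interpolation, the first operation the proposed resizer applies, can be sketched in plain Python (a minimal single-channel 2-D version; the actual method operates on multi-channel images and feature maps):

```python
def bilinear_resize(img, new_h, new_w):
    """Resize a 2-D grid of floats with bilinear interpolation."""
    h, w = len(img), len(img[0])
    out = [[0.0] * new_w for _ in range(new_h)]
    for i in range(new_h):
        for j in range(new_w):
            # map each output pixel back to fractional source coordinates
            y = i * (h - 1) / (new_h - 1) if new_h > 1 else 0.0
            x = j * (w - 1) / (new_w - 1) if new_w > 1 else 0.0
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            # weighted average of the four surrounding source pixels
            out[i][j] = (img[y0][x0] * (1 - dy) * (1 - dx)
                         + img[y0][x1] * (1 - dy) * dx
                         + img[y1][x0] * dy * (1 - dx)
                         + img[y1][x1] * dy * dx)
    return out

small = [[0.0, 10.0], [20.0, 30.0]]
big = bilinear_resize(small, 3, 3)   # new centre pixel is the 4-neighbour mean
```

Because every output pixel is a smooth blend of neighbours, strong down-sampling averages away sharp contours, which is exactly the information loss the learnable resizer's subsequent enhancement stages try to recover.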

Keywords: image geolocalization, self-attention mechanism, image resizer, geometric feature

Procedia PDF Downloads 167
90 Photovoltaic Solar Energy in Public Buildings: A Showcase for Society

Authors: Eliane Ferreira da Silva

Abstract:

This paper aims to mobilize and sensitize public administration leaders to good practices and to encourage investment in photovoltaic (PV) systems in Brazil. It presents a case study methodology for dimensioning PV systems on the roofs of the public buildings of the Esplanade of Ministries in Brasilia, the country's capital, with predefined resources, starting from the Sustainable Esplanade Project (SEP) and the exponential growth of photovoltaic solar energy in the world, and drawing a comparison with the solar power plant of the Ministry of Mines and Energy (MME), active since 6/10/2016. To do so, it was necessary to evaluate the energy efficiency of the buildings from January 2016 to April 2017 (16 months), identifying opportunities to reduce electricity expenses through adjustment of the contracted demand, the tariff framework, and correction of existing active energy. The instrument used to collect data on electricity bills was the e-SIC citizen information system. The study considered, in addition to the technical and operational aspects, the historical, cultural, architectural, and climatic aspects, involving several actors. Having identified the expense reductions, the study addressed the following aspects: Case 1) the economic feasibility of replacing conventional lamps with LED lamps, and Case 2) the economic feasibility of implementing a grid-connected photovoltaic solar system. For Case 2, PV*SOL Premium software was used to simulate several possible configurations of photovoltaic panels, analyzing the best performance according to local characteristics such as solar orientation, latitude, and annual average solar radiation. A simulation of an ideal photovoltaic solar system was made, with due calculation of its yield, to provide compensation for the energy expenditure of the building, or part of it, through the use of the alternative source in question. The study develops a methodology for public administration, as a major consumer of electricity, to act responsibly, with oversight and incentives, in reducing energy waste and consequently greenhouse gas emissions.
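A first-pass estimate of the array size that such a simulation then refines can be written in one line: the daily energy to offset divided by the site's peak-sun-hours times a system performance ratio. All numbers below are illustrative assumptions, not results from the study:

```python
def pv_array_size_kwp(daily_load_kwh, peak_sun_hours, performance_ratio=0.75):
    """First-pass PV array sizing: energy to offset divided by
    peak-sun-hours and a performance ratio accounting for system losses.
    A tool such as PV*SOL refines this with orientation, shading, and
    temperature effects."""
    return daily_load_kwh / (peak_sun_hours * performance_ratio)

# Assumed figures: a building consuming 400 kWh/day at a site with
# roughly 5 peak sun hours per day
size = pv_array_size_kwp(daily_load_kwh=400.0, peak_sun_hours=5.0)
```

The estimate scales linearly with the load, so the efficiency measures the study identifies first (LED retrofits, demand adjustment) directly shrink the array, and the investment, that the PV case then has to justify.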

Keywords: energy efficiency, esplanade of ministries, photovoltaic solar energy, public buildings, sustainable building

Procedia PDF Downloads 99
89 Parameter Estimation of Gumbel Distribution with Maximum-Likelihood Based on Broyden Fletcher Goldfarb Shanno Quasi-Newton

Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani

Abstract:

Extreme data on an observation can occur due to unusual circumstances in the observation. The data can provide important information that can’t be provided by other data so that its existence needs to be further investigated. The method for obtaining extreme data is one of them using maxima block method. The distribution of extreme data sets taken with the maxima block method is called the distribution of extreme values. Distribution of extreme values is Gumbel distribution with two parameters. The parameter estimation of Gumbel distribution with maximum likelihood method (ML) is difficult to determine its exact value so that it is necessary to solve the approach. The purpose of this study was to determine the parameter estimation of Gumbel distribution with quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method used for nonlinear function optimization without constraint so that the method can be used for parameter estimation from Gumbel distribution whose distribution function is in the form of exponential doubel function. The quasi-New BFGS method is a development of the Newton method. The Newton method uses the second derivative to calculate the parameter value changes on each iteration. Newton's method is then modified with the addition of a step length to provide a guarantee of convergence when the second derivative requires complex calculations. In the quasi-Newton BFGS method, Newton's method is modified by updating both derivatives on each iteration. The parameter estimation of the Gumbel distribution by a numerical approach using the quasi-Newton BFGS method is done by calculating the parameter values that make the distribution function maximum. In this method, we need gradient vector and hessian matrix. This research is a theory research and application by studying several journals and textbooks. The results of this study obtained the quasi-Newton BFGS algorithm and estimation of Gumbel distribution parameters. 
The estimation method is then applied to daily rainfall data from Purworejo District to estimate the distribution parameters. The estimates indicate that the intensity of the high rainfall events in Purworejo District has decreased and that the range of observed rainfall has narrowed.
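The estimation procedure described above can be sketched in a few lines. This is an illustrative example, not the authors' implementation: it fits the two-parameter Gumbel distribution by minimising the negative log-likelihood with a quasi-Newton BFGS optimiser (here SciPy's), using simulated block maxima as a stand-in for the rainfall data.

```python
import numpy as np
from scipy.optimize import minimize

def gumbel_nll(params, data):
    """Negative log-likelihood of the two-parameter Gumbel (maxima)
    distribution with location mu and scale beta."""
    mu, beta = params
    if beta <= 0:
        return np.inf  # keep the optimiser out of the invalid region
    z = (data - mu) / beta
    return data.size * np.log(beta) + np.sum(z + np.exp(-z))

# Simulated block maxima standing in for the daily-rainfall maxima.
rng = np.random.default_rng(42)
sample = rng.gumbel(loc=50.0, scale=10.0, size=1000)

# Moment-based starting values, then quasi-Newton BFGS optimisation.
beta0 = np.sqrt(6.0) * sample.std() / np.pi
mu0 = sample.mean() - 0.5772 * beta0
fit = minimize(gumbel_nll, x0=[mu0, beta0], args=(sample,), method="BFGS")
mu_hat, beta_hat = fit.x
```

SciPy's BFGS approximates the gradient numerically here; the paper's algorithm instead supplies the gradient vector and an updated Hessian approximation explicitly, but the fitted parameters should agree.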

Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton

Procedia PDF Downloads 293
88 Management of Acute Appendicitis with Preference on Delayed Primary Suturing of Surgical Incision

Authors: N. A. D. P. Niwunhella, W. G. R. C. K. Sirisena

Abstract:

Appendicitis is one of the most frequently encountered abdominal emergencies worldwide. Accurate clinical diagnosis and appendicectomy with minimal postoperative complications are therefore priorities. The aim of this study was to assess the overall management of acute appendicitis in Sri Lanka, with particular attention to delayed primary suturing of the surgical site, comparing the outcomes with other local and international data. Data were collected prospectively from 155 patients who underwent appendicectomy following clinical and radiological diagnosis with ultrasonography. Histological assessment was performed on all specimens. All perforated appendices were managed with delayed primary closure. Patients were followed up for 28 days to assess complications. The mean age at presentation was 27 years; the mean pre-operative waiting time following admission was 24 hours; the average hospital stay was 72 hours; the accuracy of the clinical diagnosis of appendicitis, as confirmed by histology, was 87.1%; the postoperative wound infection rate was 8.3%, of which 5% involved perforated appendices; 4 patients had postoperative complications managed without re-operation. No fistula formation or mortality was reported. The current study was compared with previously published data on the management of acute appendicitis in Sri Lanka versus the United Kingdom (UK). Diagnostic accuracy was equally high, but postoperative complications were significantly reduced (current study, 9.6%; previous Sri Lankan study, 16.4%; UK study, 14.1%). In recent years there has been an exponential rise in the use of computerised tomography (CT) imaging in the assessment of patients with acute appendicitis. Yet, even without CT, the diagnostic accuracy and treatment outcomes of acute appendicitis in this study matched those of other local studies as well as the UK data; CT use has therefore not significantly increased the diagnostic accuracy of acute appendicitis.
In particular, delayed primary closure may have reduced the postoperative wound infection rate for ruptured appendices; this approach is therefore suggested for further evaluation as a safe and effective practice in other hospitals worldwide.

Keywords: acute appendicitis, computerised tomography, diagnostic accuracy, delayed primary closure

Procedia PDF Downloads 114
87 Storms Dynamics in the Black Sea in the Context of the Climate Changes

Authors: Eugen Rusu

Abstract:

The objective of the work proposed is to analyse the wave conditions in the Black Sea basin, with a particular focus on the spatial and temporal occurrence and the dynamics of the most extreme storms in the context of climate change. A numerical modelling system based on the spectral phase-averaged wave model SWAN has been implemented and validated against both in situ measurements and remotely sensed data across the sea. Moreover, a successive correction method for the assimilation of satellite data, based on the optimal interpolation of those data, has been coupled with the wave modelling system. Previous studies show that data assimilation considerably improves the reliability of the results provided by the modelling system, especially in the cases most sensitive to the accuracy of the wave predictions, such as extreme storm situations. Following this numerical approach, the results provided by the wave modelling system described above are generally in line with those of similar wave prediction systems implemented in enclosed or semi-enclosed sea basins. Simulations of this wave modelling system with data assimilation have been performed for the 30-year period 1987-2016. Based on this database, the next step was to analyse the intensity and dynamics of the strongest storms encountered in this period. According to the model simulations, the western side of the sea is considerably more energetic than the rest of the basin. In this western region, regular strong storms usually produce significant wave heights greater than 8 m, which may lead to maximum wave heights even greater than 15 m. Such strong storms may occur several times a year, usually in winter or late autumn, and their frequency has become higher in the last decade.
As regards the most extreme storms, significant wave heights greater than 10 m and maximum wave heights close to 20 m (or even greater) may occur. Such extreme storms, which in the past were observed only once every four or five years, have recently come to occur almost every year in the Black Sea, which appears to be a consequence of climate change. The analysis also covered the dynamics of the monthly and annual significant wave height maxima, as well as the identification of the most probable spatial and temporal occurrences of extreme storm events. In conclusion, the present work provides valuable information on the characteristics of storm conditions and their dynamics in the Black Sea. This environment is currently subject to heavy navigation traffic and intense offshore and nearshore activities, and the strong storms that systematically occur may cause accidents with very serious consequences.
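The optimal interpolation underlying the assimilation scheme can be illustrated with a minimal sketch. This is not the authors' operational code: it shows the standard single-cycle optimal interpolation update, where a model background is blended with an observation according to assumed background and observation error covariances (the toy numbers below are invented for illustration).

```python
import numpy as np

def optimal_interpolation(x_b, B, H, R, y):
    """One optimal-interpolation update: blend background state x_b with
    observations y, weighted by background (B) and observation (R)
    error covariances; H maps model space to observation space."""
    innovation = y - H @ x_b          # observation-minus-background
    S = H @ B @ H.T + R               # innovation covariance
    return x_b + B @ H.T @ np.linalg.solve(S, innovation)

# Toy example: significant wave height (m) at 3 grid points, with one
# altimeter observation located at the second grid point.
idx = np.arange(3)
x_b = np.array([4.0, 5.0, 6.0])                       # background
H = np.array([[0.0, 1.0, 0.0]])                       # observation operator
B = 0.5 * np.exp(-np.abs(idx[:, None] - idx[None, :]))  # background covariance
R = np.array([[0.1]])                                 # observation-error variance
y = np.array([5.6])                                   # satellite measurement

x_a = optimal_interpolation(x_b, B, H, R, y)
```

Because B correlates neighbouring grid points, the single observation nudges not only the observed point but also its neighbours, which is how sparse altimeter tracks can correct a whole wave field.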

Keywords: Black Sea, extreme storms, SWAN simulations, waves

Procedia PDF Downloads 205
86 Optimising Post-Process Heat Treatments of Selective Laser Melting-Produced Ti-6Al-4V Parts to Achieve Superior Mechanical Properties

Authors: Gerrit Ter Haar, Thorsten Becker, Deborah Blaine

Abstract:

The Additive Manufacturing (AM) process of Selective Laser Melting (SLM) has seen exponential growth in sales and development over the past fifteen years. Whereas the capability of SLM was initially limited to rapid prototyping, progress in research and development (R&D) has enabled SLM to produce fully functional parts. The technology is still at an early stage, and technical knowledge of the vast number of variables influencing final part quality is limited. Ongoing research and development of the sensitive printing process and the subsequent post-processes are therefore of utmost importance in qualifying SLM parts to meet international standards. Quality concerns in Ti-6Al-4V manufactured through SLM have been identified, including high residual stresses, part porosity, low ductility and anisotropic mechanical properties. Whereas significant quality improvements have been made by optimising printing parameters, research indicates that as-produced part ductility is a major limiting factor compared with the wrought counterpart. This study aims at an in-depth understanding of the underlying links between the microstructure of SLM-produced Ti-6Al-4V and its mechanical properties. Knowledge of the microstructural transformation kinetics of Ti-6Al-4V allows post-process heat treatments to be optimised, thereby establishing the process route required to manufacture high-quality SLM-produced Ti-6Al-4V parts. The experimental methods used to evaluate the kinetics of microstructural transformation of SLM Ti-6Al-4V are optical microscopy and electron backscatter diffraction. Results show that a low-temperature heat treatment is capable of transforming the as-produced martensitic microstructure into a dual-phase microstructure exhibiting both high strength and improved ductility. Furthermore, isotropy of the mechanical properties can be achieved through certain annealing routes.
Mechanical properties identical to those of wrought Ti-6Al-4V can therefore be achieved through an optimised process route.

Keywords: EBSD analysis, heat treatments, microstructural characterisation, selective laser melting, tensile behaviour, Ti-6Al-4V

Procedia PDF Downloads 380