Search results for: solution processed
4812 A West Coast Estuarine Case Study: A Predictive Approach to Monitor Estuarine Eutrophication
Authors: Vedant Janapaty
Abstract:
Estuaries are wetlands where fresh water from streams mixes with salt water from the sea. Known as the “kidneys of our planet,” they are extremely productive environments that filter pollutants, absorb floods from sea level rise, and shelter a unique ecosystem. However, eutrophication and loss of native species are ailing our wetlands. There is a lack of uniform data collection and sparse research on correlations between satellite data and in situ measurements. Remote sensing (RS) has shown great promise in environmental monitoring. This project uses satellite data and correlates its metrics with in situ observations collected at five estuaries. Satellite images were processed in Python to calculate 7 satellite indices (SIs), and average SI values were calculated per month over 23 years. Publicly available data from 6 sites at ELK were used to obtain 10 in situ parameters (OPs), likewise averaged per month over 23 years. Linear correlations between the 7 SIs and 10 OPs were computed and found to be inadequate (correlations of 1 to 64%). Fourier transform analysis was then performed on the 7 SIs; dominant frequencies and amplitudes were extracted, and a machine learning (ML) model was trained, validated, and tested for the 10 OPs. Better correlations were observed between SIs and OPs at certain time delays (0, 3, 4, and 6 months), and ML was performed again. The OPs saw improved R² values in the range of 0.2 to 0.93. This approach can be used to obtain periodic analyses of overall wetland health from satellite indices. It shows that remote sensing can be correlated with critical in situ parameters that measure eutrophication and can be used by practitioners to easily monitor wetland health.
Keywords: estuary, remote sensing, machine learning, Fourier transform
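To make the pipeline concrete, here is a minimal Python sketch of the Fourier-feature step described above: the dominant frequencies and amplitudes of a monthly SI series feed a regression model for an OP. The synthetic data, the number of retained components, and the random forest choice are illustrative assumptions, not the author's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_series, months = 60, 23 * 12          # e.g. site-index series of monthly SI averages

def fft_features(x, k=3):
    """Dominant frequencies and amplitudes of a demeaned monthly series."""
    spec = np.fft.rfft(x - x.mean())
    freqs = np.fft.rfftfreq(len(x), d=1.0)   # cycles per month
    amps = np.abs(spec) / len(x)
    top = np.argsort(amps)[-k:]              # k strongest spectral components
    return np.concatenate([freqs[top], amps[top]])

# Synthetic stand-ins: an annual cycle whose amplitude drives the OP
amp = rng.uniform(0.5, 2.0, n_series)
X = np.stack([
    fft_features(a * np.sin(2 * np.pi * np.arange(months) / 12)
                 + 0.2 * rng.standard_normal(months))
    for a in amp
])
y = 3.0 * amp + 0.1 * rng.standard_normal(n_series)   # stand-in in situ parameter

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```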
Procedia PDF Downloads 104
4811 The Antioxidant and Antinociceptive Effects of Curcumin in Experimentally Induced Pain in Rats
Authors: Valeriu Mihai But, Sorana Daniela Bolboacă, Adriana Elena Bulboacă
Abstract:
The nutraceutical compound curcumin (Curcuma longa L.) is known for its anti-inflammatory, anti-cancer, and antioxidant effects. This study aimed to evaluate the antioxidative and analgesic effects of curcumin (CC) compared to tramadol (T) in chemically induced nociceptive pain in rats. Thirty-five rats were randomly divided into five groups of seven rats each and were treated as follows: C group (control group): treated with saline solution 0.9% (1 ml, i.p. administration); ethanoic acid (EA) group: pretreated with saline solution 0.9% (1 ml, i.p. administration) 30 min before EA nociceptive pain induction; T group: pretreated with tramadol, 10 mg/kg body weight (bw), i.p. administration, 30 min before EA nociceptive pain induction; CC1 group: pretreated with 1 mg/100 g bw curcumin, i.p. administration, 2 days before EA pain induction; and CC2 group: pretreated with 2 mg/100 g bw curcumin, i.p. administration, 2 days before EA nociceptive pain induction. The following oxidative stress parameters were assessed: malondialdehyde (MDA), nitric oxide (NOx), total oxidative status (TOS), total antioxidative capacity (TAC), and thiol (Th). The analgesic activity was measured by the ethanoic acid writhing test. Treatment with curcumin, at both 1 mg/100 g bw and 2 mg/100 g bw, showed significant differences compared with the control group (p<0.001) regarding the MDA, NOx, and TOS oxidative biomarkers. Pretreatment with 2 mg/100 g bw of curcumin produced a significant decrease in MDA values compared with tramadol (p<0.001). The TAC significantly increased with curcumin pretreatment compared with the control group (p<0.001). The nociceptive response to EA was significantly reduced in the curcumin and tramadol groups, and treatment with curcumin at the higher concentration was more effective. In an experimental pain model, this study demonstrates an important antioxidant and antinociceptive activity of curcumin comparable with tramadol treatment.
Keywords: curcumin, nociception, oxidative stress, pain
Procedia PDF Downloads 108
4810 The Application of Lesson Study Model in Writing Review Text in Junior High School
Authors: Sulastriningsih Djumingin
Abstract:
This study has three objectives. First, it describes the ability of second-grade students at SMPN 18 Makassar to write review text without applying the Lesson Study model. Second, it describes their ability to write review text when the Lesson Study model is applied. Third, it tests the effectiveness of the Lesson Study model in writing review text at SMPN 18 Makassar. The research used a true experimental, posttest-only group design involving two groups: one control class and one experimental class. The research population comprised all 250 second-grade students at SMPN 18 Makassar, spread across 8 classes; the sampling technique was purposive sampling. The control class was VIII2, consisting of 30 students, while the experimental class was VIII8, also consisting of 30 students. The research instruments were observation and tests. The collected data were analyzed using descriptive and inferential statistical techniques, with t-tests processed in SPSS 21 for Windows. The results show that: (1) of the 30 students in the control class, only 14 (47%) scored above 7.5, categorized as inadequate; (2) in the experimental class, 26 students (87%) obtained a score of 7.5, categorized as adequate; (3) the Lesson Study model is effective when applied to writing review text. The comparison of the control and experimental classes shows that the t-count exceeds the t-table value (2.411 > 1.667), so the alternative hypothesis (H1) proposed by the researcher is accepted.
Keywords: application, lesson study, review text, writing
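For readers outside SPSS, the decision rule used above (reject H0 when the t-count exceeds the t-table value) can be reproduced with an independent-samples t-test in Python; the scores below are made-up stand-ins for the two classes of 30 students, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(7.2, 0.8, 30)        # class VIII2 scores (illustrative)
experimental = rng.normal(7.9, 0.8, 30)   # class VIII8 scores (illustrative)

t_count, p_value = stats.ttest_ind(experimental, control, equal_var=True)
t_table = stats.t.ppf(0.95, df=58)        # one-tailed critical value at alpha = 0.05

print(f"t-count = {t_count:.3f}, t-table = {t_table:.3f}, p = {p_value:.4f}")
if t_count > t_table:
    print("Reject H0: the experimental class outperforms the control class.")
```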
Procedia PDF Downloads 202
4809 Nasopharyngeal Carriage of Streptococcus pneumoniae in Children under 5 Years of Age before Introduction of Pneumococcal Vaccine (PCV 10) in Urban and Rural Sindh
Authors: Muhammad Imran Nisar, Fyezah Jehan, Tauseef Akhund, Sadia Shakoor, Kanwal Nayani, Furqan Kabir, Asad Ali, Anita Zaidi
Abstract:
Pneumococcal vaccine 10 (PCV 10) was included in the Expanded Program on Immunization (EPI) in Sindh, Pakistan in February 2013. This study was carried out immediately before the introduction of PCV 10 to establish the baseline pneumococcal carriage and prevalent serotypes in the nasopharynx of children 3-11 months of age in an urban and a rural community in Sindh, Pakistan. An additional sample of children aged 12 to 59 months was drawn from the urban community. Nasopharyngeal specimens were collected from a random sample of children and processed in a central laboratory in Karachi. Pneumococci were cultured on 5% sheep blood agar, and serotyping was performed on bacterial colonies using the CDC standardized sequential multiplex PCR assay. Serotypes were then categorized into vaccine (PCV-10 and PCV-13) types and non-vaccine types. A total of 670 children were enrolled. The carriage rate for pneumococcus based on culture positivity was 74% and 79.5% in the infant group in Karachi and Matiari, respectively, and 78.2% for children aged 12 to 59 months in Karachi. The proportion of PCV 10 serotypes in infants was 38.8% and 33.5% in Karachi and Matiari, respectively; in the older age group in Karachi, it was 30.6%. The most common serotypes were 6A, 6B, 23F, 19A and 18C. This survey establishes the vaccine and non-vaccine serotype carriage rates in a vaccine-naïve pediatric population in rural and urban communities in Sindh province. Annually planned surveys in the same communities will document the change in carriage rate after the introduction and uptake of PCV 10.
Keywords: nasopharyngeal carriage, Pakistan, PCV10, pneumococcus
Procedia PDF Downloads 300
4808 Analysis on the Importance and Direction of Change in Residential Environment of Apartment with the Change of Population Structure
Authors: Jo, Eui Chang, Shin, Heekang, Mun, A. Young, Kim, Hong Kyu
Abstract:
Since the 1980s, Korea's population and family structure have undergone a rapid change of low fertility, graying, and increase of single households that cannot be found in any other part of the world. According to total population projections by the National Statistical Office, Korea will hold 52,160,065 people in 2030, after which a decline is predicted, and from 2025 people above the age of 65 will make up 20% of the total population, marking the entry into a super aging society. The average household size will be 2.71 in 2015 and decrease to 2.33 in 2035. On the other hand, the proportion of single- and two-person households will be 53.7% in 2015 and increase to 68.4% in 2035. The old population will thus increase greatly, and single- and two-person households will account for two-thirds of all households. A Delphi survey was conducted in three rounds with 40 professionals on the importance and changing factors of the residential environment of apartments under this change of population structure. For the interior plan, space variety, variability, safety, convenient installation, eco-friendly installation, and IT installation were identified as important factors; for the construction plan, plans for aged and single households, convenient installation, safety installation, and eco-friendly installation; and for the subdivision plan, education/child-care facilities, parks/gymnasium facilities, community facilities, and accessibility of transportation were predicted to be important factors.
Keywords: change of population structure, super-graying, change of residential environment of apartment, single household, interior plan, construction plan, subdivision plan, Delphi research
Procedia PDF Downloads 437
4807 Compass Bar: A Visualization Technique for Out-of-View-Objects in Head-Mounted Displays
Authors: Alessandro Evangelista, Vito M. Manghisi, Michele Gattullo, Enricoandrea Laviola
Abstract:
In this work, we propose a custom visualization technique for out-of-view objects in Virtual and Augmented Reality applications using head-mounted displays. In the last two decades, Augmented Reality (AR) and Virtual Reality (VR) technologies have experienced remarkable growth in applications for navigation, interaction, and collaboration in different types of environments, real or virtual. Both kinds of environment can be very complex, as they can include many virtual objects located in different places. Given the natural limits of the human field of view (about 210° horizontal and 150° vertical), humans cannot perceive objects outside this angular range. Moreover, despite recent technological advances in AR and VR head-mounted displays (HMDs), these devices still suffer from a limited field of view, especially Optical See-Through displays, which greatly amplifies the challenge of visualizing out-of-view objects. This problem is not negligible when the user needs to be aware of the number and position of the out-of-view objects in the environment, for instance, during a maintenance operation on a construction site where virtual objects serve to improve awareness of dangers. Providing such information can enhance comprehension of the scene, enable fast navigation and focused search, and improve users' safety. In our research, we investigated how to represent out-of-view objects in HMD user interfaces (UI). Inspired by commercial video games such as Call of Duty: Modern Warfare, we designed a customized compass. Using the Unity 3D graphics engine, we implemented a custom solution that can be used in both AR and VR environments. The Compass Bar consists of a graduated bar (in degrees) at the top center of the UI. The values of the bar range from -180 (far left) to +180 (far right), with zero placed in front of the user. Two vertical lines on the bar show the amplitude of the user's field of view. Every virtual object within the scene is represented on the compass bar as a color-coded proxy icon (a circular ring with a colored dot at its center). To provide the user with distance information, we implemented an algorithm that increases the size of the inner dot as the user approaches the virtual object (i.e., when the user reaches the object, the dot fills the ring). This visualization technique for out-of-view objects has several advantages. It allows users to be quickly aware of the number and position of the virtual objects in the environment: for instance, if the compass bar displays a proxy icon at about +90, users immediately know that the virtual object is to their right, and so on. Furthermore, with qualitative information about distance, users can optimize their speed, gaining effectiveness in their work. Given the small size and position of the Compass Bar, our solution also helps lessen the occlusion problem, increasing user acceptance and engagement. As soon as the lockdown measures allow, we will carry out user tests comparing this solution with state-of-the-art alternatives such as 3D Radar, SidebARs and EyeSee360.
Keywords: augmented reality, situation awareness, virtual reality, visualization design
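The mapping at the heart of the Compass Bar is plain geometry. Below is a minimal Python sketch, independent of the authors' Unity implementation: the signed horizontal angle between the user's forward vector and each object gives its position on the -180..+180 bar, and the inner dot grows as the distance shrinks. The 20 m range is an assumed parameter.

```python
import math

def bearing_deg(user_pos, user_forward, obj_pos):
    """Signed angle (degrees) from the user's forward vector to the object,
    measured on the horizontal plane; negative = left, positive = right."""
    dx, dz = obj_pos[0] - user_pos[0], obj_pos[1] - user_pos[1]
    fx, fz = user_forward
    angle = math.degrees(math.atan2(dx, dz) - math.atan2(fx, fz))
    return (angle + 180.0) % 360.0 - 180.0      # wrap into [-180, 180)

def dot_fill(distance, max_distance=20.0):
    """Fraction of the proxy ring filled by the inner dot (1.0 when reached)."""
    return max(0.0, min(1.0, 1.0 - distance / max_distance))

# Example: an object 5 m to the user's right appears at about +90 on the bar
print(bearing_deg((0.0, 0.0), (0.0, 1.0), (5.0, 0.0)))   # ~ +90.0
print(dot_fill(5.0))                                      # 0.75
```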
Procedia PDF Downloads 127
4806 A Case Study on Performance of Isolated Bridges under Near-Fault Ground Motion
Authors: Daniele Losanno, H. A. Hadad, Giorgio Serino
Abstract:
This paper presents a numerical investigation of the seismic performance of a benchmark bridge with different optimal isolation systems under near-fault ground motion. Usually, very large displacements make seismic isolation an unfeasible solution due to boundary conditions, especially in the case of existing bridges or high-risk seismic regions. Near-fault ground motions are most likely to affect either structures with a long natural period, like isolated structures, or structures sensitive to velocity content, such as viscously damped structures. The work analyzes the seismic performance of a three-span continuous bridge designed with different isolation systems having different levels of damping. The case study was analyzed in different configurations: (a) simply supported, (b) isolated with lead rubber bearings (LRBs), (c) isolated with rubber isolators and 10% classical damping (HDLRBs), and (d) isolated with rubber isolators and a 70% supplemental damping ratio. Case (d) represents an alternative control strategy that combines the effect of seismic isolation with additional supplemental damping, seeking to take advantage of both solutions. The bridge is modeled in SAP2000 and solved by time-history direct-integration analyses under a set of six recorded near-fault ground motions. In addition, a set of analyses under the seismic action provided by the Italian code is conducted, in order to evaluate the effectiveness of the suggested optimal control strategies under far-field seismic action. The results demonstrate that an isolated bridge equipped with HDLRBs and a total equivalent damping ratio of 70% is a very effective design solution for both mitigating the displacement demand at the isolation level and reducing the base shear in the piers, also in the case of near-fault ground motion.
Keywords: isolated bridges, near-fault motion, seismic response, supplemental damping, optimal design
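As a rough illustration of why raising the damping ratio helps, the hedged sketch below integrates a single-degree-of-freedom stand-in for the isolated deck under an assumed near-fault-like acceleration pulse and compares peak displacements at 10% and 70% damping. The period, the pulse, and all values are illustrative, not the benchmark bridge's properties.

```python
import numpy as np
from scipy.integrate import solve_ivp

T = 2.5                       # assumed isolated period [s]
omega = 2 * np.pi / T

def ground_acc(t):            # crude near-fault-like acceleration pulse (assumed)
    return 3.0 * np.exp(-((t - 2.0) / 0.5) ** 2)

def peak_displacement(zeta):
    # u'' + 2*zeta*omega*u' + omega^2*u = -a_g(t), integrated from rest
    def rhs(t, y):
        u, v = y
        return [v, -2 * zeta * omega * v - omega**2 * u - ground_acc(t)]
    sol = solve_ivp(rhs, (0, 20), [0.0, 0.0], max_step=0.01)
    return np.abs(sol.y[0]).max()

for zeta in (0.10, 0.70):
    print(f"zeta = {zeta:.0%}: peak displacement = {peak_displacement(zeta):.4f} m")
```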
Procedia PDF Downloads 286
4805 Managing Data from One Hundred Thousand Internet of Things Devices Globally for Mining Insights
Authors: Julian Wise
Abstract:
Newcrest Mining is one of the world's top five gold and rare earth mining organizations by production, reserves and market capitalization. This paper elaborates on the data acquisition processes employed by Newcrest, in collaboration with the Fortune 500 organization Insight Enterprises, to standardize machine learning solutions that process data from over one hundred thousand distributed Internet of Things (IoT) devices located at mine sites globally. Through cloud software architecture and edge computing, these technological developments enable standardized machine learning applications to inform the strategic optimization of mineral processing. Target objectives of the machine learning optimizations include time savings in mineral processing, production efficiencies, risk identification, and increased production throughput. The data acquired and utilized for predictive modelling is processed through edge computing by resources collectively stored within a data lake. The digital transformation has necessitated standardizing the software architecture that manages the machine learning models submitted by vendors, to ensure effective automation and continuous improvement of the mineral process models. Operating at scale, the system processes hundreds of gigabytes of data per day from distributed mine sites across the globe, for the purposes of improved worker safety and production efficiency through big data applications.
Keywords: mineral technology, big data, machine learning operations, data lake
Procedia PDF Downloads 112
4804 An IoT-Enabled Crop Recommendation System Utilizing Message Queuing Telemetry Transport (MQTT) for Efficient Data Transmission to AI/ML Models
Authors: Prashansa Singh, Rohit Bajaj, Manjot Kaur
Abstract:
In the modern agricultural landscape, precision farming has emerged as a pivotal strategy for enhancing crop yield and optimizing resource utilization. This paper introduces an innovative Crop Recommendation System (CRS) that leverages Internet of Things (IoT) technology and the Message Queuing Telemetry Transport (MQTT) protocol to collect critical environmental and soil data via sensors deployed across agricultural fields. The system is designed to address the challenges of real-time data acquisition, efficient data transmission, and dynamic crop recommendation through the application of advanced Artificial Intelligence (AI) and Machine Learning (ML) models. The CRS architecture encompasses a network of sensors that continuously monitor environmental parameters such as temperature, humidity, soil moisture, and nutrient levels. The sensor data are transmitted to a central MQTT server, ensuring reliable and low-latency communication even in the bandwidth-constrained scenarios typical of rural agricultural settings. Upon reaching the server, the data are processed and analyzed by AI/ML models trained to correlate specific environmental conditions with optimal crop choices and cultivation practices. These models consider historical crop performance data, current agricultural research, and real-time field conditions to generate tailored crop recommendations. The implementation achieves 99% accuracy.
Keywords: IoT, MQTT protocol, machine learning, sensor, publish, subscribe, agriculture, humidity
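A minimal sketch of the sensor-to-server leg in Python, assuming the paho-mqtt 1.x client API: a field node publishes a soil reading and the recommendation service subscribes to it. The broker address, topic name, and payload fields are invented for illustration.

```python
import json
import time
import paho.mqtt.client as mqtt

BROKER, TOPIC = "farm-gateway.local", "fields/plot7/soil"   # assumed broker/topic

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    # The AI/ML service would consume this reading and emit a crop recommendation.
    print("received:", reading)

subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER, 1883)
subscriber.subscribe(TOPIC, qos=1)       # QoS 1: at-least-once delivery
subscriber.loop_start()

publisher = mqtt.Client()
publisher.connect(BROKER, 1883)
payload = {"temperature": 27.4, "humidity": 61.0,
           "soil_moisture": 0.33, "N": 42, "P": 18, "K": 30}
publisher.publish(TOPIC, json.dumps(payload), qos=1)

time.sleep(1)                            # give the broker time to deliver
subscriber.loop_stop()
```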
Procedia PDF Downloads 69
4803 The Importance of Organized and Non-Organized Bildung for a Comprehensive Term of Bildung
Authors: Christine Pichler
Abstract:
The German word Bildung, in a comprehensive understanding, can be defined as the development of the personality and as a process that lasts from birth, or even before birth, until death. Bildung means gaining experience and acquiring abilities and knowledge as a lifelong learning process. The development of the personality is intransitive, insofar as the personality develops itself, and transitive, insofar as individuals and institutions influence the formation of a person. In public and political discussion, the term Bildung is understood in a constricted usage as education at schools. This leads to the research question of which consequences this limited comprehension of the term Bildung implies and how a comprehensive term of Bildung has to be defined. In such discussions, Bildung is limited to its formal part. This limited understanding prevents accurate analyses and discussions as well as adequate actions. This hypothesis and the research issue are addressed through theoretical analyses of the factors of Bildung, guideline-based expert interviews, and a qualitative content analysis. The limited understanding of the term Bildung is a methodological problem, resulting in inaccuracies in the analysis of the processes of Bildung and their effects on the development of personality structures. On the one hand, an individual is influenced by formal structures in the system of Bildung (e.g., schools); on the other hand, by individually and informally gained personality and character attributes. In general, too little attention is given to these attributes and individual qualifications. The aim of this work is to establish informative terms so that the educational process, with all its facets, can be considered and applicable analyses can be made. Once the informative terms are defined, it is also possible to identify and discuss the components of a comprehensive term Bildung to enable correct action.
Keywords: Bildung, development of personality, education, formative process, organized and non-organized Bildung
Procedia PDF Downloads 121
4802 Shelf Life of Frozen Processed Foods for Extended Durability
Authors: Manfreda Gerardo, Pasquali Frederique, Pepe Tiziana, Anastasio Aniello, Ianieri Adriana
Abstract:
The aim of the research was to evaluate the shelf life of a REPFED product (lasagna alla bolognese) developed to be marketed fresh after defrosting. Three samples, A, B and C, were prepared, differing in recipe, pasteurization technique and packaging, and the trend of the shelf-life indicator parameters was followed over a period of prolonged shelf life. The analytical plan involved measuring microbiological, chemical-physical and organoleptic parameters at 7 time points over a period of 33 days. Total bacterial count (CBT), LAB, Enterobacteriaceae, E. coli, yeasts, molds, coagulase-positive staphylococci, B. cereus, Salmonella spp. and L. monocytogenes, as well as pH, Aw, the Kreis test, peroxides, the atmosphere inside the packages, and organoleptic characteristics were determined. The results demonstrated the effect of post-packaging pasteurization on the shelf life of fresh-from-frozen products. However, the products pasteurized at 95°C in the absence of steam showed microbiological parameters inappropriate for an extended shelf life of up to 60 days. On the contrary, the samples pasteurized at 98°C with steam saturation and counterpressure showed values compatible with an extended shelf life. The chemical-physical analyses highlighted how recipe and packaging affect the chemical-physical and organoleptic parameters. In conclusion, this preliminary study confirmed the effectiveness of post-packaging pasteurization treatments aimed at extending the shelf life of the product, helping the food company to occupy market niches even very distant from the production sites.
Keywords: shelf life, REPFED product, extended durability, pasteurization
Procedia PDF Downloads 28
4801 Life Prediction Method of Lithium-Ion Battery Based on Grey Support Vector Machines
Authors: Xiaogang Li, Jieqiong Miao
Abstract:
To address the low prediction accuracy of the grey forecasting model, an improved grey prediction model is put forward. First, a trigonometric function transform is applied to the original data sequence to improve its smoothness; this model is called SGM (smoothed grey prediction model). The improved grey model is then combined with a support vector machine to form the grey support vector machine model (SGM-SVM). Before the model is established, the data are preprocessed with trigonometric functions and the accumulated generating operation to enhance smoothness and weaken randomness; a support vector machine (SVM) prediction model is then built for the preprocessed data, with model parameters selected by a genetic algorithm to obtain the global optimum. Finally, the data are restored through the "regressive generate" operation to obtain the forecast. To demonstrate that the SGM-SVM model is superior to other models, battery life data from CALCE are used: the presented model is applied to predict battery life, and the predictions are compared with those of the grey model and of support vector machines. For a more intuitive comparison, the root mean square errors of the three models are reported. The results show that the grey support vector machine (SGM-SVM) predicts life best, with a root mean square error of only 3.18%.
Keywords: grey prediction model, trigonometric functions, support vector machines, genetic algorithms, root mean square error
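A hedged Python sketch of the SGM-SVM idea: smooth the capacity series with a sine transform and a first-order accumulated generating operation (AGO), fit an SVR on the smoothed sequence, then recover a forecast with the inverse AGO. The transform choice, synthetic data, and hyperparameters are illustrative; the paper tunes the SVM parameters with a genetic algorithm, which is omitted here.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
capacity = np.linspace(1.0, 0.7, 50) + 0.01 * rng.standard_normal(50)  # fading capacity

smoothed = np.sin(capacity)          # trigonometric smoothing (one possible choice)
ago = np.cumsum(smoothed)            # first-order AGO weakens randomness

# One-step-ahead supervised pairs on the AGO sequence
X = ago[:-1].reshape(-1, 1)
y = ago[1:]
svr = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(X, y)

# Forecast the next AGO value, then invert: IAGO difference, then arcsin
next_ago = svr.predict(ago[-1].reshape(1, -1))[0]
next_smoothed = next_ago - ago[-1]               # inverse AGO ("regressive generate")
next_capacity = np.arcsin(np.clip(next_smoothed, -1, 1))
print(f"predicted next capacity: {next_capacity:.4f}")
```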
Procedia PDF Downloads 461
4800 Reconstruction of Visual Stimuli Using Stable Diffusion with Text Conditioning
Authors: ShyamKrishna Kirithivasan, Shreyas Battula, Aditi Soori, Richa Ramesh, Ramamoorthy Srinath
Abstract:
The human brain, among the most complex and mysterious aspects of the body, harbors vast potential for exploration. Unraveling these enigmas, especially within neural perception and cognition, delves into the realm of neural decoding. Harnessing advancements in generative AI, particularly in visual computing, this work seeks to elucidate how the brain comprehends visual stimuli observed by humans. The paper endeavors to reconstruct human-perceived visual stimuli from functional magnetic resonance imaging (fMRI) data, which is processed through pre-trained deep-learning models to recreate the stimuli. Introducing a new architecture named LatentNeuroNet, the aim is to achieve the utmost semantic fidelity in stimulus reconstruction. The approach employs a Latent Diffusion Model (LDM), Stable Diffusion v1.5, emphasizing semantic accuracy and generating superior-quality outputs; this addresses the limitations of prior methods such as GANs, known for poor semantic performance and inherent instability. Text conditioning within the LDM's denoising process is handled by extracting text from the brain's ventral visual cortex region. This extracted text is processed through a Bootstrapping Language-Image Pre-training (BLIP) encoder before being injected into the denoising process. In conclusion, an architecture is developed that successfully reconstructs the perceived visual stimuli, and this research provides evidence for identifying the regions of the brain most influential in cognition and perception.
Keywords: BLIP, fMRI, latent diffusion model, neural perception
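The final, text-conditioned generation step can be sketched with the Hugging Face diffusers API; the fMRI decoding and any latent initialization from brain data are omitted, and the model ID and caption below are assumptions for illustration, not the paper's code.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load Stable Diffusion v1.5 (assumed model repo; requires a GPU for float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Stand-in for a caption decoded from ventral-visual-cortex fMRI via a BLIP text head
decoded_caption = "a person walking a dog on a beach"

image = pipe(decoded_caption, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("reconstructed_stimulus.png")
```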
Procedia PDF Downloads 69
4799 Numerical Solution of Portfolio Selecting Semi-Infinite Problem
Authors: Alina Fedossova, Jose Jorge Sierra Molina
Abstract:
SIP problems are part of non-classical optimization: problems in which the number of variables is finite but the number of constraints is infinite. These are semi-infinite programming problems. Most algorithms for semi-infinite programming reduce the semi-infinite problem to a finite one and solve it by classical methods of linear or nonlinear programming. Typically, some of the constraints or the objective function are nonlinear, so the problem often involves nonlinear programming. An investment portfolio is a set of instruments used to reach the specific purposes of investors, and the risk of the entire portfolio may be less than the risks of the individual investments in it. For example, suppose we invest M euros in N shares for a specified period, and let yi > 0 be the end-of-period return per unit of money invested in stock i (i = 1, ..., N). The goal is then to determine the amounts xi to be invested in stock i, i = 1, ..., N, so as to maximize the end-of-period value yᵀx, where x = (x1, ..., xN) and y = (y1, ..., yN). For us, the optimal portfolio is the one with the best risk-return trade-off, the portfolio that meets the investor's goals and risk tolerance; investment goals and risk appetite are therefore the factors that influence the choice of an appropriate portfolio of assets. Since investment returns are uncertain, we obtain a semi-infinite programming problem. We solve this semi-infinite optimization problem of portfolio selection using outer approximation methods. This approach can be considered a development of the Eaves-Zangwill method that applies a multi-start technique in every iteration to search for the relevant constraint parameters. The stochastic outer approximation method, successfully applied previously to robotics problems, Chebyshev approximation problems, air pollution and others, is based on optimality criteria of quasi-optimal functions. As a result, we obtain a mathematical model and the optimal investment portfolio when yields are not known from the beginning. Finally, we apply this algorithm to the specific case of a Colombian bank.
Keywords: outer approximation methods, portfolio problem, semi-infinite programming, numerical solution
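A minimal numerical sketch of the reduction to finite problems: maximize the worst-case end-of-period value yᵀx over sampled return scenarios, adding the most violated scenario constraint at each outer-approximation round. The scenario model and all numbers are illustrative, not the Colombian bank case.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
N, M = 4, 1.0                                          # 4 stocks, budget of 1 unit
scenarios = 1.0 + rng.uniform(-0.15, 0.25, (500, N))   # sampled return vectors y(s)

active = [0]                                           # start with one scenario constraint
for _ in range(20):
    # variables (x_1..x_N, z): maximize z subject to z <= y(s)^T x for active s
    c = np.r_[np.zeros(N), -1.0]
    A_ub = np.c_[-scenarios[active], np.ones(len(active))]   # -y(s)^T x + z <= 0
    b_ub = np.zeros(len(active))
    A_eq = np.r_[np.ones(N), 0.0].reshape(1, -1)             # full budget invested
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[M],
                  bounds=[(0, None)] * N + [(None, None)])
    x, z = res.x[:N], res.x[N]
    worst = int(np.argmin(scenarios @ x))              # most violated constraint
    if scenarios[worst] @ x >= z - 1e-9:
        break                                          # no violation: converged
    active.append(worst)

print("portfolio:", np.round(x, 3), "worst-case value:", round(z, 4))
```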
Procedia PDF Downloads 309
4798 Treatment of a Galvanization Wastewater in a Fixed-Bed Column Using L. hyperborea and P. canaliculata Macroalgae as Natural Cation Exchangers
Authors: Tatiana A. Pozdniakova, Maria A. P. Cechinel, Luciana P. Mazur, Rui A. R. Boaventura, Vitor J. P. Vilar
Abstract:
Two brown macroalgae, Laminaria hyperborea and Pelvetia canaliculata, were employed as natural cation exchangers in a fixed-bed column for Zn(II) removal from a galvanization wastewater. The column (4.8 cm internal diameter) was packed with 30-59 g of previously hydrated algae up to a bed height of 17-27 cm. The wastewater or eluent was percolated at a flow rate of 10 mL/min using a peristaltic pump. The effluents used in the experiments had similar characteristics: pH 6.7, chemical oxygen demand of 55 mg/L, and about 300, 44, 186 and 244 mg/L of sodium, calcium, chloride and sulphate ions, respectively. The main difference was the nitrate concentration: 20 mg/L for the effluent used with L. hyperborea and 341 mg/L for that used with P. canaliculata. The inlet zinc concentration also differed slightly: 11.2 mg/L in the L. hyperborea experiments and 8.9 mg/L in the P. canaliculata experiments. The breakthrough time was approximately 22.5 hours for both macroalgae, corresponding to a service capacity of 43 bed volumes; that is, 30 g of biomass is able to treat 13.5 L of the galvanization wastewater (see the check below). The uptake capacities at the saturation point were similar to those obtained in batch studies (unpublished data) for both algae. After column exhaustion, desorption with 0.1 M HNO3 was performed; elution with 9 and 8 bed volumes of eluent achieved efficiencies of 100% and 91%, respectively, for L. hyperborea and P. canaliculata. After elution with nitric acid, the column was regenerated using different strategies: i) converting all the binding sites to the sodium form by passing a 0.5 M NaCl solution until a final pH of 6.0 was reached; ii) passing only tap water to raise the solution pH inside the column to 3.0, in which case the second sorption cycle was performed using protonated algae. In the first approach, distilled water was passed through the column to remove the excess salt, which destroyed the algae structure and caused the column to collapse. Using the second approach, the algae remained intact during three consecutive sorption/desorption cycles without loss of performance.
Keywords: biosorption, zinc, galvanization wastewater, packed-bed column
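A quick back-of-envelope check of the reported figures, using only the numbers given in the abstract: at 10 mL/min, a 22.5 h breakthrough corresponds to 13.5 L of treated wastewater, i.e., 43 bed volumes of roughly 314 mL each.

```python
flow_ml_min = 10.0
breakthrough_h = 22.5
bed_volumes = 43

treated_L = flow_ml_min * breakthrough_h * 60 / 1000     # 13.5 L
bed_volume_mL = treated_L * 1000 / bed_volumes           # ~314 mL per bed volume
print(f"treated: {treated_L:.1f} L, implied bed volume: {bed_volume_mL:.0f} mL")
```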
Procedia PDF Downloads 312
4797 The Optimal Utilization of Centrally Located Land: The Case of the Bloemfontein Show Grounds
Authors: D. F. Coetzee, M. M. Campbell
Abstract:
The urban environment is constantly expanding, and the optimal use of centrally located land is important for sustainable development. Bloemfontein has expanded, and this affects land-use functions. The purpose of the study is to examine a possible relocation of the Bloemfontein show grounds so that the space can be utilized more effectively in the context of spatial planning. The research method is qualitative case study research, with the Bloemfontein show grounds as the case. The purposive sample consisted of planners who work or consult in the Bloemfontein area and are registered with the South African Council for Planners (SACPLAN). Interviews based on qualitative open-ended questionnaires were used. When considering relocation, the social and economic aspects need to be weighed. The findings indicated a majority consensus that the property can be utilized more effectively through mixed land use. The showground development trust compiled a master plan to ensure that the property is used to its full potential without relocating the showground function itself. This master plan can be seen as the next logical step for the showground property, an attempt to better utilize the land parcel without relocating the show function. The question arises whether the proposed master plan is a permanent solution or merely delays the relocation of the core showground function to another location. For now, it is a sound solution, making the best of the situation at hand and utilizing the property more effectively. If the show grounds were to be relocated, the researcher recommends mixed-use development, in terms of an expansion of commercial business/retail together with a sport and recreation function. The show grounds in Bloemfontein are well positioned to capitalize on and meet the needs of the changing economy, while complementing the future economic growth strategies of the city if the right plans are in place.
Keywords: centrally located land, spatial planning, show grounds, central business district
Procedia PDF Downloads 414
4796 Thermodynamics of Water Condensation on an Aqueous Organic-Coated Aerosol Aging via Chemical Mechanism
Authors: Yuri S. Djikaev
Abstract:
A large subset of aqueous aerosols can be initially (immediately upon formation) coated with various amphiphilic organic compounds whose hydrophilic moieties are attached to the aqueous aerosol core while the hydrophobic moieties are exposed to the air, thus forming a hydrophobic coating thereupon. We study the thermodynamics of water condensation on such an aerosol whose hydrophobic organic coating is concomitantly processed by chemical reactions with atmospheric reactive species. Such processing (chemical aging) enables the initially inert aerosol to serve as a nucleating center for water condensation. The most probable pathway of such aging involves atmospheric hydroxyl radicals that abstract hydrogen atoms from the hydrophobic moieties of the surface organics (first step); the resulting radicals are quickly oxidized by ubiquitous atmospheric oxygen molecules to produce surface-bound peroxyl radicals (second step). Taking these two reactions into account, we derive an expression for the free energy of formation of an aqueous droplet on an organic-coated aerosol. The model is illustrated by numerical calculations. The results suggest that the formation of aqueous cloud droplets on such aerosols is most likely to occur via Köhler activation rather than via nucleation, and the model allows one to determine the threshold parameters necessary for Köhler activation. Numerical results also corroborate previous suggestions that some details of aerosol chemical composition can be neglected in investigating aerosol effects on climate.
Keywords: aqueous aerosols, organic coating, chemical aging, cloud condensation nuclei, Köhler activation, cloud droplets
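For orientation, the activation threshold mentioned above can be sketched with the classical Köhler relation (Kelvin curvature term minus Raoult solute term) for the equilibrium saturation ratio over a droplet; this is the textbook expression with an assumed solute coefficient, not the authors' free-energy model for coated aerosols.

```python
import numpy as np

sigma, M_w, rho_w, R, T = 0.072, 0.018, 1000.0, 8.314, 298.0  # SI units, water near 25 C
A = 2 * sigma * M_w / (rho_w * R * T)    # Kelvin (curvature) coefficient [m]
B = 1e-23                                # Raoult (solute) coefficient [m^3], assumed

r = np.logspace(-8, -6, 500)             # droplet radius from 10 nm to 1 um
S = np.exp(A / r - B / r**3)             # equilibrium saturation ratio S(r)

i = int(np.argmax(S))                    # peak of the Kohler curve = activation point
print(f"critical radius ~ {r[i]:.2e} m, "
      f"critical supersaturation ~ {(S[i] - 1) * 100:.3f} %")
```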
Procedia PDF Downloads 395
4795 Formulation and Evaluation of Metformin Hydrochloride Microparticles via BÜCHI Nano-Spray Dryer B-90
Authors: Tamer Shehata
Abstract:
Recently, nanotechnology has acquired great interest in the field of pharmaceutical production, and several pieces of equipment have been introduced for the production of nanoparticles, among them the BÜCHI fourth-generation Nano Spray Dryer B-90. The B-90 produces and dries nano- and microparticles in a single step. Currently, our research group is investigating several pharmaceutical formulations utilizing the BÜCHI Nano Spray Dryer B-90; one of these projects is the formulation and evaluation of metformin hydrochloride mucoadhesive microparticles for the treatment of type 2 diabetes. Several polymers were investigated, among them gelatin and sodium alginate, natural polymers with mucoadhesive properties. Preformulation studies covered the atomization head mesh size, flow rate, head temperature, and polymer solution viscosity and surface tension. Postformulation characteristics such as particle size, flowability, surface scan and dissolution profile were evaluated. Finally, the pharmacological activity of a selected formula was evaluated in streptozotocin-induced diabetic rats. The B-90 spray head had a 7 µm mesh and was heated to 120°C, with an air flow rate of 3.5 mL/min. The viscosity of the solution was less than 11.5 cP, with a surface tension below 70.1 dyne/cm. Discrete, non-aggregated, free-flowing particles with sizes below 2000 nm were successfully obtained. A gelatin and sodium alginate combination at a 1:3 ratio successfully sustained the in vitro release profile of the drug, and hypoglycemic evaluation of this formula showed a significant reduction of blood glucose level over 24 h. In conclusion, mucoadhesive metformin hydrochloride microparticles obtained with the B-90 could offer a convenient dosage form with enhanced hypoglycemic activity.
Keywords: mucoadhesive, microparticles, metformin hydrochloride, nano-spray dryer
Procedia PDF Downloads 311
4794 Data Compression in Ultrasonic Network Communication via Sparse Signal Processing
Authors: Beata Zima, Octavio A. Márquez Reyes, Masoud Mohammadgholiha, Jochen Moll, Luca de Marchi
Abstract:
This document presents an approach that uses compressed sensing for signal encoding and information transfer within a guided-wave sensor network comprised of specially designed frequency steerable acoustic transducers (FSATs). Wave propagation in a damaged plate was simulated using the commercial FEM-based software COMSOL. Guided waves were excited by means of FSATs, characterized by the special shape of their electrodes, and modeled using PIC255 piezoelectric material. The special shape of the FSAT allows wave energy to be focused in a certain direction according to the frequency components of its actuation signal, which makes a larger monitored area available. The process begins when an FSAT detects and records a reflection from damage in the structure; this signal is then encoded and prepared for transmission using a combined approach based on compressed sensing matching pursuit and quadrature amplitude modulation (QAM). Once the signal has been encoded into binary characters, the information is transmitted between the nodes in the network. The message reaches the last node, where it is finally decoded and processed for damage detection and localization purposes. The main aim of the investigation is to determine the location of detected damage using the reconstructed signals. The study demonstrates that the special steering capabilities of FSATs not only facilitate the detection of damage but also permit transmitting the damage information to a chosen area in a specific direction of the investigated structure.
Keywords: data compression, ultrasonic communication, guided waves, FEM analysis
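The compression step can be illustrated with a plain matching-pursuit loop in Python: the recorded signal is sparsely encoded against a dictionary, so only a few (atom index, coefficient) pairs need to be modulated and sent. The random dictionary standing in for the Gabor-like atoms, the synthetic signal, and the sparsity level are assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_atoms, sparsity = 256, 512, 8

# Random-normalized dictionary standing in for the guided-wave atoms
D = rng.standard_normal((n, n_atoms))
D /= np.linalg.norm(D, axis=0)

# Synthetic "damage reflection": a sparse combination of three atoms
signal = D[:, [10, 40, 300]] @ np.array([1.5, -0.8, 0.6])

residual, atoms = signal.copy(), []
for _ in range(sparsity):
    k = int(np.argmax(np.abs(D.T @ residual)))        # best-correlated atom
    atoms.append(k)
    sub = D[:, atoms]
    coeffs, *_ = np.linalg.lstsq(sub, signal, rcond=None)  # orthogonal projection
    residual = signal - sub @ coeffs

# The pairs (atoms, coeffs) are what would be QAM-modulated and transmitted
print("selected atoms:", atoms)
print("relative residual:", np.linalg.norm(residual) / np.linalg.norm(signal))
```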
Procedia PDF Downloads 124
4793 Porous Alumina-Carbon Nanotubes Nanocomposite Membranes Processed via Spark Plasma Sintering for Heavy Metal Removal from Contaminated Water
Authors: H. K. Shahzad, M. A. Hussein, F. Patel, N. Al-Aqeeli, T. Laoui
Abstract:
The purpose of the present study was to combine the adsorption mechanism with microfiltration synergistically for efficient heavy metal removal from contaminated water. Alumina (Al2O3) is commonly used for developing ceramic membranes, while carbon nanotubes (CNTs) have recently been considered among the best adsorbent materials for heavy metals. In this work, we combined both materials to prepare porous Al2O3-CNT nanocomposite membranes via the spark plasma sintering (SPS) technique, with alumina as the base matrix and CNTs as filler. The SPS process parameters, i.e., applied pressure, temperature, heating rate, and holding time, were varied to obtain the best combination of porosity (64%, measured according to ASTM C373-14a) and strength (3.2 MPa, measured by diametrical compression test) of the developed membranes. The prepared membranes were characterized using X-ray diffraction (XRD), field emission scanning electron microscopy (FE-SEM), contact angle and porosity measurements. The results showed that the properties of the synthesized membranes were highly influenced by the SPS process parameters, and FE-SEM images revealed that the CNTs were reasonably well dispersed in the alumina matrix. The porous membranes were evaluated for their water flux as well as their capacity to adsorb heavy metal ions; selected membranes were able to remove about 97% of the cadmium from contaminated water. Further work is underway to enhance the removal efficiency of the developed membranes and to remove other heavy metals such as arsenic and mercury.
Keywords: heavy metal removal, inorganic membrane, nanocomposite, spark plasma sintering
Procedia PDF Downloads 262
4792 Subsea Processing: Deepwater Operation and Production
Authors: Md Imtiaz, Sanchita Dei, Shubham Damke
Abstract:
In recent years, there has been a rapidly accelerating shift from traditional surface processing operations to subsea processing. This shift has been driven by a number of factors, including the depletion of shallow fields around the world, technological advances in subsea processing equipment, the need for production from marginal fields, and lower upfront investment costs compared to traditional production facilities. Moving production facilities to the seafloor offers a number of advantages, including a reduction in field development costs, increased production rates from subsea wells, a reduced need for chemical injection, minimization of risks to workers, fewer spills due to hurricane damage, and increased oil production by enabling production from marginal fields. Subsea processing consists of a range of separation, pumping and compression technologies that enable production from offshore wells without the need for surface facilities. At present, two primary technologies are used for subsea processing: subsea multiphase pumping and subsea separation. Multiphase pumping is the most basic subsea processing technology; it involves the use of a boosting system to transport the multiphase mixture through pipelines to floating production vessels. In subsea separation, the separation system is combined with single-phase pumps, and the water is removed and either pumped to the surface, re-injected, or discharged to the sea. Subsea processing can allow an entire topside facility to be decommissioned and the processed fluids to be tied back to a new, more distant host. This type of application reduces costs and increases both overall facility integrity and recoverable reserves. In the future, full subsea processing could be possible, thereby eliminating the need for surface facilities.
Keywords: FPSO, marginal field, subsea processing, SWAG
Procedia PDF Downloads 413
4791 On Bianchi Type Cosmological Models in Lyra’s Geometry
Authors: R. K. Dubey
Abstract:
Bianchi type cosmological models have been studied on the basis of Lyra's geometry. An exact solution has been obtained by considering a time-dependent displacement field for a constant deceleration parameter and a varying cosmological term of the universe. The physical behavior of the different models has been examined for different cases.
Keywords: Bianchi type-I cosmological model, variable gravitational coupling, cosmological constant term, Lyra's model
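For context, the spatially homogeneous but anisotropic Bianchi type-I line element underlying such models, together with the constant deceleration parameter built from the average scale factor, can be written in the standard textbook form (not the authors' particular solution):

```latex
ds^{2} = -dt^{2} + A^{2}(t)\,dx^{2} + B^{2}(t)\,dy^{2} + C^{2}(t)\,dz^{2},
\qquad
q = -\frac{a\,\ddot{a}}{\dot{a}^{2}} = \text{const}, \quad a = (ABC)^{1/3}.
```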
Procedia PDF Downloads 354
4790 Indoor Air Quality Analysis for Renovating Building: A Case Study of Student Studio, Department of Landscape, Chiangmai, Thailand
Authors: Warangkana Juangjandee
Abstract:
The rapidly increasing population within a limited area creates pressure to improve that area to suit both the environment and the needs of people. The Faculty of Architecture of Chiang Mai University is also expanding, in both its fields of study and the quality of its education. In 2020, a new department, the Department of Landscape Architecture, will be introduced in the faculty. Given the limited area of the existing building, the faculty plans to renovate parts of its school in anticipation of the students who will join the program in the next two years. As a result, the old wooden workshop area was selected to be renovated as student studio space. Under these conditions, it is necessary to study the restrictions and distinctive environment of the site prior to the improvement in order to find ways to manage the existing space, because the function previously practiced on the site, an old wooden workshop, and the new studio function are very different. 72.9% of the annual hours in the room fall outside the thermal comfort condition, with high relative humidity; this creates discomfort for occupants and could promote mould growth. This study aims to analyze the thermal comfort condition in the Landscape Learning Studio Area in order to find solutions that improve indoor air quality and respond to local conditions. The research methodology has two parts: 1) field data gathering on the case study, and 2) analysis and identification of solutions for improving indoor air quality. The survey results indicated that the room's discomfort must be addressed, which can be done in two ways, increasing ventilation and raising indoor temperature, e.g., improving the building design for stack-driven ventilation and using fans to enhance internal air movement.
Keywords: relative humidity, renovation, temperature, thermal comfort
Procedia PDF Downloads 216
4789 Microbial Degradation of Lignin for Production of Valuable Chemicals
Authors: Fnu Asina, Ivana Brzonova, Keith Voeller, Yun Ji, Alena Kubatova, Evguenii Kozliak
Abstract:
Lignin, a heterogeneous three-dimensional biopolymer, is one of the building blocks of lignocellulosic biomass. Due to its limited chemical reactivity, lignin is currently processed as a low-value by-product in pulp and paper mills. Among the various industrial lignins, Kraft lignin represents a major source of by-products generated during the pulping process widely employed across the pulp and paper industry. Valorization of Kraft lignin therefore holds great potential, as it would provide a readily available source of aromatic compounds for various industrial applications. Microbial degradation is well known for using highly specific ligninolytic enzymes secreted by microorganisms under mild operating conditions compared with conventional chemical approaches. In this study, the degradation of Indulin AT lignin was assessed by comparing the effects of basidiomycetous fungi (Coriolus versicolor and Trametes gallica) and Actinobacteria (Mycobacterium sp. and Streptomyces sp.) to two commercial laccases, from T. versicolor (≥ 10 U/mg) and C. versicolor (≥ 0.3 U/mg). After 54 days of cultivation, the extent of microbial degradation was significantly higher than that achieved by the commercial laccases, reaching a maximum of 38 wt% for the C. versicolor-treated samples. Lignin degradation was further confirmed by thermal carbon analysis with a five-step temperature protocol. Compared with the commercial laccases, a significant decrease in char formation at 850ºC was observed for all microbially degraded lignins, with a corresponding increase in the carbon percentage evolving between 200ºC and 500ºC. To complement the carbon analysis, chemical characterization of the degraded products at different stages of delignification by microorganisms and commercial laccases was performed by pyrolysis-GC-MS.
Keywords: lignin, microbial degradation, pyrolysis-GC-MS, thermal carbon analysis
Procedia PDF Downloads 412
4788 Fabrication of Electrospun Green Fluorescent Protein Nano-Fibers for Biomedical Applications
Authors: Yakup Ulusu, Faruk Ozel, Numan Eczacioglu, Abdurrahman Ozen, Sabriye Acikgoz
Abstract:
GFP, discovered in the mid-1970s, has been used as a marker since scientists replicated it in genetic studies. In biotechnology and cell and molecular biology, the GFP gene is frequently used as a reporter of expression; in modified forms it has been used to make biosensors, and many animals have been created that express GFP as evidence that a gene can be expressed throughout a given organism. The locations of proteins labeled with GFP can be determined, and so cell connections can be monitored, gene expression can be reported, protein-protein interactions can be observed, and signaling events can be detected. Additionally, monitoring GFP is noninvasive: it can be detected under UV light because it simply generates fluorescence. Moreover, GFP is a relatively small and inert molecule that does not seem to interfere with any biological processes of interest. The synthesis of GFP involves several steps: constructing the plasmid system, transformation into E. coli, and production and purification of the protein. The GFP-carrying plasmid vector pBAD-GFPuv was digested using two different restriction endonucleases (NheI and EcoRI), and the GFP DNA fragment was gel-purified before cloning. The GFP-encoding DNA fragment was ligated into the pET28a plasmid using the NheI and EcoRI restriction sites. The final plasmid was named pETGFP, and DNA sequencing of this plasmid indicated that the hexahistidine-tagged GFP was correctly inserted. Histidine-tagged GFP was expressed in an Escherichia coli BL21 DE3 (pLysE) strain. The strain was transformed with the pETGFP plasmid and grown on Luria-Bertani (LB) plates with kanamycin and chloramphenicol selection. E. coli cells were grown to an optical density (OD600) of 0.8, induced by the addition of isopropyl-thiogalactopyranoside (IPTG) to a final concentration of 1 mM, and then grown for an additional 4 h. The amino-terminal hexahistidine tag facilitated purification of the GFP using a His Bind affinity chromatography resin (Novagen). The purity of the GFP was analyzed by 12% sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE), and the protein concentration was determined by UV absorption at 280 nm (Varian Cary 50 Scan UV/VIS spectrophotometer). GFP-polymer composite nanofibers were produced using the GFP solution (10 mg/mL) and the polymer precursor polyvinylpyrrolidone (PVP, Mw = 1,300,000) as starting materials and template, respectively. For the fabrication of nanofibers with different fiber diameters, a sol-gel solution comprising 0.40, 0.60 or 0.80 g PVP (depending upon the desired fiber diameter) and 100 mg GFP in 10 mL of a water:ethanol (3:2) mixture was prepared and then electrospun onto a collecting plate at 10 kV with a feed rate of 0.25 mL/h using a Spellman electrospinning system. The results show that GFP-based nanofibers can be used in plenty of biomedical applications such as bio-imaging, biomechanics, biomaterials and tissue engineering.
Keywords: biomaterial, GFP, nano-fibers, protein expression
Procedia PDF Downloads 320
4787 Evaluation of Academic Research Projects Using the AHP and TOPSIS Methods
Authors: Murat Arıbaş, Uğur Özcan
Abstract:
Due to the increasing number of universities and academics, university funds for research activities and the grants/supports given by government institutions have increased the number and quality of academic research projects. Although every academic research project has a specific purpose and importance, limited resources (money, time, manpower, etc.) require choosing the best ones from all (Amiri, 2010). Comparing projects and determining which is better is a hard process, since the projects serve different purposes. In addition, the evaluation process becomes complicated when there is more than one evaluator and multiple criteria (Dodangeh, Mojahed and Yusuff, 2009). Mehrez and Sinuany-Stern (1983) characterized the project selection problem as a Multi Criteria Decision Making (MCDM) problem. If a decision problem involves multiple criteria and objectives, it is called a Multi Attribute Decision Making problem (Ömürbek & Kınay, 2013). There are many MCDM methods in the literature for the solution of such problems, including AHP (Analytic Hierarchy Process), ANP (Analytic Network Process), TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation), UTADIS (Utilities Additives Discriminantes), ELECTRE (Elimination et Choix Traduisant la Realite), MAUT (Multiattribute Utility Theory), and GRA (Grey Relational Analysis). Each method has some advantages compared with the others (Ömürbek, Blacksmith & Akalın, 2013). Hence, to decide which MCDM method will be used for the solution of a problem, factors like the nature of the problem, types of choices, measurement scales, type of uncertainty, dependency among the attributes, expectations of the decision maker, and quantity and quality of the data should be considered (Tavana & Hatami-Marbini, 2011). This study aims to develop a systematic decision process for grant support applications that are evaluated on their scientific adequacy by multiple evaluators under certain criteria. In this context, the project evaluation process applied by The Scientific and Technological Research Council of Turkey (TÜBİTAK), one of the leading institutions in our country, was investigated. First, the criteria to be used in the project evaluation were decided. The main criteria, selected from among the TÜBİTAK evaluation criteria, were originality of the project, methodology, project management/team and research opportunities, and the wider impact of the project. Moreover, 2-4 sub-criteria were defined for each main criterion, so it was decided to evaluate projects over 13 sub-criteria in total. Because AHP is superior for determining criteria weights and TOPSIS offers the opportunity to rank a great number of alternatives, the two methods are used together. The AHP method, developed by Saaty (1977), is based on selection by pairwise comparisons; because of its simple structure and ease of understanding, AHP is a very popular method in the literature for determining criteria weights in MCDM problems. The TOPSIS method, developed by Hwang and Yoon (1981) as an MCDM technique, is an alternative to the ELECTRE method and is used in many areas. In this method, the distance from each decision point to the ideal and to the negative ideal solution point is calculated using the Euclidean distance approach.
In the study, the main criteria and sub-criteria were compared pairwise by four groups of stakeholders (TÜBİTAK specialists, TÜBİTAK managers, academics, and individuals from the business world) using questionnaires developed on an importance scale. After these pairwise comparisons, the weight of each main criterion and sub-criterion was calculated using the AHP method. These calculated weights were then used as input to the TOPSIS method, and a sample of 200 projects was ranked on its merits. This new system makes it possible to incorporate the views of the people who take part in the project process, including preparation, evaluation and implementation, into the evaluation of academic research projects. Moreover, instead of evaluating projects using four equally weighted main criteria, a systematic decision-making process was developed using 13 weighted sub-criteria and each decision point's distance from the ideal solution. Through this evaluation process, a new approach was created to determine the importance of academic research projects.
Keywords: academic projects, AHP method, research project evaluation, TOPSIS method
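A compact numpy sketch of the TOPSIS ranking step with AHP-derived weights plugged in; the decision matrix, weights, and benefit labels below are random stand-ins for the study's 200 projects and 13 sub-criteria.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(1, 9, (200, 13))            # 200 projects scored on 13 sub-criteria
w = rng.dirichlet(np.ones(13))              # the AHP weights would be plugged in here
benefit = np.ones(13, dtype=bool)           # all sub-criteria treated as benefit-type

R = X / np.linalg.norm(X, axis=0)           # vector-normalize each criterion column
V = R * w                                   # weighted normalized decision matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_plus = np.linalg.norm(V - ideal, axis=1)  # Euclidean distance to the ideal point
d_minus = np.linalg.norm(V - anti, axis=1)  # ... and to the negative ideal point
closeness = d_minus / (d_plus + d_minus)    # relative closeness coefficient

ranking = np.argsort(-closeness)            # best projects first
print("top 5 projects:", ranking[:5])
```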
Procedia PDF Downloads 590
4786 A Computational Framework for Decoding Hierarchical Interlocking Structures with SL Blocks
Authors: Yuxi Liu, Boris Belousov, Mehrzad Esmaeili Charkhab, Oliver Tessmann
Abstract:
This paper presents a computational solution for designing reconfigurable interlocking structures that are fully assembled with SL blocks. Formed from S-shaped and L-shaped tetracubes, the SL block is a specific type of interlocking puzzle piece. Analogous to molecular self-assembly, the aggregation of SL blocks builds a reversible, hierarchical, and discrete system in which a single module can be replicated many times to compose semi-interlocking components that further align, wrap, and braid around each other to form complex high-order aggregations. These aggregations can be disassembled and reassembled, responding dynamically to design inputs and changes with a unique capacity for reconfiguration. To use these aggregations as architectural structures, we developed computational tools that automate the configuration of SL blocks based on architectural design objectives. There are three critical phases in our work. First, we revisit the hierarchy of the SL block system and devise a top-down design strategy. From this, we pose two key questions: 1) How can 3D polyominoes be translated into an SL block assembly? 2) How can a desired voxelized shape be decomposed into a set of 3D polyominoes with interlocking joints? These two questions can be treated as a Hamiltonian path problem and a 3D polyomino tiling problem, respectively. We then derive a solution to each of them based on two methods. The first method constructs an optimal closed path on an undirected graph built from the voxelized shape and translates the node sequence of the resulting path into the assembly sequence of SL blocks. The second describes the interlocking relationships of 3D polyominoes as a joint connection graph. Lastly, we formulate the desired shapes and leverage our methods to achieve their reconfiguration at different levels. We show that our computational strategy facilitates the efficient design of hierarchical interlocking structures with a self-replicating geometric module.Keywords: computational design, SL-blocks, 3D polyomino puzzle, combinatorial problem
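The Hamiltonian-path framing of the first question lends itself to a short illustration. The sketch below is not the authors' solver, which seeks an optimal closed path on the voxel graph; it is a plain backtracking search for an open Hamiltonian path over face-adjacent voxels, with the voxel set assumed to be given as integer coordinates.

```python
from itertools import product

def hamiltonian_path(voxels):
    """Backtracking search for a path visiting every voxel exactly once.

    `voxels` is a set of (x, y, z) integer coordinates; edges connect
    face-adjacent voxels. Returns an ordered list of voxels, or None.
    Worst-case exponential, so suitable only for small assemblies.
    """
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]

    def extend(path, visited):
        if len(path) == len(voxels):
            return path
        x, y, z = path[-1]
        for dx, dy, dz in steps:
            nxt = (x + dx, y + dy, z + dz)
            if nxt in voxels and nxt not in visited:
                found = extend(path + [nxt], visited | {nxt})
                if found:
                    return found
        return None

    for start in voxels:           # try every voxel as a starting point
        found = extend([start], {start})
        if found:
            return found
    return None

# A 2x2x2 block of voxels admits a Hamiltonian path.
cube = set(product(range(2), repeat=3))
print(hamiltonian_path(cube))
```

The node sequence returned by such a search is what would then be translated into an SL block assembly sequence.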
Procedia PDF Downloads 129
4785 Rd-PLS Regression: From the Analysis of Two Blocks of Variables to Path Modeling
Authors: E. Tchandao Mangamana, V. Cariou, E. Vigneau, R. Glele Kakai, E. M. Qannari
Abstract:
A new definition of a latent variable associated with a dataset makes it possible to propose variants of the PLS2 regression and the multi-block PLS (MB-PLS). We shall refer to these variants as Rd-PLS regression and Rd-MB-PLS respectively, because they are inspired by both Redundancy analysis and PLS regression. Usually, a latent variable t associated with a dataset Z is defined as a linear combination of the variables of Z with the constraint that the length of the loading weights vector equals 1. Formally, t = Zw with ‖w‖ = 1. Denoting by Z' the transpose of Z, we define herein a latent variable by t = ZZ'q with the constraint that the auxiliary variable q has a norm equal to 1. This new definition of a latent variable entails that, as previously, t is a linear combination of the variables in Z and, in addition, the loading vector w = Z'q is constrained to be a linear combination of the rows of Z. More importantly, t can be interpreted as a kind of projection of the auxiliary variable q onto the space generated by the variables in Z, since it is collinear with the first PLS1 component of q onto Z. Consider the situation in which we aim to predict a dataset Y from another dataset X. These two datasets relate to the same individuals and are assumed to be centered. Let us consider a latent variable u = YY'q, to which we associate the variable t = XX'YY'q. Rd-PLS consists in seeking q (and therefore u and t) so that the covariance between t and u is maximum. The solution to this problem is straightforward and consists in setting q to the eigenvector of YY'XX'YY' associated with the largest eigenvalue. For the determination of higher-order components, we deflate X and Y with respect to the latent variable t. Extending Rd-PLS to the context of multi-block data is relatively easy. Starting from a latent variable u = YY'q, we consider its 'projection' onto the space generated by the variables of each block Xk (k = 1, ..., K), namely tk = XkXk'YY'q. Thereafter, Rd-MB-PLS seeks q in order to maximize the average of the covariances of u with the tk (k = 1, ..., K). The solution to this problem is given by q, the eigenvector of YY'XX'YY' associated with the largest eigenvalue, where X is the dataset obtained by horizontally merging the datasets Xk (k = 1, ..., K). For the determination of latent variables of order higher than 1, we use a deflation of Y and Xk with respect to the variable t = XX'YY'q. In the same vein, extending Rd-MB-PLS to the path modeling setting is straightforward. The methods are illustrated on case studies, and the performance of Rd-PLS and Rd-MB-PLS in terms of prediction is compared to that of PLS2 and MB-PLS.Keywords: multiblock data analysis, partial least squares regression, path modeling, redundancy analysis
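Because the abstract fully specifies the first Rd-PLS component, it can be reproduced in a few lines of NumPy. The sketch below uses random placeholder data, and the deflation step uses orthogonal projection onto the complement of t, one standard choice; it is an illustration of the stated definitions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))   # predictor block, 50 individuals
Y = rng.standard_normal((50, 3))   # response block
X -= X.mean(axis=0)                # both blocks are assumed centered
Y -= Y.mean(axis=0)

# q is the eigenvector of YY'XX'YY' with the largest eigenvalue.
M = Y @ Y.T @ X @ X.T @ Y @ Y.T
eigvals, eigvecs = np.linalg.eigh((M + M.T) / 2)  # symmetrize for stability
q = eigvecs[:, -1]                 # eigh sorts eigenvalues in ascending order

u = Y @ Y.T @ q                    # latent variable of the Y block
t = X @ X.T @ u                    # t = XX'YY'q, its counterpart on X

# Deflate X and Y with respect to t before extracting the next component.
P = np.eye(len(t)) - np.outer(t, t) / (t @ t)
X_next, Y_next = P @ X, P @ Y
```

Repeating the eigendecomposition on the deflated blocks yields the higher-order components.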
Procedia PDF Downloads 147
4784 Crustal Deformation Study across the Chite Fault Using GPS Measurements in North East India along the Indo Burmese Arc
Authors: Malsawmtluanga, J. Malsawma, R. P. Tiwari, V. K. Gahalaut
Abstract:
North East India is seismically one of the six most active regions of the world. It is placed in Zone V, the highest zone in the seismic zonation of India. It lies at the junction of the Himalayan arc to the north and the Burmese arc to the east. The region has witnessed at least 18 large earthquakes, including two great earthquakes: Shillong (1897, M=8.7) and the Assam-Tibet border (1950, M=8.7). The prominent Chite fault lies at the heart of Aizawl, the capital of Mizoram state, and this hilly city is home to about 2 million people. Geologically the area is part of the Indo-Burmese Wedge and is prone to natural and man-made disasters. Unplanned construction and rapid urban expansion have led to numerous unsafe structures, adversely affecting the ongoing development and welfare projects of the government and posing a huge threat in the event of an earthquake. Crustal deformation measurements using campaign-mode GPS were undertaken across this fault. The campaign-mode GPS data were acquired and processed with the GAMIT-GLOBK software. The study presents the current velocity estimates at all the sites in ITRF2008 and also in the fixed Indian reference frame. The site motions show that there appears to be no differential motion anywhere across the fault area, confirming that the fault is presently neither accumulating strain nor slipping aseismically. From the geological and geomorphological evidence, supported by geodetic measurements and the lack of historic earthquakes, the Chite fault favours aseismic behaviour in this part of the Indo Burmese Arc (IBA).Keywords: Chite fault, crustal deformation, geodesy, GPS, IBA
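The abstract gives no processing details beyond GAMIT-GLOBK, but expressing ITRF velocities in an India-fixed frame conventionally means subtracting the rigid-plate motion predicted by an Euler pole, v = ω × r. The sketch below implements that textbook formula; the pole location and rotation rate passed in the example are illustrative placeholders, not a published India-plate estimate.

```python
import numpy as np

def plate_velocity(lat_deg, lon_deg, pole_lat, pole_lon, omega_deg_per_myr):
    """Rigid-plate velocity v = w x r at a site, in mm/yr (east, north).

    Standard spherical-Earth formula; all pole parameters are supplied
    by the caller, so placeholder values stay clearly labeled as such.
    """
    R = 6371e3                                   # mean Earth radius, m
    lat, lon = np.radians([lat_deg, lon_deg])
    plat, plon = np.radians([pole_lat, pole_lon])
    omega = np.radians(omega_deg_per_myr) / (1e6 * 3.15576e7)  # rad/s

    r = R * np.array([np.cos(lat) * np.cos(lon),     # site position vector
                      np.cos(lat) * np.sin(lon),
                      np.sin(lat)])
    w = omega * np.array([np.cos(plat) * np.cos(plon),  # rotation vector
                          np.cos(plat) * np.sin(plon),
                          np.sin(plat)])
    v_xyz = np.cross(w, r)                       # m/s, Earth-centered frame

    east = np.array([-np.sin(lon), np.cos(lon), 0.0])
    north = np.array([-np.sin(lat) * np.cos(lon),
                      -np.sin(lat) * np.sin(lon),
                      np.cos(lat)])
    to_mm_per_yr = 1e3 * 3.15576e7               # m/s -> mm/yr
    return v_xyz @ east * to_mm_per_yr, v_xyz @ north * to_mm_per_yr

# Aizawl (~23.73N, 92.72E) with a placeholder Euler pole.
print(plate_velocity(23.73, 92.72, 50.0, -8.0, 0.5))
```

Subtracting the predicted east and north components from a site's ITRF velocity leaves the residual motion used to judge differential motion across the fault.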
Procedia PDF Downloads 247
4783 Multidisciplinary Approach for a Tsunami Reconstruction Plan in Coquimbo, Chile
Authors: Ileen Van den Berg, Reinier J. Daals, Chris E. M. Heuberger, Sven P. Hildering, Bob E. Van Maris, Carla M. Smulders, Rafael Aránguiz
Abstract:
Chile is located along the subduction zone of the Nazca plate beneath the South American plate, where large earthquakes and tsunamis have taken place throughout history. The last significant earthquake (Mw 8.2) occurred in September 2015 and generated a destructive tsunami that mainly affected the city of Coquimbo (71.33°W, 29.96°S). The inundation area comprised a beach, a damaged seawall, a damaged railway, a wetland, and an old neighborhood; local authorities therefore started a reconstruction process immediately after the event. Moreover, a seismic gap has been identified in the same area, and another large event could take place in the near future. The present work proposes an integrated tsunami reconstruction plan for the city of Coquimbo that considers several variables: safety, nature & recreation, neighborhood welfare, visual obstruction, infrastructure, construction process, and durability & maintenance. Possible future tsunami scenarios are simulated by means of the Non-hydrostatic Evolution of Ocean WAVEs (NEOWAVE) model with 5 nested grids and a finest grid resolution of ~10 m. Based on the scores from a multi-criteria analysis, the costs of the alternatives, and a preference for a multifunctional solution, the alternative that includes an elevated coastal road with floodgates to reduce tsunami overtopping and control the return flow of a tsunami was selected as the best solution. It was also observed that the wetlands are significantly restored to their former configuration; moreover, the dynamic behavior of the wetlands is stimulated. The numerical simulation showed that the new coastal protection decreases damage and the probability of loss of life by delaying the tsunami arrival time. In addition, new evacuation routes and a smaller inundation zone in the city increase safety for the area.Keywords: tsunami, Coquimbo, Chile, reconstruction, numerical simulation
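The selection step can be illustrated with a minimal weighted-sum multi-criteria sketch. The criterion names follow the abstract, but the weights, the scores, and the two competing alternatives ('raised seawall only' and 'managed retreat') are hypothetical placeholders.

```python
# Weighted-sum multi-criteria scoring; all numbers are illustrative.
criteria = ["safety", "nature & recreation", "neighborhood welfare",
            "visual obstruction", "infrastructure",
            "construction process", "durability & maintenance"]
weights = [0.30, 0.15, 0.15, 0.10, 0.10, 0.10, 0.10]   # sum to 1

alternatives = {
    "elevated coastal road with floodgates": [9, 7, 8, 6, 8, 6, 8],
    "raised seawall only":                   [8, 4, 6, 4, 7, 8, 7],
    "managed retreat":                       [7, 9, 4, 9, 3, 5, 6],
}

scores = {name: sum(w * s for w, s in zip(weights, vals))
          for name, vals in alternatives.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:5.2f}  {name}")
```

A cost comparison and the stated preference for a multifunctional solution would then be weighed against these scores, as the abstract describes.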
Procedia PDF Downloads 241