Search results for: real property
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6674

1454 Integrating Data Mining within a Strategic Knowledge Management Framework: A Platform for Sustainable Competitive Advantage within the Australian Minerals and Metals Mining Sector

Authors: Sanaz Moayer, Fang Huang, Scott Gardner

Abstract:

In the highly leveraged business world of today, an organisation's success depends on how it can manage and organise its traditional and intangible assets. In the knowledge-based economy, knowledge as a valuable asset gives enduring capability to firms competing in rapidly shifting global markets. It can be argued that the ability to create unique knowledge assets by configuring ICT and human capabilities will be a defining factor for international competitive advantage in the mid-21st century. The concept of KM is recognised in the strategy literature, and increasingly by senior decision-makers (particularly in large firms, which can achieve scalable benefits), as an important vehicle for stimulating innovation and organisational performance in the knowledge economy. This thinking has been evident in professional services and other knowledge-intensive industries for over a decade. It highlights the importance of social capital and the value of the intellectual capital embedded in social and professional networks, complementing the traditional focus on the creation of intellectual property assets. Despite the growing interest in KM within professional services, there has been limited discussion in relation to multinational resource-based industries such as mining and petroleum, where the focus has been principally on global portfolio optimisation with economies of scale, process efficiencies, and cost reduction. The Australian minerals and metals mining industry, although traditionally viewed as capital-intensive, employs a significant number of knowledge workers, notably engineers, geologists, highly skilled technicians, and legal, finance, accounting, ICT, and contracts specialists working in projects or functions, representing potential knowledge silos within the organisation. This silo effect arguably inhibits knowledge sharing and retention by disaggregating corporate memory, with increased operational and project continuity risk. It may also limit the potential for process, product, and service innovation. In this paper, the strategic application of knowledge management incorporating contemporary ICT platforms and data mining practices is explored as an important enabler for knowledge discovery, reduction of risk, and retention of corporate knowledge in resource-based industries. With reference to the relevant strategy, management, and information systems literature, this paper highlights possible connections (currently undergoing empirical testing) between a Strategic Knowledge Management (SKM) framework incorporating supportive Data Mining (DM) practices and competitive advantage for multinational firms operating within the Australian resource sector. We also propose, based on a review of the relevant literature, that more effective management of soft and hard systems knowledge is crucial for major Australian firms in all sectors seeking to improve organisational performance through the human and technological capability captured in organisational networks.

Keywords: competitive advantage, data mining, mining organisation, strategic knowledge management

Procedia PDF Downloads 415
1453 Lignin Pyrolysis to Value-Added Chemicals: A Mechanistic Approach

Authors: Binod Shrestha, Sandrine Hoppe, Thierry Ghislain, Phillipe Marchal, Nicolas Brosse, Anthony Dufour

Abstract:

The thermochemical conversion of lignin has received increasing interest in the frame of different biorefinery concepts for the production of chemicals or energy. A better understanding of the physical and chemical conversion of lignin is needed for feeder and reactor design. In-situ rheology reveals the viscoelastic behaviour of lignin upon thermal conversion. The softening, re-solidification (char formation), swelling, and shrinking behaviours are quantified during pyrolysis in real time [1]. The in-situ rheology of an alkali lignin (Protobind 1000) was conducted in a high-torque controlled-strain rheometer from 35°C to 400°C at a heating rate of 5°C.min-1. Swelling, through a glass phase transition overlapping with depolymerization, and solidification (crosslinking and "char" formation) are the two main phenomena observed during lignin pyrolysis. The onset temperatures of softening and solidification for this lignin were found to be 141°C and 248°C, respectively. An ex-situ characterization of lignin/char residues obtained at different temperatures after quenching in the rheometer gives a clear understanding of the pathway of lignin degradation. The lignin residues were sampled at the mid-point temperatures of the softening and solidification ranges to study the chemical transformations under way. Elemental analysis, FTIR, and solid-state NMR were conducted after quenching the solid residues (lignin/char). The quenched solid was also extracted with a suitable solvent, followed by acetylation and GPC-UV analysis. The combination of 13C NMR and GPC-UV reveals the depolymerization followed by crosslinking of lignin/char. NMR and FTIR provide the evolution of functional moieties with temperature. The physical and chemical mechanisms occurring during lignin pyrolysis are thus accounted for in this study thanks to these complementary methods.

Keywords: pyrolysis, bio-chemicals, valorization, mechanism, softening, solidification, cross linking, rheology, spectroscopic methods

Procedia PDF Downloads 424
1452 Implementation of Fuzzy Version of Block Backward Differentiation Formulas for Solving Fuzzy Differential Equations

Authors: Z. B. Ibrahim, N. Ismail, K. I. Othman

Abstract:

Fuzzy Differential Equations (FDEs) play an important role in modelling many real-life phenomena. FDEs are used to model the behaviour of problems subject to the uncertain, vague, or imprecise information that constantly arises in mathematical models in various branches of science and engineering. These uncertainties have to be taken into account in order to obtain a more realistic model, and for many of these models it is difficult, and sometimes impossible, to obtain analytic solutions. Thus, many authors have attempted to extend or modify existing numerical methods developed for solving Ordinary Differential Equations (ODEs) into fuzzy versions suitable for solving FDEs. In this paper, we propose a fuzzy version of the three-point block method based on Block Backward Differentiation Formulas (FBBDF) for the numerical solution of first-order FDEs. The three-point block FBBDF method, implemented with a uniform step size, produces three new approximations simultaneously at each integration step using the same back values. The Newton iteration of the FBBDF is formulated, and the implementation is based on the predictor and corrector formulas in the PECE mode. For greater efficiency of the block method, the coefficients of the FBBDF are stored at the start of the program. The proposed FBBDF is validated through numerical results on some standard problems found in the literature, and comparisons are made with the existing fuzzy versions of the modified Simpson and Euler methods in terms of the accuracy of the approximated solutions. The numerical results show that the FBBDF method performs better in terms of accuracy than the Euler method when solving FDEs.
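
To make the fuzzy-extension idea concrete, the sketch below solves a first-order FDE by propagating the endpoints of each alpha-cut of a triangular fuzzy initial value with backward Euler, the one-step member of the BDF family, solved implicitly by fixed-point iteration. It is a minimal illustration only: the paper's three-point FBBDF produces three steps per block with its own stored coefficients and Newton corrector, which are not reproduced here, and independent endpoint propagation assumes suitable (generalised Hukuhara) differentiability.

```python
import numpy as np

def fuzzy_backward_euler(f, y0_tri, t_end, h, alphas=(0.0, 0.5, 1.0)):
    """Illustrative alpha-cut solver for y' = f(t, y) with a triangular
    fuzzy initial value y0_tri = (a, b, c). Each alpha-cut [lower, upper]
    is advanced with backward Euler (the 1-step BDF); the implicit
    equation y_new = y_prev + h*f(t, y_new) is solved by fixed-point
    iteration, standing in for the paper's Newton/PECE corrector."""
    a, b, c = y0_tri
    t = np.arange(0.0, t_end + h, h)
    sols = {}
    for alpha in alphas:
        lo, up = a + alpha * (b - a), c - alpha * (c - b)
        y_lo, y_up = [lo], [up]
        for tn in t[1:]:
            for ys in (y_lo, y_up):
                y_new = ys[-1]
                for _ in range(50):          # fixed-point corrector iterations
                    y_new = ys[-1] + h * f(tn, y_new)
                ys.append(y_new)
        sols[alpha] = (np.array(y_lo), np.array(y_up))
    return t, sols

# Example: y' = -y with fuzzy initial value "about 1"
t, sols = fuzzy_backward_euler(lambda t, y: -y, (0.8, 1.0, 1.2), 2.0, 0.1)
```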

Keywords: block, backward differentiation formulas, first order, fuzzy differential equations

Procedia PDF Downloads 319
1451 Deterministic and Stochastic Modeling of a Micro-Grid Management for Optimal Power Self-Consumption

Authors: D. Calogine, O. Chau, S. Dotti, O. Ramiarinjanahary, P. Rasoavonjy, F. Tovondahiniriko

Abstract:

Mafate is a natural cirque in the north-western part of Reunion Island, without an electrical grid or road network. A micro-grid concept is being experimented with in this area, composed of photovoltaic production combined with electrochemical batteries, in order to meet the local population's electricity demands through self-consumption. This work develops a discrete model as well as a stochastic model in order to reach an optimal equilibrium between production and consumption for a cluster of houses. The management of the energy flows leads to a large linearised programming system, where the time interval of interest is 24 hours. The experimental data are the solar production, the stored energy, and the parameters of the different electrical devices and batteries. The unknown variables to evaluate are the consumption of the various electrical services, the energy drawn from and stored in the batteries, and the inhabitants' planning wishes. The objective is to fit the solar production to the electrical consumption of the inhabitants, with an optimal use of the energy in the batteries, while satisfying the users' planning requirements as widely as possible. In the discrete model, the parameters and solutions of the linear programming system are deterministic scalars, whereas in the stochastic approach, the data parameters and the linear programming solutions become random variables, whose distributions can be imposed or estimated from samples of real observations or from samples of optimal discrete equilibrium solutions.
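
As an illustration of the kind of linear program involved, the sketch below dispatches a battery over a 24-hour horizon so that solar production covers as much of the demand as possible. All profiles and battery parameters are made up, and the real model's device-level consumption variables and planning wishes are collapsed here into a single aggregate demand.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative 24-hour profiles and battery data; the real inputs are the
# measured Mafate solar production, device parameters, and the
# inhabitants' planning wishes (all values below are made up).
T = 24
hours = np.arange(T)
solar = np.maximum(0.0, 5.0 * np.sin(np.pi * (hours - 6) / 12))   # kW
demand = 2.0 + 1.5 * (hours >= 18)                                # kW
cap, p_max, eta, soc0 = 10.0, 3.0, 0.9, 5.0   # kWh, kW, efficiency, kWh

# Hourly decision variables: charge c, discharge d, spilled solar s, unmet u.
C, D, S, U = 0, T, 2 * T, 3 * T               # block offsets into x
n = 4 * T
cost = np.zeros(n)
cost[U:U + T] = 1.0                           # minimise total unmet demand

# Power balance each hour: solar - c + d - s + u = demand
A_eq = np.zeros((T, n))
b_eq = demand - solar
for t in range(T):
    A_eq[t, C + t], A_eq[t, D + t] = -1.0, 1.0
    A_eq[t, S + t], A_eq[t, U + t] = -1.0, 1.0

# Battery state of charge stays in [0, cap]: soc0 + cumsum(eta*c - d)
A_ub = np.zeros((2 * T, n))
b_ub = np.zeros(2 * T)
for t in range(T):
    A_ub[t, C:C + t + 1], A_ub[t, D:D + t + 1] = eta, -1.0          # soc <= cap
    A_ub[T + t, C:C + t + 1], A_ub[T + t, D:D + t + 1] = -eta, 1.0  # soc >= 0
    b_ub[t], b_ub[T + t] = cap - soc0, soc0

bounds = [(0, p_max)] * (2 * T) + [(0, None)] * (2 * T)
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("total unmet demand (kWh):", res.x[U:U + T].sum())
```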

Keywords: photovoltaic production, power consumption, battery storage resources, random variables, stochastic modeling, estimations of probability distributions, mixed integer linear programming, smart micro-grid, self-consumption of electricity

Procedia PDF Downloads 110
1450 A Methodology for Seismic Performance Enhancement of RC Structures Equipped with Friction Energy Dissipation Devices

Authors: Neda Nabid

Abstract:

Friction-based supplemental devices have been extensively used for seismic protection and strengthening of structures; however, the conventional use of these dampers may not necessarily lead to an efficient structural performance. Conventionally designed friction dampers follow a uniform height-wise distribution pattern of slip load values for practical simplicity. This can localize structural damage in certain story levels, while the other stories accommodate a negligible amount of relative displacement demand. A practical performance-based optimization methodology is developed to tackle structural damage localization in RC frame buildings with friction energy dissipation devices under severe earthquakes. The proposed methodology is based on the concept of uniform damage distribution theory. According to this theory, the slip load values of the friction dampers are redistributed and shifted from stories with lower relative displacement demand to stories with higher inter-story drifts, to narrow the discrepancy between the structural damage levels in different stories. In this study, the efficacy of the proposed design methodology is evaluated through the seismic performance of five different low- to high-rise RC frames equipped with friction wall dampers under six real spectrum-compatible design earthquakes. The results indicate that, compared to the conventional design, using the suggested methodology to design friction wall systems can lead, on average, to up to a 40% reduction of maximum inter-story drift, and to a markedly more uniform height-wise distribution of relative displacement demands under the design earthquakes.
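
The abstract does not give the exact redistribution rule, but a common way to realise uniform damage distribution is the iterative reweighting sketched below, in which each story's slip load is scaled by its drift relative to the mean while the total slip load is held constant; the nonlinear dynamic analysis is a user-supplied stand-in, and the update exponent is an assumption.

```python
import numpy as np

def redistribute_slip_loads(slip0, analyse, alpha=0.5, n_iter=20):
    """Illustrative slip-load redistribution for uniform damage
    distribution. `analyse(slip) -> drifts` must run a nonlinear
    dynamic analysis of the damped frame and return peak inter-story
    drifts; the power-law exponent alpha is a convergence parameter."""
    slip = np.asarray(slip0, dtype=float)
    total = slip.sum()
    for _ in range(n_iter):
        drifts = analyse(slip)
        slip = slip * (drifts / drifts.mean()) ** alpha  # shift toward high-drift stories
        slip *= total / slip.sum()                       # keep total slip load constant
    return slip
```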

Keywords: friction damper, nonlinear dynamic analysis, RC structures, seismic performance, structural damage

Procedia PDF Downloads 226
1449 A Novel Method for Face Detection

Authors: H. Abas Nejad, A. R. Teymoori

Abstract:

Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised-learning-based facial expression recognition methods, because supervised methods cannot accommodate all appearance variability across faces with respect to race, pose, lighting, facial biases, etc., in a limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, and thereby bypassing those frames in emotion classification, saves computational power. In this work, we propose a lightweight neutral vs. emotion classification engine, which acts as a preprocessor to traditional supervised emotion classification approaches. It dynamically learns neutral appearance at Key Emotion (KE) points using a textural statistical model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the textural statistical model. Robustness to dynamic shifts of the KE points is achieved by evaluating similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of the specific facial action units acting on the respective KE point. The proposed method, as a result, improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
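
As a sketch of how such a textural neutral model can be queried (the keywords mention Local Binary Pattern histograms), the code below compares LBP histograms of patches around each KE point against the user's reference neutral histograms with a chi-square distance. The threshold and patch extraction are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch, P=8, R=1):
    """Uniform LBP histogram of a grayscale patch around a key point."""
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def is_neutral(patches, reference_hists, threshold=0.2):
    """Declare a frame neutral if the mean chi-square distance between
    the current KE-point patches and the user's reference neutral model
    stays below a threshold (the threshold value is illustrative)."""
    dists = []
    for patch, ref in zip(patches, reference_hists):
        h = lbp_histogram(patch)
        dists.append(0.5 * np.sum((h - ref) ** 2 / (h + ref + 1e-10)))
    return float(np.mean(dists)) < threshold
```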

Keywords: neutral vs. emotion classification, constrained local model, Procrustes analysis, local binary pattern histogram, statistical model

Procedia PDF Downloads 338
1448 Intermediate-Term Impact of Taiwan High-Speed Rail (HSR) and Land Use on Spatial Patterns of HSR Travel

Authors: Tsai Yu-hsin, Chung Yi-Hsin

Abstract:

The employment of an HSR system, by elevating inter-city and inter-region accessibility, is likely to promote spatial interaction between places in the HSR and extended territory. Inter-city and inter-region travel via HSR can be affected by, among other things, the land use, transportation, and location of the HSR station at both the trip origin and destination ends. However, relatively few insights have been shed on these impacts and on the spatial patterns of HSR travel. The purposes of this study, phase one of a series of HSR-related research, are threefold: to analyze the general spatial patterns of HSR trips, such as the spatial distribution of trip origins and destinations; to analyze whether specific land use, transportation characteristics, and trip characteristics affect HSR trips in terms of the use of HSR and the distribution of trip origins and destinations; and to analyze the socio-economic characteristics of HSR travelers. With the Taiwan HSR starting operation in 2007, this study emphasizes the intermediate-term impact of HSR, which is made possible by the population and housing census, the industry and commercial census data, and a station-area intercept survey conducted in the summer of 2014. The analysis will be conducted at the city, inter-city, and inter-region spatial levels, as necessary and required. The analysis tools include descriptive statistics and multivariate analysis with the assistance of SPSS, HLM, and ArcGIS. The findings, on the one hand, can provide policy implications for associated land use and transportation plans and for the site selection of HSR stations. On the other hand, the findings on travel are expected to provide insights that can help explain how land use and real estate values could be affected by HSR in the following phases of this series of research.

Keywords: high speed rail, land use, travel, spatial pattern

Procedia PDF Downloads 462
1447 Combination between Intrusion Systems and Honeypots

Authors: Majed Sanan, Mohammad Rammal, Wassim Rammal

Abstract:

Today, security is a major concern. Intrusion detection systems, intrusion prevention systems, and honeypots can be used to mitigate attacks. Researchers have proposed many IDSs (Intrusion Detection Systems) over time; some combine the features of two or more IDSs and are called hybrid intrusion detection systems. Most researchers combine the features of signature-based and anomaly-based detection methodologies. For a signature-based IDS, if an attacker attacks slowly and in an organized way, the attack may pass through the IDS undetected, as signatures include factors based on the duration of events and the attacker's actions do not match them. Sometimes there is no signature updated for an unknown attack, or an attacker strikes while the database is being updated. Thus, signature-based IDSs fail to detect unknown attacks. Anomaly-based IDSs suffer from many false-positive readings. So there is a need to hybridize IDSs so that they can overcome each other's shortcomings. In this paper, we propose a new approach to intrusion detection that is more efficient than a traditional IDS. The IDS is based on honeypot technology and the anomaly-based detection methodology. We designed an architecture for the IDS in Packet Tracer and then implemented it in real time. We discuss the experimental results: both the honeypot and the anomaly-based IDS have shortcomings, but if the two technologies are hybridized, the newly proposed Hybrid Intrusion Detection System (HIDS) is capable of overcoming these shortcomings with much enhanced performance. In this paper, we present a modified Hybrid Intrusion Detection System (HIDS) that combines the positive features of two different detection methodologies: honeypot methodology and anomaly-based intrusion detection methodology. In the experiment, we first ran both intrusion detection systems individually, then together, and recorded the data over time. From the data, we can conclude that the resulting IDS is much better at detecting intrusions than the existing IDSs.
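
A minimal sketch of the combination logic is given below: a simple statistical anomaly detector trained on normal traffic features is OR-combined with honeypot alerts, since any connection to a honeypot address is hostile by definition. Feature extraction and the honeypot sensor itself (the keywords mention KFSensor) are outside this sketch, and the z-score detector and its threshold are illustrative assumptions, not the paper's design.

```python
import numpy as np

class HybridIDS:
    """Minimal sketch of the hybrid idea: an anomaly detector trained on
    normal traffic features, OR-combined with honeypot alerts."""

    def fit(self, normal_features):
        # Learn the baseline of normal traffic (per-feature mean and spread).
        self.mu = normal_features.mean(axis=0)
        self.sigma = normal_features.std(axis=0) + 1e-9

    def anomaly_score(self, x):
        # Largest per-feature deviation from the baseline, in standard deviations.
        return float(np.max(np.abs((x - self.mu) / self.sigma)))

    def classify(self, x, touched_honeypot, threshold=4.0):
        # Hostile if the connection touched the honeypot OR looks anomalous.
        return touched_honeypot or self.anomaly_score(x) > threshold
```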

Keywords: security, intrusion detection, intrusion prevention, honeypot, anomaly-based detection, signature-based detection, cloud computing, kfsensor

Procedia PDF Downloads 383
1446 Performance Evaluation of Using Genetic Programming Based Surrogate Models for Approximating Simulation Complex Geochemical Transport Processes

Authors: Hamed K. Esfahani, Bithin Datta

Abstract:

Transport of reactive chemical contaminant species in groundwater aquifers is a complex and highly non-linear physical and geochemical process, especially in real-life scenarios. Simulating this transport process involves solving complex nonlinear equations and generally requires huge computational time for a given aquifer study area. Developing optimal remediation strategies for aquifers may require repeated solution of such complex numerical simulation models. To overcome this computational limitation and improve the feasibility of a large number of repeated simulations, Genetic Programming based trained surrogate models are developed to approximately simulate such complex transport processes. The transport of acid mine drainage, a hazardous pollutant, is first simulated using a numerical simulation model, HYDROGEOCHEM 5.0, for a contaminated aquifer at a historic mine site. The simulation model solution results for an illustrative contaminated aquifer site are then approximated by training and testing a Genetic Programming (GP) based surrogate model. Performance evaluation of the ensemble GP models as surrogate models for reactive species transport in groundwater demonstrates the feasibility of their use and the associated computational advantages. The results show the efficiency and feasibility of using ensemble GP surrogate models as approximate simulators of complex hydrogeologic and geochemical processes in a contaminated groundwater aquifer, incorporating the uncertainties of a historic mine site.
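
The sketch below shows the surrogate idea with gplearn's symbolic regressor as one possible GP implementation: a small ensemble is trained on input-output pairs from the expensive simulator and then averaged at prediction time. The stand-in data, hyperparameters, and ensemble size are illustrative; the paper's actual HYDROGEOCHEM inputs and ensemble design are not specified in the abstract.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

# X: simulator inputs (e.g., source fluxes, hydraulic conductivities);
# y: simulated concentration at a monitoring well. Stand-in data here.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 6))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]

ensemble = []
for seed in range(5):                         # small ensemble of GP surrogates
    gp = SymbolicRegressor(population_size=500, generations=20,
                           function_set=('add', 'sub', 'mul', 'div'),
                           parsimony_coefficient=0.001, random_state=seed)
    gp.fit(X[:400], y[:400])
    ensemble.append(gp)

# Ensemble prediction: average over members, replacing one expensive
# simulation run with a near-instant evaluation.
y_hat = np.mean([gp.predict(X[400:]) for gp in ensemble], axis=0)
```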

Keywords: geochemical transport simulation, acid mine drainage, surrogate models, ensemble genetic programming, contaminated aquifers, mine sites

Procedia PDF Downloads 277
1445 Towards a Robust Patch Based Multi-View Stereo Technique for Textureless and Occluded 3D Reconstruction

Authors: Ben Haines, Li Bai

Abstract:

Patch-based reconstruction methods have been, and still are, among the top-performing approaches to 3D reconstruction to date. Their local approach to refining the position and orientation of a patch, free of global minimisation and independent of surface smoothness, makes patch-based methods extremely powerful in recovering fine-grained detail of an object's surface. However, patch-based approaches still fail to faithfully reconstruct textureless or highly occluded surface regions; thus, though performing well under lab conditions, they deteriorate in industrial or real-world situations. They are also computationally expensive. Current patch-based methods generate point clouds with holes in textureless or occluded regions that require expensive energy minimisation techniques to fill and interpolate into a high-fidelity reconstruction. Such shortcomings hinder the adaptation of these methods for industrial applications, where object surfaces are often highly textureless and the speed of reconstruction is an important factor. This paper presents ongoing work towards a multi-resolution approach that addresses these problems, utilising particle swarm optimisation to reconstruct high-fidelity geometry and increasing robustness to textureless features through an adapted approach to normalised cross-correlation. The work also aims to speed up the reconstruction using advances in GPU technologies and to remove the need for costly initialisation and expansion. Through the combination of these enhancements, the intention of this work is to create denser patch clouds, even in textureless regions, within a reasonable time. Initial results show the potential of such an approach to construct denser point clouds with an accuracy comparable to that of the current top-performing algorithms.
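
For reference, the photo-consistency measure at the heart of such methods can be written in a few lines. The sketch below scores a candidate patch (a depth-and-normal hypothesis) by the average pairwise normalised cross-correlation of its projections into the visible views, which is the quantity a particle swarm optimiser would maximise; the adapted NCC variant mentioned above is not reproduced here.

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Normalised cross-correlation between two image patches sampled on
    a patch's tangent plane; values near 1 indicate photo-consistency."""
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float(np.mean(a * b))

def photo_consistency(patch_samples):
    """Average pairwise NCC of the same patch sampled in several views."""
    scores = [ncc(patch_samples[i], patch_samples[j])
              for i in range(len(patch_samples))
              for j in range(i + 1, len(patch_samples))]
    return float(np.mean(scores))
```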

Keywords: 3D reconstruction, multiview stereo, particle swarm optimisation, photo consistency

Procedia PDF Downloads 203
1444 Sensitivity Analysis of Prestressed Post-Tensioned I-Girder and Deck System

Authors: Tahsin A. H. Nishat, Raquib Ahsan

Abstract:

Sensitivity analysis of the design parameters of an optimization procedure can become a significant factor while designing any structural system. The objectives of this study are to analyze the sensitivity of the deck slab thickness parameter obtained from both the conventional and the optimum design methodology of a pre-stressed post-tensioned I-girder and deck system, and to compare the relative significance of the slab thickness. For the conventional method, the values of 14 design parameters obtained by the conventional iterative design of a real-life I-girder bridge project have been considered. For the optimization method, cost optimization of this system has been performed using the global optimization methodology 'Evolutionary Operation (EVOP)'. The problem, from which the optimum values of the 14 design parameters have been obtained, contains 14 explicit constraints and 46 implicit constraints. For both types of design parameters, a sensitivity analysis has been conducted on the deck slab thickness parameter, which can become too sensitive for the obtained optimum solution. Deviations of the slab thickness on both the upper and lower sides of its optimum value have been considered, reflecting its realistic possible range of variation during construction. In this procedure, the remaining parameters have been kept unchanged. For small deviations from the optimum value, compliance with the explicit and implicit constraints has been examined, and variations in the cost have been estimated. It is found that, without violating any constraint, the deck slab thickness obtained by the conventional method can be increased by up to 25 mm, whereas the slab thickness obtained by cost optimization can be increased by only up to 0.3 mm. This result suggests that the slab thickness is less sensitive in the conventional method of design. Therefore, for realistic design purposes, a sensitivity analysis should be conducted for either design procedure of a girder and deck system.
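
The one-at-a-time perturbation scheme described above reduces to a short loop. In the sketch below, `evaluate` stands in for the full structural check that returns the cost and any violated explicit or implicit constraints, and the perturbation range is illustrative.

```python
def slab_thickness_sensitivity(design, evaluate, deltas_mm):
    """One-at-a-time sensitivity sweep: perturb only the slab thickness,
    keep the remaining 13 parameters fixed, and record the cost change
    and constraint compliance for each deviation."""
    base_cost, _ = evaluate(design)
    rows = []
    for d in deltas_mm:                    # e.g. range(-5, 26) in mm
        trial = dict(design, slab_thickness=design["slab_thickness"] + d)
        cost, violations = evaluate(trial)
        rows.append((d, cost - base_cost, violations))
    return rows                            # (deviation, extra cost, violated constraints)
```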

Keywords: sensitivity analysis, optimum design, evolutionary operations, PC I-girder, deck system

Procedia PDF Downloads 137
1443 Rights, Differences and Inclusion: The Role of Transdisciplinary Approach in the Education for Diversity

Authors: Ana Campina, Maria Manuela Magalhaes, Eusebio André Machado, Cristina Costa-Lobo

Abstract:

The inclusive school advocates respect for differences, equal opportunities, and a quality education for all, including students with special educational needs. In the pursuit of educational equity, guaranteeing equality in access and results, it becomes the responsibility of the school to recognize students' needs, adapting to the various styles and rhythms of learning and ensuring the adequacy of curricula, strategies, and resources, both material and human. This paper presents a set of theoretical reflections at the disciplinary interface between the legal and education sciences and school administration and management, with the aim of understanding the real characteristics of inclusion, balanced against inclusion policies and the need for an education in Human Rights, especially for diversity. Considering current social complexity alongside the important educational instruments and strategies, most of them enshrined in policy, this paper aims to expose the existing contexts that run counter to the laws, policies, and educational needs of inclusion. More than a single study, this research aims to develop a map of the reality and guidelines for implementing action. The results point to the usefulness and pertinence of a school in which educational managers, teachers, parents, and students are involved in the creation, implementation, and monitoring of curricula that are flexible and adapted to the educational needs of students, promoting collaborative work among teachers. We are then faced with a scenario that points to the need to reflect on the legislation and curricular management of inclusive classes and to operationalize the processes of elaborating curricular adaptations and differentiation in the classroom. The transdisciplinary approach is well suited to pedagogic and social education, employing the Human Rights binomial of teaching and learning, supported by inclusion laws and attuned to the realistic needs of building an effective, successful society.

Keywords: rights, transdisciplinary, inclusion policies, education for diversity

Procedia PDF Downloads 389
1442 ESP: Peculiarities of Teaching Psychology in English to Russian Students

Authors: Ekaterina A. Redkina

Abstract:

The necessity and importance of teaching professionally oriented content in English need no proof nowadays. Consequently, the ability to share personal ESP teaching experience seems of great importance. This paper is based on eight years of ESP and EFL teaching experience at the Moscow State Linguistic University, Moscow, Russia, and presents a theoretical analysis of the specifics, possible problems, and perspectives of teaching Psychology in English to Russian psychology students. The paper concerns issues that are common to different ESP classrooms and familiar to different teachers. Among them are: designing an ESP curriculum (for psychologists in this case), finding the balance between content and language in the classroom, the main teaching principles (the 4 C's), and the choice of assessment techniques and teaching material. The main objective of teaching psychology in English to Russian psychology students is developing the knowledge and skills essential for professional psychologists. Belonging to the international professional community presupposes high-level content-specific knowledge and skills, a high level of linguistic skills and cross-cultural linguistic ability, and, finally, a high level of professional etiquette. Thus, teaching psychology in English pursues three main outcomes: content, language, and professional skills. The paper provides an explanation of each of the outcomes, with examples. Particular attention is paid to the lesson structure, its objectives, and the difference between a typical EFL and an ESP lesson. An attempt is also made to find commonalities between teaching ESP and CLIL. One view holds that CLIL is more common in schools, while ESP is more common in higher education. The paper argues that CLIL methodology can be successfully used in ESP teaching and that many CLIL activities are also well adapted to professional purposes. The paper provides insights into the process of teaching psychologists in Russia, real teaching experience, and teaching techniques that have proved efficient over time.

Keywords: ESP, CLIL, content, language, psychology in English, Russian students

Procedia PDF Downloads 609
1441 Simultaneous Adsorption and Characterization of NOx and SOx Emissions from Power Generation Plant on Sliced Porous Activated Carbon Prepared by Physical Activation

Authors: Muhammad Shoaib, Hassan M. Al-Swaidan

Abstract:

Air pollution has been a major challenge for scientists today due to the release of toxic emissions from various industries like power plants, desalination plants, industrial processes, and transportation vehicles. Harmful emissions into the air represent an environmental pressure that reflects negatively on human health and productivity, leading to a real loss in the national economy. A variety of air pollutants in the form of carbon oxides, hydrocarbons, nitrogen oxides, sulfur oxides, suspended particulate material, etc., are present in air due to the combustion of different types of fuels like crude oil, diesel oil, and natural gas. Among the various pollutants, NOx and SOx emissions are considered highly toxic due to their carcinogenicity and their relation to various health disorders. In the Kingdom of Saudi Arabia, electricity is generated by burning crude oil, diesel, or natural gas in the turbines of power stations. Of these three, crude oil is used extensively for electricity generation. The burning of crude oil releases heavy loads of gaseous pollutants such as sulfur oxides (SOx) and nitrogen oxides (NOx), which are ultimately discharged into the environment and pose a serious environmental threat. The breakthrough point in the lab studies, using 1 g of sliced activated carbon adsorbent, comes after 20 and 30 minutes for NOx and SOx, respectively, whereas at the PP8 plant the breakthrough point comes within seconds. The saturation point in the lab studies comes after 100 and 120 minutes, while for the actual PP8 plant it comes after 60 and 90 minutes for NOx and SOx adsorption, respectively. Surface characterization of NOx and SOx adsorption on SAC confirms the presence of the corresponding peaks in the FT-IR spectrum. The CHNS study verifies that the SAC is suitable for NOx and SOx, along with some other carbon- and hydrogen-containing compounds coming out of the stack emission stream from the turbines of the power plant.

Keywords: activated carbon, flue gases, NOx and SOx adsorption, physical activation, power plants

Procedia PDF Downloads 347
1440 Heliport Remote Safeguard System Based on Real-Time Stereovision 3D Reconstruction Algorithm

Authors: Ł. Morawiński, C. Jasiński, M. Jurkiewicz, S. Bou Habib, M. Bondyra

Abstract:

With the development of optics, electronics, and computers, vision systems are increasingly used in various areas of life, science, and industry. Vision systems have a huge number of applications. They can be used in quality control, object detection, data reading (e.g., QR codes), etc. A large part of them is used for measurement purposes, and some make it possible to obtain a 3D reconstruction of the tested objects or measurement areas. 3D reconstruction algorithms are mostly based on creating depth maps from data that can be acquired by active or passive methods. Due to the specific application in airfield technology, only passive methods are applicable, because of other systems working on the site that could be blinded across most spectral bands. Furthermore, the reconstruction is required to work over long distances, ranging from hundreds of meters to tens of kilometers, with low loss of accuracy even in harsh conditions such as fog, rain, or snow. In response to those requirements, HRESS (Heliport REmote Safeguard System) was developed, whose main part is a rotational head with a two-camera stereovision rig gathering images 360 degrees around the head, along with stereovision 3D reconstruction and point cloud combination. The sub-pixel analysis introduced in the HRESS system makes it possible to obtain an increased distance measurement resolution and an accuracy of about 3% for distances over one kilometer. Ultimately, this leads to more accurate and reliable measurement data in the form of a point cloud. Moreover, the program algorithm introduces operations enabling the filtering of erroneously collected data in the point cloud. All activities on the programming, mechanical, and optical sides are aimed at obtaining the most accurate 3D reconstruction of the environment in the measurement area.
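
The role of sub-pixel disparity at kilometre ranges follows directly from the rectified-stereo triangulation relation Z = fB/d. The sketch below evaluates it together with the first-order range resolution dZ ≈ Z²·Δd/(fB), with illustrative focal length and baseline values, since the actual HRESS rig parameters are not given in the abstract.

```python
def stereo_distance(f_px, baseline_m, disparity_px):
    """Rectified stereo triangulation: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def range_resolution(f_px, baseline_m, z_m, disparity_step_px):
    """First-order range resolution at distance Z for a given disparity
    step; sub-pixel matching shrinks the step and hence the error."""
    return z_m ** 2 * disparity_step_px / (f_px * baseline_m)

# Illustrative numbers: 8000 px focal length, 1 m baseline, 1 km range.
for step in (1.0, 0.1):           # whole-pixel vs. 1/10-pixel disparity
    print(step, range_resolution(8000, 1.0, 1000.0, step))  # ~125 m vs ~12.5 m
```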

Keywords: airfield monitoring, artificial intelligence, stereovision, 3D reconstruction

Procedia PDF Downloads 125
1439 Polymorphisms of the UM Genotype of CYP2C19*17 in Thais Taking Medical Cannabis

Authors: Athicha Cherdpunt, Patompong Satapornpong

Abstract:

Medical cannabis is made up of components known as cannabinoids, chief among them Δ9-tetrahydrocannabinol (THC) and cannabidiol (CBD). Cannabinoids can be used in many treatments, such as chemotherapy-induced nausea and vomiting, cachexia, anorexia nervosa, spinal cord injury and disease, epilepsy, pain, and many others. However, the adverse drug reactions (ADRs) of THC can include sedation, anxiety, dizziness, appetite stimulation, and impairments in driving and cognitive function. Furthermore, genetic polymorphisms of CYP2C9, CYP2C19, and CYP3A4 influence THC metabolism and might be a cause of ADRs. In particular, the CYP2C19*17 allele increases gene transcription and therefore results in an ultra-rapid metabolizer (UM) phenotype. The aim of this study is to investigate the frequency of the CYP2C19*17 allele in Thai patients who have been treated with medical cannabis. We prospectively enrolled 60 Thai patients treated with medical cannabis, together with their clinical data, from the College of Pharmacy, Rangsit University. DNA of each patient was isolated from EDTA blood using the Genomic DNA Mini Kit. CYP2C19*17 genotyping was conducted using the real-time PCR ViiA7 system (ABI, Foster City, CA, USA). Of the 30 patients in the medical cannabis-induced ADRs group, 20 (67%) were female and 10 (33%) were male, with an age range of 30-69 years. The 30 patients without medical cannabis-induced ADRs (control group) consisted of 17 (57%) females and 13 (43%) males. The most common ADRs of medical cannabis treatment in the case group were dry mouth and dry throat (77%), tachycardia (70%), nausea (30%), and arrhythmia (10%). Approximately 93% of the case group carried CYP2C19*1/*1 (normal metabolizers), while 7% carried CYP2C19*1/*17 (ultra-rapid metabolizers). Meanwhile, we found 90% CYP2C19*1/*1 and 10% CYP2C19*1/*17 in the control group. In this study, we identified the frequency of the CYP2C19*17 allele in a Thai population, which will support pharmacogenetic biomarkers for screening and avoiding ADRs in medical cannabis treatment.

Keywords: CYP2C19, allele frequency, ultra rapid metabolizer, medical cannabis

Procedia PDF Downloads 109
1438 Green Housing Projects in Egypt: A Futuristic Approach

Authors: Shimaa Mahmoud Ali Ahmed, Boshra Tawfek El-Shreef

Abstract:

Sustainable development has become an important concern worldwide, and climate change has become a global threat. These issues affect how we approach environmental questions and how we should approach them. Environmental aspects have an important impact on the built environment, which is why knowledge about green building and green construction has become a vital dimension of sustainable urban development in facing the challenges of climate change. There are several levels of green building, from energy-efficient lighting to 100% eco-friendly construction; in Egypt, the concept of green buildings is still a rare occurrence, being relatively new to the market. Several projects on the ground currently employ sustainable and green solutions to some extent; some achieve limited success, and others fail to employ the new solutions. The market and the cost are also major factors. Since the last century, green architecture and environmental sustainability have become a prominent trend that researchers like to follow. Nowadays, the trend towards green has shifted to housing and real estate projects. While the environmental aspects are the key to achieving green buildings, the economic benefits and the market forces are considered big challenges. The paper assumes that some appropriate environmental treatments could be added to the applied prototype of governmental social housing projects in Egypt to achieve better environmental solutions. The aim of the research is to bring housing projects in Egypt closer to the track of sustainable and green buildings through a local, forward-looking proposal to be integrated into current policies. The proposed model is based upon adding appropriate, inexpensive environmental modifications to the prototype of the Ministry of Housing, Infrastructure, and New Urban Communities. The research is based on an analytical, comparative, and inductive approach to study and analyze housing projects in Egypt and the possibilities of integrating green techniques into them.

Keywords: green buildings, urban sustainability, housing projects, sustainable development goals, Egypt 2030

Procedia PDF Downloads 137
1437 Early Impact Prediction and Key Factors Study of Artificial Intelligence Patents: A Method Based on LightGBM and Interpretable Machine Learning

Authors: Xingyu Gao, Qiang Wu

Abstract:

Patents play a crucial role in protecting innovation and intellectual property. Early prediction of the impact of artificial intelligence (AI) patents helps researchers and companies allocate resources and make better decisions. Understanding the key factors that influence patent impact can assist researchers in gaining a better understanding of the evolution of AI technology and innovation trends. Therefore, identifying highly impactful patents early and providing support for them holds immeasurable value in accelerating technological progress, reducing research and development costs, and mitigating market positioning risks. Despite the extensive research on AI patents, accurately predicting their early impact remains a challenge. Traditional methods often consider only single factors or simple combinations, failing to comprehensively and accurately reflect the actual impact of patents. This paper utilized the artificial intelligence patent database from the United States Patent and Trademark Office and the Lens.org patent retrieval platform to obtain specific information on 35,708 AI patents. Using six machine learning models, namely Multiple Linear Regression, Random Forest Regression, XGBoost Regression, LightGBM Regression, Support Vector Machine Regression, and K-Nearest Neighbors Regression, and using early indicators of patents as features, the paper comprehensively predicted the impact of patents from three aspects: technical, social, and economic. These aspects include the technical leadership of patents, the number of citations they receive, and their shared value. The SHAP (Shapley Additive exPlanations) metric was used to explain the predictions of the best model, quantifying the contribution of each feature to the model's predictions. The experimental results on the AI patent dataset indicate that, for all three target variables, LightGBM regression shows the best predictive performance. Specifically, patent novelty has the greatest impact on predicting the technical impact of patents and has a positive effect. Additionally, the number of owners, the number of backward citations, and the number of independent claims are all crucial and have a positive influence on predicting technical impact. In predicting the social impact of patents, the number of applicants is considered the most critical input variable, but it has a negative impact on social impact. At the same time, the number of independent claims, the number of owners, and the number of backward citations are also important predictive factors, and they have a positive effect on social impact. For predicting the economic impact of patents, the number of independent claims is considered the most important factor and has a positive impact on economic impact. The number of owners, the number of sibling countries or regions, and the size of the extended patent family also have a positive influence on economic impact. The study primarily relies on data from the United States Patent and Trademark Office for artificial intelligence patents. Future research could consider more comprehensive data sources, including artificial intelligence patent data from a global perspective. While the study takes into account various factors, there may still be other important features not considered. In the future, factors such as patent implementation and market applications may be considered, as they could have an impact on the influence of patents.
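
A minimal sketch of the best-performing pipeline, LightGBM regression explained with SHAP, is shown below. The feature names and the synthetic stand-in data are illustrative only, not the paper's actual indicator set or targets.

```python
import numpy as np
import pandas as pd
import lightgbm as lgb
import shap
from sklearn.model_selection import train_test_split

# Stand-in data shaped like the early patent indicators named in the
# abstract (novelty, owners, backward citations, independent claims,
# applicants); y plays the role of one impact target, e.g. citations.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "novelty": rng.uniform(size=1000),
    "n_owners": rng.integers(1, 6, 1000),
    "backward_citations": rng.integers(0, 50, 1000),
    "independent_claims": rng.integers(1, 20, 1000),
    "n_applicants": rng.integers(1, 5, 1000),
})
y = 3 * X["novelty"] + 0.1 * X["independent_claims"] + rng.normal(0, 0.5, 1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)
model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05)
model.fit(X_train, y_train)

# SHAP quantifies each feature's signed contribution per prediction,
# which is how effects such as "more applicants, lower social impact"
# are read off the fitted model.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
```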

Keywords: patent influence, interpretable machine learning, predictive models, SHAP

Procedia PDF Downloads 50
1436 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling

Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed

Abstract:

The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm for its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under different ANN structural configurations: (1) a single-hidden-layer and (2) a double-hidden-layer feedforward back-propagation network. The results revealed generally that the GDM optimisation algorithm, with its adaptive learning capability, used a relatively shorter time in both the training and validation phases than the Levenberg-Marquardt (LM) and Bayesian regularisation (Br) algorithms, though learning may not be consummated; this held in all instances, including the prediction of extreme flow conditions 1 day and 5 days ahead. In specific statistical terms, the average model performance efficiencies using the coefficient of efficiency (CE) statistic were Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96%, for the training and validation phases, respectively. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on the findings, it is imperative to state that the adoption of ANNs for real-time forecasting should employ training algorithms that do not have the computational overhead of LM, which requires computation of the Hessian matrix and protracted time and is sensitive to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure and quality of the forecast, as well as mitigation of network overfitting. On the whole, it is recommended that evaluation should consider the implications of (i) data quality and quantity and (ii) transfer functions on the overall network forecast performance.
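
For contrast with the Hessian-based LM scheme, the sketch below trains a single-hidden-layer feedforward network with plain gradient descent with momentum in numpy: each parameter keeps a velocity term and there is no second-order computation. The network size, learning rate, and momentum value are illustrative, not the study's configuration.

```python
import numpy as np

def train_gdm(X, y, hidden=10, lr=0.05, momentum=0.9, epochs=500, seed=0):
    """Single-hidden-layer feedforward network trained with gradient
    descent with momentum (GDM): a velocity term per parameter, no
    Hessian (LM) and no regularisation hyper-updates (Br).
    X has shape (n, features); y has shape (n, 1)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]
    params = [W1, b1, W2, b2]
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                  # forward pass
        e = (h @ W2 + b2) - y                     # error under MSE loss
        gW2 = h.T @ e / len(X); gb2 = e.mean(0)   # backward pass
        dh = (e @ W2.T) * (1 - h ** 2)
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
        for i, g in enumerate((gW1, gb1, gW2, gb2)):
            vel[i] = momentum * vel[i] - lr * g   # momentum update
            params[i] += vel[i]                   # in-place parameter step
    return params

# Usage on toy data
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X.sum(axis=1, keepdims=True)
W1, b1, W2, b2 = train_gdm(X, y)
```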

Keywords: streamflow, neural network, optimisation, algorithm

Procedia PDF Downloads 152
1435 Trends in the Incidence of Bloodstream Infections in Patients with Hematological Malignancies in the Period 1991–2012

Authors: V. N. Chebotkevich, E. E. Schetinkina, V. V. Burylev, E. I. Kaytandzhan, N. P. Stizhak

Abstract:

Objective: Bloodstream infections (BSIs) are severe, life-threatening illnesses for immunocompromised patients with hematological malignancies. We report the trend in bloodstream infections in this group of patients over the period 1991-2012. Methods: A total of 4742 blood samples were investigated. All blood cultures were incubated in a continuous monitoring system for 7 days before negatives were discarded. When a culture signaled positive, the organism was identified by conventional methods. Real-time polymerase chain reaction (PCR) was used for the detection of human herpesvirus 6 (HHV-6), Cytomegalovirus (CMV), and Epstein-Barr virus (EBV). Results: Between 1991 and 2001, Gram-positive bacteria (Staphylococcus epidermidis, Staphylococcus aureus) were the most common organisms isolated (70.9%), versus Gram-negative rods (Escherichia coli, Klebsiella spp., Pseudomonas spp.) at 29.1%. In the next decade, 2002-2012, the share of Gram-negative bacteria increased to 40.2%. It is shown that bacteremia was significantly more frequent against a background of detectable Cytomegalovirus and Epstein-Barr virus-specific DNA in blood. Over recent years, an increased frequency of micromycetes was registered in the blood of patients with hematological malignancies (Candida spp. was predominant). Conclusion: Accurate and timely detection of BSI is important in determining the appropriate treatment of infectious complications in patients with hematological malignancies. The isolation of Staphylococcus epidermidis from blood cultures remains a clinical dilemma for physicians and microbiologists, but in many cases this agent is of clinical significance in immunocompromised patients with hematological malignancies. The role of CMV and EBV in the development of bacteremia was demonstrated.

Keywords: infectious complications, blood stream infections, bacteremia, hemoblastosis

Procedia PDF Downloads 352
1434 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features

Authors: Bushra Zafar, Usman Qamar

Abstract:

Large sample sizes and high dimensionality undermine the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting knowledge from a variety of databases; they provide supervised learning in the form of classification, designing models to describe vital data classes, where the structure of the classifier is based on the class attribute. Classification efficiency and accuracy are often influenced to a great extent by noisy and undesirable features in real application data sets. The inherent nature of a data set greatly masks its quality analysis and leaves quite few practical approaches to use. To our knowledge, we present for the first time an approach for investigating the structure and quality of datasets by providing a targeted analysis of the localization of noisy and irrelevant features. Machine learning relies heavily on feature selection as a pre-processing step, which allows us to select a small subset from a larger number of features by reducing the space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of the given data sample by searching for a small set of important features which may result in good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is used, with an external classifier for discriminative feature selection. Features are selected based on their number of occurrences in the chosen chromosomes. Sample datasets have been used to demonstrate the proposed idea effectively. The proposed method improves the average accuracy across different datasets to about 95%. Experimental results illustrate that the proposed algorithm increases the accuracy of prediction of different diseases.
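
A compact sketch of the wrapper loop is given below: chromosomes are feature bitmasks scored by cross-validated KNN accuracy (KNN is taken from the keywords; the GA operators and parameters are common defaults rather than the paper's exact settings), and features are finally ranked by their frequency of occurrence in the evolved population.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def ga_feature_selection(X, y, pop=30, gens=25, p_mut=0.05, seed=0):
    """Wrapper-based GA sketch: bitmask chromosomes, tournament
    selection, uniform crossover, bit-flip mutation; fitness is the
    cross-validated accuracy of a KNN classifier on the masked data."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    population = rng.integers(0, 2, (pop, n))

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        knn = KNeighborsClassifier(n_neighbors=5)
        return cross_val_score(knn, X[:, mask.astype(bool)], y, cv=3).mean()

    for _ in range(gens):
        scores = np.array([fitness(m) for m in population])
        new_pop = []
        for _ in range(pop):
            i, j = rng.integers(0, pop, 2)         # tournament selection x2
            a = population[i] if scores[i] >= scores[j] else population[j]
            k, l = rng.integers(0, pop, 2)
            b = population[k] if scores[k] >= scores[l] else population[l]
            cross = rng.integers(0, 2, n)          # uniform crossover
            child = np.where(cross == 1, a, b)
            flip = rng.random(n) < p_mut           # bit-flip mutation
            child = np.where(flip, 1 - child, child)
            new_pop.append(child)
        population = np.array(new_pop)
    freq = population.mean(axis=0)                 # occurrence frequency
    return freq.argsort()[::-1]                    # features ranked by frequency
```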

Keywords: data mining, genetic algorithm, KNN algorithm, wrapper-based feature selection

Procedia PDF Downloads 316
1433 Wastewater Treatment Using Ternary Hybrid Advanced Oxidation Processes Through Heterogeneous Fenton

Authors: Komal Verma, V. S. Moholkar

Abstract:

In this study, the challenge of effectively treating and mineralizing industrial wastewater prior to its discharge into natural water bodies, such as rivers and lakes, is addressed. The focus is particularly on the wastewater produced by chemical process industries, including refineries, petrochemicals, fertilizer, pharmaceuticals, pesticides, and dyestuff industries. These wastewaters often contain stubborn organic pollutants that conventional techniques, such as microbial processes, cannot efficiently degrade. To tackle this issue, a ternary hybrid technique comprising adsorption, a heterogeneous Fenton process, and sonication has been employed. The study evaluates the effectiveness of this approach for treating and mineralizing wastewater from a fertilizer industry located in Northeast India. The study comprises several key components, starting with the synthesis of the Fe3O4@AC nanocomposite using the co-precipitation method. The nanocomposite is then subjected to comprehensive characterization through various standard techniques, including FTIR, FE-SEM, EDX, TEM, BET surface area analysis, XRD, and magnetic property determination using VSM. Next, the process parameters of wastewater treatment are statistically optimized, with a high level of COD (Chemical Oxygen Demand) removal as the response variable. The Fe3O4@AC nanocomposite's adsorption characteristics and kinetics are also assessed in detail. The remarkable outcome of this study is the successful application of the ternary hybrid technique, combining adsorption, the Fenton process, and sonication, which leads to nearly complete mineralization (TOC removal) of the fertilizer industry wastewater. The results highlight the potential of the Fe3O4@AC nanocomposite and the ternary hybrid technique as a promising solution for tackling challenging wastewater pollutants from various chemical process industries. This paper reports investigations into the mineralization of industrial wastewater (COD = 3246 mg/L, TOC = 2500 mg/L) using a ternary (ultrasound + Fenton + adsorption) hybrid advanced oxidation process. Fe3O4-decorated activated charcoal (Fe3O4@AC) nanocomposites (surface area = 538.88 m2/g; adsorption capacity = 294.31 mg/g) were synthesized by co-precipitation. The wastewater treatment process was optimized using a central composite statistical design. At optimum conditions, viz. pH = 4.2, H2O2 loading = 0.71 M, and adsorbent dose = 0.34 g/L, the reductions in COD and TOC of the wastewater were 94.75% and 89%, respectively. This results from synergistic interactions between the adsorption of pollutants onto activated charcoal and surface Fenton reactions induced by the leaching of Fe2+/Fe3+ ions from the Fe3O4 nanoparticles. Micro-convection generated by sonication assisted faster mass transport (adsorption/desorption) of pollutants between the Fe3O4@AC nanocomposite and the solution. The net result of this synergism was a high rate of interactions and reactions among radicals and pollutants that led to the effective mineralization of the wastewater. The Fe3O4@AC showed excellent recovery (> 90 wt%) and reusability (> 90% COD removal) over 5 successive treatment cycles. LC-MS analysis revealed effective (> 50%) degradation of more than 25 major contaminants (in the form of herbicides and pesticides) after treatment with the ternary hybrid AOP. Similarly, a toxicity analysis using the seed germination technique revealed a ~60% reduction in the toxicity of the wastewater after treatment.

Keywords: chemical oxygen demand (COD), Fe3O4@AC nanocomposite, kinetics, LC-MS, RSM, toxicity

Procedia PDF Downloads 72
1432 Open Reading Frame Marker-Based Capacitive DNA Sensor for Ultrasensitive Detection of Escherichia coli O157:H7 in Potable Water

Authors: Rehan Deshmukh, Sunil Bhand, Utpal Roy

Abstract:

We report the label-free electrochemical detection of Escherichia coli O157:H7 (ATCC 43895) in potable water using a DNA probe as the sensing molecule, targeting an open reading frame marker. An indium tin oxide (ITO) surface was modified with organosilane, and glutaraldehyde was applied as a linker to fabricate the DNA sensor chip. The non-Faradaic electrochemical impedance spectroscopy (EIS) behavior was investigated at each step of sensor fabrication using cyclic voltammetry, impedance, phase, relative permittivity, capacitance, and admittance. Atomic force microscopy (AFM) and scanning electron microscopy (SEM) revealed significant changes in surface topography during DNA sensor chip fabrication. The decrease in the percentage of pinholes from 2.05 (bare ITO) to 1.46 (after DNA hybridization) suggested the capacitive behavior of the DNA sensor chip. The results of non-Faradaic EIS studies of the DNA sensor chip showed a systematically declining trend of the capacitance as well as the relative permittivity upon DNA hybridization. The DNA sensor chip exhibited linearity in the range of 0.5 to 25 pg/10 mL for E. coli O157:H7 (ATCC 43895). The limit of detection (LOD) at 95% confidence, estimated by logistic regression, was 0.1 pg DNA/10 mL of E. coli O157:H7 (equivalent to 13.67 CFU/10 mL), with a p-value of 0.0237. Moreover, the fabricated DNA sensor chip used for the detection of E. coli O157:H7 showed no significant cross-reactivity with closely and distantly related bacteria such as Escherichia coli MTCC 3221, Escherichia coli O78:H11 MTCC 723, and Bacillus subtilis MTCC 736. Consequently, the results obtained in our study demonstrated the possible application of the developed DNA sensor chip for E. coli O157:H7 ATCC 43895 in real water samples as well.

Keywords: capacitance, DNA sensor, Escherichia coli O157:H7, open reading frame marker

Procedia PDF Downloads 144
1431 Visualization Tool for EEG Signal Segmentation

Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh

Abstract:

This work concerns the development of a tool for the visualization and segmentation of electroencephalograph (EEG) signals based on frequency domain features. Changes in the frequency domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm provides a way to represent changes in mental state using the different frequency band powers in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy, and cognition studies have been suggested in the literature and used for data classification, but the proposed method focuses mainly on better presentation of the signal, which makes it a useful visualization tool for clinicians. The algorithm performs basic filtering using band-pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet-transform-based de-noising method. Frequency domain features are used for segmentation, since the spectral power of the different frequency bands describes the mental state of the subject. Two sliding windows are further used for segmentation; one provides the time scale and the other assigns the segmentation rule. The segmented data are displayed second by second, successively, with different color codes. The segment length can be selected as per the needs of the objective. The proposed algorithm has been tested on the EEG data set obtained from the University of California, San Diego's online data repository. The proposed tool gives a better visualization of the signal in the form of segmented epochs of desired length representing the power spectrum variation in the data. The algorithm is designed in such a way that it takes the data points with respect to the sampling frequency for each time frame, so it can be extended for real-time visualization with a desired epoch length.
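
A minimal sketch of the band-power segmentation stage is given below: after the 0.1-45 Hz band-pass (the notch frequency and sampling rate are assumptions, as the abstract does not state them), each one-second epoch is labelled by its dominant relative band power, which would then be mapped to a color code in the display.

```python
import numpy as np
from scipy.signal import welch, butter, filtfilt, iirnotch

FS = 256          # assumed sampling frequency (Hz)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def preprocess(x, fs=FS):
    """Basic filtering as in the tool: 0.1-45 Hz band-pass plus a
    notch filter (50 Hz mains frequency is an assumption here)."""
    b, a = butter(4, [0.1 / (fs / 2), 45 / (fs / 2)], btype="band")
    x = filtfilt(b, a, x)
    bn, an = iirnotch(50.0, 30.0, fs)
    return filtfilt(bn, an, x)

def dominant_band(x, fs=FS):
    """Label a windowed epoch by its dominant relative band power."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), fs * 2))
    powers = {name: pxx[(f >= lo) & (f < hi)].sum()
              for name, (lo, hi) in BANDS.items()}
    return max(powers, key=powers.get)

def segment(x, fs=FS, epoch_s=1.0):
    """Second-by-second labels; each label would map to a color code."""
    n = int(fs * epoch_s)
    return [dominant_band(x[i:i + n], fs)
            for i in range(0, len(x) - n + 1, n)]
```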

Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation

Procedia PDF Downloads 397
1430 Numerical Investigation of Turbulent Inflow Strategy in Wind Energy Applications

Authors: Arijit Saha, Hassan Kassem, Leo Hoening

Abstract:

Ongoing climate change demands the increasing use of renewable energies. Wind energy plays an important role in this context since it can be applied almost everywhere in the world. To reduce the costs of wind turbines and make them more competitive, simulations are very important, since experiments are often too costly, if possible at all. A wind turbine on a vast open area experiences turbulence generated by the atmosphere, so it was of utmost interest for this research to generate that turbulence in the computational simulation domain through various inlet turbulence generation methods, such as the precursor cyclic and Kaimal Spectrum Exponential Coherence (KSEC) methods. To validate computational fluid dynamics simulations of wind turbines against experimental data, it is crucial to set up the simulation conditions as close to reality as possible. The present work, therefore, aims at investigating the turbulent inflow strategy and boundary conditions of KSEC and providing a comparative analysis alongside the precursor cyclic method for large eddy simulation within the context of wind energy applications. For the generation of the turbulent box through the KSEC method, constrained data were first collected from an auxiliary channel flow and then processed with the open-source tool PyconTurb, whereas for the precursor cyclic method, the data from the auxiliary channel alone were sufficient. The functionality of these methods was studied through various statistical properties, such as variance and turbulence intensity, with respect to different bulk Reynolds numbers, and a conclusion was drawn on the feasibility of the KSEC method. Furthermore, it was found necessary to verify the obtained data against a DNS case setup to confirm its applicability to real-field CFD simulations.
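As a sketch of turbulent-box generation with the open-source PyconTurb tool mentioned above, the following Python snippet generates an unconstrained Kaimal-spectrum box; the grid extents, wind speed, and duration are illustrative assumptions, and the constrained variant (passing auxiliary-channel data via a TimeConstraint object) is only indicated in a comment rather than reproduced.

```python
# Hedged sketch of synthetic turbulence generation with PyconTurb.
import numpy as np
from pyconturb import gen_spat_grid, gen_turb

y = np.linspace(-50, 50, 5)     # lateral grid points (m), assumed
z = np.linspace(20, 140, 5)     # vertical grid points (m), assumed
spat_df = gen_spat_grid(y, z)   # spatial layout of simulation points

# Unconstrained box with PyconTurb's default Kaimal spectrum and exponential
# coherence; for the constrained workflow, a TimeConstraint built from the
# auxiliary-channel data would be passed via the `con_tc` argument.
turb_df = gen_turb(spat_df, T=600, dt=0.1, u_ref=10)
print(turb_df.head())
```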

Keywords: inlet turbulence generation, CFD, precursor cyclic, KSEC, large eddy simulation, PyconTurb

Procedia PDF Downloads 96
1429 Effect of Microstructure on Wear Resistance of Polycrystalline Diamond Composite Cutter of Bit

Authors: Fanyuan Shao, Wei Liu, Deli Gao

Abstract:

A polycrystalline diamond composite (PDC) cutter is made from diamond powder as the raw material, with cobalt metal or non-metallic elements as a binder, sintered onto a WC cemented carbide substrate under high temperature and high pressure. PDC bits fitted with PDC cutters are widely used in oil and gas drilling because of their high hardness, good wear resistance, and excellent impact toughness. The PDC cutter is the main cutting element of the bit and strongly affects the bit's service life. The wear resistance of the PDC cutter is measured by cutting granite on a vertical turret lathe (VTL). This experiment achieves long-distance cutting to obtain the relationship between the wear resistance of the PDC cutter and the cutting distance, which is closer to the real drilling situation. A load cell and a 3D optical profiler were used to obtain the cutting forces and the wear area, respectively, which also characterize the damage and wear of the PDC cutter. PDC cutters were cut via electrical discharge machining (EDM) and then flattened and polished. A scanning electron microscope (SEM) was used to observe the distribution of the cobalt binder and the size of the diamond particles in the PDC cutter. The cutting experiments show that the wear area of the PDC cutter has a good linear relationship with the cutting distance; correspondingly, the larger the wear area, the greater the cutting forces required to maintain the same cutting state. The size and distribution of the diamond particles in the polycrystalline diamond layer have a great influence on the wear resistance of the layer, and PDC cutters with fine diamond grains show more wear resistance than those with coarse grains. The deep leaching process helps to reduce the effect of the cobalt binder on the wear resistance of the polycrystalline diamond layer. This experimental study can provide an important basis for the application of PDC cutters in oil and gas drilling.
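The reported linear relationship between wear area and cutting distance can be quantified with an ordinary least-squares fit, as in the short Python sketch below; the data points are hypothetical placeholders, not measurements from the study.

```python
# Hedged sketch: linear fit of wear area vs. cutting distance (data illustrative).
import numpy as np

distance = np.array([0, 500, 1000, 1500, 2000], dtype=float)  # m, assumed
wear_area = np.array([0.0, 0.12, 0.25, 0.36, 0.49])           # mm^2, assumed

slope, intercept = np.polyfit(distance, wear_area, deg=1)     # least squares
r = np.corrcoef(distance, wear_area)[0, 1]                    # linearity check
print(f"wear_area ~ {slope:.2e} * distance + {intercept:.3f} (r = {r:.3f})")
```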

Keywords: polycrystalline diamond compact, scanning electron microscope, wear resistance, cutting distance

Procedia PDF Downloads 198
1428 Drippers Scaling Inhibition of the Localized Irrigation System by Green Inhibitors Based on Plant Extracts

Authors: Driouiche Ali, Karmal Ilham

Abstract:

The Agadir region is characterized by a dry climate, ranging from arid, attenuated by oceanic influences, to hyper-arid. The water mobilized in the agricultural sector of greater Agadir is 95% of underground origin and comes from the Chtouka water table; the rest comes from the surface waters of the Youssef Ben Tachfine dam. These waters are intended for the irrigation of 26,880 hectares of modern agriculture. More than 120 boreholes and wells are currently exploited; their depth varies between 10 m and 200 m, and the unit flow rates of the boreholes are 5 to 50 l/s. A drop in the water table level of about 1.5 m/year, on average, has been observed during the last five years. Farmers are thus called upon to improve irrigation methods, and localized (drip) irrigation is adopted to allow rational use of water. The importance of this irrigation system lies in the fact that water is applied directly to the root zone and that it is compatible with fertilization. However, this irrigation system faces a thorny problem: the clogging of pipes and drippers, which leads to a lack of uniformity of irrigation over time. This so-called scaling phenomenon, whose consequences are harmful (cleaning or replacement of pipes), leads to considerable unproductive expenditure. The objective of this work is to identify green inhibitors capable of preventing this scaling phenomenon. The study requires a better knowledge of these waters, their physico-chemical characteristics, and their scaling power. Using the "LCGE" controlled degassing technique, we first evaluated, on pure calco-carbonic water at 30°F (French degrees), the scale-inhibiting power of some plant extracts available in our Souss-Massa region. We then carried out a comparative study of the efficacy of these green inhibitors, and the action of the most effective green inhibitor on real agricultural waters was then studied.
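For reference, scale-inhibition efficiency in such studies is commonly quantified relative to an inhibitor-free blank; the Python sketch below uses one common definition based on precipitated calcium, with hypothetical values, and is not taken from the study itself.

```python
# Hedged sketch of a common scale-inhibition efficiency metric (values assumed).
def inhibition_efficiency(ca_precip_blank, ca_precip_inhibited):
    """Percent efficiency from precipitated Ca2+ with and without inhibitor."""
    return 100.0 * (ca_precip_blank - ca_precip_inhibited) / ca_precip_blank

# Illustrative: calco-carbonic water at 30 French degrees, hypothetical mg/L values
print(f"IE = {inhibition_efficiency(120.0, 18.0):.1f} %")
```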

Keywords: green inhibitors, localized irrigation, plant extracts, scaling inhibition

Procedia PDF Downloads 82
1427 Passive Vibration Isolation Analysis and Optimization for Mechanical Systems

Authors: Ozan Yavuz Baytemir, Ender Cigeroglu, Gokhan Osman Ozgen

Abstract:

Vibration is an important issue in the design of various components in aerospace, marine, and vehicular applications. So that components do not lose their function and operational performance, vibration isolation design, involving the selection of optimum isolator properties and isolator positioning, is a critical study. Recognizing the growing need for vibration isolation system design, this paper presents two types of software capable of implementing modal analysis, response analysis for both random and harmonic excitations, static deflection analysis, and Monte Carlo simulations, in addition to parameter and location optimization for different types of isolation problem scenarios. A review of the literature reveals no study that develops a software-based tool capable of implementing all of these analysis, simulation, and optimization studies simultaneously in one platform. In this paper, the theoretical system model is generated for a 6-DOF rigid body. The vibration isolation system of any mechanical structure can be optimized using a hybrid method involving both global search and gradient-based methods. After defining the optimization design variables, different types of optimization scenarios are listed in detail. Being aware of the need for a user-friendly vibration isolation problem solver, two graphical user interfaces (GUIs) were prepared and verified using a commercial finite element analysis program, Ansys Workbench 14.0. Using the analysis and optimization capabilities of these GUIs, a real application used in an air platform is also presented as a case study at the end of the paper.
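The hybrid global-plus-gradient optimization idea can be illustrated on a reduced problem: the Python sketch below tunes the stiffness and damping of a single 1-DOF isolator to minimize force transmissibility at one excitation frequency, using a global search followed by gradient-based refinement. The mass, frequency, and bounds are assumptions, and this is a sketch of the method class, not the authors' 6-DOF tool.

```python
# Hedged sketch of hybrid optimization on a 1-DOF isolator (parameters assumed).
import numpy as np
from scipy.optimize import differential_evolution, minimize

m, omega = 50.0, 2 * np.pi * 40   # kg, rad/s: assumed mass and excitation

def transmissibility(x):
    """Force transmissibility of a 1-DOF isolator with stiffness k, damping c."""
    k, c = x
    wn = np.sqrt(k / m)
    zeta = c / (2 * np.sqrt(k * m))
    r = omega / wn
    return np.sqrt((1 + (2 * zeta * r) ** 2) /
                   ((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2))

bounds = [(1e3, 1e6), (10.0, 5e3)]                                 # k (N/m), c (N s/m)
coarse = differential_evolution(transmissibility, bounds, seed=1)  # global search
fine = minimize(transmissibility, coarse.x, bounds=bounds)         # gradient refinement
print(f"k = {fine.x[0]:.3g} N/m, c = {fine.x[1]:.3g} N s/m, T = {fine.fun:.3g}")
```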

Keywords: hybrid optimization, Monte Carlo simulation, multi-degree-of-freedom system, parameter optimization, location optimization, passive vibration isolation analysis

Procedia PDF Downloads 565
1426 GNSS-Aided Photogrammetry for Digital Mapping

Authors: Muhammad Usman Akram

Abstract:

This research work is based on GNSS-aided photogrammetry for digital mapping. It focuses on the topographic survey of an area or site to be used in future planning and development (P&D) or for further examination, exploration, research, and inspection. Surveying and mapping hard-to-access and hazardous areas is very difficult using traditional techniques and methodologies; it is also time-consuming and labor-intensive and gives less precision with limited data. In comparison, the advanced techniques save manpower and provide more precise output with a wide variety of data sets. In this experiment, the aerial photogrammetry technique is used: a UAV flies over an area, captures geocoded images, and produces a three-dimensional (3-D) model. The UAV operates on a user-specified path or area with various parameters: flight altitude, ground sampling distance (GSD), image overlap, camera angle, etc. For ground control, a network of points on the ground is observed as ground control points (GCPs) using a Differential Global Positioning System (DGPS) in PPK or RTK mode. The raw data collected by the UAV and DGPS are then processed in digital image processing programs and computer-aided design software, from which we obtain a dense point cloud, a digital elevation model (DEM), and an orthophoto as outputs. The imagery is converted into geospatial data by digitizing over the orthophoto, and the DEM is further converted into a digital terrain model (DTM) for contour generation or a digital surface. As a result, we obtain a digital map of the surveyed area. In conclusion, we compared the processed data with exact measurements taken on site; the error is accepted if it does not exceed the survey accuracy limits set by the concerned institutions.
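The GSD parameter mentioned above ties the flight altitude to the camera geometry through a standard formula, sketched in Python below; the camera values in the example are illustrative, not those of the study's UAV.

```python
# Standard GSD formula sketch; camera parameters below are illustrative.
def gsd_cm_per_px(sensor_width_mm, focal_length_mm, altitude_m, image_width_px):
    """GSD (cm/px) = (sensor width * flight altitude) / (focal length * image width)."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Example: 13.2 mm sensor, 8.8 mm lens, 100 m altitude, 5472 px image width
print(f"GSD = {gsd_cm_per_px(13.2, 8.8, 100.0, 5472):.2f} cm/px")
```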

Keywords: photogrammetry, post-processing kinematics, real-time kinematics, manual data inquiry

Procedia PDF Downloads 32
1425 A Time and Frequency Dependent Study of Low Intensity Microwave Radiation Induced Endoplasmic Reticulum Stress and Alteration of Autophagy in Rat Brain

Authors: Ranjeet Kumar, Pravin Suryakantrao Deshmukh, Sonal Sharma, Basudev Banerjee

Abstract:

With the tremendous increase in exposure to the radiofrequency microwaves emitted by mobile phones, public awareness has grown globally with regard to the potential health hazards of microwaves to the nervous system in the brain. India alone has more than one billion mobile users out of 4.3 billion globally. Our studies have suggested that radiofrequency radiation is able to induce neuronal alterations in the brain and hence affect cognitive behaviour. However, the adverse effect of low-intensity microwave exposure on endoplasmic reticulum (ER) stress and autophagy has not yet been evaluated. In this study, we explore whether low-intensity microwaves induce ER stress and autophagy at varying frequencies and exposure durations in Wistar rats. Ninety-six male Wistar rats were divided into 12 groups of 8 rats each. We studied 900 MHz, 1800 MHz, and 2450 MHz frequencies with reference to a sham-exposed group. At the end of the exposure, the rats were sacrificed to collect brain tissue, and the expression of the CHOP, ATF-4, XBP-1, Bcl-2, Bax, LC3, and Atg-4 genes was analysed by real-time PCR. Significant fold changes (p < 0.05) in gene expression were found in all groups exposed at 1800 MHz and 2450 MHz in comparison to the sham exposure group. In conclusion, microwave exposure is able to induce ER stress and modulate autophagy, and both vary with increasing frequency as well as duration of exposure. Our results suggest that microwave exposure is harmful to neuronal health, as it induces ER stress and hampers autophagy in neuron cells, thereby increasing neuronal degeneration, which impairs the cognitive behaviour of experimental animals.
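Fold changes from real-time PCR are typically computed with the Livak 2^-ddCt method; the Python sketch below shows that calculation under the assumption that this standard method applies, with hypothetical Ct values, since the abstract does not state the reference gene or raw Ct data.

```python
# Hedged sketch of the standard Livak 2^-ddCt fold-change calculation.
def fold_change(ct_target_exp, ct_ref_exp, ct_target_sham, ct_ref_sham):
    """Fold change of the exposed group relative to sham, normalized to a reference gene."""
    d_ct_exp = ct_target_exp - ct_ref_exp    # delta-Ct, exposed group
    d_ct_sham = ct_target_sham - ct_ref_sham # delta-Ct, sham group
    return 2.0 ** -(d_ct_exp - d_ct_sham)    # 2^-(ddCt)

# Illustrative: a target gene in an exposed group vs sham (Ct values hypothetical)
print(f"Fold change = {fold_change(24.1, 18.0, 26.3, 18.1):.2f}")
```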

Keywords: autophagy, ER stress, microwave, nervous system, rat

Procedia PDF Downloads 131