Search results for: real gas
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5236

706 Impact of Alternative Fuel Feeding on Fuel Cell Performance and Durability

Authors: S. Rodosik, J. P. Poirot-Crouvezier, Y. Bultel

Abstract:

With the expansion of the hydrogen economy, Proton Exchange Membrane Fuel Cell (PEMFC) systems are often presented as promising energy converters suitable for transport applications. However, reaching the durability of 5000 h recommended by the U.S. Department of Energy and decreasing system cost are still major hurdles to their development. In order to increase the system efficiency and simplify the system without affecting the fuel cell lifetime, an architecture called alternative fuel feeding has been developed. It consists of a fuel cell stack divided into two parts, fed alternately, implemented on a 5-kW system for real-scale testing. The operation strategy can be considered close to Dead End Anode (DEA) with specific modifications to avoid water and nitrogen accumulation in the cells. The two half-stacks are connected in series to enable each half-stack to be fed alternately. Accumulated water and nitrogen can be shifted from one half-stack to the other according to the alternating feeding frequency. Thanks to the homogenization of water vapor along the stack, water management was improved. The operating conditions obtained at system scale are close to recirculation without the need for a pump or an ejector. In a first part, a performance comparison with the DEA strategy was performed. At high temperature and low pressure (80°C, 1.2 bar), the performance of alternative fuel feeding was higher, and the system efficiency increased. In a second part, in order to highlight the benefits of the architecture on the fuel cell lifetime, two durability tests, lasting up to 1000 h, were conducted. A test on the 5-kW system was compared to a reference test performed on a test bench with a shorter stack, conducted with well-controlled operating parameters and a flow-through hydrogen strategy. The durability test is based upon the Fuel Cell Dynamic Load Cycle (FC-DLC) protocol but adapted to the system limitations: without OCV steps and with a maximum current density of 0.4 A/cm². In situ local measurements with a segmented S++® plate, performed throughout the tests, showed a more homogeneous distribution of the current density with alternative fuel feeding than with the flow-through strategy. The tests performed in this work enabled an understanding of the advantages and drawbacks of this architecture. The alternative fuel feeding architecture appears to be a promising solution to ensure the humidification function at the anode side with a simplified fuel cell system.

Keywords: automotive conditions, durability, fuel cell system, proton exchange membrane fuel cell, stack architecture

Procedia PDF Downloads 141
705 Teaching Material, Books, Publications versus the Practice: Myths and Truths about Installation and Use of Downhole Safety Valve

Authors: Robson da Cunha Santos, Caio Cezar R. Bonifacio, Diego Mureb Quesada, Gerson Gomes Cunha

Abstract:

This paper addresses the safety of oil wells and environmental preservation, which require great attention and commitment from oil companies and the people who work with this equipment. This commitment must extend from the drilling of the well until its abandonment in order to safeguard the environment and prevent possible damage. The main objective of the project was to compare information from books, articles and publications with information gathered during technical visits to operational bases of Petrobras. After the visits, information on current methods of use and management, which was not previously available, was made accessible to a general audience. As a result, a large flow of incorrect and out-of-date information was observed, comprising not only bibliographic archives but also academic resources and materials. While gathering more in-depth information on the manufacturing, assembly, and use of DHSVs, several points previously taken as correct and customary were found to be uncertain or outdated. One important finding concerns the installation depth of the valve, which used to be set at 30 meters below the seabed (mud line). In practice, the installation depth should vary so as to avoid the zone with the greatest tendency toward hydrate formation, according to temperature and pressure. Regarding valves with a nitrogen chamber, the literature links their use to water depths of 700 meters or more, but in Brazilian exploratory fields they are used from 600 meters of water depth. The valves used in Brazilian fields can be inserted into the production column and are self-equalizing, but the use of screwed-in, equalizing valves in the production column is predominant. Although these valves are more expensive to acquire, they are more reliable and efficient, have a longer service life, and do not restrict fluid flow. It follows that, based on research and theoretical information compared against the practices actually used in the field, the present project is important and relevant. It will serve as a source of updated and consistent information connecting the academic environment with real exploratory situations, enriching future research and academic material with accurate, easy-to-understand information.

Keywords: downhole safety valve, safety devices, installation, oil wells

Procedia PDF Downloads 267
704 Photocatalytic Disintegration of Naphthalene and Naphthalene Similar Compounds in Indoors Air

Authors: Tobias Schnabel

Abstract:

Naphthalene and naphthalene-like compounds are a common problem in the indoor air of buildings from the 1960s and 1970s in Germany. Tar-containing roof felt was often laid under the concrete floor to prevent moisture from coming up through the floor. This tar-containing roof felt has high concentrations of polycyclic aromatic hydrocarbons (PAHs) and naphthalene. Naphthalene evaporates easily and contaminates the indoor air. Especially after renovation and energy-efficiency modernization of the buildings, the naphthalene concentration rises because no forced air exchange can take place. Because of this problem, it is often necessary to replace the floors after renovation of the buildings. The MFPA Weimar (materials research and testing facility) developed a project in cooperation with LEJ GmbH and Reichmann Gebäudetechnik GmbH: a technical solution for the disintegration of naphthalene and naphthalene-like compounds in indoor air by photocatalytic reforming. Photocatalytic systems produce active oxygen species (hydroxyl radicals) by irradiating semiconductors at a wavelength corresponding to their bandgap. The light energy separates the charges in the semiconductor, producing free electrons in the conduction band and electron holes. The holes can react with hydroxide ions to form hydroxyl radicals. The hydroxyl radicals produced are a strong oxidizing agent and can oxidize organic matter to carbon dioxide and water. During the research, new titanium dioxide catalyst surface coatings were developed. This coating technology allows the production of very porous titanium dioxide layers on temperature-stable carrier materials. The porosity allows the naphthalene to be easily absorbed by the surface coating, which accelerates the reaction of the heterogeneous photocatalysis. The photocatalytic reaction is induced by high-power, high-efficiency UV-A (ultraviolet) LEDs with a wavelength of 365 nm. Various tests in emission chambers and on the reformer itself show that a reduction of naphthalene at relevant concentrations between 2 and 250 µg/m³ is possible. The disintegration rate was at least 80%. To reduce the concentration of naphthalene from 30 µg/m³ to a level below 5 µg/m³ in a typical 50 m² classroom, an energy of 6 kWh is needed. A benefit of photocatalytic indoor air treatment is that every organic compound in the air can be disintegrated and reduced. The use of new photocatalytic materials in combination with highly efficient UV LEDs makes a safe and energy-efficient reduction of organic compounds in indoor air possible. At the moment, the air cleaning systems are taking the step from the prototype stage into use in real buildings.

Keywords: naphthalene, titanium dioxide, indoor air, photocatalysis

Procedia PDF Downloads 142
703 Issues of Accounting of Lease and Revenue according to International Financial Reporting Standards

Authors: Nadezhda Kvatashidze, Elena Kharabadze

Abstract:

It is widely known that leasing is a flexible means of funding enterprises. Leasing reduces the risks related to access to and possession of assets, as well as to obtaining funding. Therefore, it is important to refine lease accounting. The lease accounting regulations under the applicable standard (International Accounting Standard 17) make the concealment of liabilities possible. As a result, users receive inaccurate and incomplete information and have to resort to an additional assessment of the off-balance-sheet lease liabilities. In order to address the problem, the International Accounting Standards Board decided to change the approach to lease accounting. With the deficiencies of the applicable standard taken into account, the new standard (IFRS 16 ‘Leases’) aims at supplying appropriate and fair lease-related information to users. Save for certain exclusions, the lessee is obliged to recognize all lease agreements in its financial statements. This approach was determined by the fact that, under a lease agreement, rights and obligations arise in the form of assets and liabilities. Immediately upon conclusion of the lease agreement, the lessee takes an asset at its disposal and assumes the obligation to make the lease-related payments; these meet the recognition criteria defined by the Conceptual Framework for Financial Reporting and are therefore to be entered in the financial statements. The new lease accounting standard secures the supply of quality and comparable information to users of financial information. The International Accounting Standards Board and the US Financial Accounting Standards Board jointly developed IFRS 15: ‘Revenue from Contracts with Customers’. The standard establishes detailed practical criteria for revenue recognition, such as identification of the performance obligations in the contract, determination of the transaction price and its components, especially variable consideration and other important components, as well as the passage of control over the asset to the customer. IFRS 15: ‘Revenue from Contracts with Customers’ is very similar to the relevant US standards and includes requirements that are more specific and consistent than those of the standards previously in place. The new standard is going to change recognition terms and techniques in industries such as construction, telecommunications (mobile and cable networks), licensing (media, science, franchising), real property, software, etc.

Keywords: assessment of the lease assets and liabilities, contractual liability, division of contract, identification of contracts, contract price, lease identification, lease liabilities, off-balance sheet, transaction value

Procedia PDF Downloads 318
702 A Comparative Study of the Impact of Membership in International Climate Change Treaties and the Environmental Kuznets Curve (EKC) in Line with Sustainable Development Theories

Authors: Mojtaba Taheri, Saied Reza Ameli

Abstract:

In this research, we calculate the effect of membership in international climate change treaties for 20 developed countries, selected on the basis of the Human Development Index (HDI), and compare this effect with the pollutant-reduction process described by the Environmental Kuznets Curve (EKC) theory. For this purpose, data on real GDP per capita at constant 2010 prices are taken from the World Development Indicators (WDI) database. The Ecological Footprint (ECOFP) is the amount of biologically productive land needed to meet human needs and absorb carbon dioxide emissions; it is measured in global hectares (gha), and the data are retrieved from the Global Ecological Footprint (2021) database. We proceed step by step, performing several series of targeted statistical regressions and examining the effects of different control variables. Energy Consumption Structure (ECS) is measured as the share of fossil fuel consumption in total energy consumption and is extracted from the United States Energy Information Administration (EIA) (2021) database. Energy Production (EP) refers to the total production of primary energy by all energy-producing enterprises in one country at a specific time; it is a comprehensive indicator of the country's energy production capacity, and its data (2021 version), like the Energy Consumption Structure, are obtained from the EIA. Financial development (FND) is defined as the ratio of private credit to GDP and, to some extent, is based on stock market value, also as a ratio to GDP; it is taken from the WDI (2021 version). Trade Openness (TRD) is the sum of exports and imports of goods and services measured as a share of GDP, using WDI data (2021 version). Urbanization (URB) is defined as the share of the urban population in the total population, also taken from the WDI data source (2021 version). The descriptive statistics of all the investigated variables are presented in the results section. Among the sustainable development theories considered, the Environmental Kuznets Curve (EKC) is more significant over the study period. In this research, we use more than fourteen targeted statistical regressions to isolate the net effects of each approach and examine the results.
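For readers who want to see the shape of such a specification, a minimal sketch of one EKC-style panel regression is given below, built on synthetic data and hypothetical column names (ECOFP, GDP, TREATY, ECS, EP, FND, TRD, URB); it illustrates the general approach, not the paper's exact model or data.

```python
# Illustrative EKC-style panel regression: ecological footprint on GDP, GDP squared,
# a treaty-membership dummy and the listed controls, with country fixed effects.
# All data below are synthetic placeholders, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_countries, n_years = 20, 25
n = n_countries * n_years
df = pd.DataFrame({
    "country": np.repeat([f"c{i}" for i in range(n_countries)], n_years),
    "GDP": rng.uniform(20, 60, n),        # real GDP per capita (thousands, constant 2010 USD)
    "TREATY": rng.integers(0, 2, n),      # 1 if member of the treaty in that year
    "ECS": rng.uniform(40, 90, n),
    "EP": rng.uniform(0, 100, n),
    "FND": rng.uniform(20, 150, n),
    "TRD": rng.uniform(30, 120, n),
    "URB": rng.uniform(50, 95, n),
})
df["GDP2"] = df["GDP"] ** 2               # squared income term that generates the inverted U
df["ECOFP"] = (2 + 0.3 * df["GDP"] - 0.003 * df["GDP2"]
               - 0.5 * df["TREATY"] + rng.normal(0, 1, n))

model = smf.ols("ECOFP ~ GDP + GDP2 + TREATY + ECS + EP + FND + TRD + URB + C(country)",
                data=df).fit()
print(model.params[["GDP", "GDP2", "TREATY"]])   # EKC turning point and treaty effect
```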

Keywords: climate change, globalization, environmental economics, sustainable development, international climate treaty

Procedia PDF Downloads 71
701 Green Ports: Innovation Adopters or Innovation Developers

Authors: Marco Ferretti, Marcello Risitano, Maria Cristina Pietronudo, Lina Ozturk

Abstract:

A green port is the result of a sustainable long-term strategy adopted by an entire port infrastructure, and therefore by the set of actors involved in port activities. The strategy aims to develop sustainable port infrastructure focused on reducing negative environmental impacts without jeopardising economic growth. Green technologies represent the core tool for implementing sustainable solutions; however, they are not a magic bullet. Ports have always been integrated into the local territory, affecting the environment in which they operate, so the sustainability strategy should fit the entire local system. Adopting a sustainable strategy therefore means knowing how to involve and engage a wide stakeholder network (industry, production, markets, citizens, and public authorities). Existing research on the topic has not integrated this perspective well with that of sustainability. Research on green ports has mixed the sustainability aspects with those of the maritime industry, neglecting the dynamics that lead to the development of the green port phenomenon. We propose an analysis of green ports through the lens of ecosystem studies in the field of management. The ecosystem approach provides a way to model the relations that enable green solutions and green practices in a port ecosystem. However, due to the local dimension of a port and the port trend toward innovation, i.e., sustainable innovation, we draw on a specific concept of ecosystem, that of local innovation systems. More precisely, we explore whether a green port is a local innovation system engaged in developing sustainable innovation with a large impact on the territory or merely an innovation adopter. To address this issue, we adopt a comparative case study, selecting two innovative ports in Europe: Rotterdam and Genoa. The case study is a research method focused on understanding the dynamics of a specific situation and can be used to provide a description of real circumstances. Preliminary results show two different approaches to supporting sustainable innovation: one represented by Rotterdam, a pioneer in competitiveness and sustainability, and the other represented by Genoa, an example of a technology adopter. The paper intends to provide a better understanding of how sustainable innovations are developed and of the manner in which a network of port and local stakeholders supports this process. Furthermore, it proposes a taxonomy of green ports as developers and adopters of sustainable innovation, also suggesting best practices for modelling the relationships that enable the port ecosystem to apply a sustainable strategy.

Keywords: green port, innovation, sustainability, local innovation systems

Procedia PDF Downloads 120
700 Long Short-Term Memory Stream Cruise Control Method for Automated Drift Detection and Adaptation

Authors: Mohammad Abu-Shaira, Weishi Shi

Abstract:

Adaptive learning, a commonly employed solution to drift, involves updating predictive models online during their operation to react to concept drifts, thereby serving as a critical component and natural extension of online learning systems that learn incrementally from each example. This paper introduces LSTM-SCCM (Long Short-Term Memory Stream Cruise Control Method), a drift-adaptation-as-a-service framework for online learning. LSTM-SCCM automates drift adaptation through prompt detection, drift magnitude quantification, dynamic hyperparameter tuning, short-term optimization and model recalibration for immediate adjustments, and, when necessary, long-term model recalibration to ensure deeper enhancements in model performance. LSTM-SCCM is incorporated into a suite of cutting-edge online regression models, and their performance is assessed across various types of concept drift using diverse datasets with varying characteristics. The findings demonstrate that LSTM-SCCM represents a notable advancement in both model performance and efficacy in handling concept drift occurrences. LSTM-SCCM stands out as the sole framework adept at effectively tackling concept drifts within regression scenarios. Its proactive approach to drift adaptation distinguishes it from conventional reactive methods, which typically rely on retraining after significant degradation of model performance caused by drifts. Additionally, LSTM-SCCM employs an in-memory approach combined with the Self-Adjusting Memory (SAM) architecture to enhance real-time processing and adaptability. The framework incorporates variable thresholding techniques and does not assume any particular data distribution, making it an ideal choice for managing high-dimensional datasets and efficiently handling large-scale data. Our experiments, which include abrupt, incremental, and gradual drifts across both low- and high-dimensional datasets with varying noise levels, applied to four state-of-the-art online regression models, demonstrate that LSTM-SCCM is versatile and effective, rendering it a valuable solution for online regression models to address concept drift.
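As an illustration of the general adapt-on-drift loop that such a framework automates, the sketch below monitors a rolling prediction error and flags drift with a variable threshold before triggering a recalibration. It is a simplified, generic illustration under assumed window and sensitivity settings, not the authors' LSTM-SCCM implementation.

```python
# Generic drift-detection-and-adaptation loop: keep a rolling window of prediction errors,
# flag a drift when the current error exceeds a variable threshold (mean + s * std of the
# window), estimate its magnitude, and let the caller recalibrate the model.
from collections import deque
import numpy as np

class DriftMonitor:
    def __init__(self, window=100, sensitivity=3.0):
        self.errors = deque(maxlen=window)
        self.sensitivity = sensitivity          # threshold expressed in rolling std units

    def update(self, y_true, y_pred):
        err = abs(y_true - y_pred)
        drift, magnitude = False, 0.0
        if len(self.errors) == self.errors.maxlen:
            mu, sd = np.mean(self.errors), np.std(self.errors) + 1e-9
            if err > mu + self.sensitivity * sd:  # variable threshold, no fixed distribution assumed
                drift, magnitude = True, (err - mu) / sd
        self.errors.append(err)
        return drift, magnitude

# usage inside an online-learning loop (model is any incremental regressor):
#   drift, mag = monitor.update(y_t, model.predict(x_t))
#   if drift: recalibrate(model, magnitude=mag)   # short- or long-term recalibration
```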

Keywords: automated drift detection and adaptation, concept drift, hyperparameters optimization, online and adaptive learning, regression

Procedia PDF Downloads 10
699 Automated Transformation of 3D Point Cloud to BIM Model: Leveraging Algorithmic Modeling for Efficient Reconstruction

Authors: Radul Shishkov, Orlin Davchev

Abstract:

The digital era has revolutionized architectural practices, with building information modeling (BIM) emerging as a pivotal tool for architects, engineers, and construction professionals. However, the transition from traditional methods to BIM-centric approaches poses significant challenges, particularly in the context of existing structures. This research introduces a technical approach to bridge this gap through the development of algorithms that facilitate the automated transformation of 3D point cloud data into detailed BIM models. The core of this research lies in the application of algorithmic modeling and computational design methods to interpret and reconstruct point cloud data (a collection of data points in space, typically produced by 3D scanners) into comprehensive BIM models. This process involves complex stages of data cleaning, feature extraction, and geometric reconstruction, which are traditionally time-consuming and prone to human error. By automating these stages, our approach significantly enhances the efficiency and accuracy of creating BIM models for existing buildings. The proposed algorithms are designed to identify key architectural elements within point clouds, such as walls, windows, doors, and other structural components, and to translate these elements into their corresponding BIM representations. This includes the integration of parametric modeling techniques to ensure that the generated BIM models are not only geometrically accurate but also embedded with essential architectural and structural information. Our methodology has been tested on several real-world case studies, demonstrating its capability to handle diverse architectural styles and complexities. The results showcase a substantial reduction in time and resources required for BIM model generation while maintaining high levels of accuracy and detail. This research contributes significantly to the field of architectural technology by providing a scalable and efficient solution for the integration of existing structures into the BIM framework. It paves the way for more seamless and integrated workflows in renovation and heritage conservation projects, where the accuracy of existing conditions plays a critical role. The implications of this study extend beyond architectural practices, offering potential benefits in urban planning, facility management, and historic preservation.
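One of the automated stages described above, isolating large planar elements (candidate walls and floors) before mapping them to parametric BIM objects, can be sketched with RANSAC plane segmentation as below. The synthetic point cloud, the Open3D calls and all thresholds are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative fragment only: peel the dominant planes off a point cloud with RANSAC.
# Each plane (a, b, c, d) could later be classified (wall / floor / ceiling) from its
# normal direction and converted into a parametric BIM element.
import numpy as np
import open3d as o3d

# synthetic stand-in for a laser scan: points on a floor and a wall, plus noise
rng = np.random.default_rng(0)
floor = np.c_[rng.uniform(0, 5, 2000), rng.uniform(0, 5, 2000), np.zeros(2000)]
wall = np.c_[rng.uniform(0, 5, 2000), np.zeros(2000), rng.uniform(0, 3, 2000)]
pts = np.vstack([floor, wall]) + rng.normal(0, 0.005, (4000, 3))
pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))

planes, rest = [], pcd
for _ in range(2):                                   # extract the two dominant planes
    model, inliers = rest.segment_plane(distance_threshold=0.02,
                                        ransac_n=3, num_iterations=1000)
    planes.append((model, rest.select_by_index(inliers)))   # (a, b, c, d) + its points
    rest = rest.select_by_index(inliers, invert=True)
```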

Keywords: BIM, 3D point cloud, algorithmic modeling, computational design, architectural reconstruction

Procedia PDF Downloads 61
698 Attitudes Towards the Supernatural in Benjamin Britten’s The Turn of the Screw

Authors: Yaou Zhang

Abstract:

Background: Relatively little scholarly attention has been paid to productions of Benjamin Britten’s chamber opera The Turn of the Screw, one of Britten’s most remarkable operas. The libretto is based on Henry James’s novella of the same name. The novella was written in 1898, and one of the primary questions it poses is “how real the ghosts are,” which leaves a great ambiguity in readers’ minds. Aims: This research focuses on the experience of seeing the opera on stage over several decades. This study of opera productions over time not only provides insight into how stage performances can alter audience members' perceptions of the opera in the present but also reveals a landscape of shifting aesthetics and receptions. Methods: To examine the hypotheses on interpretation and reception, qualitative analysis is used to examine the figures of the ghosts in different productions in the UK from 1954 to 2021, drawing on recordings, newspapers, and reviews for the productions sourced from online and physical archives. Field research on the topic was also conducted by arranging interviews with creative teams and by visiting Opera North in Leeds and the Britten-Pears Foundation. The collected data reveal the “hidden identity” in creative teams’ interpretations, social preferences, and rediscoveries that had previously remained unseen. Results: This research presents an angle on Britten’s Screw from a third position; it shows how attention has moved from the question of “do the ghosts really exist” to the traumatised children. Discussion: Critics and audiences have debated for decades whether the governess hallucinates the ghosts in the opera. In recent years, however, directors of new productions have given themselves the opportunity to go deeper into Britten's musical structure and to offer the opera more space for interpretation, rather than debating whether the ghosts actually exist or whether the governess has psychological problems. One can consider that the questionable actions of the children arise because they are suffering from trauma, whether the trauma comes from the ghosts, the hallucinating governess, or some prior experience: the various interpretations lead to one result, that the children are the recipients of trauma. Arguably, the role of the supernatural is neither simply one of the elements of a ghost story nor simply one part of the ambiguity between the supernatural and the hallucination of the governess; rather, the ghosts and the hallucinating governess can exist at the same time. The combination of the supernatural’s and the governess’s behaviours on stage generates a sharper and more serious angle that draws our attention to the traumatised children.

Keywords: benjamin britten, chamber opera, production, reception, staging, the turn of the screw

Procedia PDF Downloads 107
697 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: Gaelle Candel, David Naccache

Abstract:

t-SNE is an embedding method widely used by the data science community. It serves two main tasks: displaying results by coloring items according to the item class or feature value, and, for forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are the structure preservation property and the answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number, and relationships between clusters are materialized by closeness on the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, and two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped at the exact same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology to reuse an embedding to create a new one, where cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, using the newly obtained embedding, and the successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and the memory requirement is only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity would be reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing observation of the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets’ dynamics.
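The following sketch conveys the reuse idea in a simplified form: the previous embedding seeds the initialization of the next t-SNE run so that cluster positions stay comparable between snapshots. It is not the authors' two-cost optimization; the helper name, synthetic data and parameters are assumptions for illustration.

```python
# Illustrative only: reuse a previous t-SNE embedding as the initialization of the next
# run, so clusters tend to stay where they were. New points start near their nearest
# neighbour's old coordinates.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

def embed_with_support(X_old, Y_old, X_new, perplexity=30.0, random_state=0):
    """Y_old: previous 2-D embedding of X_old; X_new: the next data snapshot."""
    nn = NearestNeighbors(n_neighbors=1).fit(X_old)
    _, idx = nn.kneighbors(X_new)                       # nearest old point for each new point
    jitter = 1e-4 * np.random.default_rng(random_state).standard_normal((len(X_new), 2))
    init = Y_old[idx[:, 0]] + jitter                    # seed near the old positions
    tsne = TSNE(n_components=2, init=init, perplexity=perplexity, random_state=random_state)
    return tsne.fit_transform(X_new)

rng = np.random.default_rng(0)
X_old = rng.standard_normal((500, 20))                  # first snapshot
Y_old = TSNE(n_components=2, random_state=0).fit_transform(X_old)
X_new = X_old + 0.05 * rng.standard_normal((500, 20))   # slightly drifted snapshot
Y_new = embed_with_support(X_old, Y_old, X_new)         # positions stay comparable
```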

Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning

Procedia PDF Downloads 141
696 Imaging of Underground Targets with an Improved Back-Projection Algorithm

Authors: Alireza Akbari, Gelareh Babaee Khou

Abstract:

Ground Penetrating Radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted much attention for the detection of shallow subsurface targets such as landmines and unexploded ordnance, and also for imaging behind walls for security applications. For the monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of the subsurface targets. With this hyperbolic curve, the resolution along the synthetic aperture direction shows undesired low-resolution features owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of the buried objects is essential in most GPR applications. Therefore, the hyperbolic signature in the space-time GPR image is usually transformed into a focused pattern showing the object's true location and size together with its EM scattering. The common goal in a typical GPR image is to display information on the spatial location and the reflectivity of an underground object. Therefore, the main challenge of the GPR imaging technique is to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, is first used for image reconstruction. The standard BP algorithm performs poorly against strong noise and produces many artifacts, which adversely affect subsequent tasks such as target detection. Thus, an improved BP based on cross-correlation between the received signals is proposed to decrease noise and suppress artifacts. To improve the quality of the results of the proposed BP imaging algorithm, a weight factor was designed for each point in the imaging region. Compared to the standard BP algorithm, the improved algorithm produces images of higher quality and resolution. The proposed improved BP algorithm was applied to simulated and real GPR data, and the results showed that it provides superior artifact suppression and produces images with high quality and resolution. In order to quantitatively describe the effect of artifact suppression on the imaging results, a focusing parameter was evaluated.
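To make the standard BP step concrete, a minimal delay-and-sum back-projection sketch for a monostatic B-scan is shown below; the cross-correlation weighting of the improved algorithm is not reproduced, and the array shapes and parameters are assumptions.

```python
# Minimal delay-and-sum back-projection for monostatic GPR: for each image pixel, the
# two-way travel time to every antenna position is computed and the corresponding trace
# samples are summed coherently, collapsing the hyperbolic signature into a focused spot.
import numpy as np

def backproject(data, xa, dt, v, x_grid, z_grid):
    """data[n_traces, n_samples]; xa: antenna x-positions [m]; dt: sample interval [s];
    v: wave speed in the ground [m/s]; x_grid, z_grid: image coordinates [m]."""
    image = np.zeros((len(z_grid), len(x_grid)))
    n_samples = data.shape[1]
    for i, z in enumerate(z_grid):
        for j, x in enumerate(x_grid):
            t = 2.0 * np.sqrt((xa - x) ** 2 + z ** 2) / v   # two-way travel time per trace
            k = np.round(t / dt).astype(int)
            valid = k < n_samples
            image[i, j] = data[valid, k[valid]].sum()        # coherent summation
    return image

# toy usage with random data standing in for a B-scan
data = np.random.randn(40, 512)
img = backproject(data, xa=np.linspace(0, 1, 40), dt=1e-10, v=1e8,
                  x_grid=np.linspace(0, 1, 50), z_grid=np.linspace(0.05, 0.5, 50))
```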

Keywords: algorithm, back-projection, GPR, remote sensing

Procedia PDF Downloads 450
695 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck the entire computation because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large data sets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We study the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their tasks before the master server can recover the product W. We then consider the problem of secure and private distributed matrix multiplication W = XY, in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
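To make the recovery-threshold idea concrete, the toy sketch below encodes row blocks of X with a Vandermonde (MDS) code so that any k of the n workers suffice to recover W = XY; it illustrates straggler tolerance only and does not reproduce the paper's secure/private PSGPD or SGPD constructions.

```python
# Toy straggler-tolerant coded matrix multiplication: X is split into k row blocks,
# each worker receives one coded block and multiplies it by Y, and the master recovers
# X @ Y from any k completed workers by inverting a Vandermonde sub-system.
import numpy as np

def encode_blocks(X, n_workers, k):
    blocks = np.split(X, k, axis=0)                   # rows of X must divide evenly by k
    V = np.vander(np.arange(1, n_workers + 1), k, increasing=True)  # n_workers x k
    coded = [sum(V[i, j] * blocks[j] for j in range(k)) for i in range(n_workers)]
    return coded, V

def recover(results, worker_ids, V, k):
    A = V[worker_ids, :]                              # k x k system from the finished workers
    decoded = np.tensordot(np.linalg.inv(A), np.stack(results), axes=1)  # each X_j @ Y
    return np.concatenate(list(decoded), axis=0)

X, Y = np.random.randn(6, 4), np.random.randn(4, 5)
coded_X, V = encode_blocks(X, n_workers=5, k=3)
finished = [0, 2, 4]                                  # suppose workers 1 and 3 straggle
results = [coded_X[i] @ Y for i in finished]
W = recover(results, finished, V, k=3)
assert np.allclose(W, X @ Y)                          # recovery threshold k = 3 reached
```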

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 121
694 A New Method Separating Relevant Features from Irrelevant Ones Using Fuzzy and OWA Operator Techniques

Authors: Imed Feki, Faouzi Msahli

Abstract:

Selection of relevant parameters from a high-dimensional process operation setting space is a problem frequently encountered in industrial process modelling. This paper presents a method for selecting the most relevant fabric physical parameters for each sensory quality feature. The proposed relevancy criterion has been developed using two approaches. The first utilizes a fuzzy sensitivity criterion, exploiting from experimental data the relationship between physical parameters and all the sensory quality features for each evaluator; an OWA aggregation procedure is then applied to aggregate the ranking lists provided by the different evaluators. In the second approach, another panel of experts provides their ranking lists of physical features according to their professional knowledge. Then, by applying OWA and a fuzzy aggregation model, the data-sensitivity-based ranking list and the knowledge-based ranking list are combined using our proposed percolation technique to determine the final ranking list. The key issue of the proposed percolation technique is to filter the relevant features automatically and objectively by creating a gap between the scores of relevant and irrelevant parameters. It permits thresholds to be generated automatically, which effectively reduces the human subjectivity and arbitrariness of manually chosen thresholds. For a specific sensory descriptor, the threshold is defined systematically by iteratively aggregating (n times) the ranking lists generated by the OWA and fuzzy models, according to a specific algorithm. Having applied the percolation technique to a real example, a well-known finished textile product, stonewashed denim, usually considered the most important quality criterion in jeans evaluation, we separate the relevant physical features from the irrelevant ones for each sensory descriptor. The originality and performance of the proposed relevant feature selection method are shown by the variability in the number of physical features in the set of selected relevant parameters. Instead of selecting identical numbers of features with a predefined threshold, the proposed method can be adapted to the specific nature of the complex relations between sensory descriptors and physical features, in order to propose lists of relevant features of different sizes for different descriptors. In order to obtain more reliable results for the selection of relevant physical features, the percolation technique has been applied to combine the fuzzy global relevancy and OWA global relevancy criteria so as to clearly distinguish the scores of the relevant physical features from those of irrelevant ones.
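A minimal sketch of the OWA aggregation step is given below: scores from several evaluators are sorted and combined with ordered weights to give one aggregated relevancy score per physical feature. The scores and weights are invented placeholders, and the fuzzy sensitivity criterion and percolation technique are not shown.

```python
# OWA (ordered weighted averaging) aggregation: weights are applied to the ordered
# arguments, not to specific evaluators, so the operator interpolates between min and max.
import numpy as np

def owa(scores, weights):
    """scores: evaluator scores for one feature; weights: OWA weights summing to 1."""
    ordered = np.sort(np.asarray(scores))[::-1]   # descending order of the arguments
    return float(np.dot(weights, ordered))

evaluator_scores = np.array([[0.8, 0.6, 0.9],     # feature 1 scored by 3 evaluators (placeholder)
                             [0.2, 0.4, 0.3]])    # feature 2
weights = np.array([0.5, 0.3, 0.2])               # assumed weights favouring high scores
aggregated = [owa(row, weights) for row in evaluator_scores]
ranking = np.argsort(aggregated)[::-1]            # ranking list fed to the later steps
```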

Keywords: data sensitivity, feature selection, fuzzy logic, OWA operators, percolation technique

Procedia PDF Downloads 603
693 Effect of the Orifice Plate Specifications on Coefficient of Discharge

Authors: Abulbasit G. Abdulsayid, Zinab F. Abdulla, Asma A. Omer

Abstract:

Because the orifice plate is relatively inexpensive, requires very little maintenance, and is calibrated only at plant turnaround, it has come into very widespread use in the gas industry. Measurement inaccuracy in fiscal metering stations may well be the most important factor behind mischarges in the natural gas industry in Libya. Even a trivial measurement error can add a rapidly escalating financial burden to custody-transfer transactions. The unaccounted-for gas transferred annually via orifice plates in Libya can be estimated at multiple millions of dollars. As oil and gas wealth is the sole source of income for Libya, every effort is now being exerted to improve the accuracy of existing orifice metering facilities. The discharge coefficient has become pivotal in current research undertaken in this regard; hence, improving knowledge of the flow field in a typical orifice meter is indispensable. Recently, and at a drastic pace, CFD has become the most time- and cost-efficient versatile tool for in-depth analysis of the fluid mechanics and heat and mass transfer of various industrial applications. Probing the underlying physical phenomena and predicting all relevant parameters and variables with high spatial and temporal resolution are among the greatest advantages of CFD. In this paper, the flow phenomena for air passing through an orifice meter were numerically analyzed with CFD-based modeling, giving important information about the effect of orifice plate specifications on the discharge coefficient for three different tapping locations, i.e., flange tappings and D and D/2 tappings, compared with vena contracta tappings. The computed discharge coefficients were compared with the discharge coefficients estimated by ISO 5167. The influences of orifice plate bore thickness, orifice plate thickness, bevel angle, perpendicularity, and buckling of the orifice plate were all duly investigated. An orifice meter with a pipe diameter of 2 in, a beta ratio of 0.5, and a Reynolds number of 91100 was taken as a model. The results highlighted that the discharge coefficients were highly responsive to the variation of plate specifications and that, in all cases, the discharge coefficients for D and D/2 tappings were very close to those of vena contracta tappings, which are regarded as an ideal arrangement. Also, in a general sense, it was found that the standard equation in ISO 5167, by which the discharge coefficient is calculated, cannot capture the variation of plate specifications, and thus further thorough consideration is still needed.
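For orientation, the sketch below back-calculates a discharge coefficient from measured quantities using the general ISO 5167 orifice equation qm = C/√(1-β⁴)·ε·(π/4)d²·√(2Δpρ); the numerical values are illustrative (not the paper's CFD results), and the Reader-Harris/Gallagher correlation used by ISO 5167 for the reference coefficient is not reproduced.

```python
# Back-calculating the discharge coefficient C from a measured mass flow and differential
# pressure with the general ISO 5167 orifice equation. Values are illustrative assumptions.
import math

def discharge_coefficient(qm, d, beta, dp, rho, eps=1.0):
    """qm: mass flow [kg/s]; d: orifice bore [m]; beta: diameter ratio; dp: differential
    pressure [Pa]; rho: upstream density [kg/m^3]; eps: expansibility factor (1.0 if
    the flow is treated as incompressible)."""
    area = math.pi / 4.0 * d ** 2
    return qm * math.sqrt(1.0 - beta ** 4) / (eps * area * math.sqrt(2.0 * dp * rho))

# example: 2-inch pipe (D ~ 0.0525 m), beta = 0.5 -> bore d ~ 0.02625 m (assumed numbers)
C = discharge_coefficient(qm=0.05, d=0.02625, beta=0.5, dp=2.5e4, rho=1.2)
print(f"estimated C = {C:.3f}")
```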

Keywords: CFD, discharge coefficients, orifice meter, orifice plate specifications

Procedia PDF Downloads 118
692 Analysis of Delays during Initial Phase of Construction Projects and Mitigation Measures

Authors: Sunaitan Al Mutairi

Abstract:

A perfect start is a key factor for project completion on time. This study examined the effects of delayed mobilization of resources during the initial phases of a project. The paper mainly highlights the identification and categorization of all delays during the initial construction phase and their root cause analysis, with corrective/control measures, for Kuwait Oil Company oil and gas projects. A relatively high percentage of the delays identified during project execution (contract award to end of the defects liability period) is attributed to mobilization/preliminary activity delays. Data analysis demonstrated a significant increase in average project delay during the last five years compared to the previous period. Contractors had delays/issues during the initial phase, which resulted in slippages that progressively increased, leading to time and cost overruns. Delays/issues not mitigated on time during the initial phase had a very high impact on project completion. Data analysis of the delays for the past five years was carried out using trend charts, scatter plots, process maps, box plots, the relative importance index, and Pareto charts. Construction of any project inside the gathering centers involves complex management skills related to workforce, materials, plant, machinery, new technologies, etc. Delay affects the completion of projects and compromises the quality, schedule, and budget of project deliverables. Works executed as per plan during the initial phase and start-up duration of the project construction activities resulted in only minor slippages/delays in project completion. In addition, there was a good working environment between client and contractor, resulting in better project execution and management. Mostly, the contractor was on the front foot in the execution of projects that had minimal or no delays during the initial and construction periods. Hence, having a perfect start during the initial construction phase has a positive influence on project success. Our research paper studies each type of delay with real examples supported by statistical results and suggests mitigation measures. Detailed analysis was carried out with all stakeholders, based on the impact and occurrence of delays, to arrive at a practical and effective outcome for mitigating the delays. The key to improvement is to have proper control measures and periodic evaluation/audit to ensure implementation of the mitigation measures. The focus of this research is to reduce the delays encountered during the initial construction phase of the project life cycle.
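As a small illustration of the relative importance index (RII) ranking mentioned above, commonly computed as RII = ΣW / (A·N) where W are respondent ratings, A the highest possible rating and N the number of respondents, the sketch below ranks a few invented delay causes; the causes and ratings are placeholders, not survey data from this study.

```python
# Rank delay causes by relative importance index and list them in Pareto-style order.
def rii(ratings, highest=5):
    return sum(ratings) / (highest * len(ratings))

survey = {  # hypothetical ratings (1-5) from five respondents per cause
    "late mobilization of manpower": [5, 4, 5, 4, 5],
    "delayed material procurement": [4, 4, 3, 5, 4],
    "slow permit approvals": [3, 3, 4, 3, 2],
}
ranked = sorted(((cause, rii(r)) for cause, r in survey.items()),
                key=lambda item: item[1], reverse=True)
for cause, score in ranked:
    print(f"{score:.2f}  {cause}")
```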

Keywords: construction activities delays, delay analysis for construction projects, mobilization delays, oil & gas projects delays

Procedia PDF Downloads 316
691 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment

Authors: Arindam Chaudhuri

Abstract:

Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitivity to noisy samples and handles imprecision in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. The different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments are done on the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. It effectively resolves outlier effects, imbalance and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
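A rough single-node analogue of such a classifier can be sketched with scikit-learn: an SVM with a hyperbolic tangent (sigmoid) kernel in which fuzzy-rough membership degrees are approximated by per-sample weights. The membership computation below is a deliberate simplification, the data are synthetic, and no Hadoop/MapReduce parallelization is shown.

```python
# SVM with a hyperbolic tangent (sigmoid) kernel; noisy samples far from their class
# centre get lower weight, a crude stand-in for fuzzy-rough membership degrees.
import numpy as np
from sklearn.svm import SVC

def fuzzy_memberships(X, y):
    """Samples close to their class centre get weight near 1, distant ones near 0.1."""
    w = np.empty(len(y), dtype=float)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centre = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - centre, axis=1)
        w[idx] = 1.0 - d / (d.max() + 1e-9)
    return 0.1 + 0.9 * w                       # keep every sample some influence

X = np.random.randn(300, 5)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy labels
clf = SVC(kernel="sigmoid", C=1.0, gamma="scale")
clf.fit(X, y, sample_weight=fuzzy_memberships(X, y))
print("support vectors:", clf.n_support_)
```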

Keywords: FRSVM, Hadoop, MapReduce, PFRSVM

Procedia PDF Downloads 489
690 Application of Forensic Entomology to Estimate the Post Mortem Interval

Authors: Meriem Taleb, Ghania Tail, Fatma Zohra Kara, Brahim Djedouani, T. Moussa

Abstract:

Forensic entomology has grown immensely as a discipline over the past thirty years. The main purpose of forensic entomology is to establish the post mortem interval, or PMI. From three days after death onwards, insect evidence is often the most accurate, and sometimes the only, method of determining the elapsed time since death. This work presents the estimation of the PMI in an experiment designed to test the reliability of the accumulated degree days (ADD) method, and the application of this method in a real case. The study was conducted at the Laboratory of Entomology at the National Institute for Criminalistics and Criminology of the National Gendarmerie, Algeria. The domestic rabbit Oryctolagus cuniculus L. was selected as the animal model. On 8th July 2012, the animal was killed. Larvae were collected and raised to adulthood. The oviposition time was estimated by summing the average daily temperatures minus the minimum development temperature (which is specific to each species); the day on which the species-specific thermal sum is reached corresponds to the oviposition day. Weather data were obtained from the nearest meteorological station. After rearing was accomplished, three species emerged: Lucilia sericata, Chrysomya albiceps, and Sarcophaga africa. For Chrysomya albiceps, an accumulation of 186°C is necessary; the emergence of adults occurred on 22nd July 2012, and a value of 193.4°C was reached on 9th August 2012. Lucilia sericata requires an accumulation of 207°C; the emergence of adults occurred on 23rd July 2012, and a value of 211.35°C was reached on 9th August 2012. It should also be considered that oviposition may occur more than 12 hours after death. Thus, the obtained PMI is in agreement with the actual time of death. We illustrate the use of this method in the investigation of a case of a decaying human body found on 3rd March 2015 in Bechar, in the southwest of the Algerian desert. Maggots were collected and sent to the Laboratory of Entomology. Lucilia sericata adults were identified on 24th March 2015 after emergence. A sum of 211.6°C was reached on 1st March 2015, which corresponds to the estimated day of oviposition. Therefore, the estimated date of death is 1st March 2015 ± 24 hours. The PMI estimated by the accumulated degree days (ADD) method appears to be very precise. Entomological evidence should always be used in homicide investigations when the time of death cannot be determined by other methods.
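The back-calculation underlying the ADD method can be sketched as follows: daily mean temperature minus the base (minimum development) temperature is accumulated backwards from the emergence or collection date until the species-specific thermal requirement is reached. The temperatures and the base temperature in the example are assumptions, not the data of this study.

```python
# Accumulated-degree-days back-calculation: step backwards in time from emergence,
# summing (mean temperature - base temperature) per day until the required thermal
# sum is reached; that day is the estimated oviposition day.
from datetime import date, timedelta

def estimate_oviposition(emergence_day, daily_mean_temp, required_add, base_temp):
    """daily_mean_temp: dict mapping date -> mean temperature (deg C)."""
    total, day = 0.0, emergence_day
    while total < required_add:
        day -= timedelta(days=1)                          # one day further back in time
        total += max(daily_mean_temp[day] - base_temp, 0.0)
    return day                                            # estimated oviposition date

# assumed constant 28 deg C weather and an assumed base temperature of 10 deg C
temps = {date(2012, 7, 8) + timedelta(days=i): 28.0 for i in range(40)}
ovi = estimate_oviposition(date(2012, 7, 22), temps, required_add=186.0, base_temp=10.0)
print(ovi)
```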

Keywords: forensic entomology, accumulated degree days, postmortem interval, diptera, Algeria

Procedia PDF Downloads 293
689 Simulation Study on Polymer Flooding with Thermal Degradation in Elevated-Temperature Reservoirs

Authors: Lin Zhao, Hanqiao Jiang, Junjian Li

Abstract:

Polymers injected into elevated-temperature reservoirs inevitably suffer from thermal degradation, resulting in severe viscosity loss and poor flooding performance. However, for polymer flooding in such reservoirs, present simulators fail to provide accurate results for lack of a description of thermal degradation. In light of this, the objectives of this paper are to provide a simulation model for polymer flooding with thermal degradation and to study the effect of thermal degradation on polymer flooding in elevated-temperature reservoirs. Firstly, a thermal degradation experiment was conducted to obtain the degradation law of polymer concentration and viscosity: different types of polymers were degraded in a thermostatic tank at elevated temperatures. Afterward, based on the obtained law, a streamline-assisted model was proposed to simulate the degradation process under in-situ flow conditions. Model validation was performed with field data from a well group of an offshore oilfield. Finally, the effect of thermal degradation on polymer flooding was studied using the proposed model. Experimental results showed that the polymer concentration remained unchanged, while the viscosity decayed exponentially with time during degradation. The polymer viscosity is functionally dependent on the polymer degradation time (PDT), which represents the time elapsed since the injection of the polymer particle. Tracing the real flow path of each polymer particle is therefore required, so the presented simulation model is streamline-assisted. An equation of PDT versus time of flight (TOF) along the streamline was built from the law of polymer particle transport. Based on field polymer samples and dynamic data, the new model proved accurate. The study of the degradation effect on polymer flooding indicated that: (1) the viscosity loss increases with TOF exponentially in the main body of the polymer slug and remains constant in the slug front; (2) the response time of polymer flooding is delayed, but the effective time is prolonged; (3) the breakthrough of subsequent water is eased; (4) the capacity of the polymer to adjust the injection profile is diminished; (5) the incremental recovery is reduced significantly. In general, the effect of thermal degradation on polymer flooding performance is rather negative. This paper provides a more comprehensive insight into polymer thermal degradation, covering both the physical process and the field application. The proposed simulation model offers an effective means of simulating the polymer flooding process with thermal degradation. The negative effect of thermal degradation suggests that polymer thermal stability should be given full consideration when designing a polymer flooding project in elevated-temperature reservoirs.
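A minimal sketch of the viscosity-degradation law implied by the experiment (constant concentration, exponential viscosity decay with polymer degradation time) is given below; the rate constant and limiting viscosities are illustrative assumptions, not fitted values from the paper.

```python
# Exponential viscosity decay as a function of polymer degradation time (PDT).
# In a streamline-assisted model, PDT would be derived from the time of flight (TOF)
# along each streamline before evaluating the in-situ viscosity in each grid block.
import numpy as np

def degraded_viscosity(pdt_days, mu0=40.0, mu_inf=5.0, k=0.02):
    """mu0: initial viscosity [mPa.s]; mu_inf: residual viscosity; k: decay rate [1/day].
    All three values are assumed for illustration."""
    return mu_inf + (mu0 - mu_inf) * np.exp(-k * np.asarray(pdt_days))

mu = degraded_viscosity(np.linspace(0, 365, 10))   # viscosity over one year of degradation
```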

Keywords: polymer flooding, elevated-temperature reservoir, thermal degradation, numerical simulation

Procedia PDF Downloads 138
688 Optimizing Production Yield Through Process Parameter Tuning Using Deep Learning Models: A Case Study in Precision Manufacturing

Authors: Tolulope Aremu

Abstract:

This paper is based on the idea of using deep learning methodology for optimizing production yield by tuning a few key process parameters in a manufacturing environment. The study focuses explicitly on how to maximize production yield and minimize operational costs by utilizing advanced neural network models, specifically Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNNs). These models were implemented using the Python-based frameworks TensorFlow and Keras. The targets of the research are precision molding processes in which the temperature ranges between 150°C and 220°C, the pressure ranges between 5 and 15 bar, and the material flow rate ranges between 10 and 50 kg/h; these are critical parameters that have a great effect on yield. A dataset of 1 million production cycles spanning five continuous years was considered, with detailed logs showing the exact parameter settings and yield output. The LSTM model captures time-dependent trends in the production data, while the CNN analyzes the spatial correlations between parameters. The models are designed in a supervised learning manner; an MSE loss function is used and optimized through the Adam optimizer. After running a total of 100 training epochs, the models achieved 95% accuracy in recommending optimal parameter configurations. Results indicated an increase in production yield of 12% compared with the traditional RSM and DOE methods. In addition, the error margin was reduced by 8%, yielding consistently high-quality products from the deep learning models. The monetary value was around $2.5 million annually, in costs saved on material waste, energy consumption, and equipment wear as a result of implementing the optimized process parameters. The system was deployed in an industrial production environment with the help of a hybrid cloud setup: Microsoft Azure was used for data storage, while the training and deployment of the models were performed on Google Cloud AI. The real-time monitoring of the process and the automatic tuning of parameters depend on this cloud infrastructure. To put it into perspective, deep learning models, especially those employing LSTM and CNN, optimize the production yield by fine-tuning process parameters. Future research will consider reinforcement learning with a view to further enhancing system autonomy and scalability across various manufacturing sectors.
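A hedged sketch of the kind of Keras LSTM regressor described above is shown below, predicting yield from sequences of temperature, pressure and material flow rate with an MSE loss and the Adam optimizer; the layer sizes, window length and toy data are assumptions, not the authors' architecture or dataset.

```python
# LSTM regressor over windows of (temperature, pressure, flow rate) predicting yield,
# trained with MSE loss and Adam as described above. Toy data stand in for the logs.
import numpy as np
import tensorflow as tf

window, n_features = 20, 3                      # 20 past cycles x (temp, pressure, flow)
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(window, n_features)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                   # predicted yield
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")

# toy stand-in data; in practice X holds logged parameter settings and y the measured yield
X = np.random.rand(1000, window, n_features).astype("float32")
y = np.random.rand(1000, 1).astype("float32")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```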

Keywords: production yield optimization, deep learning, tuning of process parameters, LSTM, CNN, precision manufacturing, TensorFlow, Keras, cloud infrastructure, cost saving

Procedia PDF Downloads 25
687 Comparison of Parametric and Bayesian Survival Regression Models in Simulated and HIV Patient Antiretroviral Therapy Data: Case Study of Alamata Hospital, North Ethiopia

Authors: Zeytu G. Asfaw, Serkalem K. Abrha, Demisew G. Degefu

Abstract:

Background: HIV/AIDS remains a major public health problem in Ethiopia, heavily affecting people of productive and reproductive age. We aimed to compare the performance of parametric survival analysis and Bayesian survival analysis using simulations and a real-data application focused on determining predictors of HIV patient survival. Methods: Parametric survival models based on the Exponential, Weibull, Log-normal, Log-logistic, Gompertz and Generalized gamma distributions were considered. A simulation study was carried out with two different settings, informative and noninformative priors. A retrospective cohort study was implemented for HIV-infected patients under Highly Active Antiretroviral Therapy in Alamata General Hospital, North Ethiopia. Results: A total of 320 HIV patients were included in the study, of whom 52.19% were female and 47.81% male. According to the Kaplan-Meier survival estimates for the two sex groups, females showed better survival times than their male counterparts. The median survival time of HIV patients was 79 months. During the follow-up period, 89 deaths (27.81%) and 231 censored individuals (72.19%) were registered. The average baseline cluster of differentiation 4 (CD4) cell count for HIV/AIDS patients was 126.01, but after a three-year antiretroviral therapy follow-up the average CD4 cell count was 305.74, which is quite encouraging. Age, functional status, tuberculosis screening, past opportunistic infection, baseline CD4 cell count, World Health Organization clinical stage, sex, marital status, employment status, occupation type and baseline weight were found to be statistically significant factors for longer survival of HIV patients. The standard errors of all covariates in the Bayesian log-normal survival model are smaller than those in the classical one. Hence, Bayesian survival analysis showed better performance than classical parametric survival analysis when subjective data analysis was performed by considering expert opinions and historical knowledge about the parameters. Conclusions: HIV/AIDS patient mortality could thus be reduced through timely antiretroviral therapy with special attention to the identified factors. Moreover, the Bayesian log-normal survival model is preferable to the classical log-normal survival model for determining predictors of HIV patient survival.
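The classical (frequentist) side of the comparison can be sketched with the lifelines library, fitting a log-normal accelerated-failure-time model on synthetic data with hypothetical column names; the Bayesian counterpart would place informative or noninformative priors on the same coefficients and compare posterior standard deviations with these standard errors.

```python
# Classical log-normal survival regression (AFT parameterization) on synthetic data.
# Column names and the data-generating process are assumptions for illustration only.
import numpy as np
import pandas as pd
from lifelines import LogNormalAFTFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "age": rng.integers(18, 60, n),
    "baseline_cd4": rng.integers(50, 400, n),
})
df["time_months"] = np.exp(4 + 0.002 * df["baseline_cd4"] - 0.01 * df["age"]
                           + 0.5 * rng.standard_normal(n))   # synthetic survival times
df["event"] = rng.integers(0, 2, n)                           # 1 = death, 0 = censored

aft = LogNormalAFTFitter()
aft.fit(df, duration_col="time_months", event_col="event")
print(aft.summary[["coef", "se(coef)"]])   # coefficients and classical standard errors
```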

Keywords: antiretroviral therapy (ART), Bayesian analysis, HIV, log-normal, parametric survival models

Procedia PDF Downloads 195
686 Study of the Combinatorial Impact of Substrate Properties on Mesenchymal Stem Cell Migration Using Microfluidics

Authors: Nishanth Venugopal Menon, Chuah Yon Jin, Samantha Phey, Wu Yingnan, Zhang Ying, Vincent Chan, Kang Yuejun

Abstract:

Cell migration is a vital phenomenon that cells undergo in various physiological processes like wound healing, disease progression, embryogenesis, etc. Cell migration depends primarily on the chemical and physical cues available in the cellular environment. The chemical cues involve the chemokines secreted and the gradients generated in the environment, while the physical cues indicate the impact of matrix properties like nanotopography and stiffness on the cells. Mesenchymal Stem Cells (MSCs) have been shown to play a role in wound healing in vivo, and their migration to the site of the wound has been shown to have a therapeutic effect. In the field of stem cell based tissue regeneration of bones and cartilage, one approach has been to introduce scaffolds laden with MSCs into the site of injury to enable tissue regeneration. In this work, we have studied the combinatorial impact of the substrate's physical properties on MSC migration. A microfluidic in vitro model was created to perform the migration studies. The microfluidic model used is a three-compartment device consisting of two cell seeding compartments and one migration compartment. Four different PDMS substrates with varying substrate roughness, stiffness and hydrophobicity were created. Their surface roughness and stiffness were measured using Atomic Force Microscopy (AFM), while their hydrophobicity was measured from the water contact angle using an optical tensiometer. These PDMS substrates were sealed to the microfluidic chip, following which the MSCs were seeded and cell migration was studied over the period of a week. Cell migration was quantified using fluorescence imaging of the cytoskeleton (F-actin) to determine the area covered by the cells inside the migration compartment. The impact of adhesion proteins on cell migration was also quantified using quantitative real-time polymerase chain reaction (qRT-PCR). These results suggested that the optimal substrate for cell migration would be one with intermediate levels of roughness, stiffness and hydrophobicity; higher or lower values of these properties affected cell migration negatively. These observations have helped us understand that different substrate properties need to be considered in tandem, especially while designing scaffolds for tissue regeneration, as cell migration is normally governed by the combinatorial impact of the matrix. These observations may lead to scaffold optimization in future tissue regeneration applications.
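
As one possible way to quantify the area covered by cells from an F-actin fluorescence image, the sketch below (a hypothetical pipeline, not the authors' method; the scikit-image thresholding step and the synthetic image are assumptions) reports the fraction of the migration compartment occupied by cells.

import numpy as np
from skimage import filters, morphology

def migrated_area_fraction(fluor_img: np.ndarray) -> float:
    """fluor_img: 2-D grayscale F-actin image of the migration compartment."""
    thresh = filters.threshold_otsu(fluor_img)            # global threshold
    cells = fluor_img > thresh                             # binary cell mask
    cells = morphology.remove_small_objects(cells, min_size=64)  # drop debris
    return cells.sum() / cells.size                        # covered area fraction

# Example with a synthetic image standing in for real microscopy data.
img = np.zeros((512, 512))
img[100:300, 50:200] = 1.0
img += np.random.default_rng(1).normal(0, 0.05, img.shape)
print(f"area fraction: {migrated_area_fraction(img):.2%}")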

Keywords: cell migration, microfluidics, in vitro model, stem cell migration, scaffold, substrate properties

Procedia PDF Downloads 555
685 A Method to Predict the Thermo-Elastic Behavior of Laser-Integrated Machine Tools

Authors: C. Brecher, M. Fey, F. Du Bois-Reymond, S. Neus

Abstract:

Additive manufacturing has emerged as a fast-growing segment of manufacturing technology. Established machine tool manufacturers, such as DMG MORI, recently presented machine tools combining milling and laser welding. In this way, machine tools can achieve a higher degree of flexibility and a shorter production time. Still, there are challenges that have to be accounted for in terms of maintaining the necessary machining accuracy, especially thermal effects arising from the use of high power laser processing units. To study the thermal behavior of laser-integrated machine tools, it is essential to analyze and simulate the thermal behavior of machine components, both individually and assembled. This information will help to design a geometrically stable machine tool under the influence of high power laser processes. This paper presents an approach to decrease the loss of machining precision due to thermal impacts. Real effects of laser machining processes are considered and thus enable an optimized design of the machine tool, or rather its components, in the early design phase. The core element of this approach is a matched FEM model considering all relevant variables, e.g. laser power, angle of the laser beam, reflection coefficients and heat transfer coefficient. Hence, a systematic approach to obtain this matched FEM model is essential. The two constituent aspects of the method are describing the thermal behavior of the structural components and predicting the laser beam path in order to determine the relevant beam intensity on those components. To match the model, both aspects have to be combined and verified empirically. In this context, an essential component of a five-axis machine tool, the turn-swivel table, serves as the demonstration object for the verification process. Therefore, a turn-swivel table test bench as well as an experimental set-up to measure the beam propagation were developed and are described in the paper. In addition to the empirical investigation, a simulative approach to the described types of experimental examination is presented. In conclusion, it is shown that the method and a good understanding of the two core aspects, the thermo-elastic machine behavior and the laser beam path, as well as their combination, help designers to minimize the loss of precision in the early stages of the design phase.
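
The following simplified sketch (not the paper's matched FEM model; the stray-load power, illuminated area and steel-like material values are assumptions) illustrates the kind of inputs named above, turning laser power, incidence angle and heat transfer coefficient into an absorbed surface flux and a one-dimensional transient temperature response of a structural component.

import numpy as np

# Assumed stray/reflected laser load on a structural component (not the weld spot)
P_stray, A_spot = 200.0, 0.01                 # absorbed power [W], illuminated area [m^2]
incidence = np.radians(30)                    # assumed beam incidence angle
q_abs = P_stray * np.cos(incidence) / A_spot  # absorbed heat flux [W/m^2]

k, rho, cp = 45.0, 7850.0, 490.0              # steel-like conductivity [W/mK], density [kg/m^3], heat capacity [J/kgK]
h, T_amb = 25.0, 20.0                         # convection coefficient [W/m^2K], ambient temperature [°C]
L, n = 0.02, 200                              # 20 mm slab, number of grid points
dx = L / (n - 1)
alpha = k / (rho * cp)                        # thermal diffusivity [m^2/s]
dt = 0.4 * dx**2 / alpha                      # stable explicit time step
T = np.full(n, T_amb)

for _ in range(50_000):                       # roughly 17 s of heating
    Tn = T.copy()
    # interior nodes: explicit finite-difference heat conduction
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    # irradiated face: absorbed flux, convection loss, conduction into the part
    T[0] = Tn[0] + dt / (rho * cp * dx) * (q_abs - h * (Tn[0] - T_amb) - k * (Tn[0] - Tn[1]) / dx)
    # back face: convection loss only
    T[-1] = Tn[-1] + dt / (rho * cp * dx) * (-h * (Tn[-1] - T_amb) - k * (Tn[-1] - Tn[-2]) / dx)

print(f"surface temperature rise after heating: {T[0] - T_amb:.2f} K")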

Keywords: additive manufacturing, laser beam machining, machine tool, thermal effects

Procedia PDF Downloads 262
684 Acrylic Microspheres-Based Microbial Bio-Optode for Nitrite Ion Detection

Authors: Siti Nur Syazni Mohd Zuki, Tan Ling Ling, Nina Suhaity Azmi, Chong Kwok Feng, Lee Yook Heng

Abstract:

Nitrite (NO2-) ion is used prevalently as a preservative in processed meat. Elevated levels of nitrite are also found in edible bird's nests (EBNs). Consumption of NO2- ion at levels above the health-based risk may cause cancer in humans. The spectrophotometric Griess test is the simplest established standard method for NO2- ion detection; however, it requires careful control of the pH of each reaction step and is susceptible to interference from strong oxidants and dyes. Other traditional methods rely on laboratory-scale instruments such as GC-MS, HPLC and ion chromatography, which cannot give a real-time response. Therefore, there is a significant need for devices capable of measuring nitrite concentration in-situ, rapidly and without reagents, sample pretreatment or extraction steps. Herein, we constructed a microspheres-based microbial optode for visual quantitation of NO2- ion. Raoultella planticola, the bacterium expressing NAD(P)H nitrite reductase (NiR) enzyme, was successfully extracted by a microbial technique from EBN collected from a local birdhouse. The whole cells and the lipophilic Nile Blue chromoionophore were physically adsorbed on photocurable poly(n-butyl acrylate-N-acryloxysuccinimide) [poly(nBA-NAS)] microspheres, whilst the reduced coenzyme NAD(P)H was covalently immobilized on the succinimide-functionalized acrylic microspheres to produce a reagentless biosensing system. When the NiR enzyme catalyzes the oxidation of NAD(P)H to NAD(P)+, NO2- ion is reduced to ammonium hydroxide, and a colour change of the immobilized Nile Blue chromoionophore from blue to pink is perceived as a result of the deprotonation reaction increasing the local pH in the microspheres membrane. The microspheres-based optosensor was optimized with a reflectance spectrophotometer at 639 nm and pH 8. The resulting microbial bio-optode membrane could quantify NO2- ion at 0.1 ppm and had a linear response up to 400 ppm. Due to the large surface area to mass ratio of the acrylic microspheres, the membrane allows efficient solid-state diffusional mass transfer of the substrate to the bio-recognition phase and achieves a steady-state response in as little as 5 min. The proposed optical microbial biosensor requires no sample pre-treatment step and possesses high stability, as the whole-cell biocatalyst protects the enzymes from interfering substances; hence it is suitable for measurements in contaminated samples.
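
A hypothetical calibration sketch of the reflectometric read-out is given below (all signal values are invented; only the 0.1-400 ppm linear range is taken from the abstract): a least-squares line relates the 639 nm reflectance signal to nitrite concentration and is inverted to quantify an unknown sample.

import numpy as np

conc = np.array([0.1, 1, 10, 50, 100, 200, 300, 400])        # NO2- standards [ppm]
signal = 0.0021 * conc + 0.015                                 # assumed linear response
signal += np.random.default_rng(7).normal(0, 0.002, conc.size) # measurement noise

slope, intercept = np.polyfit(conc, signal, 1)                 # least-squares calibration
unknown_signal = 0.35
unknown_ppm = (unknown_signal - intercept) / slope             # inverse prediction
print(f"slope={slope:.4g}, intercept={intercept:.4g}, unknown sample ≈ {unknown_ppm:.1f} ppm")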

Keywords: acrylic microspheres, microbial bio-optode, nitrite ion, reflectometric

Procedia PDF Downloads 445
683 Method for Controlling the Groundwater Polluted by the Surface Waters through Injection Wells

Authors: Victorita Radulescu

Abstract:

Introduction: The optimum exploitation of agricultural land in the presence of an aquifer polluted by surface sources requires close monitoring of the groundwater level, both in periods of intense irrigation and in the absence of irrigation, in times of drought. Currently in Romania, in the southern part of the country, the Baragan area, many agricultural lands are confronted with the risk of groundwater pollution in the absence of systematic irrigation, correlated with climate change. Basic Methods: The non-steady flow of groundwater in an aquifer can be described by Boussinesq's partial differential equation. The finite element method was used, applied to the porous medium needed for the water mass balance equation. Through a proper structure of the initial and boundary conditions, the flow in drainage or injection well systems can be modeled according to the period of irrigation or prolonged drought. The boundary conditions consist of the groundwater levels required at the margins of the analyzed area, in conformity with the real situation of the polluting emissaries, following the double-step method. Major Findings/Results: The drainage condition is equivalent to operating regimes on two or three rows of wells, with negative (extraction) rates, so as to ensure the pollutant transport, modeled with variable flow in groups of two adjacent nodes. In order to keep the water table in accordance with the real constraints, its top level must, for example, be restricted below an imposed value required in each node. The objective function consists of a sum of the absolute values of the differences of the infiltration flow rates, increased by a large penalty factor when positive values of pollutant occur. Under these conditions, a balanced structure of the pollutant concentration is maintained in the groundwater. The parameters modified during the optimization process are the spatial coordinates and the drainage flows through the wells. Conclusions: The presented calculation scheme was applied to an area having a cross-section of 50 km between two emissaries with different altitudes and different pollution levels. The input data were correlated with measurements made in situ, such as the level of the bedrock, the grain size of the field, the slope, etc. This method of calculation can also be extended to determine the variation of the groundwater in the aquifer following flood wave propagation in the emissaries.
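
The penalized objective described above can be written schematically as in the sketch below (illustrative only, not the author's FEM code; the penalty factor, the interpretation of the flow-rate differences as a mismatch against target rates, and the toy numbers are assumptions).

import numpy as np

PENALTY = 1e6   # assumed "large penalty factor"

def objective(q_wells: np.ndarray, q_target: np.ndarray,
              pollutant: np.ndarray) -> float:
    """q_wells, q_target: infiltration/drainage rates per well [m^3/day];
    pollutant: simulated pollutant concentrations at the monitored nodes."""
    mismatch = np.abs(q_wells - q_target).sum()           # sum of absolute flow-rate differences
    violation = PENALTY * np.clip(pollutant, 0.0, None).sum()  # penalize positive pollutant values
    return mismatch + violation

# Toy usage with invented numbers.
print(objective(np.array([12.0, 8.0, 10.0]),
                np.array([10.0, 10.0, 10.0]),
                np.array([-0.2, 0.0, 0.05])))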

Keywords: environmental protection, infiltrations, numerical modeling, pollutant transport through soils

Procedia PDF Downloads 154
682 Psychological Consultation of Married Couples at Various Stages of Formation of the Young Family

Authors: Gulden Aykinbaeva, Assem Umirzakova, Assel Makhadiyeva

Abstract:

The problem of studying young married couples in connection with changes in the social institution of family and marriage is highly relevant for family counseling, considering the role of the family in the development of modern society. Results of numerous studies indicate that one of the most difficult periods in the formation and stabilization of a marriage is the young-family period. This period is characterized by various processes of integration, adaptation and emotional compatibility of the spouses. In this period, the young family experiences its first normative crisis, which leaves an imprint on the further development of the family scenario. The emergence of new, previously non-existent systems of values has a great influence on the process of formation of a young family and of each spouse separately. The set family tasks can be solved through the development of a unified system of family relations in which socially mature persons, capable of considering the family as the creative work of each other, act as subjects. In line with the research objective, the following techniques were used: V. V. Stolin's marriage satisfaction questionnaire, A. N. Volkova's technique for detecting the coherence of family values and role attitudes in a married couple, and content analysis. The development of an internal basis of the family through the mutual clarification of values is important when working with married couples. A 'mature view' of the partner in the marriage union provides coherence between the expected and real behavior of the partner, which is important for realizing the goals of adaptation in a family. To examine the relationships among the data obtained by means of A. N. Volkova's and V. V. Stolin's techniques and the content analysis, correlation analysis with Spearman's criterion was used. The analysis of the results of the conducted research allowed us to determine a number of consistent patterns: 1. The nature of the change in marital satisfaction of the spouses testifies that the matrimonial relations undergo qualitative changes at different stages of the formation of a young family. 2. In the course of their development, formation and functioning in a young marriage, the matrimonial relations undergo considerable changes at the psychological and socio-psychological levels, and insignificant changes at the psychophysiological and sociocultural levels. The material obtained allows us to plan further detailed research into the development of matrimonial relations, not only in young marriage but also at later stages of matrimony. We believe that the results obtained in this research can be applied in practice when creating algorithms for the selection of marriage partners, in diagnosing the character and content of matrimonial disharmonies, and in forecasting the stability of marriage and family.

Keywords: married couples, formation of the young family, psychological consultation, matrimony

Procedia PDF Downloads 394
681 Protective Role of Autophagy Challenging the Stresses of Type 2 Diabetes and Dyslipidemia

Authors: Tanima Chatterjee, Maitree Bhattacharyya

Abstract:

The global challenge of type 2 diabetes mellitus is a major health concern in this millennium, and researchers are continuously exploring new targets to develop novel therapeutic strategies. Type 2 diabetes mellitus (T2DM) is often coupled with dyslipidemia, increasing the risk of cardiovascular (CVD) complications. Enhanced oxidative and nitrosative stresses appear to be the major risk factors underlying insulin resistance, dyslipidemia, β-cell dysfunction, and T2DM pathogenesis. Autophagy emerges as a promising defense mechanism against stress-mediated cell damage, regulating tissue homeostasis, cellular quality control, and energy production, and promoting cell survival. In this study, we have attempted to explore the pivotal role of autophagy in T2DM subjects with or without dyslipidemia in peripheral blood mononuclear cells (PBMCs) and insulin-resistant HepG2 cells, utilizing a flow cytometric platform, confocal microscopy, and molecular biology techniques like western blotting, immunofluorescence, and real-time polymerase chain reaction. In T2DM with dyslipidemia, a higher population of autophagy-positive cells was detected compared to patients with T2DM alone, which might result from higher stress. Autophagy was observed to be triggered both by oxidative and by nitrosative stress, a novel finding of our research. LC3 puncta were observed in peripheral blood mononuclear cells and at the periphery of HepG2 cells under diabetic and diabetic-dyslipidemic conditions. Increased expression of ATG5, LC3B, and Beclin supports the autophagic pathway in both PBMCs and insulin-resistant HepG2 cells. Upon blocking autophagy with 3-methyladenine (3MA), the apoptotic cell population increased significantly, as observed by caspase-3 cleavage and reduced expression of Bcl2. Autophagy has also been shown to control the oxidative stress-mediated up-regulation of inflammatory markers like IL-6 and TNF-α. To conclude, this study elucidates that autophagy plays a protective role in diabetes mellitus with dyslipidemia. In the present scenario, this study should have a significant impact on developing a new therapeutic strategy for diabetic dyslipidemic subjects by enhancing autophagic activity.
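
For the real-time PCR read-out mentioned above, relative expression of autophagy genes is commonly computed with the Livak 2^-ΔΔCt method; the sketch below is a generic illustration of that calculation (the Ct values, the GAPDH reference gene and the specific comparison of conditions are invented, not the study's data).

# Generic Livak 2^-ΔΔCt calculation for relative gene expression from qPCR Ct values.
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    delta_sample = ct_target_sample - ct_ref_sample      # ΔCt in the test condition
    delta_control = ct_target_control - ct_ref_control   # ΔCt in the control condition
    return 2.0 ** -(delta_sample - delta_control)        # 2^-ΔΔCt

# Example: hypothetical LC3B Ct values in diabetic-dyslipidemic PBMCs vs. controls (GAPDH reference)
print(f"LC3B fold change: {fold_change(22.1, 18.0, 24.3, 18.2):.2f}")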

Keywords: autophagy, apoptosis, dyslipidemia, reactive oxygen species, reactive nitrogen species, Type 2 diabetes

Procedia PDF Downloads 128
680 Decision-Making Process Based on Game Theory in the Process of Urban Transformation

Authors: Cemil Akcay, Goksun Yerlikaya

Abstract:

Buildings are the living spaces of people, playing an active role in every aspect of life in today's world. While some structures have survived from the early ages, most buildings that completed their lifetime have not been carried over to the present day. Nowadays, buildings that do not meet the social, economic, and safety requirements of the age return to life through a transformation process. This transformation is called urban transformation. Urban transformation is the renewal of areas at risk of disaster together with the technological infrastructure required by the structures. The transformation aims to prevent damage from earthquakes and other disasters by rebuilding buildings that have completed their non-earthquake-resistant economic life. It is essential to decide on the issues related to conversion and transformation in places such as Istanbul, located in a first-degree earthquake zone, where most of the building stock must be transformed. In urban transformation, the property owners, the local authority, and the contractor must come to an agreement at a common point. Considering that hundreds of thousands of property owners are sometimes involved in the transformation areas, it is evident how difficult it is to reach an agreement and decide. For the optimization of these decisions, the use of game theory is proposed. The main problem in this study is whether the urban transformation is carried out in place or the building or buildings are relocated to a different site. The urban transformation planned for the Istanbul University Cerrahpaşa Medical Faculty Campus, which involves many stakeholders, was addressed through game theory applications. The decisions given on a real urban transformation project and the logical suitability of decisions taken without the use of game theory were also examined using game theory. In each step of this study, the many decision-makers are classified according to a specific logical sequence; in the game trees that emerged as a result of this classification, Nash equilibria were sought and optimum decisions were determined. All decisions taken for this project were subjected to two significantly differentiated comparisons using game theory and as decisions taken without the use of game theory, and according to the results, solutions for the decision phase of the urban transformation process are introduced. The game theory model was developed for the urban transformation process from beginning to end, particularly as a solution to the difficulty of making rational decisions in large-scale projects with many participants in the decision-making process. The use of such a decision-making mechanism can provide an optimum answer to the demands of the stakeholders. For today's construction sector, it is also evident that game theory addresses the most critical issues of planning and making the right decisions in the years ahead.
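
As a toy illustration of the equilibrium check behind the approach described above (the strategies and payoffs are invented and the game is written in normal form rather than as the study's game trees), the sketch below scans a two-player game between property owners and the contractor for pure-strategy Nash equilibria.

import numpy as np

# Rows: owners' strategies (accept offer, negotiate); columns: contractor's
# strategies (standard offer, improved offer). Payoffs are illustrative only.
owners     = np.array([[3, 5],
                       [2, 4]])
contractor = np.array([[4, 2],
                       [1, 3]])

equilibria = []
for i in range(owners.shape[0]):
    for j in range(owners.shape[1]):
        row_best = owners[i, j] >= owners[:, j].max()           # owners cannot improve by switching
        col_best = contractor[i, j] >= contractor[i, :].max()   # contractor cannot improve by switching
        if row_best and col_best:
            equilibria.append((i, j))

print("pure-strategy Nash equilibria (owner strategy, contractor strategy):", equilibria)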

Keywords: urban transformation, the game theory, decision making, multi-actor project

Procedia PDF Downloads 140
679 Emissions and Total Cost of Ownership Assessment of Hybrid Propulsion Concepts for Bus Transport with Compressed Natural Gases or Diesel Engine

Authors: Volker Landersheim, Daria Manushyna, Thinh Pham, Dai-Duong Tran, Thomas Geury, Omar Hegazy, Steven Wilkins

Abstract:

Air pollution is one of the emerging problems of our society. Targets for the reduction of CO₂ emissions address low-carbon and resource-efficient transport. (Plug-in) hybrid electric propulsion concepts offer the possibility to reduce the total cost of ownership (TCO) and the emissions of public transport vehicles (e.g., in bus applications). In this context, diesel engines are typically used to form the hybrid propulsion system of the vehicle. Although diesel engine technology has seen major advances, some challenges, such as the high amount of particle emissions, remain relevant. Gaseous fuels, i.e., compressed natural gas (CNG) or liquefied petroleum gas (LPG), represent an attractive alternative to diesel because of their composition. In the framework of the EU-funded research project 'Optimised Real-world Cost-Competitive Modular Hybrid Architecture' (ORCA), two different hybrid-electric propulsion concepts have been investigated: one using a diesel engine as the internal combustion engine and one using CNG as fuel. The aim of the current study is to analyze the specific benefits of the aforementioned hybrid propulsion systems for predefined driving scenarios with regard to emissions and total cost of ownership in bus applications. Engine models based on experimental data for diesel and CNG were developed. For the purpose of designing optimal energy management strategies for each propulsion system, map-driven or quasi-static models for the specific engine types are used in the simulation framework. An analogous modelling approach has been chosen to represent emissions. This paper compares the two concepts regarding their CO₂ and NOx emissions. This comparison is performed for relevant bus missions (urban, suburban, with and without zero-emission zone) and with different energy management strategies. In addition to the emissions, the downsizing potential of the combustion engine has been analysed to minimize the powertrain TCO (pTCO) of plug-in hybrid electric buses. The results of the performed analyses show that the hybrid vehicle concept using the CNG engine has advantages with respect to both emissions and pTCO. The pTCO is 10% lower, CO₂ emissions are 13% lower, and NOx emissions are more than 50% lower than with the diesel combustion engine. These results are consistent across all usage profiles under investigation.
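
The quasi-static, maps-driven engine modelling mentioned above can be illustrated with the following simplified sketch (the map values and the toy drive cycle are invented, not ORCA project data): engine CO2 mass flow is looked up from a power-indexed map for each second of a bus cycle and integrated.

import numpy as np

# Assumed engine map: mechanical power [kW] -> CO2 mass flow [g/s]
map_power = np.array([0, 20, 40, 60, 80, 100, 120])
map_co2   = np.array([0.5, 4.0, 7.5, 11.0, 15.0, 19.5, 24.5])

def co2_over_cycle(power_demand_kw: np.ndarray, dt_s: float = 1.0) -> float:
    """Integrate CO2 over a cycle sampled every dt_s seconds (quasi-static map lookup)."""
    co2_rate = np.interp(np.clip(power_demand_kw, 0, map_power[-1]),
                         map_power, map_co2)
    return co2_rate.sum() * dt_s            # grams over the whole cycle

# Toy urban-bus power trace: idle, acceleration, cruise, coasting/braking
cycle = np.concatenate([np.zeros(30), np.linspace(0, 90, 60),
                        np.full(120, 45.0), np.zeros(30)])
print(f"cycle CO2: {co2_over_cycle(cycle) / 1000:.2f} kg")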

Keywords: bus transport, emissions, hybrid propulsion, pTCO, CNG

Procedia PDF Downloads 146
678 Old Houses for Tomorrow: Deliberating a Societal Need for Conserving Unprotected Heritage Houses in India

Authors: Protyoy Sen

Abstract:

Heritage conservation often holds different meanings and values for different people. To a cultural or architectural body it might be about protecting relics of the past, while for a government body or a corporation it might be about the value of the real estate, which generates profits in terms of hospitality, tourism or some form of trade. But often, a significant proportion of the built fabric in our cities comprises what usually does not come under the common lens of collective heritage or conservation, i.e., private houses. Standing as odes to a bygone era of different communities, trades and practices that once inhabited the city, old private houses of a certain architectural or historic character face the gravest challenges of heritage conservation. Despite being significant to the heritage fabric of a city, these houses neither get the social attention nor the financial aid for repair and periodic maintenance that many monuments and public buildings do. The situation in India is no different. Private residences belonging to affluent families of an earlier time today lie in varying degrees of neglect and dilapidation. With the growth of nuclear families, drastic changes in people's lives and the expensive repair of historic material fabric (amongst other reasons), houses of heritage value often become liabilities, metaphorically a white elephant in a poor man's backyard. In a capitalistic setup that values time and money over everything, it is not reasonable to justify the conservation of individual or family assets solely through architectural, historical or cultural values. It is quite logical, then, that the houseowner, in most cases a layperson, must be made to understand both tangible and intangible values in order to (1) take the trouble of mobilizing the effort, resources and aid (if possible) to repair and maintain a house of heritage character and (2) choose to invest in a building that today might have lost its practical relevance, rather than demolishing and building anew. The question that still remains is: why? If heritage conservation is to be seen as an economically viable and realistic building activity, it must shed its image of being an 'elitist, cultural pursuit' in the eyes of the common person. Through contextual studies of historic areas in Ahmedabad and Calcutta, readings of theoretical pieces on the subject and conversations with multiple stakeholders, this study intends to justify the act of heritage conservation to the common person, one who is assumed to have no particular sensitivity towards architectural or cultural value and rather questions what these buildings tangibly bring to the table. The theoretical frameworks taken from the literature are then tested through actual case studies in Indian cities, followed by an elaborate inference on the subject.

Keywords: heritage values, heritage houses, private ownership, unprotected heritage

Procedia PDF Downloads 55
677 Criminal Law and Internet of Things: Challenges and Threats

Authors: Celina Nowak

Abstract:

The development of information and communication technologies (ICT) and the consequent growth of cyberspace have become a reality of modern societies. The newest addition to this complex structure has been the Internet of Things (IoT), which owes its existence to the appearance of smart devices. IoT creates a new dimension of the network, as communication is no longer the domain of humans alone, but has also become possible between devices themselves. The possibility of communication between devices, devoid of human intervention and real-time supervision, has generated new societal and legal challenges. Some of them may, and certainly will eventually, be connected to criminal law. Legislators at both the national and international level have been struggling to cope with this technologically evolving environment in order to address the new threats created by ICT. There are legal instruments on cybercrime, however imperfect and not of universal scope, sometimes referring to specific types of prohibited behaviors undertaken by criminals, such as money laundering and sex offences. However, criminal law seems largely unprepared for the challenges which may arise from the development of IoT. This is largely due to the fact that criminal law, at both the national and international level, is still based on the concept of the perpetration of an offence by a human being. This is a traditional approach, historically and factually justified. Over time, some legal systems have developed or accepted the possibility of the commission of an offence by a corporation, a legal person. This is in fact a legal fiction, as a legal person cannot commit an offence as such; it needs humans to actually behave in a certain way on its behalf. Yet legislators have come to understand that corporations have their own interests and may benefit from crime, and therefore need to be penalized. This realization, however, has not been welcomed by all states and still gives rise to doubts of an ontological and theoretical nature in many legal systems. For this reason, in many legislations the liability of legal persons for the commission of an offence has not been recognized as criminal responsibility. With technological progress and the growing use of IoT, the discussions referring to the criminal responsibility of corporations seem rather inadequate. The world is now facing new challenges and new threats related to 'smart' things. They will eventually have to be addressed by legislators if they want, as they should, to keep up with the pace of technological and societal evolution. This will, however, require a reevaluation and possibly a restructuring of the most fundamental notions of modern criminal law, such as perpetration, guilt, and participation in crime. It remains unclear at this point what norms and legal concepts will and may be established. The main goal of the research is to point out the challenges ahead of national and international legislators in the said context and to attempt to formulate some indications as to the directions of changes, having in mind the serious threats to privacy and security related to the use of IoT.

Keywords: criminal law, internet of things, privacy, security threats

Procedia PDF Downloads 161