Search results for: optimization of operation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5648

698 Relation Between Traffic Mix and Traffic Accidents in a Mixed Industrial Urban Area

Authors: Michelle Eliane Hernández-García, Angélica Lozano

Abstract:

Studies of traffic accidents usually consider the relation between factors such as vehicle type, vehicle operation, and road infrastructure. Traffic accidents can be explained by different factors, which have greater or lesser relevance. Two zones are studied: a mixed industrial zone and its extended zone. The first zone has mainly residential (57%) and industrial (23%) land uses. Trucks travel mainly on the roads where industries are located. Four sensors give information about traffic and speed on the main roads. The extended zone (which includes the first zone) has mainly residential (47%) and mixed residential (43%) land use, and just 3% industrial use. Its traffic mix is composed mainly of non-trucks, and 39 traffic and speed sensors are located on its main roads. The traffic mix in a mixed land use zone could be related to traffic accidents. To understand this relation, it is required to identify the elements of the traffic mix which are linked to traffic accidents. Models that attempt to explain what factors are related to traffic accidents have faced multiple methodological problems in obtaining robust databases. Poisson regression models are used to explain the accidents. The objective of the Poisson analysis is to estimate a coefficient vector providing an estimate of the natural logarithm of the mean number of accidents per period; this estimate is obtained by standard maximum likelihood procedures. For the estimation of the relation between traffic accidents and the traffic mix, the database comprises eight variables, with 17,520 observations and six vectors. In the model, the dependent variable is the occurrence or non-occurrence of accidents, and the explanatory vectors correspond to the vehicle classes C1 (cars), C2 (microbuses and vans), C3 (buses), C4 (unitary trucks, 2 to 6 axles), C5 (articulated trucks, 3 to 6 axles), and C6 (bi-articulated trucks, 5 to 9 axles); in addition, there is a vector for the average speed of the traffic mix. A Poisson model is applied, using a logarithmic link function and a Poisson family. For the first zone, the Poisson model shows a positive relation between traffic accidents and C6, average speed, C3, C2, and C1 (in decreasing order). The analysis of the coefficients shows a strong relation with bi-articulated trucks and buses (C6 and C3), indicating an important participation of freight trucks. For the expanded zone, the Poisson model shows a positive relation between traffic accidents and average speed, bi-articulated trucks (C6), and microbuses and vans (C2). The coefficients obtained in both Poisson models show a stronger relation between freight trucks and traffic accidents in the first, industrial zone than in the expanded zone.
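
As an illustration of the modeling step described above (a sketch, not the authors' code), a Poisson accident-count regression of this kind can be fitted with Python's statsmodels; the column names and the synthetic data are hypothetical stand-ins for the six class-volume vectors and the speed vector.

```python
# Illustrative sketch only: a Poisson regression of accident counts on the
# traffic-mix vectors described in the abstract. Column names are assumed.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 17520  # number of observations, as in the abstract
df = pd.DataFrame({
    "c1": rng.poisson(200, n),          # cars
    "c2": rng.poisson(40, n),           # microbuses and vans
    "c3": rng.poisson(20, n),           # buses
    "c6": rng.poisson(5, n),            # bi-articulated trucks
    "avg_speed": rng.normal(45, 10, n), # km/h
})
# Synthetic accident counts for demonstration purposes only.
lam = np.exp(-6 + 0.02 * df["c6"] + 0.01 * df["avg_speed"])
df["accidents"] = rng.poisson(lam)

X = sm.add_constant(df[["c1", "c2", "c3", "c6", "avg_speed"]])
model = sm.GLM(df["accidents"], X, family=sm.families.Poisson()).fit()
print(model.summary())  # coefficient signs indicate the direction of each relation
```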

Keywords: freight transport, industrial zone, traffic accidents, traffic mix, trucks

Procedia PDF Downloads 115
697 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data

Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, a tremendous amount of real-time spatial data is generated every day. The growth of the data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period regardless of the load of the system. But with a huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data, as they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence by using the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the data amount of each partition within a balanced limit and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This leads to QoS (Quality of Service) improvement in real-time spatial Big Data, especially with a huge volume of stream data. The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
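
For intuition, here is a minimal sketch of one building block such matching heuristics rely on, assuming (this is not the paper's implementation) that attributes are compared by the Hamming distance between their query-usage vectors:

```python
# Sketch under stated assumptions: group attributes whose query-usage patterns
# are close in Hamming distance, a common ingredient of vertical-partitioning
# "matching" heuristics. Attribute names and the usage matrix are hypothetical.
import numpy as np

# Rows = queries, columns = attributes: 1 if the query reads the attribute.
usage = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 1, 1, 1],
])
attrs = ["id", "pos", "speed", "heading", "ts"]

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of queries that use one attribute but not the other."""
    return int(np.sum(a != b))

# Rank attribute pairs; the closest pairs are candidates for the same fragment.
cols = usage.T
pairs = sorted(
    (hamming(cols[i], cols[j]), attrs[i], attrs[j])
    for i in range(len(attrs)) for j in range(i + 1, len(attrs))
)
print(pairs[:3])
```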

Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query

Procedia PDF Downloads 140
696 Advanced Technology for Natural Gas Liquids (NGL) Recovery Using Residue Gas Split

Authors: Riddhiman Sherlekar, Umang Paladia, Rachit Desai, Yash Patel

Abstract:

The competitive scenario of the oil and gas market challenges today's plant designers to achieve designs that meet client expectations with shrinking budgets, safety requirements, and operating flexibility. Natural Gas Liquids have three main industrial uses: they can be used as fuels, as petrochemical feedstock, or as refinery blends that can be further processed and sold as straight-run cuts, such as naphtha, kerosene, and gas oil. NGL extraction is not a chemical reaction. It involves the separation of heavier hydrocarbons from the main gas stream through pressure and temperature reduction, which, depending upon the degree of NGL extraction, may involve a cryogenic process. Previous technologies, i.e., short-cycle dry desiccant absorption, Joule-Thomson or low-temperature refrigeration, and lean oil absorption, have achieved only 40 to 45% ethane recovery, which is unsatisfactory in the current downturn market. Here, a new technology is suggested for boosting recoveries up to 95% for ethane+ and up to 99% for propane+ components. Cryogenic plants provide reboiling to demethanizers by using part of the inlet feed gas, or inlet feed split. If the two stream temperatures are not similar, there is lost work in the mixing operation unless the designer has access to some proprietary design. The concept introduced in this process consists of reboiling the demethanizer with the residue gas, or residue gas split. The innovation of this process is that it does not use the typical inlet gas feed split type of flow arrangement to reboil the demethanizer or deethanizer column, but instead uses an open heat pump scheme to that effect. The residue gas compressor provides the heat pump effect. The heat pump stream is then further cooled and enters the top section of the column as a cold reflux. Because of the nature of this design, the process offers the opportunity to operate at full ethane rejection or recovery. The scheme is also very adaptable to revamping existing facilities. This advancement not only enhances recoveries but also provides operational flexibility, optimizes heat exchange, reduces equipment cost, and opens a future for innovative designs while keeping execution costs low.

Keywords: deethanizer, demethanizer, residue gas, NGL

Procedia PDF Downloads 246
695 Advanced Compound Coating for Delaying Corrosion of Fast-Dissolving Alloy in High Temperature and Corrosive Environment

Authors: Lei Zhao, Yi Song, Tim Dunne, Jiaxiang (Jason) Ren, Wenhan Yue, Lei Yang, Li Wen, Yu Liu

Abstract:

Fast-dissolving magnesium (DM) alloy technology has contributed significantly to the “Shale Revolution” in the oil and gas industry. This application requires DM downhole tools to dissolve initially at a slow rate and then accelerate rapidly to a high rate after a certain period of operation time (typically 8 h to 2 days), a contradictory requirement that can hardly be addressed by traditional Mg alloying or processing itself. Premature disintegration of downhole DM tools has been broadly reported from field trials. To address this issue, “temporary” thin polymer coatings of various formulations are currently applied onto the DM surface to delay its initial dissolution. Due to conveying parts, harsh downhole conditions, and the high dissolving rate of the base material, current delay coatings relying on pure polymers are found to perform well only at low temperature (typically < 100 ℃) and on parts without sharp edges or corners, as severe geometries prevent high-quality thin film coatings from forming effectively. In this study, a coating technology combining Plasma Electrolytic Oxide (PEO) coatings with advanced thin film deposition has been developed, which can delay the dissolution of complex DM parts (with sharp corners) in corrosive fluid at 150 ℃ for over 2 days. Synergistic effects between the hard, porous PEO coating and the chemically inert, elastic polymer sealing lead to the improved dissolution delay, and the strong chemical/physical bonding between these two layers has been found to play an essential role. The microstructure of this advanced coating and the compatibility between PEO and various polymer selections have been thoroughly investigated, and a model is proposed to explain the delaying performance. This study could not only benefit the oil and gas industry in unplugging High Temperature High Pressure (HTHP) unconventional resources inaccessible before, but also potentially provides a technical route for other industries (e.g., bio-medical, automobile, aerospace) where primer anti-corrosive protection on light Mg alloys is highly demanded.

Keywords: dissolvable magnesium, coating, plasma electrolytic oxide, sealer

Procedia PDF Downloads 89
694 Sustainable Development Approach for Coastal Erosion Problem in Thailand: Using Bamboo Sticks to Rehabilitate Coastal Erosion

Authors: Sutida Maneeanakekul, Dusit Wechakit, Somsak Piriyayota

Abstract:

Coastal erosion is a major problem in Thailand, on both the Gulf of Thailand and the Andaman Sea coasts. According to the Department of Marine and Coastal Resources, land erosion occurred along the 200 km coastline at an average rate of 5 meters/year. Coastal erosion affects public and government properties, as well as the socio-economy of the country, including emigration from coastal communities, loss of habitats, and decline in fishery production. To combat the problem, projects utilizing bamboo sticks for coastal defense against erosion were carried out by the Marine and Coastal Resources Department in five areas beginning in November 2010: Pak Klong Munharn, Samut Songkhram Province; Ban Khun Samutmaneerat, Pak Klong Pramong, and Chao Matchu Shrine, Samut Sakhon Province; and Pak Klong Hongthong, Chachoengsao Province. In 2012, an evaluation of the effectiveness of solving the coastal erosion problem with bamboo sticks was carried out, with a focus on three aspects. Firstly, the change in physical and biological features after using the bamboo stick technique was assessed. Secondly, the participation of community members in managing the coastal erosion problem was evaluated. The last aspect evaluated was community satisfaction with the technique. The results showed that the amount of sediment changed dramatically behind the bamboo stick lines, with an increase of about 23.50-56.20 centimeters during 2012-2013. In terms of biological features, mangrove forest areas increased, especially at Bang Ya Prak, Samut Sakhon Province, where the average tree density was found to be about 4,167 trees per square meter. Additionally, an increase in fishery production was observed. Presently, the evaluated physical features tend to improve in every aspect, as does the satisfaction of people in the community toward the process of solving the erosion problem. People in the community are involved in the preparation, operation, monitoring, and evaluation processes to resolve the problem at a medium level.

Keywords: bamboo sticks, coastal erosion, rehabilitate, Thailand, sustainable development approach

Procedia PDF Downloads 218
693 The Numerical Model of the Onset of Acoustic Oscillation in Pulse Tube Engine

Authors: Alexander I. Dovgyallo, Evgeniy A. Zinoviev, Svetlana O. Nekrasova

Abstract:

Most works on pulse tube converters describe the workflow by means of mathematical models of stationary modes. However, the unsteady behavior of thermoacoustic systems during start, stop, and acoustic load changes is of particular interest. The aim of the present study was to develop a mathematical model of the thermal excitation of acoustic oscillations in a pulse tube engine (PTE), realized as a small-scale pulse tube engine operating on atmospheric air. Unlike some previous works, this standing-wave configuration is a fully closed system. The improvements over previous mathematical models are the following: the model allows specifying any value of regenerator porosity, takes into account the piston weight and the friction in the cylinder and piston unit, and determines the operating frequency. The numerical method is based on relation equations between the pressure and volume velocity variables at the ends of each element of the PTE, recorded through the appropriate transformation matrix. The solution demonstrates that the PTE operating frequency is a complex value that depends on the piston mass and the dynamic friction due to its movement in the cylinder. On the basis of the determined frequency, the equations of thermoacoustically induced heat transport and acoustic power generation were solved for a channel with a temperature gradient at its ends. The results of the numerical simulation show the features of the oscillation initialization process and indicate that the generated acoustic power exceeds the steady-mode power by a factor of 3-4. This does not, however, imply the possibility of continuous utilization, since this power exists only in the transient mode, which lasts only 30-40 s. Experiments were carried out on a small-scale PTE. The results show that the acoustic power is in the range of 0.7-1.05 W for the frequency range f = 13-18 Hz and pressure amplitudes of 11-12 kPa. These experimental data correlate satisfactorily with the numerical modeling results. The mathematical model can be straightforwardly applied to thermoacoustic devices with variable temperatures of thermal reservoirs and variable transduction loads, which are expected to occur in practical implementations of portable thermoacoustic engines.
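
A minimal sketch of the transfer-matrix bookkeeping described above, assuming lossless uniform duct elements as placeholders for the real PTE elements (the dimensions and element types below are hypothetical):

```python
# Minimal sketch (assumptions: lossless ducts, ideal air) of the transfer-matrix
# method: each element maps the (pressure, volume velocity) state at one end to
# the state at the other end, and elements cascade by matrix multiplication.
import numpy as np

RHO, C = 1.2, 343.0  # air density (kg/m^3) and sound speed (m/s), assumed

def duct_matrix(length: float, area: float, freq: float) -> np.ndarray:
    """2x2 acoustic two-port of a uniform lossless duct segment."""
    k = 2 * np.pi * freq / C          # wavenumber
    z = RHO * C / area                # characteristic acoustic impedance
    return np.array([
        [np.cos(k * length), 1j * z * np.sin(k * length)],
        [1j * np.sin(k * length) / z, np.cos(k * length)],
    ])

# Cascade hypothetical elements at 15 Hz (within the abstract's 13-18 Hz range).
elements = [duct_matrix(L, A, freq=15.0) for L, A in [(0.10, 1e-4), (0.05, 5e-5)]]
total = np.linalg.multi_dot(elements) if len(elements) > 1 else elements[0]
p_in, U_in = 1000.0, 0.0              # state at one end: 1 kPa, zero velocity
p_out, U_out = total @ np.array([p_in, U_in])
print(abs(p_out), abs(U_out))
```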

Keywords: nonlinear processes, pulse tube engine, thermal excitation, standing wave

Procedia PDF Downloads 355
692 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture

Authors: Charbel Aoun, Loic Lagadec

Abstract:

A Sensor Network (SN) can be considered as operating in two phases: (1) observation/measuring, which means the accumulation of the gathered data at each sensor node; (2) transferring the collected data to some processing center (e.g., fusion servers) within the SN. Therefore, an underwater sensor network can be defined as a sensor network deployed underwater that monitors underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between the aforementioned components perfectly defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena, and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion servers, etc.). The logical and physical components that are used in these observatories perform some critical functions, such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g., military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose new constraints to be taken into consideration at design time. We illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, the perspectives of stakeholders, and domain specificity. On the other hand, it helps reduce both the complexity and the time spent in the design activity, while preventing design modeling errors when porting this activity to the MO domain. In conclusion, this work aims to demonstrate that the design activity of complex systems can be improved by the use of MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is to provide an early validation step via a models-and-simulation approach to consolidate the system design.

Keywords: smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS

Procedia PDF Downloads 153
691 Chemical Synthesis, Characterization and Dose Optimization of Chitosan-Based Nanoparticles of MCPA for Management of Broad-Leaved Weeds (Chenopodium album, Lathyrus aphaca, Anagallis arvensis and Melilotus indica) of Wheat

Authors: Muhammad Ather Nadeem, Bilal Ahmad Khan, Tasawer Abbas

Abstract:

Nanoherbicides utilize nanotechnology to enhance the delivery of biological or chemical herbicides using combinations of nanomaterials. The aim of this research was to examine the efficacy of chitosan nanoparticles containing the herbicide MCPA as a potential eco-friendly alternative for weed control in wheat crops. Scanning electron microscopy (SEM), X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), and ultraviolet absorbance were used to analyze the developed nanoparticles. The SEM analysis indicated that the average particle size was 35 nm, with the particles forming clusters with a porous structure. Both fluroxypyr + MCPA nanoparticle formulations exhibited maximal absorption peaks at a wavelength of 320 nm. The compound fluroxypyr + MCPA has a strong peak at a 2θ value of 30.55°, which corresponds to the 78 plane of the anatase phase. The weeds, including Chenopodium album, Lathyrus aphaca, Anagallis arvensis, and Melilotus indica, were sprayed with the nanoparticles at the third- or fourth-leaf stage. Seven distinct dosages were used: D0 (weedy check), D1 (recommended dose of the traditional herbicide), D2 (recommended dose of the nano-herbicide (NPs-H)), D3 (NPs-H at a 5-fold lower dose), D4 (NPs-H at a 10-fold lower dose), D5 (NPs-H at a 15-fold lower dose), and D6 (NPs-H at a 20-fold lower dose). The chitosan-based nanoparticles of MCPA at the recommended dose of the conventional herbicide resulted in complete death and visual damage, with a 100% mortality rate. The 5-fold lower dose produced the lowest plant height (3.95 cm), chlorophyll content (5.63%), dry biomass (0.10 g), and fresh biomass (0.33 g) in the broad-leaved weeds of wheat. The herbicide nanoparticles, when used at a dose 10-fold lower than that of the conventional herbicide, had an impact comparable to the recommended dose. Nano-herbicides thus have the potential to improve the efficiency of standard herbicides by increasing stability and lowering toxicity.
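
As a hedged aside, the reported XRD peak position can be converted to a lattice d-spacing with Bragg's law; the Cu Kα wavelength used below is an assumption, since the abstract does not state the X-ray source:

```python
# Illustrative only: converting the reported XRD peak (2-theta = 30.55 deg) to a
# lattice d-spacing with Bragg's law, n*lambda = 2*d*sin(theta).
import math

two_theta_deg = 30.55       # peak position reported in the abstract
wavelength_nm = 0.15406     # Cu K-alpha wavelength, assumed
theta = math.radians(two_theta_deg / 2)
d_spacing = wavelength_nm / (2 * math.sin(theta))  # first-order reflection, n = 1
print(f"d = {d_spacing:.4f} nm")   # ~0.292 nm for this peak
```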

Keywords: mortality, visual injury, chlorophyll contents, chitosan-based nanoparticles

Procedia PDF Downloads 45
690 Impact of Intelligent Transportation System on Planning, Operation and Safety of Urban Corridor

Authors: Sourabh Jain, S. S. Jain

Abstract:

Intelligent transportation systems (ITS) apply technologies to develop user-friendly transportation and to extend the safety and efficiency of urban transportation systems in developing countries. These systems involve vehicles, drivers, passengers, road operators, and managers of transport services, all interacting with each other and the surroundings to boost the security and capacity of road systems. The goal of urban corridor management using ITS in road transport is to achieve improvements in mobility, safety, and the productivity of the transportation system within the available facilities, through the integrated application of advanced monitoring, communications, computer, display, and control process technologies, both in the vehicle and on the road. Intelligent transportation systems are a product of the revolution in information and communications technologies that is the hallmark of the digital age. The basic ITS technology is oriented along three main directions: communications, information, and integration. Information acquisition (collection), processing, integration, and sorting are the basic activities of ITS. In this paper, an attempt is made to interpret and evaluate the performance of a 27.4 km long study corridor having eight intersections and four flyovers, consisting of six-lane as well as eight-lane divided road sections. Two categories of data were collected: traffic data (traffic volume, spot speed, delay) and road characteristics data (number of lanes, lane width, bus stops, mid-block sections, intersections, flyovers). The instruments used for collecting the data were a video camera, a stopwatch, a radar gun, and mobile GPS (GPS Tracker Lite). From the analysis, the performance interpretations included the identification of peak and off-peak hours, congestion and level of service (LOS) at mid-block sections, and delay, followed by plotting of the speed contours. The paper proposes urban corridor management strategies based on sensors integrated into both vehicles and roads; these strategies have to be efficiently executable, cost-effective, and familiar to road users. They will be useful to reduce congestion, fuel consumption, and pollution so as to provide comfort, safety, and efficiency to the users.
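
A sketch of the LOS classification step, with illustrative HCM-style volume/capacity breakpoints assumed (the paper does not publish its cutoffs):

```python
# Sketch with assumed thresholds: classifying a mid-block section's level of
# service from its volume/capacity ratio.
def level_of_service(volume_pcu_per_h: float, capacity_pcu_per_h: float) -> str:
    """Map a v/c ratio to LOS A-F using illustrative breakpoints."""
    vc = volume_pcu_per_h / capacity_pcu_per_h
    for threshold, grade in [(0.30, "A"), (0.50, "B"), (0.70, "C"),
                             (0.85, "D"), (1.00, "E")]:
        if vc <= threshold:
            return grade
    return "F"  # oversaturated flow

print(level_of_service(2700, 3600))  # v/c = 0.75 -> "D"
```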

Keywords: ITS strategies, congestion, planning, mobility, safety

Procedia PDF Downloads 161
689 Evaluating the Feasibility of Chemical Dermal Exposure Assessment Model

Authors: P. S. Hsi, Y. F. Wang, Y. F. Ho, P. C. Hung

Abstract:

The aim of the present study was to explore dermal exposure assessment models for chemicals that have been developed abroad and to evaluate the feasibility of such models for the manufacturing industry in Taiwan. We analyzed six semi-quantitative risk management tools: the UK Control of Substances Hazardous to Health (COSHH), the European Risk Assessment of Occupational Dermal Exposure (RISKOFDERM), the Dutch Dose-Related Effect Assessment Model (DREAM), the Dutch Stoffenmanager (STOFFEN), the Nicaraguan Dermal Exposure Ranking Method (DERM), and the USA/Canada Public Health Engineering Department (PHED) tool. Five types of manufacturing industry were selected for evaluation. Monte Carlo simulation was used to analyze the sensitivity of each factor, and the correlation between the assessment results of each semi-quantitative model and the exposure factors used in the model was analyzed to identify the important evaluation indicators of dermal exposure assessment. To assess the effectiveness of the semi-quantitative assessment models, this study also produced quantitative dermal exposure estimates using a prediction model and verified the correlation via Pearson's test. Results show that COSHH was unable to determine the strength of its decision factors because the results for all industries fell into the same risk level. In the DERM model, the transmission process, the exposed area, and the clothing protection factor are all positively correlated. In the STOFFEN model, the fugitive, operational, and near-field concentrations, the far-field concentration, and the operating time and frequency show a positive correlation. There is a positive correlation between skin exposure, relative working time, and working environment in the DREAM model. In the RISKOFDERM model, the actual exposure situation and exposure time have a positive correlation. We also found high correlations for the DERM and RISKOFDERM models, with correlation coefficients of 0.92 and 0.93 (p<0.05), respectively. The STOFFEN and DREAM models have poor correlation, with coefficients of 0.24 and 0.29 (p>0.05), respectively. According to these results, both the DERM and RISKOFDERM models are suitable for use in the selected manufacturing industries. However, considering the small sample size evaluated in this study, more categories of industries should be evaluated in the future to reduce uncertainty and enhance applicability.
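
A hedged sketch of the validation machinery described above, using synthetic numbers rather than the study's data: Monte Carlo sampling of model inputs and a Pearson test between semi-quantitative scores and quantitative estimates.

```python
# Hedged sketch (synthetic inputs, toy scoring rule): Monte Carlo sampling of
# model inputs plus a Pearson test, mirroring the study's validation step.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 1000
exposure_time = rng.uniform(0.5, 8.0, n)    # h, assumed input range
protection = rng.uniform(0.1, 1.0, n)       # clothing protection factor, assumed

semi_quant_score = exposure_time * protection                    # toy scoring rule
quant_estimate = 1.5 * semi_quant_score + rng.normal(0, 0.5, n)  # toy prediction

r, p = pearsonr(semi_quant_score, quant_estimate)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")

# Crude sensitivity: correlation of each input with the score.
for name, x in [("exposure_time", exposure_time), ("protection", protection)]:
    print(name, f"r = {pearsonr(x, semi_quant_score)[0]:.2f}")
```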

Keywords: dermal exposure, risk management, quantitative estimation, feasibility evaluation

Procedia PDF Downloads 148
688 Anaerobic Co-digestion of the Halophyte Salicornia Ramosissima and Pig Manure in Lab-Scale Batch and Semi-continuous Stirred Tank Reactors: Biomethane Production and Reactor Performance

Authors: Aadila Cayenne, Hinrich Uellendahl

Abstract:

Optimization of the anaerobic digestion (AD) process of halophytic plants is essential, as the biomass contains a high salt content that can inhibit the AD process. Anaerobic co-digestion together with manure can resolve the inhibitory effects of saline biomass by diluting the salt concentration and establishing favorable conditions for the microbial consortia of the AD process. The present laboratory study investigated the co-digestion of S. ramosissima (Sram) and pig manure (PM) in batch and semi-continuous stirred tank reactors (CSTR) under mesophilic (38 °C) conditions. The 0.5 L batch reactor experiments covered mono- and co-digestion of Sram:PM at different percent volatile solid (VS) based ratios (0:100, 15:85, 25:75, 35:65, 50:50, 100:0) with an inoculum-to-substrate (I/S) ratio of 2. Two 5 L CSTR systems (R1 and R2) were operated for 133 days, with a feed of PM in a control reactor (R1) and a co-digestion feed with an increasing Sram VS ratio of Sram:PM of 15:85, 25:75, 35:65 in reactor R2, at an organic loading rate (OLR) of 2 gVS/L/d and a hydraulic retention time (HRT) of 20 days. After a start-up phase of 8 weeks for both reactors R1 and R2 with PM feed alone, the halophyte biomass Sram was added to the feed of R2 in an increasing ratio of 15-35 %VS Sram over an 11-week period. The process performance was monitored via pH, total solids (TS), VS, total nitrogen (TN), ammonium-nitrogen (NH4-N), volatile fatty acids (VFA), and biomethane production. In the batch experiments, biomethane yields of 423, 418, 392, 365, 315, and 214 mL-CH4/gVS were achieved for mixtures of 0:100, 15:85, 25:75, 35:65, 50:50, and 100:0 %VS Sram:PM, respectively. In the semi-continuous reactor processes, the average biomethane yields were 235, 387, and 365 mL-CH4/gVS for the phases with co-digestion feed ratios in R2 of 15:85, 25:75, and 35:65 %VS Sram:PM, respectively. The methane yield of PM alone in R1 averaged 260, 388, and 446 mL-CH4/gVS in the corresponding phases. Accordingly, in the continuous AD process, the methane yield of the halophyte Sram was highest, at 386 mL-CH4/gVS, at the co-digestion ratio of 25:75 %VS Sram:PM, and significantly lower at 15:85 %VS Sram:PM (100 mL-CH4/gVS) and at 35:65 %VS Sram:PM (214 mL-CH4/gVS). The co-digestion process showed no signs of inhibition at 2-4 g/L NH4-N, 3.5-4.5 g/L TN, and total VFA of 0.45-2.6 g/L (based on acetic, propionic, butyric, and valeric acids). This study demonstrates that a stable co-digestion process of S. ramosissima and pig manure can be achieved with a feed of 25 %VS Sram at an HRT of 20 d and an OLR of 2 gVS/L/d.
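
A back-of-the-envelope check of the reactor feed implied by the stated operating point (5 L CSTR, OLR of 2 gVS/L/d, HRT of 20 d); no values beyond those in the abstract are assumed:

```python
# Feed rates implied by the stated operating point of the 5 L CSTR.
reactor_volume_l = 5.0
olr_gvs_per_l_d = 2.0
hrt_d = 20.0

daily_vs_load_g = olr_gvs_per_l_d * reactor_volume_l          # 10 gVS fed per day
daily_feed_volume_l = reactor_volume_l / hrt_d                # 0.25 L/d exchanged
feed_vs_conc_g_per_l = daily_vs_load_g / daily_feed_volume_l  # 40 gVS/L in the feed

# At the best co-digestion ratio (25:75 %VS Sram:PM) the halophyte share is:
sram_vs_g = 0.25 * daily_vs_load_g                            # 2.5 gVS/d of Sram
print(daily_vs_load_g, daily_feed_volume_l, feed_vs_conc_g_per_l, sram_vs_g)
```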

Keywords: anaerobic co-digestion, biomethane production, halophytes, pig manure, Salicornia ramosissima

Procedia PDF Downloads 122
687 Impact of CYP3A5 Polymorphism on Tacrolimus to Predict the Optimal Initial Dose Requirements in South Indian Renal Transplant Recipients

Authors: S. Sreeja, Radhakrishnan R. Nair, Noble Gracious, Sreeja S. Nair, M. Radhakrishna Pillai

Abstract:

Background: Tacrolimus is a potent immunosuppressant clinically used for the long-term prevention of rejection of transplanted organs in liver and kidney transplant recipients, although dose optimization is poorly managed. So far, no study has been carried out on South Indian kidney transplant patients. The objective of this study was to evaluate the potential influence of a functional polymorphism in the CYP3A5*3 gene on the tacrolimus physiological availability/dose ratio in South Indian renal transplant patients. Materials and Methods: Twenty-five renal transplant recipients receiving tacrolimus were enrolled in this study, and their body weight, drug dosage, and therapeutic concentration of tacrolimus were recorded. All patients were on a standard immunosuppressive regimen of tacrolimus-mycophenolate mofetil along with steroids, on a starting tacrolimus dose of 0.1 mg/kg/day. CYP3A5 genotyping was performed by PCR followed by RFLP. Confirmation of the RFLP analysis and variation in the nucleotide sequence of the CYP3A5*3 gene were determined by direct sequencing using a validated automated genetic analyzer. Results: A significant association was found between the tacrolimus level per dose/kg/day and the CYP3A5 (A6986G) polymorphism in the study population. The CYP3A5 *1/*1, *1/*3, and *3/*3 genotypes were detected in 5 (20%), 5 (20%), and 15 (60%) of the 25 graft recipients, respectively. CYP3A5*3 genotypes were found to be a good predictor of the tacrolimus concentration/dose ratio in kidney transplant recipients. A significantly higher level/dose (L/D) ratio was observed among non-expressors, 9.483 ng/mL (4.5-14.1), compared with expressors, 5.154 ng/mL (4.42-6.5), of CYP3A5. Acute rejection episodes were significantly more frequent for CYP3A5*1 homozygotes than for patients with CYP3A5*1/*3 and CYP3A5*3/*3 genotypes (40% versus 20% and 13%, respectively). The dose-normalized tacrolimus concentration (ng/ml/mg/kg) was significantly lower in patients with the CYP3A5*1/*3 polymorphism. Conclusion: This is the first study to extensively determine the effect of the CYP3A5*3 genetic polymorphism on tacrolimus pharmacokinetics in South Indian renal transplant recipients; it also shows that the majority of our patients carry the mutant A6986G allele in the CYP3A5*3 gene. Identification of the CYP3A5 polymorphism prior to transplantation could help to evaluate the appropriate initial dosage of tacrolimus for each patient.
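
For illustration only (the example values are invented, not patient data), the dose-normalized concentration compared across genotype groups can be computed as follows:

```python
# Hedged sketch: the dose-normalized tacrolimus concentration (C/D or L/D ratio)
# that the study compares across CYP3A5 genotype groups.
def cd_ratio(trough_ng_ml: float, daily_dose_mg: float, weight_kg: float) -> float:
    """Trough concentration divided by the weight-normalized daily dose."""
    return trough_ng_ml / (daily_dose_mg / weight_kg)

# Hypothetical example: 60 kg patient on 0.1 mg/kg/day (6 mg/day), 7 ng/mL trough.
print(cd_ratio(trough_ng_ml=7.0, daily_dose_mg=6.0, weight_kg=60.0))
# -> 70.0 ng/mL per (mg/kg/day); lower values are expected for CYP3A5 expressors
```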

Keywords: kidney transplant patients, CYP3A5 genotype, tacrolimus, RFLP

Procedia PDF Downloads 279
686 Advancing Circular Economy Principles: Integrating AI Technology in Street Sanitation for Sustainable Urban Development

Authors: Xukai Fu

Abstract:

The concept of circular economy is interdisciplinary, intersecting environmental engineering, information technology, business, and social science domains. Over the course of its 15-year tenure in the sanitation industry, Jinkai has concentrated its efforts in the past five years on integrating artificial intelligence (AI) technology with street sanitation apparatus and systems. This endeavor has led to the development of various innovations, including the Intelligent Identification Sweeper Truck (Intelligent Waste Recognition and Energy-saving Control System), the Intelligent Identification Water Truck (Intelligent Flushing Control System), the intelligent food waste treatment machine, and the Intelligent City Road Sanitation Surveillance Platform. This study will commence with an examination of prevalent global challenges, elucidating how Jinkai effectively addresses each within the framework of circular economy principles. Utilizing a review and analysis of pertinent environmental management data, we will elucidate Jinkai's strategic approach. Following this, we will investigate how Jinkai utilizes the advantages of circular economy principles to guide the design of street sanitation machinery, with a focus on digitalization integration. Moreover, we will scrutinize Jinkai's sustainable practices throughout the invention and operation phases of street sanitation machinery, aligning with the triple bottom line theory. Finally, we will delve into the significance and enduring impact of corporate social responsibility (CSR) and environmental, social, and governance (ESG) initiatives. Special emphasis will be placed on Jinkai's contributions to community stakeholders, particularly with regard to human rights. Despite the widespread adoption of circular economy principles across various industries, achieving a harmonious equilibrium between environmental justice and social justice remains a formidable task. Jinkai acknowledges that the mere development of energy-saving technologies is insufficient for authentic circular economy implementation; rather, such technologies serve as instrumental tools. To earnestly promote and embody circular economy principles, companies must consistently prioritize the UN Sustainable Development Goals and adapt their technologies to address the evolving exigencies of our world.

Keywords: circular economy, core principles, benefits, the triple bottom line, CSR, ESG, social justice, human rights, Jinkai

Procedia PDF Downloads 17
685 Influence of Various Disaster Scenarios Assumption to the Advance Creation of Wide-Area Evacuation Plan Confronting Natural Disasters

Authors: Nemat Mohammadi, Yuki Nakayama

Abstract:

The Great East Japan earthquake and the invasion of the extremely large tsunami that followed it obligated many local governments to take these kinds of issues into account seriously. The poor preparation of local governments to deal with such disasters at that time, and the consequent lack of assistance delivery to local residents, caused thousands of civilian casualties as well as billions of dollars of economic damage. Local governments responsible for governing such coastal areas have to consider countermeasures to deal with these natural disasters, prepare a comprehensive evacuation plan, and contrive feasible emergency plans for the purpose of reducing the number of victims as much as possible. Under such an evacuation plan, the local government should give careful thought to traffic congestion during the wide-area evacuation operation and estimate the minimum essential time to evacuate the whole city completely. This challenge becomes more complicated when those affected by disasters are not limited to normal, informed citizens, but also include pregnant women, physically handicapped persons, elderly citizens, and foreigners or tourists who are familiar with neither the conditions nor the local language. The first issue in dealing with this challenge is how to inform these people so that they take proper action right away upon noticing that a tsunami is coming. After overcoming this problem, the next challenge is even more considerable: evacuating all residents in a short period of time from the threatened area to safer shelters. In fact, most citizens will use their own vehicles to evacuate to the designated shelters, and some will use the shuttle buses provided by local governments. The problem arises when all residents try to escape from the threatened area simultaneously, creating traffic jams on evacuation routes that prolong the evacuation time. Hence, this research mostly aims to calculate the minimum essential time to evacuate each region inside the threatened area and to find the evacuation start point for each region separately. The results will help local governments to visualize the situations and conditions during disasters, assist them in reducing possible traffic jams on evacuation routes, and consequently support a comprehensive wide-area evacuation plan for natural disasters.
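
Since the keywords of this entry cite the BPR formula, here is a sketch of that link travel-time function with the customary default parameters assumed (the paper's calibrated values are not given):

```python
# Sketch of the BPR (Bureau of Public Roads) link travel-time function; the
# alpha/beta values below are the conventional defaults, assumed here.
def bpr_travel_time(t0_min: float, volume: float, capacity: float,
                    alpha: float = 0.15, beta: float = 4.0) -> float:
    """Congested travel time: t = t0 * (1 + alpha * (v/c)^beta)."""
    return t0_min * (1 + alpha * (volume / capacity) ** beta)

# During a simultaneous evacuation the v/c ratio spikes and travel time balloons:
print(bpr_travel_time(10.0, volume=900, capacity=1000))   # ~11.0 min
print(bpr_travel_time(10.0, volume=2000, capacity=1000))  # 34.0 min
```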

Keywords: BPR formula, disaster scenarios, evacuation completion time, wide-area evacuation

Procedia PDF Downloads 193
684 Computational Pipeline for Lynch Syndrome Detection: Integrating Alignment, Variant Calling, and Annotations

Authors: Rofida Gamal, Mostafa Mohammed, Mariam Adel, Marwa Gamal, Marwa Kamal, Ayat Saber, Maha Mamdouh, Amira Emad, Mai Ramadan

Abstract:

Lynch Syndrome is an inherited genetic condition associated with an increased risk of colorectal and other cancers. Detecting Lynch Syndrome in individuals is crucial for early intervention and preventive measures. This study proposes a computational pipeline for Lynch Syndrome detection that integrates alignment, variant calling, and annotation. The pipeline leverages popular tools such as FastQC, Trimmomatic, BWA, bcftools, and ANNOVAR to process the input FASTQ file, perform quality trimming, align reads to the reference genome, call variants, and annotate them. The pipeline was applied to a dataset of Lynch Syndrome cases, and its performance was evaluated. The quality check step ensured the integrity of the sequencing data, while the trimming step removed low-quality bases and adapters. In the alignment step, the reads were mapped to the reference genome, and the subsequent variant calling step identified potential genetic variants. The annotation step provided functional insights into the detected variants, including their effects on known Lynch Syndrome-associated genes. The results obtained from the pipeline revealed Lynch Syndrome-related positions in the genome, providing valuable information for further investigation and clinical decision-making. The pipeline's effectiveness was demonstrated through its ability to streamline the analysis workflow and identify potential genetic markers associated with Lynch Syndrome. The computational pipeline thus presents a comprehensive and efficient approach to Lynch Syndrome detection, contributing to early diagnosis and intervention. The modularity and flexibility of the pipeline enable customization and adaptation to various datasets and research settings. Further optimization and validation are necessary to enhance performance and applicability across diverse populations.
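
A hedged outline of such a workflow driven from Python: the file names are hypothetical, a samtools sort/index step is assumed between alignment and variant calling, and exact tool flags vary by version, so treat each command as a sketch rather than the authors' recipe.

```python
# Hedged sketch of the described FastQC -> Trimmomatic -> BWA -> bcftools ->
# ANNOVAR workflow. File names are invented; flags depend on tool versions.
import subprocess

def run(cmd: str) -> None:
    print(f"+ {cmd}")
    subprocess.run(cmd, shell=True, check=True)

run("fastqc sample_R1.fastq sample_R2.fastq")                      # quality check
run("trimmomatic PE sample_R1.fastq sample_R2.fastq "
    "trim_R1.fastq unpaired_R1.fastq trim_R2.fastq unpaired_R2.fastq "
    "SLIDINGWINDOW:4:20 MINLEN:36")                                # quality trimming
run("bwa mem ref.fa trim_R1.fastq trim_R2.fastq > aligned.sam")    # alignment
run("samtools sort aligned.sam -o aligned.bam && samtools index aligned.bam")
run("bcftools mpileup -f ref.fa aligned.bam | "
    "bcftools call -mv -Ov -o variants.vcf")                       # variant calling
run("table_annovar.pl variants.vcf humandb/ -buildver hg38 "
    "-out annotated -vcfinput -protocol refGene -operation g")     # annotation
```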

Keywords: Lynch Syndrome, computational pipeline, alignment, variant calling, annotation, genetic markers

Procedia PDF Downloads 51
683 FEM and Experimental Modal Analysis of Computer Mount

Authors: Vishwajit Ghatge, David Looper

Abstract:

Over the last few decades, oilfield service rolling equipment has significantly increased in weight, primarily because of emissions regulations, which require larger/heavier engines, larger cooling systems, and, in some cases, emissions after-treatment systems. Larger engines cause more vibration and shock loads, leading to failure of electronics and control systems. If the vibrating frequency of the engine matches the system frequency, high resonance is observed in structural parts and mounts. One such existing automated control equipment system, comprising wire rope mounts used for mounting a computer, was designed approximately 12 years ago. It includes an industrial-grade computer to control the system operation. The original computer had a smaller, lighter enclosure. After a few years, a newer computer version was introduced, which was 10 lbm heavier. Some failures of internal computer parts have been documented for cases in which the old mounts were used. Because of the added weight, there is a possibility of the two brackets impacting each other under off-road conditions, which causes a high shock input to the computer parts. This added failure mode requires validating the existing mount design to suit the new, heavier computer. This paper discusses the modal finite element method (FEM) analysis and the experimental modal analysis conducted to study the effects of vibration on the wire rope mounts and the computer. The existing mount was modeled in ANSYS software, and the resulting mode shapes and frequencies were obtained. The experimental modal analysis was conducted, and the actual frequency responses were observed and recorded. Results clearly revealed that at the resonance frequency, the brackets were colliding and potentially causing damage to computer parts. To solve this issue, spring mounts of different stiffnesses were modeled in ANSYS software, and the resonant frequency was determined. Increasing the stiffness of the system moved the resonant frequency away from the frequency window in which the engine showed heavy vibration or resonance. After multiple iterations in ANSYS software, the stiffness of the spring mount was finalized and again experimentally validated.
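
An illustrative single-degree-of-freedom estimate behind such mount retuning, f = (1/2π)√(k/m), with masses and stiffnesses assumed rather than taken from the paper:

```python
# Illustrative sketch (values assumed, not from the paper): the natural frequency
# of a mass on a spring mount, showing how stiffer mounts shift the resonance.
import math

def natural_freq_hz(stiffness_n_per_m: float, mass_kg: float) -> float:
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

computer_mass = 20.0  # kg, hypothetical (original enclosure plus the added 10 lbm)
for k in (2.0e4, 8.0e4, 3.2e5):  # candidate spring-mount stiffnesses, N/m
    print(f"k = {k:.1e} N/m -> f = {natural_freq_hz(k, computer_mass):.1f} Hz")
# Stiffer mounts push the resonance upward, away from the engine's excitation band.
```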

Keywords: experimental modal analysis, FEM modal analysis, frequency, modal analysis, resonance, vibration

Procedia PDF Downloads 304
682 Treatment of Municipal Wastewater by Means of UV-Assisted Irradiation Technologies: Fouling Studies and Optimization of Operational Parameters

Authors: Tooba Aslam, Efthalia Chatzisymeon

Abstract:

UV-assisted irradiation technologies are well established for water and wastewater treatment. UVC treatments are widely used at large scale, while UVA irradiation has more often been applied in combination with a catalyst (e.g., TiO₂ or FeSO₄) in smaller-scale systems. A technical issue of these systems is the formation of fouling on the quartz sleeves that house the lamps. This fouling can prevent complete irradiation, thereby reducing the efficiency of the process. This paper investigates the effects of operational parameters, such as the type of wastewater, the irradiation source, H₂O₂ addition, and water pH, on fouling formation and, ultimately, on the treatment of municipal wastewater. Batch experiments were performed at lab scale while monitoring water quality parameters including chemical oxygen demand (COD), TS, TSS, TDS, temperature, pH, hardness, alkalinity, turbidity, TOC, UV transmission, UV₂₅₄ absorbance, and metal concentrations. The residence time of the wastewater in the reactor was 5 days, in order to observe any fouling formation on the quartz surface. Over this period, it was observed that COD decreased by 30% and 59% during photolysis (UVA) and photocatalysis (UVA/Fe/H₂O₂), respectively. Higher fouling formation was observed with iron-rich and phosphorus-rich wastewater; the highest rate of fouling developed with the phosphorus-rich wastewater, followed by the iron-rich wastewater. Photocatalysis (UVA/Fe/H₂O₂) had a better removal efficiency than photolysis (UVA). This was attributed to the photo-Fenton reaction, which was initiated under these operational conditions. Scanning electron microscope (SEM) measurements of the fouling formed on the quartz sleeves showed that the particles vary in size, shape, and structure; some have more distinct structures and are generally larger and less compact than the others. Energy-dispersive X-ray spectroscopy (EDX) results showed that the major elements present in the fouling cake were iron, phosphorus, and calcium. In conclusion, iron-rich wastewaters are more suitable for UV-assisted treatment, since fouling formation on quartz sleeves can be minimized by the formation of oxidizing agents during treatment, such as hydroxyl radicals.

Keywords: advanced oxidation processes, photo-fenton treatment, photo-catalysis, wastewater treatment

Procedia PDF Downloads 56
681 Detection of Abnormal Process Behavior in Copper Solvent Extraction by Principal Component Analysis

Authors: Kirill Filianin, Satu-Pia Reinikainen, Tuomo Sainio

Abstract:

Frequent measurements of product stream quality create a data overload that becomes more and more difficult to handle. In the current study, plant history data with multiple variables were successfully treated by principal component analysis to detect abnormal process behavior, particularly in copper solvent extraction. The multivariate model is based on the concentration levels of the main process metals recorded by the industrial on-stream X-ray fluorescence analyzer. After mean-centering and normalization of the concentration data set, a two-dimensional multivariate model was constructed under the principal component analysis algorithm. Normal operating conditions were defined through control limits assigned to squared score values on the x-axis and to residual values on the y-axis. Eighty percent of the data set was taken as the training set, and the multivariate model was tested with the remaining 20 percent. Model testing showed successful application of the control limits to detect abnormal behavior of the copper solvent extraction process in the form of early warnings. Compared to the conventional technique of analyzing one variable at a time, the proposed model allows a process failure to be detected on-line using information from all process variables simultaneously. Complex industrial equipment combined with advanced mathematical tools may be used for on-line monitoring of both the composition of process streams and final product quality. Defining the normal operating conditions of the process supports reliable decision making in a process control room. Thus, industrial X-ray fluorescence analyzers equipped with an integrated data processing toolbox allow more flexibility in copper plant operation. The additional multivariate process control and monitoring procedures are recommended to be applied separately for the major components and for the impurities. Principal component analysis may be utilized not only for control of the major elements' content in process streams, but also for continuous monitoring of the plant feed. The proposed approach has potential in on-line instrumentation, providing a fast, robust, and cheap application with automation abilities.
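
A sketch in the spirit of the described monitoring scheme (synthetic data; empirical percentile limits assumed in place of the study's control limits), flagging samples whose squared scores or residuals exceed limits learned from the training set:

```python
# Hedged sketch: PCA-based process monitoring with limits on squared scores
# (x-axis, T^2-like) and reconstruction residuals (y-axis, Q-like).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
train = rng.normal(size=(800, 5))   # stand-in for mean-centered, scaled
test = rng.normal(size=(200, 5))    # concentration measurements
test[:10] += 4.0                    # inject abnormal behavior

pca = PCA(n_components=2).fit(train)

def scores_and_residuals(X):
    t = pca.transform(X)
    sq_score = np.sum((t / np.sqrt(pca.explained_variance_)) ** 2, axis=1)
    recon = pca.inverse_transform(t)
    q_resid = np.sum((X - recon) ** 2, axis=1)
    return sq_score, q_resid

t2_tr, q_tr = scores_and_residuals(train)
t2_lim, q_lim = np.percentile(t2_tr, 99), np.percentile(q_tr, 99)  # assumed limits

t2, q = scores_and_residuals(test)
alarms = np.where((t2 > t2_lim) | (q > q_lim))[0]
print("flagged samples:", alarms[:15])
```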

Keywords: abnormal process behavior, failure detection, principal component analysis, solvent extraction

Procedia PDF Downloads 288
680 KPI and Tool for the Evaluation of Competency in Warehouse Management for Furniture Business

Authors: Kritchakhris Na-Wattanaprasert

Abstract:

The objective of this research is to design and develop a prototype key performance indicator (KPI) system suitable for warehouse management, based on a case study and user requirements. The prototype was developed for a warehouse case study in the furniture business, following these methodological steps: identifying the scope of the research and studying related papers; gathering the necessary data and user requirements; developing key performance indicators based on the Balanced Scorecard; designing the program and database for the key performance indicators; coding the program and setting up the database relationships; and finally testing and debugging each module. This study uses the Balanced Scorecard (BSC) for selecting and grouping key performance indicators. Microsoft SQL Server 2010 is used to create the system database. In regard to the visual programming language, Microsoft Visual C# 2010 was chosen as the graphical user interface development tool. The system consists of six main menus: login, main data, financial perspective, customer perspective, internal process perspective, and learning and growth perspective. Each menu consists of key performance indicator forms, and each form contains a data import section, a data input section, a data search-and-edit section, and a report section. The system generates outputs in five main reports: the KPI detail report, the KPI summary report, the KPI graph report, the benchmarking summary report, and the benchmarking graph report. The user selects the condition of the report and the time period. As the system was developed and tested, it was found to be one way of judging the extent to which warehouse objectives have been achieved; moreover, it encourages warehouse functions to proceed more efficiently. In order to be useful for other industries, the system can be adjusted appropriately. To increase the usefulness of the key performance indicator system, the recommendations for further development are as follows: the warehouse should periodically review the target values and set more suitable targets as the situation fluctuates in the future; and the warehouse should likewise periodically review the key performance indicators themselves and set more suitable ones, in order to increase competitiveness and take advantage of new opportunities.

Keywords: key performance indicator, warehouse management, warehouse operation, logistics management

Procedia PDF Downloads 416
679 An Electrochemical Enzymatic Biosensor Based on Multi-Walled Carbon Nanotubes and Poly (3,4 Ethylenedioxythiophene) Nanocomposites for Organophosphate Detection

Authors: Navpreet Kaur, Himkusha Thakur, Nirmal Prabhakar

Abstract:

The most controversial issue in crop production is the use of organophosphate insecticides. Many reports show that, among the broad range of pesticides, organophosphate (OP) insecticides are mainly involved in acute and chronic poisoning cases. OP detection is of crucial importance for health protection and for food and environmental safety. In our study, a nanocomposite of poly(3,4-ethylenedioxythiophene) (PEDOT) and multi-walled carbon nanotubes (MWCNTs) was deposited electrochemically onto the surface of fluorine-doped tin oxide (FTO) sheets for the analysis of the OP malathion. The -COOH functionalization of the MWCNTs was carried out for covalent binding with the amino groups of the AChE enzyme. The PEDOT-MWCNT films exhibited excellent conductivity, enabled fast transfer kinetics, and provided a favourable, biocompatible microenvironment for AChE, allowing significant malathion OP detection. The prepared biosensors were characterized by Fourier transform infrared spectrometry (FTIR), field emission scanning electron microscopy (FE-SEM), and electrochemical studies. Optimization studies were carried out for different parameters, including pH (7.5), AChE concentration (50 mU), substrate concentration (0.3 mM), and inhibition time (10 min). Substrate kinetics were studied for the determination of the Michaelis-Menten constant. The detection limit for malathion OP was calculated to be 1 fM within the linear range of 1 fM to 1 µM. The activity of the inhibited AChE enzyme was restored to 98% of its original value by treatment with 2-pyridine aldoxime methiodide (2-PAM) (5 mM) for 11 min; the oxime 2-PAM is able to remove malathion from the active site of AChE by means of a trans-esterification reaction. The storage stability and reusability of the prepared biosensor were observed to be 30 days and seven times, respectively. The application of the developed biosensor was also evaluated for a spiked lettuce sample; recoveries of malathion from the spiked sample ranged between 96-98%. The low detection limit obtained makes the developed biosensor reliable, sensitive, and low cost.
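
A small sketch of the Michaelis-Menten rate law used in the substrate-kinetics step, v = Vmax[S]/(Km + [S]); the Vmax and Km values below are hypothetical, not the study's fitted constants:

```python
# Sketch (parameter values assumed for illustration): the Michaelis-Menten rate law.
def mm_rate(s_mM: float, vmax: float, km_mM: float) -> float:
    return vmax * s_mM / (km_mM + s_mM)

vmax, km = 1.0, 0.15   # hypothetical Vmax (arbitrary units) and Km (mM)
for s in (0.05, 0.15, 0.3, 1.0):   # substrate levels around the 0.3 mM optimum
    print(f"[S] = {s} mM -> v = {mm_rate(s, vmax, km):.2f}")
# At [S] = Km the rate is Vmax/2; near saturation the response flattens out.
```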

Keywords: PEDOT-MWCNT, malathion, organophosphates, acetylcholinesterase, biosensor, oxime (2-PAM)

Procedia PDF Downloads 430
678 Research on Configuration of Large-Scale Linear Array Feeder Truss Parabolic Cylindrical Antenna of Satellite

Authors: Chen Chuanzhi, Guo Yunyun

Abstract:

The large linear-array-fed parabolic cylindrical antenna of a satellite has the abilities of large-area line focusing, forming multi-directional beam clusters simultaneously in given azimuth and elevation planes, responding quickly to different orientations and directions over a wide frequency range, dual aiming in frequency and direction, and combining spatial power. Therefore, the large-diameter parabolic cylindrical antenna has become one of the new development directions of spaceborne antennas. Limited by the size of the rocket fairing, a large-diameter spaceborne antenna is required to have low mass and a deployment function: after reaching orbit, the antenna is deployed by expansion and then stabilized. However, few existing structure types can be used to construct large cylindrical shell structures, which greatly limits the development and application of such antennas. Aiming at high structural efficiency, the geometric characteristics of parabolic cylinders and the law of topological mapping from mechanisms to expandable trusses are studied, and the basic configuration of a deployable truss with a cylindrical shell is constructed. A modular truss parabolic cylindrical antenna is then designed in this paper. The antenna has the characteristics of a stable structure, high precision of reflecting surface formation, a controllable motion process, a high stowage ratio, and low weight. On the basis of the comprehensive theory and optimization method for the overall configuration, the structural stiffness of the modular truss parabolic cylindrical antenna is improved, and the bearing density and impact resistance of the support structure are improved based on the optimal internal tension distribution method of reflector forming. Finally, a truss-type cylindrical deployable support structure with a high constriction-deployment ratio, high stiffness, controllable deployment, and low mass is successfully developed, laying the foundation for the application of large-diameter parabolic cylindrical antennas in satellites.
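
An illustrative geometry sketch of the surface such truss modules approximate, z = x²/(4f) translated along the cylinder axis; the focal length, aperture, and discretization below are all assumed:

```python
# Illustrative geometry sketch (all dimensions hypothetical): node coordinates
# on a parabolic cylinder z = x^2 / (4f); y is the cylinder's translation axis.
import numpy as np

focal_length = 5.0       # m, assumed
half_aperture = 4.0      # m, assumed
n_ribs, n_bays = 9, 4    # truss discretization, assumed

x = np.linspace(-half_aperture, half_aperture, n_ribs)
y = np.linspace(0.0, 12.0, n_bays)             # 12 m cylinder length, assumed
z = x**2 / (4 * focal_length)                  # parabolic profile

nodes = np.array([(xi, yj, zi) for yj in y for xi, zi in zip(x, z)])
print(nodes.shape)   # (36, 3) candidate attachment points for truss modules
```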

Keywords: linear array feed antenna, truss type, parabolic cylindrical antenna, spaceborne antenna

Procedia PDF Downloads 131
677 Energy Reclamation in Micro Cavitating Flow

Authors: Morteza Ghorbani, Reza Ghorbani

Abstract:

The cavitation phenomenon has attracted much attention in mechanical and biomedical technologies. Despite the simplicity and mostly low cost of the devices generating cavitation bubbles, the physics behind the generation and collapse of these bubbles, particularly at the micro/nano scale, is still not well understood. In the chemical industry, micro/nano bubble generation is expected to be applicable to the development of porous materials such as microcellular plastic foams. Moreover, it has been demonstrated that the presence of micro/nano bubbles on a surface reduces the adsorption of proteins; thus, micro/nano bubbles could act as antifouling agents. Micro and nano bubbles have also been employed in water purification, froth flotation, and even in sonofusion, which has not been completely validated. Small bubbles can also be generated using micro-scale hydrodynamic cavitation. In this study, compared to the studies available in the literature, we propose a novel micro-scale approach utilizing the energy produced during the interaction between a thin aluminum plate and the spray affected by the hydrodynamic cavitating flow. With a decrease in size, cavitation effects become significant. It is clearly shown that, with the aid of hydrodynamic cavitation generated inside micro/mini-channels, in addition to optimization of the distance between the tip of the microchannel configuration and the solid surface, surface temperatures can be increased up to 50 °C under the conditions of this study. The temperature rise on the surfaces near the collapsing small bubbles was exploited for small-scale energy harvesting, in such a way that miniature, cost-effective, and environmentally friendly energy-harvesting devices can be developed. Such devices will not require any external power or moving parts, in contrast to common energy-harvesting devices such as those involving piezoelectric materials and micro engines. Energy harvesting from thermal energy has been widely exploited to achieve energy savings and clean technologies. We are proposing a cost-effective and environmentally friendly solution for growing individual energy needs, thanks to the energy applications of cavitating flows. The necessary power for consumer devices, such as cell phones and laptops, could be provided using this approach. Thus, this approach has the potential to meet personal energy needs in an inexpensive and environmentally friendly manner and could trigger a paradigm shift in energy harvesting.

Keywords: cavitation, energy, harvesting, micro scale

Procedia PDF Downloads 173
676 Dynamic Analysis and Clutch Adaptive Prefill in Dual Clutch Transmission

Authors: Bin Zhou, Tongli Lu, Jianwu Zhang, Hongtao Hao

Abstract:

Dual clutch transmissions (DCTs) offer high gearshift comfort. Hydraulic multi-disk clutches are the key components of a DCT, and their engagement determines shifting comfort. Clutch prefill brings the clutch to an initial engagement at which the plates just come into contact but do not yet transmit substantial torque from the engine; this initial engagement point is called the touch point. Open-loop control is typically implemented for clutch prefill, but many uncertainties, such as oil temperature and clutch wear, significantly affect the prefill and can result in an inappropriate touch point. Underfill causes engine flare during a gearshift, while overfill causes clutch tie-up; both deteriorate the shifting comfort of a DCT. It is therefore important to make the clutch prefill adaptive with respect to these uncertainties. In this paper, a dynamic model of the hydraulic actuator system, including the variable force solenoid and the clutch piston, is presented and validated by a test. Subsequently, the open-loop clutch prefill is simulated based on the proposed model. Two control parameters of the prefill, the fast fill time and the stable fill pressure, are analyzed with regard to their impact on the prefill: the former strongly affects the pressure transients, while the latter directly influences the touch point. Finally, an adaptive method is proposed for the clutch prefill during gearshifts, in which the clutch fill control parameters are adjusted adaptively and continually. The adaptive strategy changes the stable fill pressure according to the clutch slip observed during the current gearshift, improving the next prefill: the stable fill pressure is increased as a function of the clutch slip in the case of underfill and decreased by a constant value in the case of overfill, as sketched below. The entire strategy is designed in Simulink/Stateflow and implemented in the transmission control unit after optimization. Road vehicle test results show that the strategy realizes its adaptive capability and improves shifting comfort.
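For concreteness, a minimal sketch of the adaptation rule described above; the thresholds and gain (SLIP_UNDERFILL, SLIP_OVERFILL, K_UNDERFILL, DELTA_OVERFILL) are illustrative assumptions, not values from the paper:

```python
# Sketch of the adaptive stable-fill-pressure update described in the
# abstract: raise the pressure in proportion to slip on underfill, lower it
# by a fixed step on overfill. All numeric values are illustrative.

SLIP_UNDERFILL = 50.0   # rpm; slip above this after prefill suggests underfill
SLIP_OVERFILL = 5.0     # rpm; slip below this suggests clutch tie-up (overfill)
K_UNDERFILL = 0.002     # bar per rpm of excess slip
DELTA_OVERFILL = 0.05   # bar; fixed decrement for overfill

def update_stable_fill_pressure(p_stable: float, clutch_slip: float) -> float:
    """Adjust the stable fill pressure for the next prefill based on the
    clutch slip observed during the current gearshift."""
    if clutch_slip > SLIP_UNDERFILL:
        # Underfill: raise pressure proportionally to the excess slip.
        p_stable += K_UNDERFILL * (clutch_slip - SLIP_UNDERFILL)
    elif clutch_slip < SLIP_OVERFILL:
        # Overfill (tie-up): lower pressure by a constant step.
        p_stable -= DELTA_OVERFILL
    return p_stable
```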

Keywords: clutch prefill, clutch slip, dual clutch transmission, touch point, variable force solenoid

Procedia PDF Downloads 293
675 The Maps of Meaning (MoM) Consciousness Theory

Authors: Scott Andersen

Abstract:

Perhaps simply and rather unadornedly, consciousness is having multiple goals for action and the continuous adjudication of those goals to implement action; this is referred to as the Maps of Meaning (MoM) Consciousness Theory. The MoM theory triangulates through three parallel corollaries: action (behavior), mechanism (morphology/pathophysiology), and goals (teleology). (1) An organism's consciousness contains a fluid, nested set of goals. These goals are not intentionality but intersectionality, embodiment meeting the world, i.e., Darwinian inclusive fitness or randomization, then survival of the fittest. These goals form via gradual descent under inclusive fitness, the goals being the abstraction of a 'match' between the evolutionary environment and the organism. Human consciousness implements the brain efficiency hypothesis: genetics, epigenetics, and experience crystallize efficiencies, necessitating not what is best or objective but what is fit, i.e., perceived efficiency based on one's adaptive environment. These efficiencies are objectively arbitrary but determine the operation and level of one's consciousness, termed extreme thrownness. Since inclusive fitness drives efficiencies in physiological mechanism, morphology, and behavior (action) and originates one's goals, embodiment is necessarily entangled with human consciousness, as it is the intersection of mechanism and action (both necessitating embodiment) occurring in the world that determines fitness. Perception is the operant process of consciousness and the consciousness's de facto goal-adjudication process. Goal operationalization is fundamentally efficiency-based via one's unique neuronal mapping, a byproduct of genetics, epigenetics, and experience. Perception involves information intake and information discrimination, both equally underpinned by the efficiencies of inclusive fitness via extreme thrownness. Perception is not a 'frame rate' but Bayesian priors of efficiency based on one's extreme thrownness. Consciousness, including human consciousness, is modular (i.e., a scalar level of richness, built up like building blocks) and dimensionalized (i.e., cognitive abilities become possible as emergent phenomena at various modularities, like stratified factors in factor analysis). The meta-dimensions of human consciousness seemingly include intelligence quotient, personality (five-factor model), richness of perception intake, and richness of perception discrimination, among other potentialities. Future consciousness research should utilize factor analysis to parse the modularities and dimensions of human consciousness and of animal models.

Keywords: consciousness, perception, prospection, embodiment

Procedia PDF Downloads 22
674 An Approach to Determine Proper Daylighting Design Solution Considering Visual Comfort and Lighting Energy Efficiency in High-Rise Residential Building

Authors: Zehra Aybike Kılıç, Alpin Köknel Yener

Abstract:

Daylight is a powerful driver of human health, productivity, and sustainable design, since it minimizes energy demand. A proper daylighting system not only provides a pleasant and attractive visual and thermal environment but also reduces lighting energy consumption and heating/cooling loads through the optimization of aperture size, glazing type, and solar control strategy, the major design parameters of a daylighting system. Particularly in high-rise buildings, where large openings that allow maximum daylight and view out are preferred, evaluating daylight performance with respect to the major building envelope design parameters becomes crucial for ensuring occupants' comfort and improving energy efficiency. It is also increasingly necessary to examine the daylighting design of high-rise residential buildings, considering the share of residential buildings in the construction sector, the duration of occupation, and changing space requirements. This study aims to identify a proper daylighting design solution, in terms of window area, glazing type, and solar control strategy, for a high-rise residential building with regard to visual comfort and lighting energy efficiency. Dynamic simulations are conducted with DIVA for Rhino version 4.1.0.12. The results are evaluated with Daylight Autonomy (DA), which demonstrates daylight availability in the space, and Daylight Glare Probability (DGP), which describes the visual comfort conditions related to glare. The lighting energy consumption in each scenario is also analyzed to determine the optimum solution, the one that reduces lighting energy consumption while optimizing daylight performance. The results reveal that lighting energy consumption can be reduced, and visual comfort conditions ensured, only with proper daylighting design decisions regarding glazing type, transparency ratio, and solar control devices.
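For reference, a minimal sketch of how the Daylight Autonomy metric can be computed from hourly simulation output; the 300 lux threshold and the occupancy handling are common conventions assumed here, not values stated in the abstract:

```python
# Sketch of the Daylight Autonomy (DA) metric: the share of occupied hours
# in which daylight alone meets a target illuminance at a sensor point.
# The 300 lux threshold is a widely used convention, assumed here.

def daylight_autonomy(illuminance_lux, occupied, threshold=300.0):
    """illuminance_lux: hourly daylight illuminance values at a sensor point;
    occupied: parallel sequence of booleans marking occupied hours.
    Returns DA as a percentage."""
    occupied_values = [e for e, occ in zip(illuminance_lux, occupied) if occ]
    if not occupied_values:
        return 0.0
    hours_met = sum(1 for e in occupied_values if e >= threshold)
    return 100.0 * hours_met / len(occupied_values)
```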

Keywords: daylighting, glazing type, lighting energy efficiency, residential building, solar control strategy, visual comfort

Procedia PDF Downloads 159
673 Astronomical Object Classification

Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan

Abstract:

We present a photometric method for identifying stars, galaxies, and quasars in multi-color surveys, which uses a library of more than 65,000 color templates for comparison with observed objects. The method aims to extract the information content of object colors in a statistically correct way, and it performs classification as well as redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS) but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for quasar selection, and redshifts accurate to within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For the optimization of future survey efforts, a few model surveys are compared, designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band and medium-band surveys would perform equally well, as long as they provided the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance of calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for library design and the choice of filters. The calibration accuracy poses strong constraints on accurate classification; these are most critical for surveys with few, broad, deeply exposed filters, and less severe for surveys with many, narrow, less deep filters.
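As an illustration, a minimal sketch of template-based color classification under an assumed Gaussian error model; the array layout and function names are hypothetical, not the authors' implementation:

```python
import numpy as np

# Sketch of template-based photometric classification: an object's observed
# colors are compared against a library of class-labelled color templates,
# chi-square values are turned into relative probabilities, and probability
# is summed per class (star / galaxy / quasar). Shapes and the Gaussian
# error model are illustrative assumptions.

def classify(colors_obs, sigma_obs, template_colors, template_classes):
    """colors_obs, sigma_obs: (n_colors,) observed colors and their errors;
    template_colors: (n_templates, n_colors); template_classes: (n_templates,)."""
    template_classes = np.asarray(template_classes)
    chi2 = np.sum(((template_colors - colors_obs) / sigma_obs) ** 2, axis=1)
    likelihood = np.exp(-0.5 * (chi2 - chi2.min()))   # relative likelihoods
    posterior = likelihood / likelihood.sum()
    # Marginalize over templates: total probability of each object class.
    class_prob = {c: posterior[template_classes == c].sum()
                  for c in np.unique(template_classes)}
    return max(class_prob, key=class_prob.get), class_prob
```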

Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis

Procedia PDF Downloads 52
672 Effective Apixaban Clearance with Cytosorb Extracorporeal Hemoadsorption

Authors: Klazina T. Havinga, Hilde R. H. de Geus

Abstract:

Introduction: Pre-operative coagulation management of patients prescribed Apixaban, a newer oral anticoagulant (a factor Xa inhibitor), is difficult, especially when chronic kidney disease (CKD) causes drug overdose. Apixaban is not dialyzable due to its high level of protein binding. An antidote, Andexanet α, is available but expensive and has an unfavorably short half-life. We report the successful extracorporeal removal of Apixaban prior to emergency surgery with the CytoSorb® hemoadsorption device. Methods: An 89-year-old woman with CKD, on Apixaban for atrial fibrillation, presented at the ER with traumatic rib fractures, a flail chest, and an unstable spinal fracture (T12) for which emergency surgery was indicated. However, due to very high Apixaban levels, this surgery had to be postponed. Based on the Apixaban-specific anti-factor Xa activity (AFXaA) measurements at admission and 10 hours later, complete clearance was expected only after 48 hours. In order to enhance Apixaban removal, reduce the time to operation, and thereby reduce pulmonary complications, CRRT with a CytoSorb® cartridge was initiated. AFXaA was measured frequently, pre- and post-adsorber, as a substitute for Apixaban drug concentrations, in order to calculate the adsorber-related clearance. Results: The admission AFXaA concentration, as a substitute for the Apixaban drug level, was 218 ng/ml, which decreased to 157 ng/ml after ten hours. Due to sustained anticoagulation effects, surgery was again postponed. However, after CRRT (Multifiltrate Pro, Fresenius Medical Care; blood flow 200 ml/min; dialysate flow 4000 ml/h; prescribed renal dose 51 ml/kg/h) with a CytoSorb® cartridge connected in series into the circuit was initiated, the AFXaA levels decreased quickly to sub-therapeutic levels within 5 hours. The adsorber-related (indirect) Apixaban clearance was calculated every half hour as Cl = Qe × (AFXaA_pre − AFXaA_post) / AFXaA_pre, where Qe is the plasma flow rate calculated with Ht = 0.38 and a system blood flow rate of 200 ml/min; the calculated clearances were 100 ml/min, 72 ml/min, and 57 ml/min. Although, as expected, the adsorber-related clearance decreased quickly due to saturation of the beads, the achieved reduction rate still resulted in a very rapid decrease in AFXaA levels. Surgery was ordered and possible within 5 hours after CytoSorb initiation. Conclusion: The CytoSorb® hemoadsorption device enabled rapid correction of Apixaban-associated anticoagulation.
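A minimal sketch of the clearance calculation quoted above, with the plasma flow derived from the blood flow and haematocrit (the Qe = Qb × (1 − Ht) relation is the usual convention assumed here):

```python
# Sketch of the adsorber-related clearance from the abstract:
#   Cl = Qe * (AFXaA_pre - AFXaA_post) / AFXaA_pre
# with the effective plasma flow Qe = Qb * (1 - Ht).

def adsorber_clearance(qb_ml_min: float, ht: float,
                       afxaa_pre: float, afxaa_post: float) -> float:
    """Return the adsorber-related clearance in ml/min."""
    qe = qb_ml_min * (1.0 - ht)                       # plasma flow, ml/min
    extraction = (afxaa_pre - afxaa_post) / afxaa_pre # extraction ratio
    return qe * extraction

# With the reported settings (blood flow 200 ml/min, Ht = 0.38), Qe is
# 124 ml/min, so an extraction ratio of about 0.81 across the adsorber
# back-calculates to the initial ~100 ml/min clearance reported above.
```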

Keywords: Apixaban, CytoSorb, emergency surgery, Hemoadsorption

Procedia PDF Downloads 123
671 Fuzzy Decision Making to the Construction Project Management: Glass Facade Selection

Authors: Katarina Rogulj, Ivana Racetin, Jelena Kilic

Abstract:

In this study, a fuzzy logic approach (FLA) was developed for construction project management (CPM) under uncertainty and duality. The focus was on decision making in selecting the type of glass facade for a residential-commercial building at the main design stage. The adoption of fuzzy sets makes it possible to reflect construction managers' level of reliability in subjective judgments, so that robustness of the system can be achieved. An α-cuts method was utilized for discretizing the fuzzy sets in the FLA. This method carries all uncertain information through the optimization process, taking the values of this information into account. Furthermore, the FLA provides in-depth analyses of diverse policy scenarios related to various levels of economic aspects in valid decision making for construction projects. The developed approach is applied to CPM to demonstrate its applicability. By analyzing the materials of glass facades, variants were defined. The development of the FLA for CPM included the relevant construction project stakeholders, who were involved in defining the criteria used to evaluate each variant. A comparison of the glass facade variants was conducted using the fuzzy Decision-Making Trial and Evaluation Laboratory (DEMATEL) method, outlined below. In this way, a ranking of the variants according to their priority for inclusion in the main design is obtained. The concept was tested on a residential-commercial building in the city of Rijeka, Croatia, and the newly developed methodology was compared with the existing one. The aim of the research was to define an approach that improves current judgments and decisions in the material selection of a building's facade, one of the most important architectural and engineering tasks in the main design. The advantage of the new methodology over the old one is that it includes the subjective side of managers' decisions, an inevitable factor in every decision-making process. The proposed approach can help construction project managers identify the desired type of glass facade according to their preferences and practical conditions, and it facilitates in-depth analyses of trade-offs between economic efficiency and architectural design.
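As an illustration, a minimal sketch of the crisp DEMATEL core that would run after the fuzzy judgments are defuzzified (e.g., via α-cuts); the toy influence matrix is illustrative, not the study's data:

```python
import numpy as np

# Sketch of the crisp DEMATEL core: normalize the direct-influence matrix,
# form the total-relation matrix T = D (I - D)^(-1), and read off each
# criterion's prominence (D+R) and relation (D-R).

def dematel(direct_influence):
    """Return (prominence, relation) vectors for the criteria."""
    A = np.asarray(direct_influence, dtype=float)
    D = A / A.sum(axis=1).max()                    # normalize by max row sum
    T = D @ np.linalg.inv(np.eye(len(A)) - D)      # total-relation matrix
    d, r = T.sum(axis=1), T.sum(axis=0)            # dispatched / received
    return d + r, d - r                            # prominence, relation

# Toy 3-criterion example (values are placeholders):
prominence, relation = dematel([[0, 3, 2],
                                [1, 0, 3],
                                [2, 1, 0]])
```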

Keywords: construction projects management, DEMATEL, fuzzy logic approach, glass façade selection

Procedia PDF Downloads 115
670 Maturity Level of Knowledge Management in Whole Life Costing in the UK Construction Industry: An Empirical Study

Authors: Ndibarefinia Tobin

Abstract:

The UK construction industry has been under pressure for many years to produce economical buildings which offer value for money, not only during the construction phase but, more importantly, over the full life of the building. Whole life costing is an economic analysis tool that takes into account the total cost of investment in, and the ownership, operation, and subsequent disposal of, the product or system to which it is applied. In spite of its importance, the practice is still crippled by a lack of tangible evidence and of 'know-how' skills and knowledge, i.e., a lack of professionals with the knowledge and training to use the practice in construction projects. This situation is compounded by the absence of available whole life costing data from relevant projects, the lack of data collection mechanisms, and so on. These problems have forced many construction organisations to adopt project enhancement initiatives to boost their performance in the use of whole life costing techniques, so as to produce economical buildings which offer value for money during the construction stage as well as over the whole life of the building or asset. The management of knowledge in whole life costing is one such project enhancement initiative and is becoming imperative to the performance and sustainability of an organisation. Procuring building projects using the whole life costing technique relies heavily on the knowledge, experience, ideas, and skills of workers, which come from many sources, including other individuals, electronic media, and documents. Given the diversity of knowledge, capabilities, and skills across an organisation, it is important that these are directed and coordinated efficiently, so that knowledge is captured, retrieved, and shared in ways that improve the performance of the organisation. The implementation of the knowledge management concept reaches different levels in each organisation, and measuring the maturity level of knowledge management in whole life costing practice paints a comprehensible picture of how knowledge is managed in construction organisations. Purpose: The purpose of this study is to identify the knowledge management maturity of UK construction organisations adopting whole life costing in construction projects. Design/methodology/approach: This study adopted a survey method, distributing questionnaires to large construction companies that implement knowledge management activities in whole life costing practice in construction projects. Four levels of knowledge management maturity were proposed in this study. Findings: The results show 34 contractors at the practiced level, 26 contractors at the managed level, and 12 contractors at the continuously improved level.

Keywords: knowledge management, whole life costing, construction industry, knowledge

Procedia PDF Downloads 232
669 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink

Authors: Sanjay Rathee, Arti Kashyap

Abstract:

Extraction of useful information from large datasets is one of the most important research problems, and association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and its natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amounts of data available these days exceed the capacity of a single machine, so meeting the demands of this ever-growing data requires an Apriori algorithm that runs across multiple machines. For such distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks using the MapReduce approach for distributed storage and distributed processing of huge datasets on clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years; among them, two platforms, Spark and Flink, have attracted a lot of attention because of their built-in support for distributed computation. Earlier, we proposed a reduced Apriori algorithm on the Spark platform which outperforms parallel Apriori, firstly because of the use of Spark and secondly because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel: it implements, tests, and benchmarks Apriori, Reduced-Apriori, and our new algorithm, ReducedAll-Apriori, on Apache Flink and compares them with the Spark implementation. Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelined structure allows the next iteration to start as soon as partial results of the earlier iteration are available, so there is no need to wait for all reducers' results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency, and scalability of the Apriori and RA-Apriori algorithms on Flink.
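For reference, a minimal single-machine sketch of the level-wise Apriori loop that such distributed implementations parallelize; the toy transactions and support threshold are illustrative:

```python
from itertools import combinations

# Minimal level-wise Apriori: find frequent 1-itemsets, then repeatedly join
# frequent (k-1)-itemsets into k-candidates, prune candidates that have an
# infrequent (k-1)-subset, and keep those meeting the support threshold.

def apriori(transactions, min_support=2):
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    # L1: frequent single items.
    freq = {frozenset([i]) for i in items
            if sum(i in t for t in transactions) >= min_support}
    result, k = set(freq), 2
    while freq:
        # Candidate generation: join pairs of frequent (k-1)-itemsets.
        candidates = {a | b for a in freq for b in freq if len(a | b) == k}
        # Prune on the Apriori property, then count support.
        freq = {c for c in candidates
                if all(frozenset(s) in result for s in combinations(c, k - 1))
                and sum(c <= t for t in transactions) >= min_support}
        result |= freq
        k += 1
    return result

# Toy run: every pair {a,b}, {a,c}, {b,c} is frequent; {a,b,c} is not.
print(apriori([{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'c'}, {'b', 'c'}]))
```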

Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining

Procedia PDF Downloads 267