Search results for: aggregation operator
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 704

554 Colour Change and Melanophores Response in a Teleost, Balantiocheilos melanopterus (Bleeker), with Certain Chemicals and Drugs

Authors: Trapti Pathak

Abstract:

Fishes can change their body colour to match their surroundings. They do so by either aggregation or dispersion of melanosomes within the skin, movements that are regulated by sympathetic nerves with the help of the cytoskeleton. Employing the melanophores on isolated scales of fingerlings of the teleost fish, this study attempts to characterise the nerves concerned and the receptors located on the melanocytes, along with the role of microtubules, a part of the cytoskeleton, in pigmentary translocation in the fish. Scales from the dorso-lateral trunk of the fish served as sympathetic-neuromelanophore preparations, which were stimulated chemically with adrenergic agonists and antagonists and with microtubule-disrupting drugs, such as yohimbine, dopamine, and colchicine. Adrenaline, an adrenergic agonist, strongly induced dose-dependent aggregation of pigment in innervated melanophores, while yohimbine, an adrenergic antagonist known to block the α2-adrenoceptors effectively, inhibited the action of adrenaline. Colchicine effectively interfered with the melanosome-aggregating action of adrenaline. From these results it is concluded that chromatic fibres of an adrenergic nature innervate the melanophores, that these cells possess α2-adrenoceptors which mediate melanosome aggregation, and that the pigment granules move along microtubules as the means of transport within the cell. These pigment movements underlie the paling or darkening of the teleost fish as it adapts to its background.

Keywords: melanophores, agonists, antagonists, colour change

Procedia PDF Downloads 35
553 Factors Affecting Visual Environment in Mine Lighting

Authors: N. Lakshmipathy, Ch. S. N. Murthy, M. Aruna

Abstract:

The design of lighting systems for surface mines is not an easy task because of the unique environment and work procedures encountered in the mines. The primary objective of this paper is to identify the major problems encountered in mine lighting applications and to provide guidance for their solution. In surface mining, the reflectance of surrounding surfaces is one of the important factors that improve vision during night hours, but owing to the typical working conditions in mines it is very difficult to meet this requirement, and orienting the light at the work site is also a challenging task. For this reason, machine operators and other mine workers need to be able to orient themselves in a difficult visual environment. The haul roads keep changing in tune with the mining activity, and other critical areas such as dump yards and stack yards also change with time, making them difficult to illuminate. Mining is a hazardous occupation, with workers exposed to adverse conditions: apart from the need for hard physical labour, there is exposure to stress and to environmental pollutants such as dust, noise, heat, vibration, poor illumination, and radiation. Visibility is restricted when operating load haul dumpers and Heavy Earth Moving Machinery (HEMM) vehicles, resulting in a number of serious accidents; one of the leading causes of these accidents is the inability of the equipment operator to see clearly the people, objects, or hazards around the machine. Results indicate that blind spots are caused primarily by posts, the back of the operator's cab, and by lights and light brackets. Carefully designed and implemented lighting systems provide mine workers with improved visibility and contribute to improved safety, productivity, and morale during work in opencast mines.

Keywords: contrast, efficacy, illuminance, illumination, light, luminaire, luminance, reflectance, visibility

Procedia PDF Downloads 324
552 Negativization: A Focus Strategy in Basà Language

Authors: Imoh Philip

Abstract:

Basà is classified as a Kainji language, under the Western-Kainji sub-phylum known as Rubasa (Basa Benue) (Croizier & Blench, 1992:32). It is an under-described language spoken in North-Central Nigeria and is characterized by subject-verb-object (henceforth SVO) canonical word order. Data for this work are drawn from the researcher's native-speaker intuition of the language, corroborated by careful observation of other native speakers. This paper investigates a syntactic, derivational strategy of information-structure encoding in Basà: it focuses on a negative operator that serves as a strategy for focusing the constituent or clause that follows it while negativizing the whole proposition. Items that are not nouns must undergo an obligatory nominalization process, by affixation, modification, or conversion, before they are moved to the preverbal position for these operations. The study provides evidence that different constituents in the sentence, such as the subject, direct object, indirect object, genitive, verb phrase, prepositional phrase, clause, and ideophone, can be focused with the same negativizing operator. The process is characterized by focusing the preverbal NP constituent alone, whereas the whole proposition is negated. The study can stimulate similar studies or be replicated in other languages.

Keywords: negation, focus, Basà, nominalization

Procedia PDF Downloads 559
551 Study of the Late Phase of Core Degradation during Reflooding by Safety Injection System for VVER1000 with ASTECv2 Computer Code

Authors: Antoaneta Stefanova, Rositsa Gencheva, Pavlin Groudev

Abstract:

This paper presents the modeling approach for a station blackout (SBO) sequence in VVER 1000 reactors, describes the reactor core behavior in the late in-vessel phase in case of late reflooding by the high pressure injection system (HPIS), and gives preliminary results for ASTECv2 validation. The work focuses on plant behavior during total loss of power and on the corresponding operator actions. The main goal of these analyses is to assess the phenomena arising during an SBO followed by primary-side HPIS reflooding of an already damaged reactor core at a very late in-vessel phase. The purpose of the analysis is to determine how late HPIS actuation can delay, or possibly avoid, vessel failure. For this purpose, an SBO scenario was simulated with injection of cold water by a high pressure pump (HPP) into the cold leg at different stages of core degradation. The times for HPP injection were chosen based on previously performed investigations.

Keywords: VVER, operator action validation, reflooding of overheated reactor core, ASTEC computer code

Procedia PDF Downloads 381
550 Non-Standard Forms of Reporting Domestic Violence: Analysis of the Phenomenon in the Perception of Operators of the Polish Emergency Number 112 and Polish Society

Authors: Joanna Kufel-Orlowska

Abstract:

Domestic violence is a social threat to public safety and order. It endangers not only the family members of the perpetrator but also disturbs the functioning of society and even the state. In a situation of danger, an individual either defends himself or calls for help by contacting an appropriate institution whose aim is to ensure civil security. Most often, such contact takes place through a telephone conversation aimed at diagnosing the problem and prompting intervention. Despite general reporting standards, people in different situations try to report the need for help in different ways. The article presents the results of research on non-standard forms of reporting domestic violence in the opinion of Polish society and of operators of the Polish emergency number 112 (the equivalent of 911). The research was conducted as online surveys of 160 operators (purposive selection) and 300 people living in Poland (random selection). The study found that in Poland: 1. Emergency number operators often receive reports of domestic violence, although they are not always able to diagnose whether the case strictly concerns violence. 2. Non-standard reports of domestic violence, understood as reports that deviate from the norm, are unusual, or are made by someone other than the victim, are received by about 30% of emergency number operators. 3. The most common forms of reporting violence indirectly are pretending to talk to a friend, calling a cab, making an appointment with a dentist or doctor, calling a store for help with selecting goods, asking about a bank's hotline, or staying silent so that the emergency number operator can hear what is going on. 4. Emergency number operators in Poland are properly trained and are able to recognize a threatening situation and conduct the conversation in a manner safe for the reporting party. Polish people, in turn, support the ability to report violence in a non-standard way and would do so themselves in the event of a threat to their own life, health, or property, expecting the emergency number operator to recognize such a report and provide help.

Keywords: domestic violence, operator of the emergency number 112 (911), emergency call center, reporting domestic violence

Procedia PDF Downloads 61
549 Surprise Fraudsters Before They Surprise You: A South African Telecommunications Case Study

Authors: Ansoné Human, Nantes Kirsten, Tanja Verster, Willem D. Schutte

Abstract:

Every year the telecommunications industry suffers huge losses due to fraud. Mobile fraud, or more generally telecommunications fraud, is the use of telecommunication products or services to acquire money illegally from, or to avoid paying, a telecommunication company. A South African telecommunication operator developed two internal fraud scorecards to mitigate the risk of future application fraud events. The scorecards aim to predict the likelihood of an application being fraudulent and to surprise fraudsters before they surprise the telecommunication operator by identifying fraud at the time of application. The scorecards are used in the vetting process to evaluate the fraud risk an applicant would present to the telecommunication operator. Telecommunication providers can use these scorecards to profile customers and to isolate fraudulent and/or high-risk applicants. We provide the complete methodology used in the development of the scorecards, including a Determination and Discrimination (DD) ratio for selecting the most influential variables from a group of related variables. The development of these scorecards revealed the following about fraudulent cases and fraudster behaviour within the telecommunications industry: fraudsters typically target high-value handsets; debit order dates scheduled for the end of the month have the highest fraud probability; fraudsters target specific stores; applicants who acquire an expensive package while earning a medium or high income have higher fraud percentages; if an account is already two or more months in arrears one month prior to application, the applicant has a high probability of fraud; applicants with the highest average spend on calls have a higher probability of fraud; if the amount collected changes from month to month, the likelihood of fraud is higher; and, lastly, young and middle-aged applicants are more likely to be targeted by fraudsters than other age groups.
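An application-fraud scorecard of the kind described is, in essence, a regression model over risk indicators. The sketch below is a minimal logistic illustration only: the variable names and weights are hypothetical and do not come from the operator's actual scorecards.

```python
import math

# Hypothetical binary risk indicators inspired by the findings above;
# the real scorecard variables and weights are not disclosed in the abstract.
COEFFS = {
    "intercept": -3.0,
    "high_value_handset": 1.2,
    "debit_order_end_of_month": 0.8,
    "account_in_arrears": 1.5,
    "high_call_spend": 0.6,
}

def fraud_probability(applicant: dict) -> float:
    """Logistic score: estimated P(fraud) for one application."""
    z = COEFFS["intercept"]
    for feature, weight in COEFFS.items():
        if feature != "intercept" and applicant.get(feature):
            z += weight
    return 1.0 / (1.0 + math.exp(-z))

risky = {"high_value_handset": True, "account_in_arrears": True,
         "debit_order_end_of_month": True}
safe = {}
# The risky profile scores markedly higher than the clean one
print(round(fraud_probability(risky), 3), round(fraud_probability(safe), 3))
```

In a vetting workflow, applications whose score exceeds a calibrated cut-off would be routed for manual review rather than auto-approved.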

Keywords: application fraud scorecard, predictive modeling, regression, telecommunications

Procedia PDF Downloads 78
548 Lean Production to Increase Reproducibility and Work Safety in the Laser Beam Melting Process Chain

Authors: C. Bay, A. Mahr, H. Groneberg, F. Döpper

Abstract:

Additive Manufacturing processes are becoming increasingly established in industry for the economic production of complex prototypes and functional components. Laser beam melting (LBM), the most frequently used Additive Manufacturing technology for metal parts, has been gaining industrial importance for several years. The LBM process chain, from material storage to machine set-up and component post-processing, requires many manual operations. These steps often depend on the manufactured component and are therefore not standardized: they rely instead on the experience of the machine operator, e.g., when levelling the build plate and adjusting the first powder layer in the LBM machine. This lack of standardization limits the reproducibility of component quality. When processing metal powders with inhalable and alveolar particle fractions, the machine operator is at high risk due to the high reactivity and the toxic (e.g., carcinogenic) effects of various metal powders. Faulty execution of an operation or unintentional omission of safety-relevant steps can impair the health of the machine operator. In this paper, all steps of the LBM process chain are first analysed in terms of their influence on these two challenges: reproducibility and work safety. Standardization to avoid errors increases the reproducibility of component quality as well as adherence to, and correct execution of, safety-relevant operations. The corresponding lean method, 5S, is therefore applied in order to develop approaches, in the form of recommended actions, that standardize the work processes. These approaches are then evaluated in terms of ease of implementation and their potential for improving reproducibility and work safety. The analysis and evaluation showed that sorting tools and spare parts as well as standardizing the workflow are likely to increase reproducibility, while organizing the operational steps and the production environment reduces the hazards of material handling and consequently improves work safety.

Keywords: additive manufacturing, lean production, reproducibility, work safety

Procedia PDF Downloads 147
547 Globally Attractive Mild Solutions for Non-Local in Time Subdiffusion Equations of Neutral Type

Authors: Jorge Gonzalez Camus, Carlos Lizama

Abstract:

In this work, the existence of at least one globally attractive mild solution to the Cauchy problem for a fractional evolution equation of neutral type, involving the fractional derivative in the Caputo sense, is proved. The equation involves an almost sectorial operator on a Banach space X and a kernel belonging to a large class, which covers many relevant cases from physics applications, in particular the important case of time-fractional evolution equations of neutral type. The main tools used in this work are the Hausdorff measure of noncompactness and fixed point theorems, specifically of Darbo type. Starting from the Cauchy problem involving the Caputo fractional derivative, the equivalent integral version is formulated; then, defining a convenient functional via the analytic integral resolvent operator and verifying the hypotheses of the Darbo-type fixed point theorem yields the existence of a mild solution to the initial problem. Furthermore, each mild solution is globally attractive, a property that is desirable in the asymptotic behavior of such solutions.
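For reference, the Caputo fractional derivative of order α ∈ (0,1) invoked above is commonly defined as

```latex
{}^{C}\!D_t^{\alpha} u(t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha}\, u'(s)\, ds,
```

and a typical neutral-type abstract formulation reads ${}^{C}\!D_t^{\alpha}\bigl[u(t)-g(t,u_t)\bigr] = A\,u(t) + f(t,u_t)$ with $A$ almost sectorial. This generic form is an illustration only; the precise equation, kernel class, and hypotheses are those stated in the full paper.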

Keywords: attractive mild solutions, integral Volterra equations, neutral type equations, non-local in time equations

Procedia PDF Downloads 119
546 Upgraded Cuckoo Search Algorithm to Solve Optimisation Problems Using Gaussian Selection Operator and Neighbour Strategy Approach

Authors: Mukesh Kumar Shah, Tushar Gupta

Abstract:

An Upgraded Cuckoo Search Algorithm is proposed here to solve optimization problems, building on improvements made in earlier versions of the Cuckoo Search Algorithm. Shortcomings of the earlier versions, such as slow convergence and entrapment in local optima, are addressed in the proposed version: solutions are initialized by an Improved Lambda Iteration Relaxation method, a Random Gaussian Distribution Walk improves local search, a Greedy Selection accelerates convergence to the optimized solution, and a 'Study Nearby Strategy' improves global search performance by avoiding entrapment in local optima. It is further proposed to generate better solutions by a Crossover Operation. The proposed strategy shows superiority in terms of high convergence speed over several classical algorithms. Three standard algorithms were tested on a 6-generator standard test system, and the results presented clearly demonstrate the proposed algorithm's superiority over other established algorithms. The algorithm is also capable of handling larger unit systems.
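A minimal sketch of such a search loop, showing the Gaussian walk, greedy selection, and nest abandonment on a toy benchmark. The Lambda Iteration initialization, neighbour strategy, and crossover are omitted, and the step-size decay is our own simplification, not the paper's scheme:

```python
import random

def sphere(x):
    """Benchmark objective to minimise (global optimum 0 at the origin)."""
    return sum(xi * xi for xi in x)

def upgraded_cuckoo_search(f, dim=2, n_nests=15, iters=200, pa=0.25, seed=1):
    rng = random.Random(seed)
    nests = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=f)
    sigma = 0.5  # step size for the Gaussian walk (decayed below)
    for _ in range(iters):
        for i, nest in enumerate(nests):
            # Random Gaussian-distribution walk (in place of the classic Levy flight)
            cand = [x + sigma * rng.gauss(0, 1) for x in nest]
            # Greedy selection: accept the candidate only if it improves
            if f(cand) < f(nest):
                nests[i] = cand
        # A fraction pa of the worst nests is abandoned ("host-bird discovery")
        nests.sort(key=f)
        for i in range(int((1 - pa) * n_nests), n_nests):
            nests[i] = [rng.uniform(-5, 5) for _ in range(dim)]
        best = min(nests + [best], key=f)
        sigma *= 0.98  # gradually refine the local search
    return best

best = upgraded_cuckoo_search(sphere)
print(sphere(best))  # small value near the global optimum
```

For economic dispatch, `sphere` would be replaced by the fuel-cost function with penalty terms for ramp-rate limits and prohibited operating zones.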

Keywords: economic dispatch, Gaussian selection operator, prohibited operating zones, ramp rate limits

Procedia PDF Downloads 93
545 Competitiveness of a Shared Autonomous Electric Vehicle Fleet Compared to Traditional Means of Transport: A Case Study for Transportation Network Companies

Authors: Maximilian Richter

Abstract:

Implementing shared autonomous electric vehicles (SAEVs) has many advantages, which are greatest when SAEVs are offered as on-demand services by a fleet operator. However, autonomous mobility on demand (AMoD) will only be deployed nationwide if fleet operation is economically profitable for the operator. This paper proposes a microscopic approach to modeling two implementation scenarios of an AMoD fleet. The city of Zurich is used as a case study, with the results and findings being generalizable to other similar European and North American cities. The data are based on the traffic model of the canton of Zurich (Gesamtverkehrsmodell des Kantons Zürich, GVM-ZH). To determine financial profitability, the demand from the simulation results is combined with an analysis of the cost of a SAEV per kilometer. The results demonstrate that, depending on the scenario, journeys can be offered profitably to customers for CHF 0.3 to CHF 0.4 per kilometer. While larger fleets allow lower price levels and increased profits in the long term, smaller fleets exhibit higher efficiency and profit opportunities per day. The paper concludes with recommendations on how fleet operators can prepare themselves to maximize profit in the autonomous future.

Keywords: autonomous vehicle, mobility on demand, traffic simulation, fleet provider

Procedia PDF Downloads 85
544 Rapid Detection of Cocaine Using Aggregation-Induced Emission and Aptamer Combined Fluorescent Probe

Authors: Jianuo Sun, Jinghan Wang, Sirui Zhang, Chenhan Xu, Hongxia Hao, Hong Zhou

Abstract:

In recent years, the diversification and industrialization of drug-related crimes have posed significant threats to public health and safety globally. The widespread and increasingly younger demographics of drug users and the persistence of drug-impaired driving incidents underscore the urgency of this issue. Drug detection, a specialized forensic activity, is pivotal in identifying and analyzing substances involved in drug crimes; it relies on pharmacological and chemical knowledge and employs analytical chemistry and modern detection techniques. However, current drug detection methods are limited by their inability to perform semi-quantitative, real-time field analyses: they require extensive, complex laboratory-based preprocessing, expensive equipment, and specialized personnel, and they are hindered by long processing times. This study introduces an alternative approach using nucleic acid aptamers and Aggregation-Induced Emission (AIE) technology. Nucleic acid aptamers, artificially selected for their specific binding to target molecules and their stable spatial structures, represent a new generation of biosensors following antibodies. Rapid advances in AIE technology, particularly in tetraphenylethene-based luminogens, offer simplicity in synthesis and versatility in modification, making them ideal for fluorescence analysis. This work successfully synthesized, isolated, and purified an AIE molecule and constructed a probe comprising the AIE molecule, a nucleic acid aptamer, and an exonuclease for cocaine detection. The probe demonstrated significant changes in relative fluorescence intensity and selectivity towards cocaine over other drugs. Using 4-Butoxytriethylammonium Bromide Tetraphenylethene (TPE-TTA) as the fluorescent probe, the aptamer as the recognition unit, and Exo I as an auxiliary, the system achieved rapid detection of cocaine within 5 min in aqueous solution and in urine, with detection limits of 1.0 and 5.0 µmol/L, respectively. The probe maintained stability and resistance to interference in urine, enabling quantitative cocaine detection within a certain concentration range. This fluorescent sensor significantly reduces sample preprocessing time, offers a basis for rapid on-site cocaine detection, and holds promise for miniaturized testing setups.

Keywords: drug detection, aggregation-induced emission (AIE), nucleic acid aptamer, exonuclease, cocaine

Procedia PDF Downloads 13
543 Screening of Lactobacilli and Bifidobacteria from Bangladeshi Indigenous Poultry for Their Potential Use as Probiotics

Authors: K. B. M. Islam, Syeeda Shiraj-Um-Mahmuda, Afroj Jahan, A. A. Bhuiyan

Abstract:

In Bangladesh, the use of imported probiotics in poultry is gradually increasing. Surprisingly, however, no probiotic bacteria have yet been isolated in Bangladesh, despite the existence of scavenging native poultry, a potential source that appears more resistant to gastrointestinal tract (GIT) infection as well as other diseases. Therefore, this study was undertaken to isolate, identify, and characterize potential probiotic Lactobacillus and Bifidobacterium strains from Bangladeshi indigenous poultry and to evaluate their suitability for use in the poultry industry. Crop and cecal samples from 61 healthy indigenous birds were used to isolate potential probiotic strains following conventional culture methods. A total of 216 isolates belonging to the genera Lactobacillus and Bifidobacterium were identified by physical, biochemical, and molecular methods. An auto-aggregation test was performed for 180 lactobacilli and 136 bifidobacteria isolates, respectively. Twelve lactobacilli isolates and 7 bifidobacteria isolates were selected for their favourable aggregation. In vitro tests, including antibacterial activity, resistance to low pH, and hemolytic activity, were performed to evaluate the probiotic potential of each strain. Under in vitro conditions and with respect to the probiotic traits, three lactobacilli (LS16, LS45, LS133) and two bifidobacteria (BS21, BS90) were found to be potential probiotic strains, and they are proposed for evaluation of their in vivo probiotic properties. If the proposed strains prove suitable as probiotics for use in the commercial poultry industry, these local probiotics are expected to be more beneficial and to save the large amount of money that Bangladesh spends every year on importing such materials from abroad.

Keywords: Bangladeshi poultry, gut microbiota, lactic acid bacteria, scavenging chicken, GIT health

Procedia PDF Downloads 263
542 Evaluation of Antagonistic and Aggregation Property of Probiotic Lactic Acid Bacteria Isolated from Bovine Milk

Authors: Alazar Nebyou, Sujata Pandit

Abstract:

Lactic acid bacteria (LAB) are essential components of probiotic foods, the intestinal microflora, and dairy products; they can cope with the harsh conditions of the gastrointestinal tract and are found in a variety of environments. The objective of this study is to evaluate the probiotic properties of LAB isolated from bovine milk. Milk samples were collected from local dairy farms in sterile test tubes and transported to the laboratory in an icebox for biochemical characterization. Preliminary physiological and biochemical identification of LAB isolates was conducted by growing them on MRS agar after ten-fold serial dilution. The seven best isolates were selected for evaluation of probiotic properties. The LAB isolates were checked for antibiotic resistance by disc diffusion assay and for antimicrobial activity by agar well diffusion assay. Bile salt hydrolase (BSH) activity was studied by growing the isolates in a BSH medium containing bile salt. Cell surface properties were assayed by determining the auto-aggregation percentage and the co-aggregation percentage with S. aureus. All isolates were found to be BSH positive. BCM2 and BGM1 were susceptible to all antibiotic discs, whereas BBM1 was resistant to all of them. BCM1 and BGM1 had the highest auto-aggregation and co-aggregation potential, respectively. Since all LAB isolates showed gastrointestinal tolerance and good cell surface properties, they can be considered good potential probiotic candidates for treatment and for the preparation of probiotic starter cultures.

Keywords: probiotic, aggregation, lactic acid bacteria, antimicrobial activity

Procedia PDF Downloads 169
541 A Combination of Anisotropic Diffusion and Sobel Operator to Enhance the Performance of the Morphological Component Analysis for Automatic Crack Detection

Authors: Ankur Dixit, Hiroaki Wagatsuma

Abstract:

Crack detection on concrete bridges is an important and recurring task in civil engineering. Conventionally, humans inspect the bridge for cracks to maintain its quality and reliability, but this process is long and costly. To overcome these limitations, we have used a drone with a digital camera to take images of the bridge deck, and these images are processed by morphological component analysis (MCA). MCA is a powerful application of sparse coding that explores the possibility of separating an image into components. In this paper, MCA is used to decompose the image into coarse and fine components by means of two dictionaries, namely anisotropic diffusion and the wavelet transform. Anisotropic diffusion is an adaptive smoothing process that adjusts the diffusion coefficient using the gray level and gradient as features. Cracks are enhanced by subtracting the diffused coarse image from the original image, and the result is treated with a Sobel edge detector and binary filtering to exhibit the cracks clearly. Our results demonstrate that the proposed MCA framework, using anisotropic diffusion followed by the Sobel operator and binary filtering, may contribute to the automation of crack detection even in severe open-field conditions such as bridge decks.
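The coarse/fine separation described above can be sketched on a synthetic deck image as follows. The Perona-Malik diffusion parameters and the binary threshold are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=30, kappa=300.0, gamma=0.15):
    """Perona-Malik smoothing: the conduction coefficient shrinks with gradient."""
    u = img.astype(float)
    c = lambda d: np.exp(-(d / kappa) ** 2)  # conduction from local gradient
    for _ in range(n_iter):
        dN = np.roll(u, -1, 0) - u
        dS = np.roll(u, 1, 0) - u
        dE = np.roll(u, -1, 1) - u
        dW = np.roll(u, 1, 1) - u
        u = u + gamma * (c(dN) * dN + c(dS) * dS + c(dE) * dE + c(dW) * dW)
    return u

def sobel_magnitude(img):
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = sum(kx[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return np.hypot(gx, gy)

# Toy "bridge deck": bright background with a dark vertical crack at column 15
deck = np.full((32, 32), 200.0)
deck[:, 15] = 40.0
coarse = anisotropic_diffusion(deck)          # smoothed coarse component
fine = deck - coarse                          # fine component carrying the crack
crack_map = sobel_magnitude(fine) > 50        # binary filtering
print(sorted(set(np.nonzero(crack_map)[1])))  # detected columns near the crack
```

With kappa this large the diffusion smooths the crack into the coarse component, so the residual fine component isolates it for the Sobel stage.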

Keywords: anisotropic diffusion, coarse component, fine component, MCA, Sobel edge detector, wavelet transform

Procedia PDF Downloads 141
540 Aggregation of Electric Vehicles for Emergency Frequency Regulation of Two-Area Interconnected Grid

Authors: S. Agheb, G. Ledwich, G. Walker, Z. Tong

Abstract:

Frequency control has become more of a concern for the reliable operation of interconnected power systems due to the integration of low-inertia, volatile renewable energy sources into the grid. Moreover, in case of a sudden fault, the system has less time to recover before widespread blackouts. Electric vehicles (EVs) have the potential to cooperate in Emergency Frequency Regulation (EFR) through nonlinear control of the power system in case of large disturbances. In emergencies there is not enough time to communicate with each individual EV, so an aggregate model is necessary for a quick response that prevents excessive frequency deviation and the occurrence of a blackout. In this work, an aggregate of EVs is modelled as a big virtual battery in each area, with various uncertain aspects, such as the number of connected EVs and their initial State of Charge (SOC), treated as stochastic variables. A control law is proposed and applied to the aggregate model using a Lyapunov energy function to maximize the rate of reduction of total kinetic energy in a two-area network after the occurrence of a fault. The control methods are primarily based on charging/discharging control of the available EVs as shunt capacity in the distribution system. Three different cases were studied to consider the locational aspect of the model, with the virtual EV either in the center of the two areas or in the corners. The simulation results showed that EVs can help the generator shed its kinetic energy in a short time after a contingency. Early estimation of the possible contributions of EVs can help the supervisory control level transmit a prompt control signal to subsystems such as the aggregator agents and the grid. The percentage of EV contribution to EFR will therefore be characterized in future work as the goal of this study.
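The Lyapunov idea can be illustrated with a deliberately simplified single-equivalent-area frequency model; all parameter values below are assumed for illustration and do not come from the paper's two-area network:

```python
import math

# Toy swing-equation model (the two-area network reduced to one equivalent
# inertia); parameters in per-unit, chosen for illustration only.
M, D = 6.0, 1.0          # inertia and load-damping constants
P_EV_MAX = 0.05          # aggregate EV charge/discharge limit
DP_LOAD = 0.1            # sudden load step (the contingency)
DT, T_END = 0.01, 10.0

def simulate(use_evs):
    """Return the trajectory of the frequency deviation w."""
    w, traj, t = 0.0, [], 0.0
    while t < T_END:
        # Lyapunov function V = 0.5*M*w^2 gives dV/dt = w*(p_ev - DP_LOAD - D*w);
        # picking p_ev opposite in sign to w makes dV/dt as negative as possible.
        p_ev = -P_EV_MAX * math.copysign(1.0, w) if use_evs and w != 0 else 0.0
        w += DT * (-DP_LOAD + p_ev - D * w) / M
        traj.append(w)
        t += DT
    return traj

nadir_no_ev = min(simulate(False))
nadir_ev = min(simulate(True))
print(nadir_no_ev, nadir_ev)  # EV support yields a shallower frequency nadir
```

The bang-bang sign law is the crudest Lyapunov-decreasing choice; the paper's controller additionally accounts for SOC limits, stochastic fleet size, and the two-area coupling.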

Keywords: emergency frequency regulation, electric vehicle, EV, aggregation, Lyapunov energy function

Procedia PDF Downloads 63
539 Robust Numerical Method for Singularly Perturbed Semilinear Boundary Value Problem with Nonlocal Boundary Condition

Authors: Habtamu Garoma Debela, Gemechis File Duressa

Abstract:

In this work, our primary interest is to provide ε-uniformly convergent numerical techniques for solving singularly perturbed semilinear boundary value problems with a non-local boundary condition. These singular perturbation problems are described by differential equations in which the highest-order derivative is multiplied by an arbitrarily small parameter ε, known as the singular perturbation parameter. This leads to the existence of boundary layers, narrow regions in the neighborhood of the boundary of the domain where the gradient of the solution becomes steep as the perturbation parameter tends to zero. Because of these layer phenomena, it is a challenging task to provide ε-uniform numerical methods. The term 'ε-uniform' refers to numerical methods in which the approximate solution converges to the corresponding exact solution (measured in the supremum norm) independently of the perturbation parameter ε. Thus, the purpose of this work is to develop, analyze, and improve ε-uniform numerical methods for solving singularly perturbed problems. These methods are based on a nonstandard fitted finite difference method. The basic idea behind the fitted operator finite difference method is to replace the denominator functions of the classical derivatives with positive functions derived in such a way that they capture notable properties of the governing differential equation. A uniformly convergent numerical method is constructed via the nonstandard fitted operator method combined with numerical integration, and the non-local boundary condition is likewise treated using numerical integration techniques. Additionally, the Richardson extrapolation technique, which improves the first-order accuracy of the standard scheme to second-order convergence, is applied to singularly perturbed convection-diffusion problems using the proposed numerical method. Maximum absolute errors and rates of convergence for different values of the perturbation parameter and mesh sizes are tabulated for the numerical example considered, and the method is shown to be ε-uniformly convergent. Finally, extensive numerical experiments are conducted that support all of our theoretical findings. A concise conclusion is provided at the end of this work.
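Richardson extrapolation, used above to lift a first-order scheme to second order, works by combining results on two mesh sizes so that the leading error term cancels. A generic illustration on a first-order forward difference (not the paper's fitted-operator scheme):

```python
import math

f = math.exp  # test function; its derivative at x is also exp(x)

def forward_diff(x, h):
    """First-order accurate derivative approximation: error O(h)."""
    return (f(x + h) - f(x)) / h

def richardson(x, h):
    # D(h) = f'(x) + a*h + b*h^2 + ...  so the combination
    # 2*D(h/2) - D(h) = f'(x) - (b/2)*h^2 + ... is second-order accurate.
    return 2 * forward_diff(x, h / 2) - forward_diff(x, h)

x, exact = 1.0, math.e
for h in (0.1, 0.05, 0.025):
    e1 = abs(forward_diff(x, h) - exact)
    e2 = abs(richardson(x, h) - exact)
    print(f"h={h:6.3f}  first-order err={e1:.2e}  extrapolated err={e2:.2e}")
```

Halving h roughly halves the first-order error but quarters the extrapolated one, which is the behaviour the tabulated convergence rates in the paper verify for the fitted scheme.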

Keywords: nonlocal boundary condition, nonstandard fitted operator, semilinear problem, singular perturbation, uniformly convergent

Procedia PDF Downloads 107
538 A Palmprint Identification System Based Multi-Layer Perceptron

Authors: David P. Tantua, Abdulkader Helwan

Abstract:

Biometrics has recently been used in human identification systems based on biological traits such as fingerprints and iris scans. Identification systems based on biometrics show great efficiency and accuracy in such human identification applications. However, these systems have so far relied on image processing techniques alone, which may limit the efficiency of such applications. Thus, this paper aims to develop a human palmprint identification system using a multi-layer perceptron neural network, which has the capability to learn via the backpropagation learning algorithm. The developed system uses images obtained from a public database available on the internet (CASIA). The processing pipeline is as follows: image filtering using a median filter, image adjustment, image skeletonizing, edge detection using the Canny operator to extract features, and removal of unwanted components of the image. The second phase feeds the processed images into a neural network classifier, which adaptively learns and creates a class for each distinct image. 100 different images are used for training the system. Since this is an identification system, it should be tested with the same images; therefore, the same 100 images are used for testing, and any image outside the training set should be unrecognized. The experimental results show that the developed system achieves 100% accuracy and can be implemented in real-life applications.
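The classifier stage described above, a multi-layer perceptron trained by backpropagation, can be sketched with a minimal NumPy network. The architecture, learning rate, and XOR toy data below are illustrative assumptions standing in for the paper's palmprint feature vectors, not its actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for feature vectors extracted from palmprint images:
# XOR, a minimal problem that requires a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2-8-1 multi-layer perceptron with sigmoid activations.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backpropagation of the squared error (constant factors
    # folded into the learning rate).
    d_out = (out - y) * out * (1 - out)   # gradient at output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at hidden pre-activation
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In the paper's pipeline, the network inputs would be the skeletonized, Canny-edge-detected palmprint images rather than these two-dimensional toy vectors, with one output class per enrolled palm.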

Keywords: biometrics, biological traits, multi-layer perceptron neural network, image skeletonizing, edge detection using canny operator

Procedia PDF Downloads 327
537 A QoS Aware Cluster Based Routing Algorithm for Wireless Mesh Network Using LZW Lossless Compression

Authors: J. S. Saini, P. P. K. Sandhu

Abstract:

The multi-hop nature of Wireless Mesh Networks and the hasty progression of throughput demands result in multi-channel and multi-radio structures in mesh networks, but the problem of co-channel interference reduces the total throughput, specifically in multi-hop networks. Quality of Service (QoS) refers to a vast collection of networking technologies and techniques that guarantee the ability of a network to make desired services available with predictable results. QoS can be directed at a network interface, towards a specific server's or router's performance, or at specific applications. Due to interference among various transmissions, QoS routing in multi-hop wireless networks is a formidable task; in a multi-channel wireless network, two transmissions using the same channel may interfere with each other. This paper considers the Destination Sequenced Distance Vector (DSDV) routing protocol to locate a secure and optimised path. The proposed technique also utilizes Lempel–Ziv–Welch (LZW) based lossless data compression and intra-cluster data aggregation to enhance communication between the source and the destination. Clustering makes it possible to aggregate multiple packets and to locate a single route using the clusters, improving intra-cluster data aggregation. The LZW-based lossless data compression reduces the data packet size and hence consumes less energy, thus increasing the network QoS. The MATLAB tool has been used to evaluate the effectiveness of the projected technique. The comparative analysis has shown that the proposed technique outperforms the existing techniques.
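The LZW stage of the pipeline can be illustrated with a minimal dictionary-based implementation; this is the generic textbook form of LZW, not the authors' integration with DSDV clustering, and it works on strings for readability where a mesh node would compress byte payloads.

```python
def lzw_compress(data):
    """LZW: grow a dictionary of seen substrings; emit one code per
    longest dictionary match."""
    dictionary = {chr(i): i for i in range(256)}
    dict_size = 256
    w, result = "", []
    for c in data:
        wc = w + c
        if wc in dictionary:
            w = wc
        else:
            result.append(dictionary[w])
            dictionary[wc] = dict_size
            dict_size += 1
            w = c
    if w:
        result.append(dictionary[w])
    return result

def lzw_decompress(codes):
    """Rebuild the same dictionary on the fly from the code stream."""
    dictionary = {i: chr(i) for i in range(256)}
    dict_size = 256
    w = chr(codes[0])
    result = [w]
    for k in codes[1:]:
        if k in dictionary:
            entry = dictionary[k]
        elif k == dict_size:          # the one special LZW corner case
            entry = w + w[0]
        else:
            raise ValueError("bad LZW code")
        result.append(entry)
        dictionary[dict_size] = w + entry[0]
        dict_size += 1
        w = entry
    return "".join(result)

packet = "TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(packet)
print(len(packet), "symbols ->", len(codes), "codes")
```

Fewer codes than input symbols means fewer bits on the air per aggregated packet, which is the energy argument the abstract makes.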

Keywords: WMNs, QoS, flooding, collision avoidance, LZW, congestion control

Procedia PDF Downloads 300
536 Design and Optimization of Open Loop Supply Chain Distribution Network Using Hybrid K-Means Cluster Based Heuristic Algorithm

Authors: P. Suresh, K. Gunasekaran, R. Thanigaivelan

Abstract:

Radio frequency identification (RFID) technology has been attracting considerable attention with the expectation of improved supply chain visibility for consumer goods, apparel, and pharmaceutical manufacturers, as well as retailers and government procurement agencies. It is also expected to improve the consumer shopping experience by making it more likely that the products they want to purchase are available. Recent announcements from some key retailers have brought interest in RFID to the forefront. A modified K-Means Cluster based Heuristic approach, a Hybrid Genetic Algorithm (GA) - Simulated Annealing (SA) approach, a Hybrid K-Means Cluster based Heuristic-GA, and a Hybrid K-Means Cluster based Heuristic-GA-SA for the Open Loop Supply Chain Network problem are proposed. The study incorporated the uniform crossover operator and the combined crossover operator in the GAs for solving the open loop supply chain distribution network problem. The algorithms are tested on 50 randomly generated data sets and compared with each other. The results of the numerical experiments show that the Hybrid K-Means Cluster based Heuristic-GA-SA offers superior performance to the other methods for solving the open loop supply chain distribution network problem.
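The uniform crossover operator the study incorporates into its GAs can be sketched as follows. This is the generic binary-string form: each gene position is exchanged between the two parents independently with some probability. Permutation-encoded routing chromosomes would additionally need a repair step, which is omitted here.

```python
import random

def uniform_crossover(p1, p2, swap_prob=0.5, rng=None):
    """Uniform crossover: swap each gene position between the two
    parents independently with probability swap_prob."""
    rng = rng or random.Random()
    c1, c2 = list(p1), list(p2)
    for i in range(len(c1)):
        if rng.random() < swap_prob:
            c1[i], c2[i] = c2[i], c1[i]
    return c1, c2

rng = random.Random(42)
parent1 = [1, 1, 1, 1, 1, 1, 1, 1]
parent2 = [0, 0, 0, 0, 0, 0, 0, 0]
child1, child2 = uniform_crossover(parent1, parent2, rng=rng)
print(child1)
print(child2)
```

Unlike one-point crossover, every position is an independent coin flip, so building blocks at distant positions mix freely; by construction each child's gene at position i comes from one of the two parents at that same position.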

Keywords: RFID, supply chain distribution network, open loop supply chain, genetic algorithm, simulated annealing

Procedia PDF Downloads 117
535 Multiperson Drone Control with Seamless Pilot Switching Using Onboard Camera and Openpose Real-Time Keypoint Detection

Authors: Evan Lowhorn, Rocio Alba-Flores

Abstract:

Traditional classification Convolutional Neural Networks (CNN) attempt to classify an image in its entirety. This becomes problematic when trying to perform classification with a drone’s camera in real-time due to unpredictable backgrounds. Object detectors with bounding boxes can be used to isolate individuals and other items, but the original backgrounds remain within these boxes. These basic detectors have been regularly used to determine what type of object an item is, such as “person” or “dog.” Recent advancement in computer vision, particularly with human imaging, is keypoint detection. Human keypoint detection goes beyond bounding boxes to fully isolate humans and plot points, or Regions of Interest (ROI), on their bodies within an image. ROIs can include shoulders, elbows, knees, heads, etc. These points can then be related to each other and used in deep learning methods such as pose estimation. For drone control based on human motions, poses, or signals using the onboard camera, it is important to have a simple method for pilot identification among multiple individuals while also giving the pilot fine control options for the drone. To achieve this, the OpenPose keypoint detection network was used with body and hand keypoint detection enabled. OpenPose supports the ability to combine multiple keypoint detection methods in real-time with a single network. Body keypoint detection allows simple poses to act as the pilot identifier. The hand keypoint detection with ROIs for each finger can then offer a greater variety of signal options for the pilot once identified. For this work, the individual must raise their non-control arm to be identified as the operator and send commands with the hand on their other arm. The drone ignores all other individuals in the onboard camera feed until the current operator lowers their non-control arm. 
When another individual wishes to operate the drone, they simply raise their arm once the current operator relinquishes control, and they can then begin controlling the drone with their other hand. This is all performed mid-flight, with no landing or script editing required. When using a desktop with a discrete NVIDIA GPU, the drone's 2.4 GHz Wi-Fi connection, combined with restricting OpenPose to only body and hand detection, allows this control method to perform as intended while maintaining the responsiveness required for practical use.
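The pilot-switching logic above can be sketched as a small state machine over per-person keypoints. The keypoint names and the raised-arm test below are illustrative assumptions, not the authors' exact OpenPose post-processing; in OpenPose-style image coordinates, y grows downward, so a raised wrist has a smaller y than the shoulder.

```python
def arm_raised(person):
    """A wrist keypoint above the same-side shoulder (smaller y in
    image coordinates) counts as a raised arm."""
    return (person["l_wrist"][1] < person["l_shoulder"][1]
            or person["r_wrist"][1] < person["r_shoulder"][1])

def update_operator(current_id, people):
    """Keep the current operator while their arm stays raised; otherwise
    hand control to the first person now raising an arm, if any."""
    if current_id in people and arm_raised(people[current_id]):
        return current_id
    for pid, person in people.items():
        if arm_raised(person):
            return pid
    return None

# Keypoints as (x, y) pixels, y growing downward.
alice_up = {"l_wrist": (100, 50), "l_shoulder": (110, 120),
            "r_wrist": (160, 200), "r_shoulder": (150, 120)}
alice_down = {"l_wrist": (100, 200), "l_shoulder": (110, 120),
              "r_wrist": (160, 200), "r_shoulder": (150, 120)}
bob_up = {"l_wrist": (300, 40), "l_shoulder": (310, 130),
          "r_wrist": (360, 210), "r_shoulder": (350, 130)}

op = update_operator(None, {"alice": alice_up, "bob": bob_up})
print(op)  # alice claims control first
op = update_operator(op, {"alice": alice_down, "bob": bob_up})
print(op)  # alice lowers her arm, control passes to bob
```

Commands from everyone except the returned operator id would be ignored each frame, which is the "ignores all other individuals" behaviour the abstract describes.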

Keywords: computer vision, drone control, keypoint detection, openpose

Procedia PDF Downloads 144
534 The Future Control Rooms for Sustainable Power Systems: Current Landscape and Operational Challenges

Authors: Signe Svensson, Remy Rey, Anna-Lisa Osvalder, Henrik Artman, Lars Nordström

Abstract:

The electric power system is undergoing significant changes. As a result, operation and control are becoming partly modified, more multifaceted, and more automated, and supplementary operator skills might be required. This paper discusses the operational challenges developing in future power system control rooms, posed by the evolving landscape of sustainable power systems, driven in turn by the shift towards electrification and renewable energy sources. Through a literature review, followed by interviews and a comparison with related domains of similar characteristics, a descriptive analysis was performed from a human factors perspective. The analysis is meant to identify trends, relationships, and challenges. A power control domain taxonomy includes a temporal domain (planning and real-time operation) and three operational domains within the power system (generation, switching and balancing). Within each operational domain, there are different control actions, either in the planning stage or in real-time operation, that affect the overall operation of the power system. In addition to the temporal dimension, the control domains are divided in space between a multitude of different actors distributed across many different locations. A control room is a central location where different types of information are monitored and controlled, alarms are responded to, and deviations are handled by the control room operators. The operators' competencies, teamwork skills and team shift patterns, as well as control system designs, are all important factors in ensuring efficient and safe electricity grid management. As the power system evolves with sustainable energy technologies, challenges arise. Questions are raised regarding whether the operators' tacit knowledge, experience and operational skills of today are sufficient to make constructive decisions to solve modified and new control tasks, especially during disturbed operations or abnormalities.
Which new skills need to be developed in planning and real-time operation to provide efficient generation and delivery of energy through the system? How should the user interfaces be developed to assist operators in processing the increasing amount of information? Are some skills at risk of being lost when the systems change? How should the physical environment and collaborations between different stakeholders within and outside the control room develop to support operator control? To conclude, the system change will provide many benefits related to electrification and renewable energy sources, but it is important to address the operators’ challenges with increasing complexity. The control tasks will be modified, and additional operator skills are needed to perform efficient and safe operations. Also, the whole human-technology-organization system needs to be considered, including the physical environment, the technical aids and the information systems, the operators’ physical and mental well-being, as well as the social and organizational systems.

Keywords: operator, process control, energy system, sustainability, future control room, skill

Procedia PDF Downloads 30
533 Dissolved Organic Nitrogen in Antibiotic Production Wastewater Treatment Plant Effluents

Authors: Ahmed Y. Kutbi, C. Russell. J. Baird, M. McNaughtan, Francis Wayman

Abstract:

Wastewaters from antibiotic production facilities are characterized by high concentrations of dissolved organic substances. Consequently, wastewater treatment plant (WWTP) operators are challenged to achieve successful biological treatment and to meet regulatory emission levels. Of the dissolved organic substances, this research investigates the fate of organic nitrogenous compounds (i.e., chitin) in an antibiotic production wastewater treatment plant located in Irvine, Scotland, and their impact on the WWTP's removal performance. Dissolved organic nitrogen (DON) in WWTP effluents is of significance because of 1) its potential to cause eutrophication in receiving waters, 2) the formation of nitrogenous disinfection by-products in drinking water, and 3) its limiting of WWTPs' ability to achieve very low total nitrogen (TN) emission limits (5-25 mg/l). The latter point is where the knowledge gap lies between the operator and the regulator in setting viable TN emission levels. The samples collected from the Irvine site at different stages of the treatment were analyzed for TN and DON. Results showed that the average TN in the WWTP influents and effluents is 798 and 261 mg/l, respectively; in other words, the plant achieved 67% removal of TN. DON represented 51% of the influent TN, while in the effluents it accounted for 26% of the TN concentration. Therefore, an ongoing investigation is being carried out to identify the DON constituents in the WWTP effluent and evaluate their impact on the WWTP performance and their potential bioavailability for algae in the receiving waters, which are, in this case, Irvine Bay.
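The reported figures are mutually consistent; a quick arithmetic check using the averages quoted in the abstract:

```python
influent_tn, effluent_tn = 798.0, 261.0   # mg/l, reported averages
removal = 100.0 * (influent_tn - effluent_tn) / influent_tn

influent_don = 0.51 * influent_tn   # DON was 51% of influent TN
effluent_don = 0.26 * effluent_tn   # and 26% of effluent TN

print(f"TN removal: {removal:.0f}%")                 # 67%
print(f"DON in:  {influent_don:.0f} mg/l")           # ~407 mg/l
print(f"DON out: {effluent_don:.0f} mg/l")           # ~68 mg/l
```

Note that even the residual DON of roughly 68 mg/l in the effluent is well above the 5-25 mg/l TN limits cited, which is the operator-regulator gap the abstract describes.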

Keywords: biological wastewater treatment plant, dissolved organic nitrogen, bio-availability, Irvine Bay

Procedia PDF Downloads 214
532 Development of a Mixed-Reality Hands-Free Teleoperated Robotic Arm for Construction Applications

Authors: Damith Tennakoon, Mojgan Jadidi, Seyedreza Razavialavi

Abstract:

With recent advancements of automation in robotics, from self-driving cars to autonomous 4-legged quadrupeds, one industry that has been stagnant is the construction industry. The methodologies used in a modern-day construction site consist of arduous physical labor and the use of heavy machinery, which has not changed over the past few decades. The dangers of a modern-day construction site affect the health and safety of the workers due to performing tasks such as lifting and moving heavy objects and having to maintain unhealthy posture to complete repetitive tasks such as painting, installing drywall, and laying bricks. Further, training for heavy machinery is costly and requires a lot of time due to their complex control inputs. The main focus of this research is using immersive wearable technology and robotic arms to perform the complex and intricate skills of modern-day construction workers while alleviating the physical labor requirements to perform their day-to-day tasks. The methodology consists of mounting a stereo vision camera, the ZED Mini by Stereolabs, onto the end effector of an industrial grade robotic arm, streaming the video feed into the Virtual Reality (VR) Meta Quest 2 (Quest 2) head-mounted display (HMD). Due to the nature of stereo vision, and the similar field-of-views between the stereo camera and the Quest 2, human-vision can be replicated on the HMD. The main advantage this type of camera provides over a traditional monocular camera is it gives the user wearing the HMD a sense of the depth of the camera scene, specifically, a first-person view of the robotic arm’s end effector. Utilizing the built-in cameras of the Quest 2 HMD, open-source hand-tracking libraries from OpenXR can be implemented to track the user’s hands in real-time. A mixed-reality (XR) Unity application can be developed to localize the operator's physical hand motions with the end-effector of the robotic arm. 
Implementing gesture controls will enable the user to move the robotic arm and control its end effector by moving their own arm and providing gesture inputs from a distant location. Given that the end effector of the robotic arm is a gripper tool, closing or opening the operator's hand translates to the robot arm's gripper grabbing or releasing an object. This human-robot interaction approach provides many benefits within the construction industry. First, the operator's safety will be increased substantially, as they can be away from the site while still being able to perform complex tasks such as moving heavy objects from place to place, or repetitive tasks such as painting walls and laying bricks. The immersive interface enables precise robotic arm control and requires minimal training and knowledge of robotic arm manipulation, which lowers the cost of operator training. This human-robot interface can be extended to many applications, such as nuclear accident and waste cleanup, underwater repairs, deep space missions, and manufacturing and fabrication within factories. Further, the robotic arm can be mounted onto existing mobile robots to provide access to hazardous environments, including power plants, burning buildings, and high-altitude repair sites.
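The hand-to-gripper mapping can be sketched from tracked hand keypoints. The openness metric (mean wrist-to-fingertip distance) and the threshold below are illustrative assumptions, not the authors' OpenXR implementation; coordinates are in metres in the headset's tracking frame.

```python
import math

def hand_openness(wrist, fingertips):
    """Mean wrist-to-fingertip distance: a simple proxy for how open
    the tracked hand is."""
    return sum(math.dist(wrist, tip) for tip in fingertips) / len(fingertips)

def gripper_command(wrist, fingertips, grip_threshold=0.08):
    """Map the tracked hand to a gripper command: a closed fist
    (fingertips near the wrist) closes the gripper; an open hand
    releases it."""
    return "close" if hand_openness(wrist, fingertips) < grip_threshold else "open"

wrist = (0.0, 0.0, 0.0)
open_tips = [(0.12, 0.02, 0.0), (0.13, 0.0, 0.01), (0.12, -0.02, 0.0),
             (0.11, -0.04, 0.0), (0.09, 0.05, 0.0)]
fist_tips = [(0.05, 0.01, 0.0), (0.05, 0.0, 0.0), (0.05, -0.01, 0.0),
             (0.04, -0.02, 0.0), (0.05, 0.02, 0.0)]

print(gripper_command(wrist, open_tips))  # open
print(gripper_command(wrist, fist_tips))  # close
```

In a full system this classification would be debounced over several frames before a command is sent to the arm, so that tracking jitter near the threshold does not toggle the gripper.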

Keywords: construction automation, human-robot interaction, hand-tracking, mixed reality

Procedia PDF Downloads 26
531 Pharmacogenetics of P2Y12 Receptor Inhibitors

Authors: Ragy Raafat Gaber Attaalla

Abstract:

For cardiovascular illness, oral P2Y12 inhibitors including clopidogrel, prasugrel, and ticagrelor are frequently recommended. Each of these medications has advantages and disadvantages. In the absence of genotyping, it has been demonstrated that the stronger platelet aggregation inhibitors prasugrel and ticagrelor are superior to clopidogrel at preventing significant adverse cardiovascular events following an acute coronary syndrome and percutaneous coronary intervention (PCI). Both, nevertheless, come with a higher risk of non-coronary-artery-bypass-related bleeding. As a prodrug, clopidogrel needs to be bioactivated, principally by the CYP2C19 enzyme. A CYP2C19 no-function allele, with diminished or absent CYP2C19 enzyme activity, is present in about 30% of people. The reduced exposure to the active metabolite of clopidogrel, and the reduced inhibition of platelet aggregation, among clopidogrel-treated carriers of a CYP2C19 no-function allele likely contributed to the reduced efficacy of clopidogrel in clinical trials. The pharmacogenetic evidence for clopidogrel is strongest in conjunction with PCI, but evidence for other indications is growing. One of the most typical examples of clinical pharmacogenetic application is CYP2C19 genotype-guided antiplatelet medication following PCI. Guidance is available from expert consensus groups and regulatory bodies to assist with incorporating genetic information into P2Y12 inhibitor prescribing decisions. Here, we examine the data supporting the effects of genotype-guided P2Y12 inhibitor selection on clopidogrel response and outcomes, and discuss tips for pharmacogenetic implementation. We also discuss procedures for using genotype data to choose P2Y12 inhibitor therapies, as well as unmet research needs. Finally, choosing a P2Y12 inhibitor that optimally balances the atherothrombotic and bleeding risks may be influenced by both clinical and genetic factors.
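The genotype-guided selection logic can be sketched as a simple lookup. The allele classification and recommendations below are a deliberately simplified illustration of CPIC-style guidance (treating *2 and *3 as the common no-function alleles), not prescribing advice; real guidance also covers increased-function alleles and clinical context.

```python
# Simplified assumption: *2 and *3 are the common CYP2C19
# no-function alleles discussed in the abstract.
NO_FUNCTION = {"*2", "*3"}

def cyp2c19_phenotype(allele1, allele2):
    """Count no-function alleles in the diplotype."""
    n = sum(a in NO_FUNCTION for a in (allele1, allele2))
    return {0: "normal metabolizer",
            1: "intermediate metabolizer",
            2: "poor metabolizer"}[n]

def p2y12_recommendation(allele1, allele2):
    """Carriers of a no-function allele get an alternative agent whose
    activation does not depend on CYP2C19."""
    if cyp2c19_phenotype(allele1, allele2) == "normal metabolizer":
        return "clopidogrel"
    return "prasugrel or ticagrelor"

print(p2y12_recommendation("*1", "*1"))   # clopidogrel
print(p2y12_recommendation("*1", "*2"))   # prasugrel or ticagrelor
```

The point of the sketch is the shape of the decision, genotype to phenotype to drug choice, which is what clinical decision-support systems encode when genotype results are filed in the electronic health record.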

Keywords: inhibitors, cardiovascular events, coronary intervention, pharmacogenetic implementation

Procedia PDF Downloads 74
530 Cache Analysis and Software Optimizations for Faster on-Chip Network Simulations

Authors: Khyamling Parane, B. M. Prabhu Prasad, Basavaraj Talawar

Abstract:

Fast simulations are critical in reducing time to market for CMPs and SoCs. Several simulators have been used to evaluate the performance and power consumption of Networks-on-Chip, and researchers and designers rely upon these simulators for design space exploration of NoC architectures. Our experiments show that simulating large NoC topologies takes hours to several days. To speed up the simulations, it is necessary to investigate and optimize the hotspots in the simulator source code. Among the several simulators available, we chose Booksim2.0, as it is extensively used in the NoC community. In this paper, we analyze the cache and memory system behaviour of Booksim2.0 to accurately monitor input-dependent performance bottlenecks. Our measurements show that cache and memory usage patterns vary widely based on the input parameters given to Booksim2.0. Based on these measurements, the cache configuration with the fewest misses has been identified. To further reduce cache misses, we use software optimization techniques such as removal of unused functions, loop interchange, and replacing the post-increment operator with the pre-increment operator for non-primitive data types. The cache misses were reduced by 18.52%, 5.34% and 3.91% by employing the above techniques, respectively. We also employ thread parallelization and vectorization to improve the overall performance of Booksim2.0. The OpenMP programming model and SIMD are used for parallelizing and vectorizing the more time-consuming portions of Booksim2.0. Speedups of 2.93x and 3.97x were observed for the Mesh topology with a 30 × 30 network size by employing thread parallelization and vectorization, respectively.

Keywords: cache behaviour, network-on-chip, performance profiling, vectorization

Procedia PDF Downloads 159
529 Performance Evaluation and Plugging Characteristics of Controllable Self-Aggregating Colloidal Particle Profile Control Agent

Authors: Zhiguo Yang, Xiangan Yue, Minglu Shao, Yue Yang, Rongjie Yan

Abstract:

It is difficult to realize deep profile control in low-permeability heterogeneous reservoirs because of small pore-throats and easy water channeling, and traditional polymer microspheres suffer a contradiction between injectability and plugging. To resolve this contradiction, controllable self-aggregating colloidal particles (CSA) bearing amide groups on the microsphere surface were prepared by emulsion polymerization of styrene and acrylamide. The dispersed solution of CSA colloidal particles, whose particle size is much smaller than the diameter of the pore-throats, was injected into the reservoir. When the microspheres migrated to the deep part of the reservoir, the CSA colloidal particles could automatically self-aggregate into large particle clusters under the action of the shielding agent and the control agent, so as to plug the water channels. In this paper, the morphology, temperature resistance and self-aggregation properties of CSA microspheres were studied by transmission electron microscopy (TEM) and bottle tests. The results showed that CSA microspheres exhibit a heterogeneous core-shell structure, good dispersion, and outstanding thermal stability; the microspheres remain regular, uniform spheres at 100℃ after aging for 35 days. With increasing cation concentration, the self-aggregation time of CSA gradually shortened, and the influence of bivalent cations was greater than that of monovalent cations. Core flooding experiments showed that CSA polymer microspheres have good injection properties, and that CSA particle clusters can effectively plug the water channels and migrate to the deep part of the reservoir for profile control.

Keywords: heterogeneous reservoir, deep profile control, emulsion polymerization, colloidal particles, plugging characteristic

Procedia PDF Downloads 183
528 Proinflammatory Response of Agglomerated TiO2 Nanoparticles in Human-Immune Cells

Authors: Vaiyapuri Subbarayn Periasamy, Jegan Athinarayanan, Ali A. Alshatwi

Abstract:

Titanium oxide nanoparticles (TiO2-NPs) are now in widespread use and are found, with different physico-chemical properties (size, shape, chemical properties, agglomeration, etc.), in many processed foods, agricultural chemicals, biomedical products, food packaging and food contact materials, personal care products, and other consumer products used in daily life. Growing evidence highlights the risk of physico-chemical-property-dependent toxicity, with special attention to TiO2-NPs and the human immune system. Unfortunately, agglomeration and aggregation have frequently been ignored in immunotoxicological studies, even though they would be expected to affect nanotoxicity, since they change the size, shape, surface area, and other properties of the TiO2-NPs. In the present investigation, we assessed the immunotoxic effect of TiO2-NPs on human immune cells, namely total WBC including lymphocytes (T cells (CD3+), T helper cells (CD3+, CD4+), suppressor/cytotoxic T cells (CD3+/CD8+) and NK cells (CD3-/CD16+ and CD56+)), monocytes (CD14+, CD3-) and B lymphocytes (CD19+, CD3-), in order to determine the immunological response (IL1A, IL1B, IL2, IL-4, IL5, IL-6, IL-10, IL-12, IL-13, IFN-γ, TGF-β, and TNF-α) and redox gene regulation (p53, BCL-2, CAT, GSTA4, TNF, CYP1A, POR, SOD1, GSTM3, GPX1, and GSR1), linking physicochemical properties, with special reference to agglomeration, of TiO2-NPs. Our findings suggest that TiO2-NPs altered cytokine production, enhanced the phagocytic index and metabolic stress through specific immune-regulatory gene expression in different WBC subsets, and may contribute to a pro-inflammatory response. Although TiO2-NPs have great advantages in personal care, biomedical, food and agricultural products, their chronic and acute immunotoxicity still needs to be assessed carefully with special reference to food and environmental safety.

Keywords: TiO2 nanoparticles, oxidative stress, cytokine, human immune cells

Procedia PDF Downloads 362
527 Multiscale Entropy Analysis of Electroencephalogram (EEG) of Alcoholic and Control Subjects

Authors: Lal Hussain, Wajid Aziz, Imtiaz Ahmed Awan, Sharjeel Saeed

Abstract:

Multiscale entropy (MSE) analysis is a useful technique recently developed to quantify the dynamics of physiological signals at different time scales. This study is aimed at investigating electroencephalogram (EEG) signals to analyze the background activity of alcoholic and control subjects by inspecting the coarse-grained sequences formed at different time scales. EEG recordings of alcoholic and control subjects, acquired using 64 electrodes, were taken from the publicly available machine learning repository of the University of California, Irvine (UCI). The MSE analysis was performed on the EEG data acquired from all the electrodes of alcoholic and control subjects. The Mann-Whitney rank test was used to find significant differences between the groups, and results were considered statistically significant for p-values < 0.05. The area under the receiver operator curve was computed to find the degree of separation between the groups. The mean ranks of MSE values at all time scales for all electrodes were higher for control subjects than for alcoholic subjects; higher mean ranks represent higher complexity, and vice versa. The findings indicated that EEG signals acquired through electrodes C3, C4, F3, F7, F8, O1, O2, P3 and T7 showed significant differences between alcoholic and control subjects at time scales 1 to 5. Moreover, all electrodes exhibit significance at different time scales. Likewise, the highest accuracy and separation were obtained at the central region (C3 and C4) and at electrodes P3, O1, F3, F7, F8 and T8, while other electrodes such as Fp1, Fp2, P4 and F4 show no significant results.
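The MSE procedure (coarse-graining followed by sample entropy at each scale) can be sketched as follows. The parameter choices m = 2 and r = 0.2 times the standard deviation are the conventional defaults, not necessarily those used in the study.

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping window means: the coarse-graining step of MSE."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=None):
    """Sample entropy: -ln(A/B), where B counts template pairs of length m
    within tolerance r (Chebyshev distance) and A those of length m+1."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count_matches(mm):
        templ = np.array([x[i : i + mm] for i in range(len(x) - mm)])
        total = 0
        for i in range(len(templ)):
            d = np.max(np.abs(templ[i + 1 :] - templ[i]), axis=1)
            total += int(np.sum(d <= r))
        return total
    B, A = count_matches(m), count_matches(m + 1)
    return np.inf if A == 0 or B == 0 else -np.log(A / B)

regular = np.array([1.0, 2.0] * 50)               # perfectly periodic signal
print(coarse_grain(np.arange(1.0, 7.0), 2))       # window means at scale 2
print(f"SampEn(regular) = {sample_entropy(regular):.3f}")  # near zero
```

An MSE curve is then just sample_entropy(coarse_grain(x, s)) for s = 1, 2, 3, ...; in the study this curve, computed per electrode, is what the Mann-Whitney test compares between groups.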

Keywords: electroencephalogram (EEG), multiscale sample entropy (MSE), Mann-Whitney test (MMT), Receiver Operator Curve (ROC), complexity analysis

Procedia PDF Downloads 342
526 Timetabling for Interconnected LRT Lines: A Package Solution Based on a Real-world Case

Authors: Huazhen Lin, Ruihua Xu, Zhibin Jiang

Abstract:

In this real-world case, timetabling the LRT network as a whole is rather challenging for the operator: they are supposed to manually create a timetable that avoids various route conflicts while satisfying a given interval and number of rolling stocks, but the outcome is not satisfying. The operator therefore adopts a computerised timetabling tool, the Train Plan Maker (TPM), to cope with this problem. However, with the various constraints in the dual-line network, it is still difficult to find an adequate pairing of turnback time, interval and number of rolling stocks, which requires extra manual intervention. Aiming at these problems, a one-off model for timetabling is presented in this paper to simplify the timetabling procedure. Before the timetabling procedure starts, the paper shows how the dual-line system, with a ring and several branches, is turned into a simpler structure. Then, a non-linear programming model is presented in two stages. In the first stage, the model sets a series of constraints aiming to calculate a proper timing for coordinating the two lines by adjusting the turnback time at termini. Then, based on the result of the first stage, the model introduces a series of inequality constraints to avoid various route conflicts. With this model, an analysis is conducted to reveal the relation between the ratio of trains in different directions and the possible minimum interval, observing that the more imbalanced the ratio is, the less possible it is to provide frequent service under such strict constraints.
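The flavour of the second-stage inequality constraints can be illustrated with a minimal headway check at a shared junction: two lines conflict if any pair of their junction passages is closer than the minimum headway. The fixed intervals, offsets, and headway below are toy values, not the case study's data.

```python
def departure_times(offset, interval, horizon):
    """Times (minutes) at which a line's trains pass a shared junction,
    assuming a fixed service interval."""
    return [offset + k * interval
            for k in range((horizon - offset) // interval + 1)]

def has_conflict(times_a, times_b, headway):
    """Two lines conflict if any pair of junction passages is closer than
    the minimum headway (one of the model's inequality constraints)."""
    return any(abs(t1 - t2) < headway for t1 in times_a for t2 in times_b)

# Both lines run every 10 min over an hour; junction headway: 2 min.
line_a = departure_times(offset=0, interval=10, horizon=60)
print(has_conflict(line_a, departure_times(1, 10, 60), headway=2))  # True
print(has_conflict(line_a, departure_times(5, 10, 60), headway=2))  # False
```

Adjusting the turnback time at a terminus effectively shifts a line's offset, which is how the first stage of the model can move a line from the conflicting case to the conflict-free one without changing the interval or the fleet size.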

Keywords: light rail transit (LRT), non-linear programming, railway timetabling, timetable coordination

Procedia PDF Downloads 28
525 Development of Building Information Modeling in Property Industry: Beginning with Building Information Modeling Construction

Authors: B. Godefroy, D. Beladjine, K. Beddiar

Abstract:

In France, construction BIM actors commonly evoke the gains of BIM for exploitation through integration over the life cycle of a building. Standardization at level 7 of development would achieve this stage of the digital model. The householders include local public authorities, social landlords, public institutions (health and education), enterprises, and facilities management companies. They have a dual role: owner and manager of their housing complex. In a context of financial constraint, the BIM of exploitation aims to control costs, make long-term investment choices, renew the portfolio and enable environmental standards to be met. It assumes knowledge of the existing buildings, marked by their size and complexity. The information sought must be synthetic and structured; it concerns, in general, a real-estate complex. We conducted a study with professionals about their concerns and ways to use BIM, to see how householders could benefit from this development. To obtain results, bearing in mind the recurring questions of project management about the needs of operators, we tested the following stages: 1) Instil a minimal culture of BIM in the operator's multidisciplinary teams, then by business line, 2) Learn, through BIM tools, to adapt their trade to operations, 3) Understand the place and creation of a graphic and technical database management system, and determine the components of its library according to their needs, 4) Identify the cross-functional interventions of its managers by business line (operations, technical, information system, purchasing and legal aspects), 5) Set an internal protocol and define the BIM impact in their digital strategy.
In addition, continuity of management through the integration of construction models into the operation phase raises the question of interoperability: the control of the production of IFC files in the operator's proprietary format, and the export and import processes, a solution rivalled by the traditional method of vectorization of paper plans. Companies that digitize housing complexes, and those in FM, produce an IFC file directly, according to their needs and without recourse to the construction model; they produce business models for exploitation. They standardize the components and equipment that are useful for coding. We observed the consequences resulting from the use of BIM in the property industry and made the following observations: a) The value of data prevails over graphics; 3D is little used b) The owner must, through his organization, promote the feedback of technical management information during the design phase c) The operator's reflection on outsourcing concerns the acquisition of its information system and services, weighing the risks and costs of their internal or external development. This study allows us to highlight: i) The need for an internal organization of operators prior to responding to construction management ii) The evolution towards automated methods for creating models dedicated to exploitation; a specialization would be required iii) A review of the communication of project management; since management continuity does not revolve around the building model alone, it must take into account the environment of the operator and reflect on its scope of action.

Keywords: information system, interoperability, models for exploitation, property industry

Procedia PDF Downloads 114