Search results for: real time simulator
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20541

19551 Application of Optical Method for Calculation of Deformed Object Samples

Authors: R. Daira

Abstract:

The electronic speckle interferometry technique used to measure the deformation of scattering objects is based on the subtraction of interference patterns. A speckle image is first recorded in the RAM of a computer before deformation of the object, and a second image is recorded after deformation. The square of the difference between the two images reveals correlation fringes observable in real time directly on a monitor. Interpreting these fringes makes it possible to determine the deformation. In this paper, we present experimental results of out-of-plane deformation for samples of aluminum, electronic boards, and stainless steel.
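A minimal sketch of the subtraction step described above, assuming 8-bit speckle frames from a camera (the function name and the synthetic test images are ours, not the paper's):

```python
import numpy as np

def correlation_fringes(before: np.ndarray, after: np.ndarray) -> np.ndarray:
    """Subtraction-mode speckle correlogram: the squared difference of the
    speckle images recorded before and after deformation. Fringes appear
    where the two speckle fields remain correlated."""
    diff = before.astype(np.float64) - after.astype(np.float64)
    fringes = diff ** 2
    # Normalize to 8-bit for real-time display on a monitor.
    fringes = 255.0 * fringes / max(fringes.max(), 1e-12)
    return fringes.astype(np.uint8)

# Example with synthetic speckle images (real data would come from a camera):
rng = np.random.default_rng(0)
before = rng.integers(0, 256, (480, 640)).astype(np.uint8)
after = np.roll(before, 1, axis=0)  # stand-in for a deformed-state image
print(correlation_fringes(before, after).shape)
```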

Keywords: optical method, holography, interferometry, deformation

Procedia PDF Downloads 391
19550 Marine Environmental Monitoring Using an Open Source Autonomous Marine Surface Vehicle

Authors: U. Pruthviraj, Praveen Kumar R. A. K. Athul, K. V. Gangadharan, S. Rao Shrikantha

Abstract:

An open source based autonomous unmanned marine surface vehicle (UMSV) is developed for marine applications such as pollution control, environmental monitoring, and thermal imaging. A double rotomoulded-hull boat is deployed, which is rugged, tough, quick to deploy, and fast; it is suitable for environmental monitoring and designed for easy maintenance. A 2 HP electric outboard marine motor is used, powered by a lithium-ion battery that can also be charged from a solar charger. All connections are completely waterproof to IP67 rating. At full throttle, the marine motor reaches up to 7 km/h. The motor is integrated with an open source controller based on a Cortex M4F for adjusting the direction of the motor. The UMSV can be operated in three modes: semi-autonomous, manual, and fully automated. One channel of a 2.4 GHz 8-channel radio-link transmitter is used for toggling between the different modes of the UMSV. An onboard GPS system is fitted to the electric outboard marine motor to determine range and position. The entire system can be assembled in the field in less than 10 minutes. A FLIR Lepton thermal camera core is integrated with a 64-bit quad-core Linux based open source processor, facilitating real-time capture of thermal images; the results are stored on a micro SD card, the data storage device of the system. The thermal camera is interfaced to the processor through the SPI protocol. These thermal images are used for finding oil spills and for searching for drowning people in low visibility at night. A real-time clock (RTC) module attached to the battery provides the date and time of the captured thermal images. For the live video feed, a 900 MHz long-range video transmitter and receiver are set up, achieving a range of 40 miles at higher power output. A multi-parameter probe is used to measure conductivity, salinity, resistivity, density, dissolved oxygen content, ORP (oxidation-reduction potential), pH, temperature, water level, and pressure (absolute); it withstands pressures of up to 160 psi, i.e., depths of up to 100 m. This work represents a field demonstration of an open source based autonomous navigation system for a marine surface vehicle.
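As an illustrative fragment of the navigation side (a hedged sketch; the function and waypoint values are ours, not the project's firmware), the range from the vehicle's GPS fix to a waypoint can be computed with the haversine formula:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Range from the boat's current fix to the next survey waypoint:
print(round(haversine_m(12.9716, 74.7965, 12.9800, 74.8050), 1), "m")
```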

Keywords: open source, autonomous navigation, environmental monitoring, UMSV, outboard motor, multi-parameter probe

Procedia PDF Downloads 225
19549 All at Sea: Why OT / IT Infrastructure Is So Complex and the Challenges of Securing These on a Cruise Ship

Authors: Ken Munro

Abstract:

Cruise ships are possibly the most complex collection of systems to be found in one physical, moving location: propulsion, navigation, and power generation, combined with a hotel, restaurants, a casino, a theatre, etc., with safety and fire control systems to boot. That complexity creates huge challenges in keeping OT and IT systems apart. Ships' engines are often remotely managed, and network segregation is often defeated through troubleshooting at sea. This session will refer to multiple entertaining and informative tales of taking control of ships, including accessing a ship's Azipods via a game simulator intended for passengers. Fortunately, genuine attacks against vessels are very rare, but their potential effects and impacts on world trade are becoming increasingly obvious.

Keywords: maritime security, cybersecurity, OT, IT, networks

Procedia PDF Downloads 4
19548 Rate of Force Development, Net Impulse and Modified Reactive Strength as Predictors of Volleyball Spike Jump Height among Young Elite Players

Authors: Javad Sarvestan, Zdenek Svoboda

Abstract:

Force-time (F-T) curve characteristics are widely referenced as the main indicators of athletic jump performance. Nevertheless, to the best of the authors' knowledge, no investigation has deeply studied the relationship between F-T curve variables and real-game jump performance among elite volleyball players. To this end, this study was designed to investigate the association of F-T curve variables, including movement timings, force, velocity, power, rate of force development (RFD), modified reactive strength index (RSImod), and net impulse, with spike jump height under real-game circumstances. Twelve young elite volleyball players performed three countermovement jumps (CMJ) and three spike jumps under real-game circumstances, with 1-minute rest intervals to prevent fatigue. The Shapiro-Wilk test confirmed the normality of the data distribution, and Pearson's product-moment correlation showed significant correlations between CMJ height and peak RFD (r=0.85), average RFD (r=0.81), RSImod (r=0.88), and concentric net impulse (r=0.98), as well as between spike jump height and peak RFD (r=0.73), average RFD (r=0.80), RSImod (r=0.62), and concentric net impulse (r=0.71). Multiple regression analysis also showed that these factors strongly predict CMJ (98%) and spike jump (77%) heights. The outcomes confirm that RFD, concentric net impulse, and RSImod values can be used to monitor and track volleyball attackers' explosive strength, the efficiency of the muscular stretch-shortening cycle, and ultimately spike jump height. To this effect, volleyball coaches and trainers are advised to monitor their athletes' progression and the impact of strength training by tracking F-T curve variables such as RFD, net impulse, and RSImod.
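A minimal sketch of how these F-T curve variables can be computed from a force-plate recording (our own illustration, assuming a sampled vertical ground reaction force; not the authors' code):

```python
import numpy as np

def ft_curve_metrics(t, F, body_mass):
    """Compute peak RFD, concentric net impulse, jump height and RSImod
    from a countermovement jump force-time trace.
    t: time in s; F: vertical ground reaction force in N."""
    g = 9.81
    weight = body_mass * g
    rfd = np.gradient(F, t)                  # rate of force development, N/s
    peak_rfd = rfd.max()
    # Simplification: treat all positive net force as propulsive impulse.
    net_impulse = np.trapz(np.clip(F - weight, 0.0, None), t)
    v_takeoff = net_impulse / body_mass      # impulse-momentum theorem
    jump_height = v_takeoff**2 / (2 * g)     # from takeoff velocity
    rsi_mod = jump_height / (t[-1] - t[0])   # jump height / time to takeoff
    return peak_rfd, net_impulse, jump_height, rsi_mod

# Toy trace: 1 s of movement ending at takeoff for a 75 kg athlete.
t = np.linspace(0.0, 1.0, 1000)
F = 75 * 9.81 * (1.0 + 0.6 * np.sin(2 * np.pi * t))  # synthetic force curve
print(ft_curve_metrics(t, F, body_mass=75.0))
```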

Keywords: net impulse, reactive strength index, rate of force development, stretch-shortening cycle

Procedia PDF Downloads 127
19547 Validation Study of Radial Aircraft Engine Model

Authors: Lukasz Grabowski, Tytus Tulwin, Michal Geca, P. Karpinski

Abstract:

This paper presents a radial aircraft engine model created in the AVL Boost software. The model is a one-dimensional physical model of the engine, which enables us to investigate the impact of ignition system design on engine performance (power, torque, fuel consumption). In addition, the model allows research under variable environmental conditions reflecting varied flight conditions (altitude, humidity, cruising speed). Before the simulation research, the model parameters were identified and the model was validated. In order to verify the take-off power of the gasoline radial aircraft engine model, a validation study was carried out. The first stage of the identification was completed with reference to the technical documentation provided by the engine manufacturer and to experiments on a test stand with the real engine. The second stage involved a comparison of simulation results with the results of engine stand tests performed on a WSK 'PZL-Kalisz'. The engine was loaded by a propeller in a special test bench. Identifying the model parameters involved comparing the test results to the simulation in terms of: pressure behind the throttles, pressure in the inlet pipe, the time course of pressure in the first inlet pipe, power, and specific fuel consumption. Accordingly, the required coefficients and the error of the simulation relative to the real-object experiments were determined. The obtained pressure time course and values are consistent with the experimental results. Additionally, the engine power and specific fuel consumption agree closely with the bench tests. The mapping error does not exceed 1.5%, which positively verifies the combustion model and allows us to predict engine performance if the combustion process is modified. Subsequent tests verified the model completely. The maximum mapping error for the pressure behind the throttles and the inlet pipe pressure is 4%, which proves the model of the inlet duct in the engine with the charging compressor to be correct.

Keywords: 1D-model, aircraft engine, performance, validation

Procedia PDF Downloads 323
19546 Multi-Subpopulation Genetic Algorithm with Estimation of Distribution Algorithm for Textile Batch Dyeing Scheduling Problem

Authors: Nhat-To Huynh, Chen-Fu Chien

Abstract:

The textile batch dyeing scheduling problem is complicated: it includes batch formation, batch assignment to machines, and batch sequencing with sequence-dependent setup times. Most manufacturers schedule their orders manually, which is time-consuming and inefficient, so more powerful methods are needed to improve the solutions. Motivated by these real needs, this study proposes approaches in which a genetic algorithm is developed with multiple subpopulations and hybridised with an estimation of distribution algorithm to solve the constructed problem, minimising the makespan. A heuristic algorithm is designed and embedded into the proposed algorithms to improve their ability to escape local optima. In addition, an empirical study is conducted in a textile company in Taiwan to validate the proposed approaches. The results show that the proposed approaches are more efficient than a simulated annealing algorithm.
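The abstract does not spell out its encoding or operators, so the following is only a toy sketch of the hybrid idea: several subpopulations evolve job sequences with order crossover, part of each generation is sampled from an estimation-of-distribution (position-frequency) model built on the elites, and the best members migrate in a ring. All problem data are made up:

```python
import random
import numpy as np

rng = np.random.default_rng(1)
N_JOBS, N_SUBPOPS, POP, GENS = 8, 3, 30, 50
proc = rng.uniform(1, 5, N_JOBS)                 # processing times
setup = rng.uniform(0, 2, (N_JOBS, N_JOBS))      # sequence-dependent setups

def makespan(seq):
    t = proc[seq[0]]
    for a, b in zip(seq, seq[1:]):
        t += setup[a][b] + proc[b]
    return t

def crossover(p1, p2):  # order crossover (OX) for permutations
    i, j = sorted(random.sample(range(N_JOBS), 2))
    child = [-1] * N_JOBS
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in child]
    for k in range(N_JOBS):
        if child[k] == -1:
            child[k] = fill.pop(0)
    return child

def eda_sample(elite):
    # Univariate marginal model: P(job j at position k), from elite sequences.
    freq = np.ones((N_JOBS, N_JOBS))             # Laplace smoothing
    for s in elite:
        for k, j in enumerate(s):
            freq[k][j] += 1
    seq, free = [], set(range(N_JOBS))
    for k in range(N_JOBS):
        p = np.array([freq[k][j] if j in free else 0.0 for j in range(N_JOBS)])
        j = int(rng.choice(N_JOBS, p=p / p.sum()))
        seq.append(j); free.remove(j)
    return seq

subpops = [[random.sample(range(N_JOBS), N_JOBS) for _ in range(POP)]
           for _ in range(N_SUBPOPS)]
for gen in range(GENS):
    for s, pop in enumerate(subpops):
        pop.sort(key=makespan)
        elite = pop[:POP // 5]
        children = [crossover(*random.sample(elite, 2)) for _ in range(POP // 2)]
        children += [eda_sample(elite) for _ in range(POP - POP // 2 - len(elite))]
        subpops[s] = elite + children
    if gen % 10 == 9:                            # ring migration of best members
        bests = [min(p, key=makespan) for p in subpops]
        for s in range(N_SUBPOPS):
            subpops[s][-1] = bests[(s - 1) % N_SUBPOPS]
print("best makespan:", round(min(makespan(x) for p in subpops for x in p), 2))
```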

Keywords: estimation of distribution algorithm, genetic algorithm, multi-subpopulation, scheduling, textile dyeing

Procedia PDF Downloads 288
19545 An Indoor Positioning System in Wireless Sensor Networks with Measurement Delay

Authors: Pyung Soo Kim, Eung Hyuk Lee, Mun Suck Jang

Abstract:

In the current paper, an indoor positioning system is proposed that takes measurement delay into consideration. Firstly, an estimation filter with a measurement delay is designed for the indoor positioning mechanism under a weighted least-squares criterion, which utilizes only a finite number of measurements on the most recent window. The proposed estimation-filtering-based scheme gives filtered estimates of the position, velocity, and acceleration of a moving target in real time, while removing undesired noise effects and preserving the desired motion. Secondly, the proposed scheme is shown to have good inherent properties such as unbiasedness, efficiency, time-invariance, deadbeat behavior, and robustness due to the finite memory structure. Finally, computer simulations show that the proposed estimation-filtering-based scheme can outperform the existing infinite-memory filtering-based mechanism.
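A hedged sketch of the finite-memory idea (not the authors' exact filter): fit a constant-acceleration motion model to the most recent window of delayed position measurements by weighted least squares, then extrapolate over the known delay:

```python
import numpy as np

def window_wls_estimate(t, z, delay, weights=None):
    """Finite-memory estimate of position, velocity and acceleration from
    the most recent window of delayed position measurements.
    t: measurement timestamps; z: measured positions; delay: sensor delay (s)."""
    w = np.ones_like(t) if weights is None else weights
    # Columns: 1, t, t^2/2  ->  z ~ p0 + v0*t + 0.5*a*t^2
    A = np.vstack([np.ones_like(t), t, 0.5 * t**2]).T
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(W @ A, W @ z, rcond=None)
    p0, v0, a = coef
    now = t[-1] + delay                    # measurements lag the true state
    pos = p0 + v0 * now + 0.5 * a * now**2
    vel = v0 + a * now
    return pos, vel, a

# Simulated window: constant acceleration, measurements delayed by 50 ms.
t = np.linspace(0.0, 1.0, 11)
z = 2.0 + 1.5 * t + 0.5 * 0.8 * t**2 + np.random.default_rng(0).normal(0, 0.01, t.size)
print(window_wls_estimate(t, z, delay=0.05))
```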

Keywords: indoor positioning system, wireless sensor networks, measurement delay

Procedia PDF Downloads 467
19544 The Effects of Advisor Status and Time Pressure on Decision-Making in a Luggage Screening Task

Authors: Rachel Goh, Alexander McNab, Brent Alsop, David O'Hare

Abstract:

In a busy airport, the decision whether to take passengers aside and search their luggage for dangerous items can have important consequences. If an officer fails to search and stop a bag containing a dangerous object, a life-threatening incident might occur. But stopping a bag unnecessarily means that the officer might lose time searching the bag and face an angry passenger. Passengers' bags, however, are often cluttered with personal belongings of varying shapes and sizes. It can be difficult to determine what is dangerous or not, especially if decisions must be made quickly under busy flight schedules. Additionally, the decision to search bags is often made with input from the surrounding officers on duty. This scenario raises several questions: 1) Past findings suggest that humans are more reliant on an automated aid when under time pressure in a visual search task, but does this translate to human-human reliance? 2) Are humans more likely to agree with another person if that person is assumed to be an expert rather than a novice in these ambiguous situations? In the present study, forty-one participants performed a simulated luggage-screening task. They were partnered with advisors of two different statuses (expert vs. novice) but equal accuracy (90% correct). Participants made two choices on each trial: a first choice with no advisor input, and a second choice after advisor input. The second choice had to be made within either 2 seconds or 8 seconds; failure to do so resulted in a long time-out period. Under the 2-second time pressure, participants were more likely to disagree with their own first choice and agree with the expert advisor, regardless of whether the expert was right or wrong, but especially when the expert suggested that the bag was safe. The findings indicate a tendency for people to assume less responsibility for their decisions and defer to their partner, especially when a quick decision is required. This over-reliance on others' opinions might have negative consequences in real life, particularly when relying on fallible human judgments. More awareness is needed of how a stressful environment may influence reliance on others' opinions, and better techniques are needed for making sound decisions under high stress and time pressure.

Keywords: advisors, decision-making, time pressure, trust

Procedia PDF Downloads 165
19543 Remote BioMonitoring of Mothers and Newborns for Temperature Surveillance Using a Smart Wearable Sensor: Techno-Feasibility Study and Clinical Trial in Southern India

Authors: Prem K. Mony, Bharadwaj Amrutur, Prashanth Thankachan, Swarnarekha Bhat, Suman Rao, Maryann Washington, Annamma Thomas, N. Sheela, Hiteshwar Rao, Sumi Antony

Abstract:

The disease burden among mothers and newborns is caused mostly by a handful of avoidable conditions occurring around the time of childbirth and within the first month following delivery. Real-time monitoring of the vital parameters of mothers and neonates offers a potential opportunity to improve both access to and quality of care in vulnerable populations. We describe the design, development, and testing of an innovative wearable device for remote biomonitoring (RBM) of body temperature in mothers and neonates in a hospital in southern India. The architecture consists of: [1] a low-cost, wearable sensor tag; [2] a gateway device for a 'real-time' communication link; [3] piggy-backing on a commercial GSM communication network; and [4] an algorithm-based data analytics system. Requirements for the device were: long battery life of up to 28 days (with a sampling frequency of 5/hour); robustness; IP68 hermetic sealing; and human-centric design. We undertook pre-clinical laboratory testing followed by clinical trial phases I and IIa to evaluate safety and efficacy in the following sequence: seven healthy adult volunteers; 18 healthy mothers; and three sets of babies: 3 healthy babies, 10 stable babies in the Neonatal Intensive Care Unit (NICU), and 1 baby with hypoxic ischaemic encephalopathy (HIE). The pebble-shaped sensor, about three coins thick and weighing about 8 g, was secured onto the abdomen for the babies and over the upper arm for adults. In the laboratory setting, the response time of the sensor device to attain thermal equilibrium with the surroundings was 4 minutes, vis-a-vis 3 minutes observed with a precision-grade digital thermometer used as the reference standard. The accuracy was ±0.1°C of the reference standard within the temperature range of 25-40°C. The adult volunteers, aged 20 to 45 years, contributed a total of 345 hours of readings over a 7-day period, and the postnatal mothers provided a total of 403 paired readings. The mean skin temperature measured in the adults by the sensor was about 2°C lower than the axillary temperature reading (sensor = 34.1 vs digital = 36.1); this difference was statistically significant (t-test = 13.8; p < 0.001). The healthy neonates provided a total of 39 paired readings; the mean difference in temperature was 0.13°C (sensor = 36.9 vs digital = 36.7; p = 0.2). The neonates in the NICU provided a total of 130 paired readings; their mean skin temperature measured by the sensor was 0.6°C lower than that measured by the radiant warmer probe (sensor = 35.9 vs warmer probe = 36.5; p < 0.001). The neonate with HIE provided a total of 25 paired readings, with the mean sensor reading not differing from the radiant warmer probe reading (sensor = 33.5 vs warmer probe = 33.5; p = 0.8). No major adverse events were noted in either the adults or the neonates; four adult volunteers reported mild sweating under the device/arm band and one volunteer developed a mild skin allergy. This proof-of-concept study shows that real-time monitoring of temperature is technically feasible and that this innovation appears promising, in terms of both safety and accuracy (with appropriate calibration), for improved maternal and neonatal health.
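For the paired comparisons reported above, a paired t-test is the standard tool; the snippet below reproduces the style of analysis on synthetic numbers (the trial's raw data are not published in the abstract):

```python
import numpy as np
from scipy import stats

# Illustrative paired comparison of sensor vs reference readings
# (synthetic values; not the study's data).
rng = np.random.default_rng(0)
reference = rng.normal(36.1, 0.3, 40)               # axillary digital thermometer, deg C
sensor = reference - 2.0 + rng.normal(0, 0.2, 40)   # skin sensor reads ~2 deg C lower

t_stat, p_value = stats.ttest_rel(sensor, reference)
print(f"mean difference = {np.mean(sensor - reference):.2f} deg C, "
      f"t = {t_stat:.1f}, p = {p_value:.3g}")
```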

Keywords: public health, remote biomonitoring, temperature surveillance, wearable sensors, mothers and newborns

Procedia PDF Downloads 192
19542 Intelligent Transport System: Classification of Traffic Signs Using Deep Neural Networks in Real Time

Authors: Anukriti Kumar, Tanmay Singh, Dinesh Kumar Vishwakarma

Abstract:

Traffic control has been one of the most common and irritating problems since automobiles first hit the roads. Problems like traffic congestion impose a significant time burden around the world, and one significant solution can be the proper implementation of an Intelligent Transport System (ITS). ITS involves the integration of various tools like smart sensors, artificial intelligence, positioning technologies, and mobile data services to manage traffic flow, reduce congestion, and enhance drivers' ability to avoid accidents in adverse weather. Road and traffic sign recognition is an emerging field of research in ITS. The traffic sign classification problem needs to be solved, as it is a major step on the way towards semi-autonomous and autonomous driving systems. This work focuses on implementing an approach to traffic sign classification by developing a Convolutional Neural Network (CNN) classifier on the GTSRB (German Traffic Sign Recognition Benchmark) dataset. Rather than relying on hand-crafted features, our model learns features directly from the data, while keeping the parameter count manageable and using data augmentation methods. Our model achieved an accuracy of around 97.6%, which is comparable to various state-of-the-art architectures.
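A minimal CNN of the kind described, sketched in Keras for GTSRB-style 32x32 RGB inputs and 43 classes (layer sizes are illustrative assumptions, not the paper's exact architecture):

```python
import tensorflow as tf

# Minimal CNN sketch for GTSRB-style inputs (43 sign classes, 32x32 RGB).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(43, activation="softmax"),   # 43 GTSRB classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would then be: model.fit(x_train, y_train, epochs=..., validation_data=...)
```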

Keywords: multiclass classification, convolutional neural network, OpenCV

Procedia PDF Downloads 159
19541 Are Some Languages Harder to Learn and Teach Than Others?

Authors: David S. Rosenstein

Abstract:

The author believes that modern spoken languages should be equally difficult (or easy) to learn, since all normal children learning their native languages do so at approximately the same rate and with the same competence, progressing from easy to more complex grammar and syntax in the same way. Why then, do some languages seem more difficult than others? Perhaps people are referring to the written language, where it may be true that mastering Chinese requires more time than French, which in turn requires more time than Spanish. But this may be marginal, since Chinese and French children quickly catch up to their Spanish peers in reading comprehension. Rather, the real differences in difficulty derive from two sources: hardened L1 language habits trying to cope with contrasting L2 habits; and unfamiliarity with unique L2 characteristics causing faulty expectations. It would seem that effective L2 teaching and learning must take these two sources of difficulty into consideration. The author feels that the latter (faulty expectations) causes the greatest difficulty, making effective teaching and learning somewhat different for each given foreign language. Examples from Chinese and other languages are presented.

Keywords: learning different languages, language learning difficulties, faulty language expectations

Procedia PDF Downloads 518
19540 Lipopolysaccharide-Induced Avian Innate Immune Expression in Heterophils

Authors: Rohita Gupta, G. S. Brah, R. Verma, C. S. Mukhopadhayay

Abstract:

Although chicken strains show differences in susceptibility to a number of diseases, the underlying immunological basis is yet to be elucidated. In the present study, heterophils were subjected to LPS stimulation and total RNA extraction, and differential gene expression was then studied in broiler, layer, and indigenous Aseel strains by real-time RT-PCR at different time points before and after induction. The expression of the 14 AvBDs and of chTLR 1, 2, 3, 4, 5, 7, 15, and 21 was detectable in heterophils. The expression level of most of the AvBDs significantly increased (P<0.05) 3 hours after in vitro lipopolysaccharide challenge. Upon LPS stimulation, higher expression levels and stronger activation of most AvBDs, NFkB-1, and IRF-3 were observed in heterophils of layers compared to broilers, and of Aseel compared to both layers and broilers. This investigation will allow a more refined interpretation of the immunogenetic basis of variable disease resistance/susceptibility in divergent stocks of chicken, including indigenous breeds. Moreover, this study will help in formulating a strategy for the isolation of antimicrobial peptides from heterophils.
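Relative expression in real-time RT-PCR studies of this kind is commonly quantified with the Livak 2^-ddCt method; the helper below is our illustration with hypothetical Ct values, not the study's data:

```python
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Livak 2^-ddCt method for relative quantification of real-time
    RT-PCR data (illustrative helper; normalisation details of the
    study are not given in the abstract)."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalise to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: AvBD expression 3 h post-LPS vs unstimulated control,
# normalised to a housekeeping gene (hypothetical Ct values):
print(round(relative_expression(22.1, 16.0, 25.3, 16.1), 2), "fold change")
```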

Keywords: differential expression, heterophils, cytokines, defensin, TLR

Procedia PDF Downloads 598
19539 New Two-Dimensional Hardy-Type Inequalities on Time Scales via the Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The book Inequalities by Hardy, Littlewood, and Polya was the first significant composition on the subject; it presents fundamental ideas, results, and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated for particular operators: in 1989, weighted Hardy inequalities were obtained for integration operators, and weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved by differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have been extended and improved by various integral operators; these inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some inequalities involving the Copson and Hardy inequalities on time scales have appeared, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics, with many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale inequalities of Hardy and Copson in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that will be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs are carried out by introducing restrictions on the operator in several cases. Concepts from time-scale calculus are used, which allow many problems from the theories of differential and of difference equations to be unified and extended; in addition, the chain rule, some properties of multiple integrals on time scales, theorems of Fubini type, and Hölder's inequality are used.
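For reference (our addition; the abstract states no formulas), the one-dimensional classical Hardy inequality on which the study builds reads, for p > 1 and measurable f ≥ 0:

```latex
\[
\int_{0}^{\infty} \left( \frac{1}{x} \int_{0}^{x} f(t)\, dt \right)^{p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_{0}^{\infty} f^{p}(x)\, dx,
\qquad p > 1,\; f \ge 0,
\]
% with the constant (p/(p-1))^p being the best possible.
```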

Keywords: time scales, Hardy inequality, Copson inequality, Steklov operator

Procedia PDF Downloads 78
19538 H.263 Based Video Transceiver for Wireless Camera System

Authors: Won-Ho Kim

Abstract:

In this paper, the design of an H.263-based wireless video transceiver is presented for a wireless camera system. It uses a standard Wi-Fi transceiver, and the coverage area is up to 100 m. Furthermore, the standard H.263 video encoding technique is used for video compression, since a wireless video transmitter is unable to transmit high-capacity raw data in real time; the implemented system is capable of streaming NTSC 720x480 video at less than 1 Mbps.
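Rough arithmetic behind the need for compression (assuming YUV 4:2:2 at 16 bits per pixel and 30 frames per second, which the paper does not specify):

```python
# Why raw NTSC video cannot be streamed over this link.
width, height, fps, bits_per_pixel = 720, 480, 30, 16
raw_bps = width * height * fps * bits_per_pixel
print(f"raw rate : {raw_bps / 1e6:.0f} Mbps")            # ~166 Mbps
print(f"target   : 1 Mbps -> compression ratio ~ {raw_bps / 1e6:.0f}:1")
```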

Keywords: wireless video transceiver, video surveillance camera, H.263 video encoding, digital signal processing

Procedia PDF Downloads 353
19537 Throughput of Point Coordination Function (PCF)

Authors: Faisel Eltuhami Alzaalik, Omar Imhemed Alramli, Ahmed Mohamed Elaieb

Abstract:

The IEEE 802.11 standard defines two MAC modes: the distributed coordination function (DCF) and the point coordination function (PCF). The first sub-layer of the MAC is the DCF, in which a contention algorithm is used to provide access for all traffic. The PCF is the second sub-layer and provides contention-free service; it sits above the DCF and uses DCF features to guarantee access for its users. This paper reviews some of the papers and research published on this technology and briefly discusses the DCF technique. The PCF mode was simulated using the network simulator NS2, and the throughput of a transmitter system using this function was determined.

Keywords: DCF, PCF, throughput, NS2

Procedia PDF Downloads 565
19536 Online Monitoring and Control of Continuous Mechanosynthesis by UV-Vis Spectrophotometry

Authors: Darren A. Whitaker, Dan Palmer, Jens Wesholowski, James Flaherty, John Mack, Ahmad B. Albadarin, Gavin Walker

Abstract:

Traditional mechanosynthesis has been performed by either ball milling or manual grinding. However, neither of these techniques allows the easy application of process control: the temperature may change unpredictably due to friction in the process, so the amount of energy transferred to the reactants is intrinsically non-uniform. Recently, it has been shown that the use of twin-screw extrusion (TSE) can overcome these limitations. Additionally, TSE provides a platform for continuous synthesis or manufacturing, as it is an open-ended process with feedstocks at one end and product at the other. Several materials, including metal-organic frameworks (MOFs), co-crystals, and small organic molecules, have been produced mechanochemically using TSE. The described advantages of TSE are offset by drawbacks such as increased process complexity (a large number of process parameters) and variation in feedstock flow impacting product quality. To handle these drawbacks, this study utilizes UV-Vis spectrophotometry (InSpectroX, ColVisTec) as an online tool to gain real-time information about the quality of the product. This is combined with real-time process information in an Advanced Process Control system (PharmaMV, Perceptive Engineering), allowing full supervision and control of the TSE process. Further, by characterizing the dynamic behavior of the TSE, a model predictive controller (MPC) can be employed to ensure the process remains under control when perturbed by external disturbances. Two reactions were studied: a Knoevenagel condensation of barbituric acid and vanillin, and the direct amidation of hydroquinone by ammonium acetate to form N-Acetyl-para-aminophenol (APAP), commonly known as paracetamol. Both reactions could be carried out continuously using TSE; nuclear magnetic resonance (NMR) spectroscopy was used to confirm the percentage conversion of starting materials to product. This information was used to construct partial least squares (PLS) calibration models within the PharmaMV development system, which relate the percentage conversion to the acquired UV-Vis spectrum. Once this was complete, the model was deployed within the PharmaMV Real-Time System to carry out automated optimization experiments to maximize the percentage conversion over a set of process parameters, in a design of experiments (DoE) style methodology. With the optimum set of process parameters established, a series of PRBS (pseudo-random binary sequence) process response tests around the optimum was conducted. The resulting dataset was used to build a statistical model and an associated MPC. The controller maximizes product quality while ensuring the process remains at the optimum even as disturbances, such as raw material variability, are introduced into the system. To summarize, a combination of online spectral monitoring and advanced process control was used to develop a robust system for the optimization and control of two TSE-based mechanosynthetic processes.
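The PLS calibration step can be sketched as follows (synthetic spectra stand in for the real UV-Vis data, and sklearn replaces the PharmaMV tooling, so this is only an illustration of the idea):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Relate UV-Vis spectra (X) to NMR-determined percentage conversion (y).
rng = np.random.default_rng(0)
n_samples, n_wavelengths = 60, 256
spectra = rng.normal(size=(n_samples, n_wavelengths))        # absorbance matrix X
conversion = 50 + 10 * spectra[:, 40] - 5 * spectra[:, 120]  # latent relation, %
conversion += rng.normal(0, 1.0, n_samples)

X_train, X_test, y_train, y_test = train_test_split(
    spectra, conversion, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=4)
pls.fit(X_train, y_train)
print(f"R^2 on held-out spectra: {pls.score(X_test, y_test):.3f}")
# In the deployed system, pls.predict(new_spectrum) would feed the MPC layer.
```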

Keywords: continuous synthesis, pharmaceutical, spectroscopy, advanced process control

Procedia PDF Downloads 158
19535 Application of Building Information Modeling in Energy Management of Individual Departments Occupying University Facilities

Authors: Kung-Jen Tu, Danny Vernatha

Abstract:

To assist individual departments within universities in their energy management tasks, this study explores the application of Building Information Modeling in establishing a 'BIM-based Energy Management Support System' (BIM-EMSS). The BIM-EMSS consists of six components: (1) sensors installed for each occupant and each piece of equipment; (2) electricity sub-meters (constantly logging the lighting, HVAC, and socket electricity consumption of each room); (3) BIM models of all rooms within individual departments' facilities; (4) a data warehouse (for storing occupancy status and logged electricity consumption data); (5) a building energy management system that provides energy managers with various energy management functions; and (6) an energy simulation tool (such as eQuest) that generates real-time 'standard energy consumption' data against which 'actual energy consumption' data are compared and energy efficiency evaluated. Through the building energy management system, the energy manager is able to (a) view a 3D visualization (BIM model) of each room, in which the occupancy and equipment status detected by the sensors and the logged electricity consumption data are displayed constantly; (b) perform real-time energy consumption analysis to compare the actual and standard energy consumption profiles of a space; (c) obtain energy consumption anomaly detection warnings for certain rooms so that corrective energy management actions can be taken (a data mining technique is employed to analyze the relation between the space occupancy pattern and the current equipment settings to indicate an anomaly, such as appliances turned on without occupancy); and (d) perform historical energy consumption analysis to review monthly and annual energy consumption profiles and compare them against historical profiles. The BIM-EMSS was further implemented in a research lab in the Department of Architecture of NTUST in Taiwan, and implementation results are presented to illustrate how it can assist individual departments within universities in their energy management tasks.
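A toy version of the occupancy-based anomaly rule in (c), with column names and thresholds of our own invention:

```python
import pandas as pd

# Flag rooms where appliances draw power while occupancy sensors report the
# room empty, or where actual use far exceeds the simulated standard.
log = pd.DataFrame({
    "room":        ["A101", "A101", "A102", "A102"],
    "occupied":    [True, False, False, True],
    "socket_kw":   [0.42, 0.38, 0.01, 0.25],
    "standard_kw": [0.40, 0.05, 0.05, 0.30],   # from the eQuest simulation
})
log["anomaly"] = (~log["occupied"] & (log["socket_kw"] > 0.1)) \
                 | (log["socket_kw"] > 1.5 * log["standard_kw"])
print(log[log["anomaly"]][["room", "occupied", "socket_kw"]])
```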

Keywords: database, electricity sub-meters, energy anomaly detection, sensor

Procedia PDF Downloads 291
19534 Forms of Social Provision for Housing Investments in Local Planning Acts for European Capitals: Comparative Study and Spatial References

Authors: Agata Twardoch

Abstract:

The processes of commodification of real estate and changes in housing markets have led to a situation where the prices of free-market housing in European capitals are significantly higher than the purchasing value of average wages. This phenomenon has many negative social and spatial consequences; at the same time, the attractiveness of real estate as an asset keeps these processes going. Out of concern for sustainable social development, city authorities apply solutions to balance the burdensome effects of the commodification of housing. One of them is a social provision requirement for housing investments. The article presents a comparative study of solutions applied in selected European capitals: Warsaw, Paris, London, Berlin, Copenhagen, and Vienna. The study was conducted alongside work on an expert report for the Warsaw master plan. The forms of provision applied in Local Planning Acts were compared, with particular reference to spatial solutions. The results of the analysis made it possible to determine common features of the solutions applied and to establish recommendations for further practice. Major findings indicate that a social provision requirement is achievable in spatial planning documents, and that applying social provision to private housing investments is a useful tool in housing policy against commodification.

Keywords: affordable housing, housing provision, spatial planning, sustainable social development

Procedia PDF Downloads 156
19533 The Potential of 48V HEV in Real Driving

Authors: Mark Schudeleit, Christian Sieg, Ferit Küçükay

Abstract:

This paper describes how to dimension the electric components of a 48V hybrid system considering real customer use, and it quantifies the savings in energy and CO2 emissions achievable by a customer-tailored 48V hybrid. Based on measured customer profiles, the electric units, such as the electric motor and the energy storage, are dimensioned. Furthermore, the CO2 reduction potential in real customer use is determined relative to conventional vehicles. Finally, investigations are carried out to specify the topology design and preliminary considerations for hybridizing a conventional vehicle with a 48V hybrid system. The emission model follows an empirical approach that also accounts for the effects of engine dynamics on emissions: we analyzed transient engine emissions during representative customer driving profiles and created emission meta-models. The investigation showed a significant difference in emissions when simulating realistic customer driving profiles using the created, verified meta-models, compared to the static approaches commonly used for vehicle simulation.

Keywords: customer use, dimensioning, hybrid electric vehicles, vehicle simulation, 48V hybrid system

Procedia PDF Downloads 493
19532 Energy Trading for Cooperative Microgrids with Renewable Energy Resources

Authors: Ziaullah, Shah Wahab Ali

Abstract:

Micro-grids equipped with heterogeneous energy resources present the idea of small-scale distributed energy management (DEM). DEM helps minimize transmission and operating costs and manage power and peak load demands. Micro-grids are collections of small, independently controllable power-generating units and renewable energy resources. Micro-grids also enable active customer participation by giving customers access to real-time information and control. The capability of fast restoration after faults, the integration of renewable energy resources, and Information and Communication Technologies (ICT) make the micro-grid an ideal system for distributed power. Micro-grids can include a bank of energy storage devices. The energy management system of a micro-grid can perform real-time energy forecasting of renewable resources, energy storage elements, and controllable loads to produce proper short-term schedules that minimize total operating costs. We review existing micro-grid optimization objectives, constraints, solution approaches, and tools used for energy management. Cost-benefit analysis of micro-grids reveals that cooperation among different micro-grids can play a vital role in reducing imported energy costs and improving system stability. Cooperative micro-grid energy trading is an approach to distributed energy resources that gives local energy demand more control over the optimization of power resources and their use. Cooperation among different micro-grids raises interconnectivity and power trading issues, and the literature shows that cooperative micro-grid energy trading remains an open area of research. In this paper, we propose and formulate an efficient energy management/trading module for interconnected micro-grids. It is believed that this research will open new directions for future work on energy trading in cooperative, interconnected micro-grids.

Keywords: distributed energy management, information and communication technologies, microgrid, energy management

Procedia PDF Downloads 364
19531 Unveiling the Black Swan of the Inflation-Adjusted Real Excess Returns-Risk Nexus: Evidence From Pakistan Stock Exchange

Authors: Mohammad Azam

Abstract:

The purpose of this study is to investigate risk and real excess portfolio returns using inflation-adjusted risk-free rates, a measurement approach built on the momentum-augmented Fama-French six-factor model, with monthly data from 1994 to 2022. With the exception of profitability, the data show that the market, size, value, momentum, and investment factors are all strongly associated with excess portfolio stock returns under ordinary least squares regression. According to the Gibbons, Ross, and Shanken test, the momentum-augmented Fama-French six-factor model outperforms the market. These findings may be utilised by academics and practitioners to acquire an in-depth understanding of the Pakistan Stock Exchange across a broad range of stock patterns for investment decisions and portfolio construction.
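The factor regression behind these results can be sketched as follows (synthetic series stand in for the study's portfolio returns and factor data):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 348  # monthly observations, 1994-2022
factors = pd.DataFrame(
    rng.normal(0, 1, (n, 6)),
    columns=["MKT", "SMB", "HML", "RMW", "CMA", "MOM"])
# Real excess return: portfolio return minus an inflation-adjusted risk-free rate.
excess_ret = (0.8 * factors["MKT"] + 0.3 * factors["SMB"]
              + 0.2 * factors["MOM"] + rng.normal(0, 0.5, n))

X = sm.add_constant(factors)          # the constant is Jensen's alpha
ols = sm.OLS(excess_ret, X).fit()
print(ols.summary().tables[1])        # loadings on the six factors
```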

Keywords: real excess portfolio returns, momentum-augmented Fama-French five-factor model, GRS test, Pakistan Stock Exchange

Procedia PDF Downloads 91
19530 A Method for Measurement and Evaluation of Drape of Textiles

Authors: L. Fridrichova, R. Knížek, V. Bajzík

Abstract:

Drape is one of the important visual characteristics of fabric. This paper introduces an innovative method for measuring and evaluating the drape shape of a fabric. The measuring principle is based on the possibility of repeated vertical straining of the fabric. This method more accurately simulates the real behavior of the fabric in the process of draping. The method is fully automated, so a sample can be measured over any number of cycles and any time horizon. Using the present method of measurement, we are able to describe the viscoelastic behavior of the fabric.

Keywords: drape, drape shape, automated drapemeter, fabric

Procedia PDF Downloads 642
19529 Deployed Confidence: The Testing in Production

Authors: Shreya Asthana

Abstract:

Testers only know that the feature they tested on staging works in production after the release has gone live. Sometimes something breaks in production, and testers find out through a bug raised by an end user. Panic mode starts when staging test results do not reflect current production behavior, and testers begin doubting their skills when a user finally reports a bug. Testers can deploy their confidence on release day by testing in production. Once testing is done in production, test results become more accurate because they run on real-time data, and execution is slightly faster than on staging due to the elimination of bad data. Feature flagging, canary releases, and data cleanup can help achieve this technique of testing. This paper explains the steps to achieve production testing before making a feature live and how to adapt an IT company's testing procedure so that testers can provide a bug-free experience to end users. This study is beneficial because many people think testing should be done in staging but not in production; it is high time to move beyond that old testing mindset. At the end of the day, all that matters is whether the features work in production or not.
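A hedged sketch of the feature-flag/canary mechanics mentioned above (names and percentages are illustrative):

```python
import hashlib

def canary_enabled(feature: str, user_id: str, rollout_percent: float) -> bool:
    """Deterministic percentage rollout: hash the (feature, user) pair into
    a bucket of 0-99 and enable the flag for the first rollout_percent
    buckets, so a given user always gets the same answer."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# Expose the new checkout flow to 5% of real production users first;
# testers' accounts are force-enabled for testing in production.
TEST_ACCOUNTS = {"qa-internal-1", "qa-internal-2"}

def new_checkout_enabled(user_id: str) -> bool:
    return user_id in TEST_ACCOUNTS or canary_enabled("new-checkout", user_id, 5.0)

print(new_checkout_enabled("qa-internal-1"), new_checkout_enabled("user-42"))
```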

Keywords: bug free production, new testing mindset, testing strategy, testing approach

Procedia PDF Downloads 53
19528 Tumor Cell Detection, Isolation and Monitoring Using Bi-Layer Magnetic Microfluidic Chip

Authors: Amir Seyfoori, Ehsan Samiei, Mohsen Akbari

Abstract:

The use of microtechnology for the detection and high-yield isolation of circulating tumor cells (CTCs) has shown enormous promise for clinical metastasis prognosis and cancer treatment monitoring. The immunomagnetic assay has also been coupled to microtechnology to improve the selectivity and efficiency of current methods of cancer biomarker isolation; the generation and configuration of the local high-gradient magnetic field play essential roles in such assays. Additionally, considering the intrinsic heterogeneity of cancer cells, real-time analysis of isolated cells is necessary to characterize their responses to therapy. Overall, on-chip isolation and monitoring of specific tumor cells is a pressing need on the way to tailored cancer therapy. To address these challenges, we have developed a bi-layer magnetic microfluidic chip for enhanced CTC detection and capture. Micromagnet arrays at the bottom layer of the chip were fabricated using a new method of magnetic nanoparticle paste deposition, so that they were arranged at the center of the chain microchannel, in the zone of lowest fluid velocity. Breast cancer cells labelled with EpCAM-conjugated smart microgels were immobilized on the tips of the micromagnets, where the localized magnetic field and the cell-micromagnet interaction are strongest. By varying the magnetic nano-powder (MnFe2O4 and gamma-Fe2O3) and the micromagnet shape (ellipsoidal and arrow), the capture efficiency of the system was tuned; the highest CTC capture efficiency, around 95.5%, was obtained for the MnFe2O4 arrow micromagnets. As a proof of concept of on-chip tumor cell monitoring, magnetic smart microgels made of thermo-responsive poly(N-isopropylacrylamide-co-acrylic acid) (PNIPAM-AA) were used both for targeted cell capture and for cell monitoring, through antibody conjugation and fluorescent dye loading at the same time. In this regard, the magnetic microgels were successfully used as cell trackers after the isolation process: raising the temperature to 37°C released the contained dye and stained the targeted cells just after capture. This microfluidic device was able to provide a platform for the detection, isolation, and efficient real-time analysis of specific CTCs in the liquid biopsy of breast cancer patients.

Keywords: circulating tumor cells, microfluidic, immunomagnetic, cell isolation

Procedia PDF Downloads 133
19527 Technology Valuation of Unconventional Gas R&D Project Using Real Option Approach

Authors: Young Yoon, Jinsoo Kim

Abstract:

The adoption of information and communication technologies (ICT) across all industries is growing under Industry 4.0. Many oil companies are increasingly adopting ICT to improve the efficiency of existing operations, make more accurate and quicker decisions, and reduce overall costs through optimization. ICT is playing an important role in unconventional oil and gas development, and companies must take advantage of it to gain a competitive edge. In this study, a real option approach is applied to an unconventional gas R&D project to evaluate its ICT. Many unconventional gas reserves, such as shale gas and coal-bed methane (CBM), have been developed thanks to technological improvements and high energy prices. There are many uncertainties in unconventional development across its three stages (exploration, development, production). Traditional quantitative cost-benefit methods, such as net present value (NPV), are not sufficient to capture ICT value. We evaluate ICT by applying a compound option model to a real CBM project case, showing how the model accounts for uncertainties: variables are treated as uncertain, and a Monte Carlo simulation is performed to capture their effects. Acknowledgement: This work was supported by the Energy Efficiency & Resources Core Technology Program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), granted financial resources by the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20152510101880), and by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-205S1A3A2046684).
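A compact sketch of valuing a two-stage compound real option by Monte Carlo (all figures are illustrative; the paper's project data and exact model are not given in the abstract). The project value follows geometric Brownian motion; at the first decision date, the firm pays the development cost only if the surviving option on the production stage is worth more:

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, tau, r, sigma):
    """Black-Scholes value of a European call (the inner option)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

rng = np.random.default_rng(0)
V0, sigma, r = 100.0, 0.45, 0.05   # project value, volatility, risk-free rate
t1, t2 = 1.0, 3.0                  # development and production decision dates
K1, K2 = 10.0, 80.0                # development and production investments
n = 100_000

# Simulate the project value at t1 under geometric Brownian motion.
z = rng.standard_normal(n)
V_t1 = V0 * np.exp((r - 0.5 * sigma**2) * t1 + sigma * np.sqrt(t1) * z)

# At t1 the firm pays K1 only if the surviving (inner) option is worth more.
inner = bs_call(V_t1, K2, t2 - t1, r, sigma)
value = np.exp(-r * t1) * np.maximum(inner - K1, 0.0).mean()
print(f"compound real option value ~ {value:.2f}")
```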

Keywords: information and communication technologies, R&D, real option, unconventional gas

Procedia PDF Downloads 220
19526 Unreality of Real: Debordean Reading of Gillian Flynn's Gone Girl

Authors: Sahand Hamed Moeel Ardebil, Zohreh Taebi Noghondari, Mahmood Reza Ghorban Sabbagh

Abstract:

Gillian Flynn's Gone Girl depicts a society in which, as a result of media dominance, reality is precarious and difficult to grasp. In Gone Girl, reality and the image of reality represented on TV are hard to differentiate. Along with reality, individuals' agency and independence before the media and capitalist rule are called into question in the novel. To expose the unstable nature of reality and the individual's complicated relationship with media, this study deploys the ideas of the Marxist media theorist Guy Debord (1931-1994). In his book The Society of the Spectacle (1967), Debord delineates a society in which images replace objective reality and people are incapable of making real changes. The results of the current study show that, despite their efforts, Nick and Amy, the two main characters of the novel, are no more than spectators with very little agency before the media. Moreover, following Debord's argument about the replacement of reality with images, everyone and every institution in Gone Girl projects an image that does not necessarily embody objective reality, a fact that makes it very hard to differentiate the real from the unreal.

Keywords: agency, Debord, Gone Girl, media studies, society of spectacle, reality

Procedia PDF Downloads 108
19525 Evaluation of Diagnosis Performance Based on Pairwise Model Construction and Filtered Data

Authors: Hyun-Woo Cho

Abstract:

Timely, intelligent production monitoring and diagnosis of industrial processes are quite important for quality and safety. Compared with the monitoring task, fault diagnosis is the task of finding the process variables responsible for a specific fault in the process. It can help process operators investigate and eliminate root causes more effectively and efficiently. This work focused on combining a nonlinear statistical technique with a preprocessing method in order to implement practical real-time fault identification schemes for data-rich cases. To compare its performance to existing identification schemes, a case study on a benchmark process was performed under several scenarios. The results showed that the proposed fault identification scheme produced more reliable diagnosis results than linear methods. In addition, the filtering step improved the identification results for complicated processes with massive data sets.
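A sketch of the identification workflow, using a moving-average filter for preprocessing and linear PCA residual contributions as a simple stand-in for the (unnamed) nonlinear technique:

```python
import numpy as np
from sklearn.decomposition import PCA

# Smooth the data, model normal operation, then rank variables by their
# contribution to the residual of faulty samples (synthetic data).
rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 8))                      # normal operating data
fault = rng.normal(size=(50, 8))
fault[:, 3] += 4.0                                      # fault on variable 3

def moving_average(X, w=5):                             # the filtering step
    kernel = np.ones(w) / w
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"), 0, X)

pca = PCA(n_components=3).fit(moving_average(normal))
filtered_fault = moving_average(fault)
resid = filtered_fault - pca.inverse_transform(pca.transform(filtered_fault))
contrib = (resid ** 2).mean(axis=0)                     # per-variable SPE contribution
print("suspect variable:", int(np.argmax(contrib)))
```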

Keywords: diagnosis, filtering, nonlinear statistical techniques, process monitoring

Procedia PDF Downloads 225
19524 Study on Safety Management of Deep Foundation Pit Construction Site Based on Building Information Modeling

Authors: Xuewei Li, Jingfeng Yuan, Jianliang Zhou

Abstract:

The 21st century has been called the century of human exploitation of underground space. Due to the large scale, tight schedules, low safety reserves, and high uncertainty of deep foundation pit engineering, accidents occur frequently, causing huge economic losses and casualties. With the successful application of information technology in the construction industry, Building Information Modeling has become a research hotspot in the field of architectural engineering; the application of BIM and other information and communication technologies (ICTs) to construction safety management is therefore of great significance for improving the level of safety management. This research summarizes the mechanism of deep foundation pit accidents through fault tree analysis to find the control factors of deep foundation pit safety management and the deficiencies of traditional construction site safety management. Based on the accident causation mechanism and the specific process of deep foundation pit construction, the hazard information of the construction site was identified, and a hazard list, including early-warning information, was obtained. After that, the system framework was constructed by analyzing the early-warning information and early-warning function requirements of the safety management system. Finally, a BIM-based safety management system for deep foundation pit construction sites was developed by combining a database with Web-BIM technology, realizing real-time positioning of construction site personnel, automatic warning on entry into a dangerous area, and real-time monitoring of deep foundation pit structural deformation with automatic warning. This study can improve the current state of safety management on deep foundation pit construction sites. Additionally, active control before the occurrence of accidents and dynamic control throughout the construction process can be realized, so as to prevent and control safety accidents in deep foundation pit engineering.
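The dangerous-area warning can be illustrated with a plain point-in-polygon geofence test (the system's actual positioning and zoning logic is not described in this detail; coordinates below are invented):

```python
def in_danger_zone(x, y, polygon):
    """Ray-casting point-in-polygon test: does a worker's real-time
    position fall inside a dangerous area?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                     # edge crosses the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

pit_edge_zone = [(0, 0), (10, 0), (10, 4), (0, 4)]   # site coordinates, metres
worker_pos = (3.2, 1.5)                              # from the positioning tags
if in_danger_zone(*worker_pos, pit_edge_zone):
    print("WARNING: worker inside dangerous area near pit edge")
```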

Keywords: Web-BIM, safety management, deep foundation pit, construction

Procedia PDF Downloads 140
19523 Rapid, Direct, Real-Time Method for Bacteria Detection on Surfaces

Authors: Evgenia Iakovleva, Juha Koivisto, Pasi Karppinen, J. Inkinen, Mikko Alava

Abstract:

Preventing the spread of infectious diseases throughout the world is one of the most important tasks of modern health care. Infectious diseases not only account for one fifth of deaths worldwide, but also cause many pathological complications for human health. Touch surfaces are an important vector for the spread of infections by various microorganisms, including antimicrobial-resistant organisms; antimicrobial resistance, in turn, is the response of bacteria to the overuse or inappropriate use of antibiotics everywhere. The biggest challenges in bacterial detection by existing methods are indirect determination, long analysis times, sample preparation, the use of chemicals and expensive equipment, and the need for qualified specialists. Therefore, high-performance, rapid, real-time detection is demanded for practical bacterial detection and for controlling epidemiological hazards. Among the known methods for determining bacteria on surfaces, hyperspectral methods can serve as direct and rapid fluorescence-based methods for microorganism detection on different kinds of surfaces, without sampling, sample preparation, or chemicals. The aim of this study was to assess the relevance of such systems to the remote sensing of surfaces for microorganism detection, to prevent the global spread of infectious diseases. Bacillus subtilis and Escherichia coli at different concentrations (from 0 to 10^8 cells/100 µL) were detected with a hyperspectral camera using different filters, visualizing bacteria and background spots on a steel plate. A method of internal standards was applied to monitor the correctness of the analysis results. The distances from the sample to the hyperspectral camera and to the light source are 25 cm and 40 cm, respectively. Each sample surface is optically imaged by the hyperspectral imaging system, utilizing a JAI CM-140GE-UV camera. The light source is a BeamZ FLATPAR DMX tri-light with 3 W tri-colour LEDs (red, blue, and green); light colors are changed through a DMX USB Pro interface. The developed system was calibrated following a standard procedure of setting the exposure, and it was focused for light with λ=525 nm. The filter is a ThorLabs Kurios hyperspectral filter controller with wavelengths from 420 to 720 nm. All data collection, pre-processing, and multivariate analysis were performed using LabVIEW and Python software. The studied bacterial stains, both visible and invisible to the human eye, clustered apart from the reference steel material in clustering analysis under different light sources and filter wavelengths. The calculation of the random and systematic errors of the analysis results proved the applicability of the method under real conditions. Validation experiments were carried out against photometry and an ATP swab test; all parameters of the experiments were the same, except for the light. The lower detection limit of the developed method is several orders of magnitude lower than that of both validation methods. The hyperspectral imaging method allows separating not only bacteria and surfaces, but also different types of bacteria, such as Gram-negative Escherichia coli and Gram-positive Bacillus subtilis. The developed method skips sample preparation and the use of chemicals, unlike other microbiological methods, and the analysis time with the novel hyperspectral system is a few seconds, which is innovative in the field of microbiological testing.
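The clustering step can be sketched as a PCA projection of per-pixel spectra followed by k-means (synthetic spectra; the study's LabVIEW/Python pipeline details are not given):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Separate bacterial-stain pixels from the steel background by clustering
# per-pixel spectra in a low-dimensional PCA space.
rng = np.random.default_rng(0)
n_pixels, n_bands = 2000, 60                      # bands spanning 420-720 nm
steel = rng.normal(0.2, 0.02, (n_pixels // 2, n_bands))
stain = rng.normal(0.2, 0.02, (n_pixels // 2, n_bands))
stain[:, 20:30] += 0.15                           # fluorescence bump for bacteria

spectra = np.vstack([steel, stain])
scores = PCA(n_components=3).fit_transform(spectra)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print("cluster sizes:", np.bincount(labels))      # steel vs bacterial pixels
```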

Keywords: Escherichia coli, Bacillus subtilis, hyperspectral imaging, microorganisms detection

Procedia PDF Downloads 207
19522 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
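A toy sketch of the first two components, the transaction graph and node-level probabilistic sampling, using networkx (the records are invented; a production system would stream real chain data):

```python
import random
import networkx as nx

txs = [  # (tx_hash, sender, receiver, block_number): illustrative only
    ("0xa1", "alice", "bob", 100), ("0xa2", "bob", "carol", 100),
    ("0xa3", "carol", "dave", 101), ("0xa4", "alice", "carol", 102),
]

# Each transaction is a node; edges encode temporal (block-order) relations.
G = nx.DiGraph()
for tx, sender, receiver, block in txs:
    G.add_node(tx, sender=sender, receiver=receiver, block=block)
for u in G.nodes:
    for v in G.nodes:
        if u != v and G.nodes[u]["block"] < G.nodes[v]["block"]:
            G.add_edge(u, v)

# Probabilistic sampling: keep each transaction node with probability p,
# yielding a concise yet representative subgraph for downstream analysis.
p = 0.5
random.seed(0)
sampled = G.subgraph([n for n in G.nodes if random.random() < p]).copy()
print(G.number_of_nodes(), "->", sampled.number_of_nodes(), "nodes after sampling")
```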

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 57