Search results for: force sensor
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3477

327 Machine Learning and Internet of Things for Smart-Hydrology of the Mantaro River Basin

Authors: Julio Jesus Salazar, Julio Jesus De Lama

Abstract:

The fundamental objective of hydrological studies applied to the engineering field is to determine the statistically consistent volumes or water flows that, in each case, allow us to size or design a series of elements or structures to effectively manage and develop a river basin. To determine these values, there are several ways of working within the framework of traditional hydrology: (1) study each of the factors that influence the hydrological cycle, (2) study the historical behavior of the hydrology of the area, (3) study the historical behavior of hydrologically similar zones, and (4) other studies (rain simulators or experimental basins). This range of studies in a given basin is varied and complex and presents the difficulty of collecting the data in real time. In this complex setting, the study of the variables can only be managed by collecting and transmitting data to decision centers through the Internet of Things and artificial intelligence. Thus, this research work implemented the learning project of the sub-basin of the Shullcas river in the Andean basin of the Mantaro river in Peru. The sensor firmware to collect and communicate hydrological parameter data was programmed and tested in similar basins of the European Union. The machine learning application was programmed to choose the algorithms that best determine the rainfall-runoff relationship captured in the different polygons of the sub-basin. Tests were carried out in the mountains of Europe and in the sub-basins of the Shullcas river (Huancayo) and the Yauli river (Jauja), at altitudes close to 5000 m.a.s.l., leading to the following conclusions. To guarantee correct communication, the distance between devices should not exceed 15 km. To minimize the energy consumption of the devices and avoid collisions between packets, distances should range between 5 and 10 km; in this way, the transmission power can be reduced and a higher bitrate can be used. If the communication elements of the devices of the network (Internet of Things) installed in the basin do not have good visibility between them, the distance should be reduced to the range of 1-3 km. The energy efficiency of the Atmel microcontrollers present in Arduino boards is not adequate to meet the requirements of system autonomy. To increase the autonomy of the system, it is recommended to use low-consumption systems, such as ultra-low-power ARM Cortex microcontrollers (e.g., the Cortex-M family), together with high-efficiency DC-DC converters. The machine learning system has begun learning the Shullcas system to generate the best hydrology of the sub-basin. This will improve as the machine learning progresses and data accumulate in the big data store every second. This will provide services to each of the applications of the complex system to return the best estimates of the determined flows.
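
A minimal illustration of the kind of rainfall-runoff learning step described above is sketched below; it assumes hypothetical hourly sensor records of rainfall and discharge, and the column names, lag window and model choice are illustrative, not the authors' implementation.

```python
# Minimal sketch: learn a rainfall-runoff relation from IoT sensor records.
# Assumes a CSV with hourly columns "rain_mm" and "flow_m3s"; names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("shullcas_hourly.csv")          # hypothetical sensor export
lags = 6                                          # use the previous 6 h of rain as predictors
X = np.column_stack([df["rain_mm"].shift(k) for k in range(1, lags + 1)])
y = df["flow_m3s"].to_numpy()
mask = ~np.isnan(X).any(axis=1)
X, y = X[mask], y[mask]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, shuffle=False)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE on held-out period:", mean_absolute_error(y_te, model.predict(X_te)))
```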

Keywords: hydrology, internet of things, machine learning, river basin

Procedia PDF Downloads 131
326 The Impact of Reducing Road Traffic Speed in London on Noise Levels: A Comparative Study of Field Measurement and Theoretical Calculation

Authors: Jessica Cecchinelli, Amer Ali

Abstract:

The continuing growth in road traffic and the resultant impact on pollution levels and safety, especially in urban areas, have led local and national authorities to reduce traffic speed and flow in major towns and cities. Various boroughs of London have recently reduced the in-city speed limit from 30 mph to 20 mph, mainly to calm traffic, improve safety and reduce noise and vibration. This paper reports the detailed field measurements, using a noise sensor and analyser, and the corresponding theoretical calculations and analysis of the noise levels on a number of roads in the central London Borough of Camden, where the speed limit was reduced from 30 mph to 20 mph on all roads except the major routes of Transport for London (TfL). The measurements, which included the key noise levels and scales at residential streets and main roads, were conducted during weekday and weekend normal and rush hours. The theoretical calculations were done according to the UK procedure ‘Calculation of Road Traffic Noise 1988’, with conversion to the European L-day, L-evening, L-night, L-den and other important levels. The current study also includes comparable data and analysis from previously measured noise in the Borough of Camden and other boroughs of central London. Classified traffic flow and speed on the roads concerned were observed and used in the calculation part of the study. Relevant data and a description of the weather conditions are reported. The paper also reports a field survey in the form of face-to-face interview questionnaires, carried out in parallel with the field measurement of noise, in order to ascertain the opinions and views of local residents and workers in the reduced-speed 20 mph zones. The main findings are that the reduction in speed reduced the noise pollution in the studied zones and that the measured and calculated noise levels for each speed zone are closely matched. Another finding, from the survey of local residents and workers in the reduced-speed 20 mph zones, was that they supported the scheme and felt that it had improved the quality of life in their areas, giving a sense of calmness and safety, particularly for families with children and the elderly, and encouraging pedestrians and cyclists. The key conclusions are that lowering the speed limit in built-up areas would not just reduce the number of serious accidents but would also reduce noise pollution and promote clean modes of transport, particularly walking and cycling. The details of the site observations and the corresponding calculations, together with critical comparative analysis and relevant conclusions, will be reported in the full version of the paper.
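
For reference, the European L-den indicator mentioned above combines the day, evening and night levels with the usual 5 dB and 10 dB penalties; the small sketch below shows this standard combination, with purely illustrative input levels.

```python
# Sketch of the standard L_den combination of L_day (12 h), L_evening (4 h) and L_night (8 h),
# with the usual +5 dB evening and +10 dB night penalties. Input values are illustrative.
import math

def l_den(l_day: float, l_evening: float, l_night: float) -> float:
    return 10 * math.log10(
        (12 * 10 ** (l_day / 10)
         + 4 * 10 ** ((l_evening + 5) / 10)
         + 8 * 10 ** ((l_night + 10) / 10)) / 24
    )

print(round(l_den(65.0, 62.0, 58.0), 1))  # ~66.6 dB for these illustrative levels
```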

Keywords: noise calculation, noise field measurement, road traffic noise, speed limit in London, survey of people's satisfaction

Procedia PDF Downloads 407
325 Improved Signal-To-Noise Ratio by the 3D-Functionalization of Fully Zwitterionic Surface Coatings

Authors: Esther Van Andel, Stefanie C. Lange, Maarten M. J. Smulders, Han Zuilhof

Abstract:

False outcomes of diagnostic tests are a major concern in medical health care. To improve the reliability of surface-based diagnostic tests, it is of crucial importance to diminish background signals that arise from the non-specific binding of biomolecules, a process called fouling. The aim is to create surfaces that repel all biomolecules except the molecule of interest. This can be achieved by incorporating antifouling, protein-repellent coatings between the sensor surface and its recognition elements (e.g. antibodies, sugars, aptamers). Zwitterionic polymer brushes are considered excellent antifouling materials; however, to be able to bind the molecule of interest, the polymer brushes have to be functionalized, and so far this was only achieved at the expense of either antifouling or binding capacity. To overcome this limitation, we combined both features into one single monomer: a zwitterionic sulfobetaine, ensuring antifouling capabilities, equipped with a clickable azide moiety which allows for further functionalization. By copolymerizing this monomer together with a standard sulfobetaine, the number of azides (and with that the number of recognition elements) can be tuned depending on the application. First, the clickable azido-monomer was synthesized and characterized, followed by copolymerization of this monomer to yield functionalizable antifouling brushes. The brushes were fully characterized using surface characterization techniques like XPS, contact angle measurements, G-ATR-FTIR and XRR. As a proof of principle, the brushes were subsequently functionalized with biotin via strain-promoted alkyne-azide click reactions, which yielded a fully zwitterionic biotin-containing 3D-functionalized coating. The sensing capacity was evaluated by reflectometry using avidin- and fibrinogen-containing protein solutions. The surfaces showed excellent antifouling properties, as illustrated by the complete absence of non-specific fibrinogen binding, while at the same time clear responses were seen for the specific binding of avidin. A great increase in signal-to-noise ratio was observed, even when the amount of functional groups was lowered to 1%, compared to traditional modification of sulfobetaine brushes that relies on a 2D approach in which only the top layer can be functionalized. This study was performed on stoichiometric silicon nitride surfaces for future microring-resonator-based assays; however, this methodology can be transferred to other biosensor platforms, which are currently being investigated. The approach presented herein enables a highly efficient strategy for selective binding with retained antifouling properties for improved signal-to-noise ratios in binding assays. The number of recognition units can be adjusted to a specific need, e.g. depending on the size of the analyte to be bound, widening the scope of these functionalizable surface coatings.

Keywords: antifouling, signal-to-noise ratio, surface functionalization, zwitterionic polymer brushes

Procedia PDF Downloads 286
324 Blade-Coating Deposition of Semiconducting Polymer Thin Films: Light-To-Heat Converters

Authors: M. Lehtihet, S. Rosado, C. Pradère, J. Leng

Abstract:

Poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) is a polymer mixture well known for its semiconducting properties; it is widely used in the coating industry for its visible transparency and high electronic conductivity (up to 4600 S/cm), as a transparent non-metallic electrode and in organic light-emitting diodes (OLED). It also possesses strong absorption in the near-infrared (NIR) range (λ between 900 nm and 2.5 µm). In the present work, we take advantage of this absorption to explore its potential use as a transparent light-to-heat converter. PEDOT:PSS aqueous dispersions are deposited onto a glass substrate using a blade-coating technique in order to produce uniform coatings with controlled thicknesses ranging from about 400 nm to 2 µm. The blade-coating technique gives good control of the deposit thickness and uniformity through the tuning of several experimental conditions (blade velocity, evaporation rate, temperature, etc.). This liquid coating technique is a well-known, inexpensive way to realize thin-film coatings on various substrates. For coatings on glass substrates intended for solar insulation applications, the ideal coating would be made of a material able to transmit the whole visible range while reflecting the NIR range perfectly, but materials with such properties still have unsatisfactory opacity in the visible (for example, titanium dioxide nanoparticles). NIR-absorbing thin films are a more realistic alternative for such an application. Under solar illumination, PEDOT:PSS thin films heat up due to the absorption of NIR light and thus act as planar heaters while maintaining good transparency in the visible range. Whereas they screen some NIR radiation, they also generate heat, which is then conducted into the substrate that re-emits this energy by thermal emission in every direction. In order to quantify the heating power of these coatings, a sample (coating on glass) is placed in a black enclosure and illuminated with a solar simulator, a lamp emitting a calibrated radiation very similar to the solar spectrum. The temperature of the rear face of the substrate is measured in real time using thermocouples, and a black-painted Peltier sensor measures the total entering flux (sum of the transmitted and re-emitted fluxes). The heating power density of the thin films is estimated from a model of the thin film/glass substrate system, and we estimate the Solar Heat Gain Coefficient (SHGC) to quantify the light-to-heat conversion efficiency of such systems. Eventually, the effect of additives such as dimethyl sulfoxide (DMSO) or optical scatterers (particles) on the performance is also studied, as the first can drastically alter the IR absorption properties of PEDOT:PSS and the second can increase the apparent optical path of light within the thin-film material.
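
As a rough illustration of the SHGC estimate mentioned above, the sketch below uses the common single-pane approximation in which the coefficient is the solar transmittance plus the inward-flowing fraction of the absorbed solar energy; the surface heat-transfer coefficients and optical values are illustrative assumptions, not measured data from this work.

```python
# Sketch: single-pane SHGC approximation, SHGC ~ T_sol + A_sol * h_in / (h_in + h_out).
# All numbers below are illustrative placeholders, not measurements from the coatings.
def shgc(t_sol: float, a_sol: float, h_in: float = 8.0, h_out: float = 23.0) -> float:
    """t_sol: solar transmittance, a_sol: solar absorptance of the coated pane (0-1)."""
    inward_fraction = h_in / (h_in + h_out)   # share of absorbed heat flowing indoors
    return t_sol + a_sol * inward_fraction

print(round(shgc(t_sol=0.55, a_sol=0.30), 2))  # ~0.63 with these assumed values
```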

Keywords: PEDOT:PSS, blade-coating, heat, thin film, solar spectrum

Procedia PDF Downloads 137
323 Establishment of a Precision System for Underground Facilities Based on 3D Absolute Positioning Technology

Authors: Yonggu Jang, Jisong Ryu, Woosik Lee

Abstract:

The study aims to address the limitations of existing underground facility exploration equipment in terms of exploration depth range, relative depth measurement, data processing time, and human-centered interpretation of ground penetrating radar (GPR) images. To this end, it proposes the use of 3D absolute positioning technology to build a precision underground facility exploration system that can accurately survey up to a depth of 5 m and measure the 3D absolute location of underground facilities. The study developed both software and hardware technologies for the system. The software technologies include absolute positioning technology, ground-surface location synchronization technology for the GPR exploration equipment, AI interpretation technology for GPR exploration images, and composite data processing technology based on the integrated underground space map. The hardware systems developed include a vehicle-type exploration system and a cart-type exploration system. Data were collected using the developed exploration system, which employs 3D absolute positioning technology; the GPR exploration images were analyzed using AI technology, and the three-dimensional location information of the explored underground facilities was compared to the integrated underground space map. The resulting system comprises software that builds a precise 3D DEM, synchronizes the GPR sensor's ground-surface 3D location coordinates, automatically analyzes and detects underground facility information in GPR exploration images, and improves accuracy through comparative analysis of the three-dimensional location information, together with the vehicle-type and cart-type hardware platforms. The findings and technological advancements are essential for underground safety management in Korea: the proposed precision exploration system contributes significantly to establishing precise location information for underground facilities, which is crucial for underground safety management, and improves the accuracy and efficiency of exploration up to a depth of 5 m.
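
A minimal sketch of the ground-surface location synchronization step described above is given below: it simply interpolates timestamped GNSS fixes onto the GPR trace timestamps so that each trace receives an absolute 3D coordinate. The arrays, sampling rates and values are hypothetical placeholders, not the developed system.

```python
# Sketch: assign absolute 3D coordinates to GPR traces by interpolating GNSS fixes in time.
# Timestamps, arrays and rates below are hypothetical placeholders.
import numpy as np

gnss_t = np.array([0.0, 1.0, 2.0, 3.0])               # GNSS fix times (s)
gnss_xyz = np.array([[0.0, 0.0, 100.0],
                     [0.8, 0.1, 100.1],
                     [1.6, 0.2, 100.1],
                     [2.4, 0.3, 100.2]])              # easting, northing, elevation (m)

trace_t = np.arange(0.0, 3.0, 0.05)                   # GPR trace times at 20 traces/s

trace_xyz = np.column_stack([
    np.interp(trace_t, gnss_t, gnss_xyz[:, k]) for k in range(3)
])
print(trace_xyz.shape)   # (60, 3): one absolute coordinate per trace
```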

Keywords: 3D absolute positioning, AI interpretation of GPR exploration images, complex data processing, integrated underground space maps, precision exploration system for underground facilities

Procedia PDF Downloads 42
322 Analysis of Delays during Initial Phase of Construction Projects and Mitigation Measures

Authors: Sunaitan Al Mutairi

Abstract:

A perfect start is a key factor for project completion on time. The study examined the effects of delayed mobilization of resources during the initial phases of the project. This paper mainly highlights the identification and categorization of all delays during the initial construction phase and their root cause analysis, with corrective/control measures, for the Kuwait Oil Company oil and gas projects. A relatively high percentage of the delays identified during project execution (from contract award to the end of the defects liability period) were attributed to mobilization/preliminary activity delays. Data analysis demonstrated a significant increase in the average project delay during the last five years compared to the previous period. Contractors had delays/issues during the initial phase, which resulted in slippages that progressively increased, leading to time and cost overruns. Delays/issues not mitigated on time during the initial phase had a very high impact on project completion. Data analysis of the delays for the past five years was carried out using trend charts, scatter plots, process maps, box plots, the relative importance index and Pareto charts. Construction of any project inside the Gathering Centers involves complex management of the workforce, materials, plant, machinery, new technologies, etc. Delay affects the completion of projects and compromises the quality, schedule and budget of project deliverables. Works executed as per plan during the initial phase and start-up duration of the project construction activities resulted in only minor slippages/delays in project completion. In addition, a good working environment between client and contractor resulted in better project execution and management. Notably, projects in which the contractor was on the front foot in execution had minimal or no delays during the initial and construction periods. Hence, a perfect start during the initial construction phase has a positive influence on project success. This paper studies each type of delay, with real examples supported by statistical results, and suggests mitigation measures. A detailed analysis was carried out with all stakeholders, based on the impact and occurrence of delays, to arrive at a practical and effective outcome for mitigating the delays. The key to improvement is to have proper control measures and periodic evaluation/audits to ensure implementation of the mitigation measures. The focus of this research is to reduce the delays encountered during the initial construction phase of the project life cycle.
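
One of the tools listed above, the relative importance index, is typically computed as RII = ΣW / (A·N), where W is the rating each respondent assigns to a delay cause, A is the highest possible rating and N is the number of respondents. The sketch below shows this calculation on made-up ratings, not the study's survey data.

```python
# Sketch: relative importance index (RII) for ranking delay causes, RII = sum(W) / (A * N).
# The ratings below are invented for illustration (scale 1-5).
def rii(ratings, max_rating=5):
    return sum(ratings) / (max_rating * len(ratings))

causes = {
    "late mobilization of manpower": [5, 4, 5, 4, 5, 3],
    "delayed material procurement":  [4, 4, 3, 5, 4, 4],
    "permit / approval delays":      [3, 3, 4, 2, 3, 3],
}
for cause, scores in sorted(causes.items(), key=lambda kv: -rii(kv[1])):
    print(f"{cause:35s} RII = {rii(scores):.2f}")
```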

Keywords: construction activities delays, delay analysis for construction projects, mobilization delays, oil & gas projects delays

Procedia PDF Downloads 290
321 Experimental Field for the Study of Soil-Atmosphere Interaction in Soft Soils

Authors: Andres Mejia-Ortiz, Catalina Lozada, German R. Santos, Rafael Angulo-Jaramillo, Bernardo Caicedo

Abstract:

The interaction between atmospheric variables and soil properties is a determining factor when evaluating the flow of water through the soil. This interaction directly determines the behavior of the soil and greatly influences the changes that occur in it. Atmospheric variations such as changes in relative humidity, air temperature, wind velocity and precipitation are the external variables with the greatest incidence on the changes generated in the subsoil as a consequence of descending and ascending water flow. These environmental variations are of major importance in the study of the soil because the moisture and temperature conditions at the soil surface depend on them. In addition, these variations control the thickness of the unsaturated zone and the position of the water table with respect to the surface. However, understanding the relationship between the atmosphere and the soil is a somewhat complex task. This is mainly due to the difficulty involved in estimating the changes that occur in the soil from climatic changes, since this is a coupled process in which mass- and heat-transfer processes act together. In this research, an experimental field was implemented to study in situ the interaction between the atmosphere and the soft soils of the city of Bogota, Colombia. The soil under study consists of a 60 cm layer composed of two silts of similar characteristics at the surface and a deep soft clay deposit located under the silty material. It should be noted that the vegetal layer and organic matter were removed to avoid the evapotranspiration phenomenon. Instrumentation was carried out in situ through a field deployment of many measuring devices, such as soil moisture sensors, thermocouples, relative humidity sensors and a wind velocity sensor, among others, which allow registering the variations of both the atmospheric variables and the properties of the soil. With the information collected through field monitoring, water balances were made using the Hydrus-1D software to determine the flow conditions that developed in the soil during the study. Also, the moisture profile for different periods and time intervals was determined from the balance supplied by Hydrus-1D; this profile was validated by experimental measurements. As a boundary condition, the actual evaporation rate was included using the semi-empirical equations proposed by different authors. In this study, a descending flow governed by the infiltration capacity of the soil was obtained for the rainy periods. On the other hand, during dry periods, an increase in the actual evaporation of the soil induces an upward flow of water, increasing suction due to the decrease in moisture content. Also, cracks developed, accelerating the evaporation process. This work concerns the study of soil-atmosphere interaction through an experimental field, which is a very useful tool since it allows considering all the factors and parameters of the soil in its natural state and real values of the different environmental conditions.
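
The daily water-balance bookkeeping behind the Hydrus-1D runs described above can be summarized as ΔS = P − Ea − R − D (storage change equals precipitation minus actual evaporation, runoff and deep drainage). The toy bucket model below illustrates the idea with invented daily values, not the Bogota field data or the Hydrus-1D solution.

```python
# Sketch: toy daily soil-water balance, dS = P - Ea - R - D, on invented values.
# Not the Hydrus-1D field simulation; just the bookkeeping it is based on.
storage_max = 120.0      # assumed available water capacity of the 60 cm layer (mm)
storage = 80.0           # assumed initial storage (mm)

precip   = [0.0, 12.0, 30.0, 0.0, 0.0, 5.0]   # mm/day, illustrative
evap_act = [2.5, 1.0, 0.5, 3.0, 3.5, 2.0]     # actual evaporation, mm/day, illustrative

for day, (p, ea) in enumerate(zip(precip, evap_act), start=1):
    storage += p - ea
    runoff = max(0.0, storage - storage_max)  # excess above capacity leaves as runoff/drainage
    storage = min(max(storage, 0.0), storage_max)
    print(f"day {day}: storage = {storage:6.1f} mm, runoff+drainage = {runoff:4.1f} mm")
```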

Keywords: field monitoring, soil-atmosphere, soft soils, soil-water balance

Procedia PDF Downloads 115
320 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances

Authors: P. Mounnarath, U. Schmitz, Ch. Zhang

Abstract:

Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of expansion joints according to the various bridge design codes is rather inconsistent, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm and 350 mm) are designed by following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference; it uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time history analysis is performed. Artificial ground motion sets with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g, with an increment of 0.05 g, are taken as input. Soil-structure interaction and P-Δ effects are also included in the analysis. The component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that, in the component fragility analysis, the reference bridge model exhibits a severe vulnerability compared with the more sophisticated bridge models for all damage states. In the system fragility analysis, the reference curves show a smaller damage probability in the earlier PGA ranges for the first three damage states, but then a higher fragility compared with the other curves at larger PGA levels. In the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analysis, the same trend is found: the bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces a minimum pounding force effect.
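
Fragility curves of the kind described above are commonly expressed as a lognormal CDF of the intensity measure, P(damage ≥ DS | PGA) = Φ(ln(PGA/θ)/β), with median θ and dispersion β estimated from the nonlinear time-history results. The sketch below evaluates such a curve for illustrative parameter values, not the fitted values of this study.

```python
# Sketch: lognormal fragility curve, P(D >= DS | PGA) = Phi(ln(PGA/theta)/beta).
# The median theta and dispersion beta below are illustrative, not fitted values from the study.
import numpy as np
from scipy.stats import norm

theta = 0.45   # assumed median PGA (g) for the damage state
beta = 0.50    # assumed lognormal dispersion

pga = np.arange(0.1, 1.05, 0.05)                     # same PGA grid as the input motions
p_exceed = norm.cdf(np.log(pga / theta) / beta)

for a, p in zip(pga, p_exceed):
    print(f"PGA = {a:4.2f} g -> P(exceed damage state) = {p:.2f}")
```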

Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis

Procedia PDF Downloads 414
319 The Use of Technology in Theatrical Performances as a Tool of Audience’s Engagement

Authors: Chrysoula Bousiouta

Abstract:

Throughout the history of theatre, technology has played an important role, both in influencing the relationship between performance and audience and in offering different kinds of experiences. The use of technology dates back to ancient times, with the introduction of artifacts such as the ‘deus ex machina’ in ancient Greek theatre. Taking into account the key techniques and experiences used throughout history, this paper investigates how technology, through new media, influences contemporary theatre. In the context of this research, technology is defined as projections, audio environments, video projections, sensors and tele-connections, all alongside the performance, challenging the audience’s participation. The theoretical framework of the research covers, besides the history of theatre, the theory of the ‘experience economy’ that took over from the service and goods economy. The research is based on a qualitative and comparative analysis of two case studies, Contact Theatre in Manchester (United Kingdom) and Bios in Athens (Greece). The data collection includes desk research and is complemented with semi-structured interviews. Building on the results of the research, one could claim that the intended experience of modern/contemporary theatre is that of engagement. In this context, technology, as defined above, plays a leading role in creating it. This experience passes through and exists in the middle of the realms of entertainment, education, estheticism and escapism. Furthermore, it is observed that nowadays theatre is not only about acting but also about performing; performances are unfinished without the participation of the audience. Both case studies try to achieve the experience of engagement through practices that promote the attraction of attention, the increase of imagination, interaction, intimacy and true activity. These practices are achieved through the script, the scenery, the language and the environment of a performance. Contact and Bios consider technology an intimate tool for accomplishing the above, and they make extended use of it. The research compiles a notable record of the technological techniques that modern theatres use. The use of technology, inside or outside the limits of film techniques, helps to rivet the attention of the audience, to make performances enjoyable, to give the sense of the ‘unfinished’, or to be used for things that take place around the spectators and force them to take action, becoming spect-actors. The advantage of technology is that it can be used as a hook for interaction at all stages of a performance. Further research on the field could involve exploring alternative ways of binding technology and theatre or analyzing how the performance is perceived through the use of technological artifacts.

Keywords: experience of engagement, interactive theatre, modern theatre, performance, technology

Procedia PDF Downloads 228
318 Study of the Combinatorial Impact of Substrate Properties on Mesenchymal Stem Cell Migration Using Microfluidics

Authors: Nishanth Venugopal Menon, Chuah Yon Jin, Samantha Phey, Wu Yingnan, Zhang Ying, Vincent Chan, Kang Yuejun

Abstract:

Cell migration is a vital phenomenon that cells undergo in various physiological processes like wound healing, disease progression and embryogenesis. Cell migration depends primarily on the chemical and physical cues available in the cellular environment. The chemical cues involve the chemokines secreted and the gradients generated in the environment, while the physical cues indicate the impact of matrix properties like nanotopography and stiffness on the cells. Mesenchymal stem cells (MSCs) have been shown to play a role in wound healing in vivo, and their migration to the site of the wound has been shown to have a therapeutic effect. In the field of stem cell based tissue regeneration of bone and cartilage, one approach has been to introduce scaffolds laden with MSCs into the site of injury to enable tissue regeneration. In this work, we have studied the combinatorial impact of the substrate's physical properties on MSC migration. A microfluidic in vitro model was created to perform the migration studies. The microfluidic model used is a three-compartment device consisting of two cell seeding compartments and one migration compartment. Four different PDMS substrates with varying substrate roughness, stiffness and hydrophobicity were created. Their surface roughness and stiffness were measured using atomic force microscopy (AFM), while their hydrophobicity was determined from the water contact angle using an optical tensiometer. These PDMS substrates are sealed to the microfluidic chip, following which the MSCs are seeded and cell migration is studied over the period of a week. Cell migration was quantified using fluorescence imaging of the cytoskeleton (F-actin) to find the area covered by the cells inside the migration compartment. The impact of adhesion proteins on cell migration was also quantified using real-time polymerase chain reaction (qRT-PCR). The results suggested that the optimal substrate for cell migration would be one with an intermediate level of roughness, stiffness and hydrophobicity; a higher or lower value of these properties affected cell migration negatively. These observations have helped us understand that different substrate properties need to be considered in tandem, especially while designing scaffolds for tissue regeneration, as cell migration is normally impacted by the combinatorial effect of the matrix. These observations may lead to scaffold optimization in future tissue regeneration applications.
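
The area-coverage quantification mentioned above (the area occupied by F-actin-stained cells inside the migration compartment) can be approximated from a fluorescence image by simple thresholding. The sketch below uses scikit-image with a hypothetical image file and Otsu thresholding as a stand-in for the authors' actual image-analysis pipeline.

```python
# Sketch: estimate the fraction of the migration compartment covered by cells
# from an F-actin fluorescence image. File name and thresholding choice are illustrative.
from skimage import io, filters
from skimage.color import rgb2gray

img = io.imread("migration_compartment_day7.tif")   # hypothetical image
if img.ndim == 3:                                    # collapse RGB to intensity if needed
    img = rgb2gray(img)

threshold = filters.threshold_otsu(img)              # global Otsu threshold
cell_mask = img > threshold
area_fraction = cell_mask.mean()                     # covered pixels / total pixels
print(f"Cell-covered area fraction: {area_fraction:.1%}")
```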

Keywords: cell migration, microfluidics, in vitro model, stem cell migration, scaffold, substrate properties

Procedia PDF Downloads 534
317 The Connection between De Minimis Rule and the Effect on Trade

Authors: Pedro Mario Gonzalez Jimenez

Abstract:

The novelties introduced by the latest Notice on agreements of minor importance tighten the application of the ‘de minimis’ safe harbour in the European Union. At the same time, the undetermined legal concept of effect on trade between the Member States becomes more important. Therefore, the analysis that the jurist must currently carry out in the European Union to determine whether an agreement appreciably restricts competition under Article 101 of the Treaty on the Functioning of the European Union is twofold. Hence, it is necessary to know how to balance significance for competition and significance for the effect on trade between the Member States. This is a crucial issue because the negative delimitation of the restriction of competition affects the positive one. The methodology of this research is rather simple. Beginning with a historical approach to the ‘de minimis’ rule, its main problems and uncertainties are identified. Then, after an analysis of normative documents and the jurisprudence of the Court of Justice of the European Union, some ‘lege ferenda’ proposals are offered. These proposals try to overcome the contradictions and questions that currently exist in the European Union as a consequence of the current legal regime of agreements of minor importance. The main findings of this research are the following. Firstly, the effect on trade is a way of analyzing the importance of an agreement that is distinct from the ‘de minimis’ rule. In fact, this concept is singularly suited to assessing agreements that have as their object the prevention, restriction or distortion of competition, as is observed in the most famous European Union case law. Thanks to the effect on trade, as long as the proper requirements are met there is no restriction of competition under Article 101 of the Treaty on the Functioning of the European Union, even if the agreement has an anti-competitive object. These requirements are an aggregate market share lower than 5% on any of the relevant markets affected by the agreement and a turnover lower than 40 million euros. Secondly, as the Notice itself says, ‘it is also intended to give guidance to the courts and competition authorities of the Member States in their application of Article 101 of the Treaty, but it has no binding force for them’. This reality makes possible the existence of different positions among the different Member States and a confusing perception of what a restriction of competition is; ultimately, damage to trade between the Member States could be observed for this reason. The main conclusion is that the significant effect on trade between Member States is irrelevant for agreements that restrict competition by their effects but crucial for agreements that restrict competition by their object. Thus, the Member States should consider incorporating a similar concept into their legal orders in order to apply the content of the Notice; otherwise, the significance of a restrictive agreement for competition would not be properly assessed.

Keywords: De minimis rule, effect on trade, minor importance agreements, safe harbour

Procedia PDF Downloads 152
316 Optimization Based Design of Decelerating Duct for Pumpjets

Authors: Mustafa Sengul, Enes Sahin, Sertac Arslan

Abstract:

Pumpjets are one of the marine propulsion systems frequently used in underwater vehicles nowadays. The reasons for the frequent use of the pumpjet as a propulsion system are its higher relative efficiency at high speeds and its better cavitation and acoustic performance compared with its rivals. Pumpjets are composed of a rotor, a stator and a duct, and there are two different types of pumpjet configuration depending on the desired hydrodynamic characteristic: with an accelerating or a decelerating duct. Pumpjets with an accelerating duct are used on cargo ships, where they work at low speeds and high loading conditions. The working principle of this type of pumpjet is to maximize the thrust by reducing the pressure of the fluid through the duct and expelling the fluid from the duct with high momentum. On the other hand, for decelerating ducted pumpjets, the main consideration is to prevent the occurrence of the cavitation phenomenon by increasing the pressure of the fluid around the rotor region. By postponing cavitation, the acoustic noise naturally falls, so decelerating ducted systems are used in noise-sensitive vehicle systems where acoustic performance is vital. Therefore, duct design becomes a crucial step during pumpjet design. This study aims to optimize the duct geometry of a decelerating ducted pumpjet for a high-speed underwater vehicle by using proper optimization tools. The target output of this optimization process is a duct design that maximizes the fluid pressure around the rotor region, to prevent cavitation, and minimizes the drag force. There are two main optimization techniques that could be utilized for this process: parameter-based optimization and gradient-based optimization. While a parameter-based algorithm offers larger changes in the geometry of interest, which allows the user to get close to the desired geometry, a gradient-based algorithm deals with minor local changes in the geometry. In parameter-based optimization, the geometry should be parameterized first. Then, by defining upper and lower limits for these parameters, the design space is created. Finally, with a proper optimization code and analysis, the optimum geometry is obtained from this design space. For this duct optimization study, a commercial parameter-based optimization code is used. To parameterize the geometry, the duct is represented with B-spline curves and control points. These control points have limits on their x and y coordinates; by regarding these limits, the design space is generated.
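
A minimal sketch of the B-spline duct parameterization described above is given below: the duct profile is a cubic B-spline defined by a handful of control points, and the design space is the box of upper/lower bounds on those control-point coordinates. The coordinates and bounds are illustrative placeholders, not the actual duct geometry.

```python
# Sketch: parameterize a duct profile as a cubic B-spline of control points and
# define the design space as bounds on those points. All coordinates are illustrative.
import numpy as np
from scipy.interpolate import BSpline

degree = 3
# (x, y) control points of the duct inner profile, chord-normalized; illustrative values.
ctrl = np.array([[0.00, 0.20],
                 [0.25, 0.24],
                 [0.50, 0.22],
                 [0.75, 0.19],
                 [1.00, 0.18]])

# Clamped knot vector so the curve passes through the first and last control points.
n = len(ctrl)
knots = np.concatenate([np.zeros(degree + 1),
                        np.linspace(0, 1, n - degree + 1)[1:-1],
                        np.ones(degree + 1)])

curve = BSpline(knots, ctrl, degree)          # vector-valued spline: t -> (x, y)
profile = curve(np.linspace(0, 1, 50))        # sampled duct profile for meshing/analysis

# Design space: +/- bounds on the y-coordinates of the interior control points.
lower = ctrl[1:-1, 1] - 0.03
upper = ctrl[1:-1, 1] + 0.03
print(profile.shape, lower, upper)
```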

Keywords: pumpjet, decelerating duct design, optimization, underwater vehicles, cavitation, drag minimization

Procedia PDF Downloads 171
315 New Suspension Mechanism for a Formula Car Using Camber Thrust

Authors: Shinji Kajiwara

Abstract:

The basic abilities of a vehicle are to “run”, “turn” and “stop”. Safety and comfort during a drive on various road surfaces and at various speeds depend on the performance of these basic abilities of the vehicle. Stability and maneuverability of a vehicle are vital in automotive engineering. Stability of a vehicle is the ability of the vehicle to revert to a stable state during a drive when faced with crosswinds and irregular road conditions. Maneuverability of a vehicle is the ability of the vehicle to change direction swiftly during a drive based on the steering of the driver. The stability and maneuverability of a vehicle can together be described as the driving stability of the vehicle. Since fossil-fueled vehicles are the main type of transportation today, the environmental factor in automotive engineering is also vital. By improving the fuel efficiency of the vehicle, the overall carbon emission will be reduced, thus reducing the effect of global warming and greenhouse gases on the Earth. Another main focus of automotive engineering is the safety performance of the vehicle, especially with the worrying increase in vehicle collisions every day. With better safety performance of a vehicle, every driver will be more confident driving every day. Next, let us focus on the “turn” ability of a vehicle. By improving this particular ability of the vehicle, the cornering limit of the vehicle can be improved, thus increasing the stability and maneuverability factors. In order to improve the cornering limit of the vehicle, a study to find the balance between the steering system, the stability of the vehicle, higher lateral acceleration and cornering limit detection must be conducted. The aim of this research is to study and develop a new suspension system that will boost the lateral acceleration of the vehicle and ultimately improve its cornering limit. This research will also study the environmental factor and the stability factor of the new suspension system. The double wishbone suspension system is widely used in four-wheel vehicles, especially in high-cornering-performance sports cars and racing cars. The double wishbone design allows the engineer to carefully control the motion of the wheel by controlling such parameters as camber angle, caster angle, toe pattern, roll center height, scrub radius, scuff and more. The development of the new suspension system will focus on its ability to optimize camber control and to improve the camber limit during a cornering motion. The research will be carried out using a CAE analysis tool. Using this analysis tool, we will develop a JSAE Formula machine equipped with the double wishbone system and with the new suspension system, and conduct simulations and studies on the performance of both suspension systems.
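
As a point of reference for the camber-thrust idea above, a simple linear tire model adds a camber term to the lateral force, Fy ≈ Cα·α + Cγ·γ (cornering stiffness times slip angle plus camber stiffness times camber angle). The sketch below evaluates this with illustrative stiffness values, not data from the proposed suspension or its tires.

```python
# Sketch: linear tire lateral force with a camber-thrust term, Fy = C_alpha*alpha + C_gamma*gamma.
# Stiffness values and angles below are illustrative placeholders, not measured tire data.
C_alpha = 800.0    # cornering stiffness, N per degree of slip angle (assumed)
C_gamma = 80.0     # camber stiffness, N per degree of camber (assumed, ~1/10 of C_alpha)

def lateral_force(slip_deg: float, camber_deg: float) -> float:
    return C_alpha * slip_deg + C_gamma * camber_deg

for camber in (0.0, 2.0, 4.0):   # camber toward the turn adds camber thrust to the cornering force
    print(f"slip 3 deg, camber {camber:.1f} deg -> Fy = {lateral_force(3.0, camber):6.1f} N")
```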

Keywords: automobile, camber thrust, cornering force, suspension

Procedia PDF Downloads 295
314 Development and Structural Characterization of a Snack Food with Added Type 4 Extruded Resistant Starch

Authors: Alberto A. Escobar Puentes, G. Adriana García, Luis F. Cuevas G., Alejandro P. Zepeda, Fernando B. Martínez, Susana A. Rincón

Abstract:

Snack foods are usually classified as ‘junk food’ because they have little nutritional value. However, given the growing demand for third-generation (3G) snacks and their low price and ease of preparation, they can be considered carriers of compounds with a certain nutritional value. Resistant starch (RS) is classified as a prebiotic fiber; it helps to control metabolic problems and has anti-colon-cancer properties. The active compound can be developed by chemical cross-linking of starch with phosphate salts to obtain a type 4 resistant starch (RS4). The chemical reaction can be achieved by extrusion, a process widely used to produce snack foods, since it is versatile and low cost. Starch is the major ingredient for the manufacture of 3G snacks, and the seeds of sorghum, the most drought-tolerant gluten-free cereal, contain high levels of starch (70%). For this reason, the aim of this research was to develop a 3G snack with RS4 under optimal extrusion conditions (previously determined) from sorghum starch, and to carry out its sensory, chemical and structural characterization. A sample (200 g) of sorghum starch was conditioned with 4% sodium trimetaphosphate/sodium tripolyphosphate (99:1) and set to 28.5% moisture content. Then, the sample was processed in a single-screw extruder equipped with a rectangular die. The inlet, transport and output temperatures were 60°C, 134°C and 70°C, respectively. The resulting pellets were expanded in a microwave oven. The expansion index (EI) and penetration force (PF) were evaluated and a sensory analysis was performed on the expanded pellets. The pellets were milled to obtain flour, and the RS content, degree of substitution (DS) and percentage of phosphorus (%P) were measured. Fourier-transform infrared (FTIR) spectroscopy, X-ray diffraction, differential scanning calorimetry (DSC) and scanning electron microscopy (SEM) analyses were performed in order to determine structural changes after the process. The results for the 3G snack were as follows: RS, 17.14 ± 0.29%; EI, 5.66 ± 0.35; and PF, 5.73 ± 0.15 N. Phosphate groups were identified in the starch molecule by FTIR: DS, 0.024 ± 0.003, and %P, 0.35 ± 0.15 [values permitted for food additives (<4 %P)]. In this work, an increase of the gelatinization temperature after the cross-linking of the starch was detected; the loss of the granular structure and the vapor bubbles after expansion were observed by SEM; and, by X-ray diffraction, a loss of crystallinity was observed after the extrusion process. Finally, a 3G snack with RS4 was obtained by extrusion technology. Sorghum starch proved efficient for 3G snack production.

Keywords: extrusion, resistant starch, snack (3G), Sorghum

Procedia PDF Downloads 283
313 Study of Biofouling Wastewater Treatment Technology

Authors: Sangho Park, Mansoo Kim, Kyujung Chae, Junhyuk Yang

Abstract:

The International Maritime Organization (IMO) recognized the problem of invasive species and adopted the "International Convention for the Control and Management of Ships' Ballast Water and Sediments" in 2004, which came into force on September 8, 2017. In 2011, the IMO approved the "Guidelines for the Control and Management of Ships' Biofouling to Minimize the Transfer of Invasive Aquatic Species" to minimize the movement of invasive species by hull-attached organisms and required ships to manage the organisms attached to their hulls. Invasive species enter new environments through ships' ballast water and hull attachment. However, several obstacles to implementing these guidelines have been identified, including a lack of underwater cleaning equipment, regulations on underwater cleaning activities in ports, and difficulty accessing crevices in underwater areas. The shipping industry, which is the party responsible for implementing these guidelines, wants to apply them for the fuel cost savings resulting from the removal of organisms attached to the hull, but it anticipates significant difficulties due to the obstacles mentioned above. Robots or divers remove the organisms attached to the hull underwater, and the resulting wastewater includes various species of organisms as well as particles of paint and other pollutants. Currently, there is no technology available to sterilize the organisms in the wastewater or to stabilize the heavy metals in the paint particles. In this study, we aim to analyze the characteristics of the wastewater generated from the removal of hull-attached organisms and select the optimal treatment technology. The organisms in the wastewater generated from the removal of the attached organisms are treated to meet the biological treatment standard (D-2), using the sterilization technology applied in ships' ballast water treatment systems. The heavy metals and other pollutants in the paint particles generated during removal are treated using stabilization technologies such as thermal decomposition. The wastewater generated is treated using a two-step process: 1) development of sterilization technology through pre-treatment filtration equipment and electrolytic sterilization treatment, and 2) development of technology for removing particulate pollutants such as heavy metals and dissolved inorganic substances. Through this study, we will develop a biological removal technology and an environmentally friendly processing system for the waste generated after removal that meets the requirements of the government and the shipping industry and lays the groundwork for future treatment standards.

Keywords: biofouling, ballast water treatment system, filtration, sterilization, wastewater

Procedia PDF Downloads 84
312 Progressive Damage Analysis of Mechanically Connected Composites

Authors: Şeyma Saliha Fidan, Ozgur Serin, Ata Mugan

Abstract:

While performing the verification analyses under the static and dynamic loads to which composite structures used in aviation are exposed, it is necessary to obtain the bearing strength limit value for mechanically connected composite structures. For this purpose, various tests are carried out in accordance with aviation standards. Many companies around the world perform these tests in accordance with aviation standards, but the test costs are very high. In addition, because of the need to produce coupons, the high cost of coupon materials, and the long test times, it is necessary to simulate these tests on the computer. For this purpose, various test coupons were produced using the reinforcement and alignment angles of the composite radomes that are integrated into the aircraft. Glass fiber reinforced and quartz prepregs were used in the production of the coupons. The tests performed according to the American Society for Testing and Materials (ASTM) D5961 Procedure C standard were simulated on the computer. The analysis model was created in three dimensions for the purpose of modeling the bolt-hole contact surface realistically and obtaining the exact bearing strength value. The finite element model was built in ANSYS. Since a physical fracture cannot be produced in analyses carried out in the virtual environment, failure is represented by reducing the material properties. The material property reduction coefficient was set to 10%, which is reported in the literature to give the most realistic approach. There are various theories within this method, which is called progressive failure analysis. Because the Hashin theory did not match our experimental results, the Puck progressive damage method was used in all coupon analyses. When the experimental and numerical results are compared, the initial damage and the resulting force-drop points, the maximum damage load values and the bearing strength value are very close. Furthermore, low error rates and similar damage patterns were obtained in both the test and simulation models. In addition, the effects of various parameters, such as pre-stress, the use of a bushing, the ratio of the distance between the bolt hole center and the plate edge to the hole diameter (E/D), the ratio of plate width to hole diameter (W/D), and hot-wet environmental conditions, on the bearing strength of the composite structure were investigated.
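
The progressive-failure procedure described above is, in essence, an incremental loop: apply a load step, evaluate a failure criterion ply by ply, and knock down the stiffness of failed plies until the bearing load can no longer increase. The sketch below shows that loop with a deliberately simplified failure index standing in for the full Puck inter-fiber criteria; the load-sharing rule, strengths and the reading of the 10% reduction coefficient as a retention factor are assumptions, not the ANSYS model used in the study.

```python
# Sketch of a progressive failure loop with stiffness degradation.
# The load-sharing rule and strengths below are simplified placeholders standing in for the
# Puck criteria and the FE stress recovery; they are not the study's actual model.
plies = [{"E": 45.0e3, "strength": 500.0 + 100.0 * i, "failed": False} for i in range(8)]
retention = 0.10          # a failed ply keeps 10% of its stiffness (one reading of the 10% coefficient)

def ply_stress(ply, applied_load):
    """Placeholder rule: each ply carries load in proportion to its stiffness share."""
    total_E = sum(p["E"] for p in plies)
    return applied_load * ply["E"] / total_E

load, step, peak = 0.0, 50.0, 0.0
while step > 1.0:
    load += step
    for ply in plies:
        if not ply["failed"] and ply_stress(ply, load) / ply["strength"] >= 1.0:
            ply["E"] *= retention      # degrade the ply that reached its failure index
            ply["failed"] = True
    if all(p["failed"] for p in plies):
        load -= step                   # laminate exhausted: back off and refine the load step
        step /= 2.0
    else:
        peak = max(peak, load)

print(f"Estimated failure load of the joint (illustrative units): {peak:.0f}")
```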

Keywords: puck, finite element, bolted joint, composite

Procedia PDF Downloads 74
311 Risks beyond Cyber in IoT Infrastructure and Services

Authors: Mattias Bergstrom

Abstract:

Significance of the Study: This research will provide new insights into the risks of digital embedded infrastructure. Through this research, we will analyze each risk and its potential negation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper will convey valuable information for future research that can create more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks related to hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions have then been evaluated on an open-source IoT hardware setup. The list below shows the identified passive and active risks evaluated in the research. Passive Risks: (1) Hardware failures- Critical systems relying on high-rate data and data quality are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivers erroneous data- Sensors break, and when they do so, they don't always go silent; they can keep going, only the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection- Erroneously generated sensor data can be pumped into a system by malicious actors with the intent to create disruptive noise in critical systems. (4) Data gravity- The weight of the data collected will affect data mobility. (5) Cost inhibitors- Running services that need huge centralized computing is cost-inhibiting; large, complex AI can be extremely expensive to run. Active Risks: Denial of service- One of the simplest attacks, where an attacker just overloads the system with bogus requests so that valid requests disappear in the noise. Malware- Malware can be anything from simple viruses to complex botnets created with specific goals, where the creator is stealing computing power and bandwidth from you to attack someone else. Ransomware- A kind of malware, but so different in its implementation that it is worth its own mention; the goal of these pieces of software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing- By spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion, or corrupted and re-injected into a running system, creating a data echo noise loop. After testing multiple potential solutions, we found that the most prominent solution to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all risks listed above can be negated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for any future autonomous IoT deployments, as it provides separation from the open Internet while remaining accessible through the blockchain keys.

Keywords: IoT, security, infrastructure, SCADA, blockchain, AI

Procedia PDF Downloads 74
310 Reading Strategies of Generation X and Y: A Survey on Learners' Skills and Preferences

Authors: Kateriina Rannula, Elle Sõrmus, Siret Piirsalu

Abstract:

The mixed-generation classroom is a phenomenon that higher education establishments currently face daily as they try to meet the needs of a modern labor market with its emphasis on lifelong learning and retraining. Having representatives of mainly the X and Y generations acquiring higher education in one classroom is a challenge to lecturers, considering all the characteristics that differentiate one generation from another. The importance of outlining different strategies and considering the needs of the students lies in the necessity for everyone to acquire the maximum of the provided knowledge, as well as to understand each other, in order to study together in one classroom and successfully cooperate in future workplaces. In addition to different generations, there are also learners with different native languages, which has an impact on reading and understanding texts in third languages, including possible translation. The current research aims to investigate, describe and compare reading strategies among representatives of generations X and Y. Hypotheses were formulated: representatives of generations X and Y use different reading strategies, and these strategies also differ between first- and third-year students of the aforementioned generations. The current study is an empirical, qualitative study. To achieve the aim of the research, relevant literature was analyzed and a semi-structured questionnaire was conducted among the first- and third-year students of Tallinn Health Care College. The questionnaire consisted of 25 statements on text reading strategies, 3 multiple-choice questions on preferences concerning the design and medium of the text, and three open questions on the translation process when working with a text in the student's third language. The results of the questionnaire were categorized, analyzed and compared. Both generation X and generation Y students described their reading strategies as 'scanning' and 'surfing'. Compared to generation X, first-year generation Y learners valued interactivity and nonlinear texts. Students frequently used the strategies of skimming, scanning, translating and highlighting, together with relevant thinking and assistance seeking. Meanwhile, the third-year generation Y students no longer frequently used translating, resourcing and highlighting, while generation X learners still incorporated these strategies. Knowing about the different needs of the generations currently in the classrooms and on the labor market equips us with tools to provide sustainable education and gives society a workforce that is more flexible and able to move between professions. Future research should be conducted in order to investigate the amount of learning and strategy adoption between generations. As for reading, the main suggestions arising from the research are as follows: make a variety of materials available to students; allow them to select what they want to read; and try to make those materials visually attractive, relevant, and appropriately challenging for learners, considering the differences between generations.

Keywords: generation X, generation Y, learning strategies, reading strategies

Procedia PDF Downloads 161
309 Improving Photocatalytic Efficiency of TiO2 Films Incorporated with Natural Geopolymer for Sunlight-Driven Water Purification

Authors: Satam Alotibi, Haya A. Al-Sunaidi, Almaymunah M. AlRoibah, Zahraa H. Al-Omaran, Mohammed Alyami, Fatehia S. Alhakami, Abdellah Kaiba, Mazen Alshaaer, Talal F. Qahtan

Abstract:

This research study presents a novel approach to harnessing the potential of natural geopolymer in conjunction with TiO₂ nanoparticles (TiO₂ NPs) for the development of highly efficient photocatalytic materials for water decontamination. The study begins with the formulation of a geopolymer paste derived from natural sources, which is subsequently applied as a coating on glass substrates and allowed to air-dry at room temperature. The result is a series of geopolymer-coated glass films, serving as the foundation for further experimentation. To enhance the photocatalytic capabilities of these films, a critical step involves immersing them in a suspension of TiO₂ nanoparticles (TiO₂ NPs) in water for varying durations. This immersion process yields geopolymer-loaded TiO₂ NPs films with varying concentrations, setting the stage for comprehensive characterization and analysis. A range of advanced analytical techniques, including UV-Vis spectroscopy, Fourier-transform infrared spectroscopy (FTIR), Raman spectroscopy, scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), and atomic force microscopy (AFM), were meticulously employed to assess the structural, morphological, and chemical properties of the geopolymer-based TiO₂ films. These analyses provided invaluable insights into the materials' composition and surface characteristics. The culmination of this research effort sees the geopolymer-based TiO₂ films being repurposed as immobilized photocatalytic reactors for water decontamination under natural sunlight irradiation. Remarkably, the results revealed exceptional photocatalytic performance that exceeded the capabilities of conventional TiO₂-based photocatalysts. This breakthrough underscores the significant potential of natural geopolymer as a versatile and highly effective matrix for enhancing the photocatalytic efficiency of TiO₂ nanoparticles in water treatment applications. In summary, this study represents a significant advancement in the quest for sustainable and efficient photocatalytic materials for environmental remediation. By harnessing the synergistic effects of natural geopolymer and TiO₂ nanoparticles, these geopolymer-based films exhibit outstanding promise in addressing water decontamination challenges and contribute to the development of eco-friendly solutions for a cleaner and healthier environment.

Keywords: geopolymer, TiO2 nanoparticles, photocatalytic materials, water decontamination, sustainable remediation

Procedia PDF Downloads 37
308 Nonlinear Interaction of Free Surface Sloshing of Gaussian Hump with Its Container

Authors: Mohammad R. Jalali

Abstract:

Movement of liquid with a free surface in a container is known as slosh. For instance, slosh occurs when water in a closed tank is set in motion by a free surface displacement, or when liquefied natural gas in a container is vibrated by an external driving force, such as an earthquake or motion induced by transport. Slosh can also arise from resonant excitation of a natural basin. During sloshing, different types of motion are produced by energy exchange between the liquid and its container. In the present study, a numerical model is developed to simulate the nonlinear even-harmonic oscillations of free surface sloshing arising from an initial disturbance to the free surface of a liquid in a closed square basin. The response of the liquid free surface is affected by the amplitude and motion frequencies of its container; therefore, sloshing involves complex fluid-structure interactions. Here, the nonlinear interaction of free surface sloshing of an initial Gaussian hump with its uneven container is predicted numerically. For this purpose, the Green-Naghdi (GN) equations are applied as the governing equations of the fluid field to capture nonlinear second-order and higher-order wave interactions. These equations reduce the dimensions from three to two, yielding equations that can be solved efficiently. The GN approach assumes a particular kinematic structure of the flow in the vertical direction for shallow- and deep-water problems. The fluid velocity profile is a finite sum of coefficients, depending on space and time, multiplied by a weighting function. It should be noted that in GN theory the flow is rotational. In this study, GN numerical simulations of an initial Gaussian hump are compared with Fourier-series semi-analytical solutions of the linearized shallow water equations. The comparison reveals satisfactory agreement between the numerical simulation and the analytical solution of the overall free surface sloshing patterns. The resonant free surface motions driven by an initial Gaussian disturbance are obtained by Fast Fourier Transform (FFT) of the components of the free surface elevation time history. Numerically predicted velocity vectors and magnitude contours for the free surface patterns indicate that the interaction of the Gaussian hump with its container has a localized effect. The results of this sloshing study are applicable to the design of stable liquefied oil containers on tankers and offshore platforms.
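
As a companion to the FFT post-processing described above, the following Python sketch illustrates how a resonant sloshing frequency can be picked out of a free-surface elevation time history. The basin size, depth, sampling rate, and modal amplitudes are illustrative assumptions, and a superposition of linear modes stands in for the Green-Naghdi simulation output.

```python
import numpy as np

# Hypothetical basin parameters (illustrative, not taken from the paper):
# a square basin of side L and still-water depth h.
g, L, h = 9.81, 1.0, 0.5

def natural_frequency(n, m):
    """Linear natural sloshing frequency (Hz) of mode (n, m) in a square basin,
    using the standard dispersion relation omega^2 = g * k * tanh(k * h)."""
    k = np.pi * np.hypot(n / L, m / L)
    return np.sqrt(g * k * np.tanh(k * h)) / (2.0 * np.pi)

# Stand-in for a simulated free-surface elevation record at a wave probe
# (in the study this time history would come from the GN solver, not from
# the superposition of linear modes used here).
dt = 0.01
t = np.arange(0.0, 60.0, dt)
eta = (0.020 * np.cos(2 * np.pi * natural_frequency(1, 0) * t)
       + 0.008 * np.cos(2 * np.pi * natural_frequency(1, 1) * t))

# FFT of the elevation time history to recover the resonant frequency.
spectrum = np.abs(np.fft.rfft(eta))
freqs = np.fft.rfftfreq(t.size, d=dt)
print(f"Dominant FFT frequency: {freqs[np.argmax(spectrum)]:.3f} Hz")
print(f"Lowest linear mode (1,0): {natural_frequency(1, 0):.3f} Hz")
```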

Keywords: fluid-structure interactions, free surface sloshing, Gaussian hump, Green-Naghdi equations, numerical predictions

Procedia PDF Downloads 374
307 Synthesis of Carbon Nanotubes from Coconut Oil and Fabrication of a Non Enzymatic Cholesterol Biosensor

Authors: Mitali Saha, Soma Das

Abstract:

The fabrication of nanoscale materials for use in chemical sensing, biosensing and biological analyses has proven a promising avenue in the last few years. Cholesterol has aroused considerable interest in recent years on account of its being an important parameter in clinical diagnosis. There is a strong positive correlation between high serum cholesterol levels and arteriosclerosis, hypertension, and myocardial infarction. Enzyme-based electrochemical biosensors have shown high selectivity and excellent sensitivity, but the enzyme is easily denatured during the immobilization procedure and its activity is also affected by temperature, pH, and toxic chemicals. Besides, the reproducibility of enzyme-based sensors is not very good, which further restricts the application of cholesterol biosensors. It has been demonstrated that carbon nanotubes can promote electron transfer with various redox-active proteins, ranging from cytochrome c to glucose oxidase with its deeply embedded redox center. In continuation of our earlier work on the synthesis and applications of carbon- and metal-based nanoparticles, we report here the synthesis of carbon nanotubes (CCNT) by burning coconut oil under an insufficient flow of air using an oil lamp. The soot was collected from the top portion of the flame, where the temperature was around 650 °C, and was purified, functionalized and then characterized by SEM, p-XRD and Raman spectroscopy. The SEM micrographs showed the formation of a tubular CCNT structure with diameters below 100 nm. The XRD pattern showed two predominant peaks at 25.2° and 43.8°, corresponding to the (002) and (100) planes of CCNT, respectively. The Raman spectrum (514 nm excitation) showed a band at 1600 cm⁻¹ (G-band), related to the vibration of sp2-bonded carbon, and at 1350 cm⁻¹ (D-band), attributed to the vibrations of sp3-bonded carbon. A nonenzymatic cholesterol biosensor was then fabricated on an insulating Teflon substrate containing three silver wires at the surface, covered by the CCNT obtained from coconut oil. Here, the CCNTs served as both the working and counter electrodes, whereas the reference electrode and electric contacts were made of silver. The dimensions of the electrode were 3.5 cm × 1.0 cm × 0.5 cm (length × width × height), making it suitable for working with 50 µL volumes, like standard screen-printed electrodes. The voltammetric behavior of cholesterol at the CCNT electrode was investigated by cyclic voltammetry and differential pulse voltammetry using 0.001 M H2SO4 as the electrolyte. The influence of experimental parameters on the peak currents of cholesterol, such as pH, accumulation time, and scan rate, was optimized. Under optimum conditions, the peak current was found to be linear in the cholesterol concentration range from 1 µM to 50 µM, with a sensitivity of ~15.31 μA μM⁻¹ cm⁻², a lower detection limit of 0.017 µM, and a response time of about 6 s. The long-term storage stability of the sensor was tested for 30 days, and the current response was found to be ~85% of its initial response after 30 days.
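
For readers interested in how the figures of merit quoted above are typically obtained, the sketch below shows a generic linear-calibration workflow: sensitivity from the calibration slope normalized by electrode area, and detection limit from the 3-sigma criterion. The concentrations, currents, electrode area, and blank noise in the code are hypothetical and do not reproduce the reported values.

```python
import numpy as np

# Hypothetical calibration data (illustrative, not the paper's measurements):
# peak current (uA) recorded at each cholesterol concentration (uM).
conc = np.array([1, 5, 10, 20, 30, 40, 50], dtype=float)          # uM
peak_current = np.array([0.16, 0.78, 1.55, 3.04, 4.6, 6.1, 7.6])  # uA

electrode_area = 0.10   # cm^2, assumed geometric area of the CCNT electrode
sigma_blank = 0.0009    # uA, assumed standard deviation of the blank signal

# Linear calibration: i = slope * C + intercept
slope, intercept = np.polyfit(conc, peak_current, 1)               # uA per uM

sensitivity = slope / electrode_area                                # uA uM^-1 cm^-2
lod = 3.0 * sigma_blank / slope                                     # uM, 3-sigma criterion

print(f"Calibration slope: {slope:.3f} uA/uM")
print(f"Sensitivity: {sensitivity:.2f} uA uM^-1 cm^-2")
print(f"Limit of detection: {lod:.3f} uM")
```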

Keywords: coconut oil, CCNT, cholesterol, biosensor

Procedia PDF Downloads 259
306 Data Calibration of the Actual versus the Theoretical Micro Electro Mechanical Systems (MEMS) Based Accelerometer Reading through Remote Monitoring of Padre Jacinto Zamora Flyover

Authors: John Mark Payawal, Francis Aldrine Uy, John Paul Carreon

Abstract:

This paper shows the application of Structural Health Monitoring (SHM) to bridges. Bridges are structures built to provide passage over a physical obstruction such as rivers, chasms or roads. The Philippines has a total of 8,166 national bridges, as published in the 2015 atlas of the Department of Public Works and Highways (DPWH), and only 2,924, or 35.81%, of these bridges are in good condition. As a result, PHP 30.464 billion of the 2016 DPWH budget is allocated to road and bridge maintenance alone. This intensive spending is owed to the present practice of outdated manual inspection and assessment and the poor structural health monitoring of Philippine infrastructure. As the School of Civil, Environmental, & Geological Engineering of Mapua Institute of Technology (MIT) continues its commitment to research-based projects, a partnership with the Department of Science and Technology (DOST) and the DPWH launched the application of SHM on the Padre Jacinto Zamora Flyover. The flyover is located along Nagtahan Boulevard in Sta. Mesa, Manila, and connects Brgy. 411 and Brgy. 635. It serves vehicles going from Lacson Avenue to Mabini Bridge, passing over Legarda Flyover. The flyover was chosen among the many bridges located in Metro Manila as the focus of the pilot testing due to its site accessibility and the complete structural plans and specifications necessary for SHM, as provided by the Bureau of Design (BOD) of the DPWH. This paper focuses on providing a method to calibrate theoretical readings from STAAD Vi8 Pro and sync the data to actual MEMS accelerometer readings. It is observed that, while the design standards used in constructing the flyover are reflected in the model, actual MEMS accelerometer readings display a large difference compared with the theoretical data taken from STAAD Vi8 Pro. To achieve a true seismic response of the modeled bridge, that is, to sync the theoretical data to the actual sensor readings (the independent variable of this paper), a single-degree-of-freedom (SDOF) analysis of the flyover under free vibration without damping is carried out in STAAD Vi8 Pro. The earthquake excitation and bridge responses are subjected to earthquake ground motion in the form of ground acceleration, or Peak Ground Acceleration (PGA). A translational acceleration load is used to simulate the ground motion of the time history analysis acceleration record in STAAD Vi8 Pro.
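
A minimal sketch of the kind of theoretical-to-measured calibration described above is given below: an undamped single-degree-of-freedom free-vibration response stands in for the STAAD output, and a least-squares scale-and-bias fit aligns it with a synthetic MEMS accelerometer record. The natural frequency, initial conditions, sampling rate, and noise level are assumptions for illustration only.

```python
import numpy as np

# Undamped SDOF free-vibration response used as the theoretical baseline.
# All numbers below are illustrative assumptions, not values from the flyover model.
f_n = 2.5                      # assumed fundamental frequency of the span (Hz)
omega_n = 2 * np.pi * f_n
u0, v0 = 0.002, 0.0            # assumed initial displacement (m) and velocity (m/s)

t = np.arange(0, 10.0, 0.005)  # 200 Hz, a typical MEMS sampling rate
u = u0 * np.cos(omega_n * t) + (v0 / omega_n) * np.sin(omega_n * t)
a_theory = -omega_n**2 * u     # acceleration = -w^2 * u for undamped free vibration

# Pretend MEMS record: scaled, offset and noisy version of the theoretical signal.
rng = np.random.default_rng(0)
a_mems = 0.8 * a_theory + 0.05 + 0.02 * rng.standard_normal(t.size)

# One simple calibration: least-squares scale and bias mapping theory -> sensor.
A = np.column_stack([a_theory, np.ones_like(a_theory)])
(scale, bias), *_ = np.linalg.lstsq(A, a_mems, rcond=None)
residual_rms = np.sqrt(np.mean((a_mems - (scale * a_theory + bias)) ** 2))

print(f"Fitted scale: {scale:.3f}, bias: {bias:.3f} m/s^2, "
      f"residual RMS: {residual_rms:.3f} m/s^2")
```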

Keywords: accelerometer, analysis using single degree of freedom, micro electro mechanical system, peak ground acceleration, structural health monitoring

Procedia PDF Downloads 292
305 Ethical Framework in Organ Transplantation and the Priority Line between Law and Life

Authors: Abel Sichinava

Abstract:

The need for organ transplantation is increasing vigorously worldwide. The numbers on waiting lists grow, but the number of donors is not keeping up with the demand, even though there is a legal possibility of decreasing the gap between demand and supply. Most countries around the globe face an organ donation problem (living or deceased); however, the extent of the problem differs based on how well developed a country is. The determining issues seem to center on how aware society is of the concept of organ donation, as well as on cultural and religious factors. Even if people are aware of the benefits of organ donation, they may still have fears that keep them from fully agreeing with the idea. Some believe that, in the case of deceased organ donation, 'the brain-dead human body may recover from its injuries' or 'the sick might get less appropriate treatment if doctors know they are potential donors.' In the case of living organ donation, people sometimes fear that donation 'might reduce work efficiency, cause health deterioration or even death.' Another major obstacle behind the organ shortage is the lack of a well developed ethical framework. In reality, there is an immense number of people on the waiting list, and they have only two options for receiving a suitable organ. The first is the legal way: to wait their turn. Sadly, numerous patients die on the waiting list before an appropriate organ becomes available for transplant. The second option is the illegal way: seeking an organ in a country where one can possibly be obtained. Driven by the desire to live, patients whose resources are sufficient may choose the second option. This process automatically involves 'organ brokers', people who obtain organs from vulnerable poor people by force or betrayal. As mentioned earlier, the high demand and low supply lead to human trafficking. The subjects of the study were a large group of people from different backgrounds of belief, culture, nationality, level of education and socio-economic status. The great majority of them were interviewed online using a 'Google Drive Survey' and the others in person. The statistics and information gathered from trusted sources annotated in the reference list, together with the considerable testimonies shared by the respondents, are the fundamental evidence of the lack of a well developed ethical framework. In conclusion, the continuously increasing number of people on the waiting list and an inadequate ethical framework lead people to commit atrocious, dehumanizing crimes. Therefore, world society should be equally obligated to think carefully and make vital decisions together for the advancement of organ donation and its ethical framework.

Keywords: donation, ethical framework, organ, transplant

Procedia PDF Downloads 122
304 Learning in Multicultural Workspaces: A Case of Aged Care

Authors: Robert John Godby

Abstract:

To be responsive now and in the future, workplaces must address the demands of multicultural teams as they become more common elements of the global labor force. This is especially the case for aged care due to the aging population, industry growth and migrant recruitment. This research identifies influences on and improvements for learning in these environments. Its unique contribution is to illuminate how culturally diverse workplaces can work and learn together more effectively. A mixed-methods approach was used to gather data about this topic in two phases. Firstly, the research methods included a survey of 102 aged care workers around Australia from two multi-site aged care organisations. The questionnaire elicited both quantitative and qualitative data about worker characteristics and perspectives on working and learning in aged care. Secondly, a case study of one aged care worksite was formulated drawing on worksite information and interviews with workers. A review of the literature suggests that learning in multicultural work environments is influenced by three main factors: 1) the individual workers themselves, 2) their interaction with each other and 3) the environment in which they work. There are various accounts of these three factors, how they are manifested and how they lead to a change in workers’ disposition, knowledge, or expertise when confronted with new circumstances. The study has found that a key individual factor influencing learning is cultural background. Their unique view of the world was shown to affect their approach to both their work and co-working. Interactional factors suggest that the high requirement for collaboration in aged care positively supports learning in this context; however, it can be hindered by cultural bias and spoken accent. The study also found that environmental factors, such as disruptions caused by the pandemic, were another key influence. For example, the need to wear face masks hindered the communication needed for workplace learning. This was especially challenging due to the diverse language backgrounds and abilities within the teams. Potential improvements for learning in multicultural aged care work environments were identified. These include more frequent and structured inter-peer learning (e.g. buddying), communication training (e.g. English language usage for both native and non-native speaking workers) and support for cross-cultural habitude (e.g. recognizing and adapting to cultural differences). Workplace learning in cross-cultural aged care environments is an area that is not extensively dealt with in the literature. This study addresses this gap and holds the potential to contribute practical insights to aged care and other diverse industries.

Keywords: cross-cultural learning, learning in aged care, migrant learning, workplace learning

Procedia PDF Downloads 128
303 Barrier Analysis of Sustainable Development of Small Towns: A Perspective of Southwest China

Authors: Yitian Ren, Liyin Shen, Tao Zhou, Xiao Li

Abstract:

The past urbanization process in China has brought about a series of problems, and the Chinese government has therefore positioned small towns in essential roles for implementing the strategy 'The National New-type Urbanization Plan (2014-2020)'. As the connectors and transfer stations between cities and the countryside, small towns are an important force for narrowing the gap between urban and rural areas and for achieving the mission of new-type urbanization in China. The sustainable development of small towns plays a crucial role because cities are not capable of absorbing all the surplus rural population. Nevertheless, various types of barriers hinder the sustainable development of small towns, which has limited their development and presented a bottleneck in the Chinese urbanization process. Therefore, this paper develops a deep understanding of these barriers so that effective actions can be taken to address them. The paper adopts the perspective of Southwest China (Sichuan province, Yunnan province, Guizhou province, Chongqing Municipality and the Tibet Autonomous Region) because the urbanization rate in Southwest China lags far behind the national average, the number of small towns accounts for a large proportion of those in mainland China, and the characteristics of small towns in Southwest China are distinct. The paper investigates the barriers to the sustainable development of small towns located in Southwest China using the content analysis method, combined with fieldwork and interviews in sample small towns, and identifies and groups 18 barriers into four dimensions, namely institutional barriers, economic barriers, social barriers and ecological barriers. Based on this, a questionnaire survey and data analysis were implemented, and the key barriers hindering the sustainable development of small towns in Southwest China were identified using fuzzy set theory. These barriers are a lack of independent financial power, a lack of construction land index, limited financing channels, a single industrial structure, and topographic variety and complexity, which mainly belong to the institutional and economic barriers. In the concluding part, policy suggestions are put forward to improve the political and institutional environment of small town development, and market mechanisms should be introduced into the development process of small towns, which can effectively overcome the economic barriers, promote the sustainable development of small towns, accelerate in-situ urbanization by absorbing peasants in nearby villages, and achieve the mission of new-type urbanization in China from a people-oriented perspective.
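
As an illustration of how fuzzy set theory can be used to separate key barriers from minor ones, the sketch below applies a simple fuzzy synthetic evaluation: each barrier's membership vector over a five-point significance scale is defuzzified into a score and compared against a cut-off. The barrier names, membership values, and threshold are hypothetical and are not the survey data of this study.

```python
import numpy as np

# Illustrative fuzzy synthetic evaluation of barriers. Each row gives the share
# of respondents choosing significance ratings 1..5 (assumed values).
barriers = {
    "Lack of independent financial power": [0.02, 0.05, 0.13, 0.40, 0.40],
    "Lack of construction land index":     [0.03, 0.07, 0.20, 0.40, 0.30],
    "Single industrial structure":         [0.05, 0.10, 0.25, 0.35, 0.25],
    "Weak public participation":           [0.15, 0.30, 0.30, 0.15, 0.10],
}
rating_values = np.array([1, 2, 3, 4, 5], dtype=float)
threshold = 3.5   # assumed cut-off separating "key" barriers from the rest

for name, membership in barriers.items():
    membership = np.asarray(membership, dtype=float)
    membership = membership / membership.sum()      # normalise the membership vector
    score = float(membership @ rating_values)       # defuzzified significance score
    tag = "KEY" if score >= threshold else "minor"
    print(f"{score:.2f}  {tag:5s}  {name}")
```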

Keywords: barrier analysis, sustainable development, small town, Southwest China

Procedia PDF Downloads 319
302 Artificial Intelligence Models for Detecting Spatiotemporal Crop Water Stress in Automating Irrigation Scheduling: A Review

Authors: Elham Koohi, Silvio Jose Gumiere, Hossein Bonakdari, Saeid Homayouni

Abstract:

Water used for agricultural crops can be managed by irrigation scheduling based on soil moisture levels and plant water stress thresholds. Automated irrigation scheduling limits crop physiological damage and yield reduction. Knowledge of crop water stress monitoring approaches can be effective in optimizing the use of agricultural water. Understanding the physiological mechanisms by which crops respond and adapt to water deficit ensures sustainable agricultural management and food supply. This aim can be achieved by analyzing and diagnosing crop characteristics and their interlinkage with the surrounding environment, including assessments of plant functional types (e.g., leaf area and structure, tree height, rate of evapotranspiration, rate of photosynthesis), monitoring of changes, and mapping of irrigated areas. Calculating thresholds for soil water content parameters, crop water use efficiency, and nitrogen status makes irrigation scheduling decisions more accurate by preventing water limitations between irrigations. Combining Remote Sensing (RS), the Internet of Things (IoT), Artificial Intelligence (AI), and Machine Learning Algorithms (MLAs) can improve measurement accuracy and automate irrigation scheduling. This paper is a review, structured by surveying about 100 recent research studies, that analyzes varied approaches in terms of providing high spatial and temporal resolution mapping, sensor-based Variable Rate Application (VRA) mapping, and the relation between spectral and thermal reflectance and different features of crop and soil. A further objective is to assess RS indices formed by choosing specific reflectance bands, to identify the correct spectral bands for optimizing classification techniques, and to analyze Proximal Optical Sensors (POSs) for monitoring changes. The innovation of this paper lies in categorizing evaluation methodologies of precision irrigation (applying the right practice, at the right place, at the right time, with the right quantity), controlled by soil moisture levels and crop sensitivity to water stress, into pre-processing, processing (retrieval algorithms), and post-processing parts. The main idea of this research is then to analyze the reasons for, and the magnitude of, the errors arising from different approaches in the three proposed parts, as reported by recent studies. Additionally, the overview conclusion attempts to decompose the different approaches into optimized indices, calibration methods for the sensors, thresholding and prediction models prone to errors, and improvements in classification accuracy for mapping changes.
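
To make the index-based part of the workflow concrete, the sketch below computes two common reflectance indices and applies a simple threshold rule to flag water-stressed pixels for irrigation. The band values, index choices, and thresholds are illustrative assumptions rather than recommendations from the reviewed studies.

```python
import numpy as np

# Minimal sketch of index-based stress mapping (band values and thresholds are
# illustrative assumptions, not tied to any specific sensor in the review).
rng = np.random.default_rng(1)
red  = rng.uniform(0.03, 0.15, size=(4, 4))   # red reflectance
nir  = rng.uniform(0.25, 0.55, size=(4, 4))   # near-infrared reflectance
swir = rng.uniform(0.10, 0.35, size=(4, 4))   # shortwave-infrared reflectance

ndvi = (nir - red) / (nir + red)              # vegetation vigour
ndwi = (nir - swir) / (nir + swir)            # canopy water content proxy

# Simple rule: vegetated pixels with low NDWI are flagged as water-stressed
# and would be scheduled for irrigation first.
stressed = (ndvi > 0.4) & (ndwi < 0.25)
print("NDVI:\n", np.round(ndvi, 2))
print("Water-stressed pixels to irrigate:\n", stressed)
```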

Keywords: agricultural crops, crop water stress detection, irrigation scheduling, precision agriculture, remote sensing

Procedia PDF Downloads 46
301 Adverse Childhood Experience of Domestic Violence and Domestic Mental Health Leading to Youth Violence: An Analysis of Selected Boroughs in London

Authors: Sandra Smart-Akande, Chaminda Hewage, Imtiaz Khan, Thanuja Mallikarachchi

Abstract:

According to UK police-recorded data, there has been a substantial increase in knife-related crime and youth violence in the UK since 2014, particularly in the London boroughs. These crime rates are disproportionately distributed across London, with the majority of these crimes occurring in highly deprived areas and among young people aged 11 to 24, and with large discrepancies across ethnicity, age, gender and borough of residence. Comprehensive studies and literature have identified the risk factors associated with knife carrying among youth as Adverse Childhood Experiences (ACEs), poor mental health, school or social exclusion, drug dealing, drug use, being a victim of violent crime, bullying, peer pressure and gang involvement, to mention a few. ACEs are potentially traumatic events that occur in childhood; these can be experiences or stressful events in the early life of a child and can lead to an increased risk of damaging health or social outcomes in the later life of the individual. Research has shown that children and youths involved in youth violence have had childhoods characterised by disproportionate adverse experiences, and a substantial body of literature links ACEs with criminal or delinquent behaviour. ACEs are commonly grouped by researchers into: abuse (physical, verbal, sexual), neglect (physical, emotional) and household adversities (mental illness, an incarcerated relative, domestic violence, parental separation or bereavement). To the authors' best knowledge, no study to date has investigated how household mental health (the mental health of a parent or of a child) and domestic violence (domestic violence against a parent or against a child) are related to knife homicides across the local authority areas of London. This study seeks to address this gap by examining a large sample of data from the London Metropolitan Police Force and the Characteristics of Children in Need data from the UK Department for Education. The aim of this review is to identify and synthesise evidence from data and a range of literature to identify the relationship between adverse childhood experiences and youth violence in the UK. Understanding the link between ACEs and future outcomes can support preventative action.

Keywords: adverse childhood experiences, domestic violence, mental health, youth violence, prediction analysis, London knife crime

Procedia PDF Downloads 94
300 Assessing Professionalism, Communication, and Collaboration among Emergency Physicians by Implementing a 360-Degree Evaluation

Authors: Ahmed Al Ansari, Khalid Al Khalifa

Abstract:

Objective: Multisource feedback (MSF), also called 360-degree evaluation, is an evaluation process in which questionnaires are distributed among medical peers and colleagues to assess physician performance from sources other than the attending or supervising physicians. The aim of this study was to design, implement, and evaluate a 360-degree process for assessing emergency physician trainees in the Kingdom of Bahrain. Method: The study was undertaken in Bahrain Defense Force Hospital, a military teaching hospital in the Kingdom of Bahrain. Thirty emergency physicians (the total population of emergency physicians in our hospital) were assessed in this study. We developed an instrument modified from the Physician Achievement Review (PAR) instrument used to assess physicians in Alberta. Our instrument focused on assessing professionalism, communication skills and collaboration only. To achieve face and content validity, a table of specifications was constructed, a working group was involved in constructing the instrument, and expert opinion was considered as well. The instrument consisted of 39 items: 15 items to assess professionalism, 13 items to assess communication skills, and 11 items to assess collaboration. Each emergency physician was evaluated by 3 groups of raters: 4 emergency physician medical colleagues, 4 medical colleagues considered referral physicians from different departments, and 4 co-workers from the emergency department. An independent administrative team was formed to take responsibility for distributing the instruments and collecting them in closed envelopes. Each envelope contained the instrument and a guide to the implementation of the MSF and the purpose of the study. Results: A total of 30 emergency physicians, 16 males and 14 females, representing the total number of emergency physicians in our hospital, were assessed. A total of 269 forms were collected: 105 surveys from co-workers in the emergency department, 93 surveys from emergency physician colleagues, and 116 surveys from referral physicians from different departments. The overall mean response rate was 71.2%. The whole instrument was found to be suitable for factor analysis (KMO = 0.967; Bartlett test significant, p<0.00). Factor analysis showed that the questionnaire data decomposed into three factors, which accounted for 72.6% of the total variance: professionalism, collaboration, and communication. Reliability analysis indicated that the full scale of the instrument had high internal consistency (Cronbach's α = 0.98). The generalizability coefficient (Ep2) was 0.71 for the surveys. Conclusions: Based on the present results, the current instrument and procedures have high reliability, validity, and feasibility for assessing emergency physician trainees in the emergency room.
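
For illustration, the short sketch below computes Cronbach's alpha, the internal-consistency statistic reported above, for a simulated 269 x 39 matrix of item scores. The simulated responses are hypothetical and only meant to show the calculation, not to reproduce the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Illustrative data: 269 returned forms x 39 items on a 5-point scale, simulated
# with a shared "true performance" component so that items correlate.
rng = np.random.default_rng(7)
true_score = rng.normal(4.0, 0.5, size=(269, 1))
responses = np.clip(np.rint(true_score + rng.normal(0, 0.4, size=(269, 39))), 1, 5)

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```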

Keywords: MSF system, emergency, validity, generalizability

Procedia PDF Downloads 334
299 Patient Agitation and Violence in Medical-Surgical Settings at BronxCare Hospital, Before and During COVID-19 Pandemic; A Retrospective Chart Review

Authors: Soroush Pakniyat-Jahromi, Jessica Bucciarelli, Souparno Mitra, Neda Motamedi, Ralph Amazan, Samuel Rothman, Jose Tiburcio, Douglas Reich, Vicente Liz

Abstract:

Violence is defined as an act of physical force that is intended to cause harm and may lead to physical and/or psychological damage. Violence toward healthcare workers (HCWs) is more common in psychiatric settings, emergency departments, and nursing homes; however, healthcare workers in medical settings are not spared from such events. Workplace violence places a huge burden on the healthcare industry and has a major impact on the physical and mental wellbeing of staff. The purpose of this study is to compare the prevalence of patient agitation and violence in medical-surgical settings at BronxCare Hospital (BCH), Bronx, New York, one year before and during the COVID-19 pandemic. Data collection occurred between June 2021 and August 2021, while the sampling period was from 2019 to 2021. The data were separated into two time categories: pre-COVID-19 (03/2019-03/2020) and COVID-19 (03/2020-03/2021). We created frequency tables for 19 variables and used a chi-square test to determine each variable's statistical significance. We tested all variables against 'restraint type', which indicated whether a patient was violent or became violent enough to be restrained. The restraint types were 'chemical', 'physical', or both. This analysis was also used to determine whether there was a statistical difference between the pre-COVID-19 and COVID-19 timeframes. Our data show that there was an increase in incidents of violence in the COVID-19 era (03/2020-03/2021), with a total of 194 (62.8%) reported events, compared to the pre-COVID-19 era (03/2019-03/2020) with 115 (37.2%) events (p = 0.01). Our final analysis, completed using a chi-square test, determined the difference in patient violence between the pre-COVID-19 and COVID-19 eras; we then tested the violence marker against restraint type, and the result was statistically significant (p = 0.01). This is the first paper to systematically review the prevalence of violence in medical-surgical units of a hospital in New York before and during the COVID-19 era. Our data are in line with the global trend of increased prevalence of patient agitation and violence in medical settings during the COVID-19 pandemic. Violence and its management are a challenge in healthcare settings, and the COVID-19 pandemic has brought to bear a complexity of circumstances that may have increased its incidence. It is important to identify and teach healthcare workers the best preventive approaches to dealing with patient agitation, to decrease the number of restraints in medical settings, and to create a less restrictive environment for delivering care.
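
The core comparison reported above is a chi-square test on event counts across the two periods. The sketch below reproduces that type of analysis on a hypothetical contingency table whose row totals match the reported 115 and 194 events; the split across restraint types is assumed for illustration only.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative contingency table (the restraint-type counts are assumptions,
# only the row totals match the reported 115 and 194 events).
#            chemical  physical  both
table = np.array([
    [60,       35,       20],   # pre-COVID-19 (03/2019-03/2020), 115 events
    [95,       60,       39],   # COVID-19     (03/2020-03/2021), 194 events
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
print("Expected counts under independence:\n", np.round(expected, 1))
```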

Keywords: COVID-19 pandemic, patient agitation, restraints, violence

Procedia PDF Downloads 113
298 Platform Virtual for Joint Amplitude Measurement Based in MEMS

Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana, Andres F. Ruiz-Olaya, Juan C. Alvarez

Abstract:

Motion capture (MC) is the construction of a precise and accurate digital representation of a real motion. MC systems have been used in recent years in a wide range of applications, from film special effects and animation, interactive entertainment, and medicine, to highly competitive sport, where maximum performance and low injury risk during training and competition are sought. This paper presents an inertial and magnetic sensor based technological platform intended for joint amplitude monitoring and telerehabilitation processes, with an efficient compromise between cost and technical considerations. The particularities of our platform offer a high potential social impact by making telerehabilitation accessible to large population sectors in marginal socio-economic conditions, especially in underdeveloped countries where, in contrast to developed countries, specialists are scarce and high technology is unavailable or nonexistent. The platform integrates high-resolution, low-cost inertial and magnetic sensors with adequate user interfaces and communication protocols to provide a diagnosis service over the web or other available communication networks. The amplitude information is generated by the sensors and then transferred to a computing device with interfaces that make it accessible to inexperienced personnel, providing high social value. Amplitude measurements of the platform virtual system presented a good fit to its respective reference system. Analyzing the robotic arm results (estimation error RMSE 1 = 2.12° and estimation error RMSE 2 = 2.28°), it can be observed that during arm motion in either sense the estimation error is negligible; in fact, error appears only during sense inversion, which can easily be explained by the nature of inertial sensors and their relation to acceleration. Inertial sensors present a time-constant delay that acts as a first-order filter, attenuating signals at large acceleration values, as is the case for a change of sense in motion. A damped response of the platform virtual can be seen in other images, where error analysis shows that an underestimation of amplitude is present at maximum amplitude, whereas an overestimation is observed at minimum amplitude. This work presents and describes the platform virtual as a motion capture system suitable for telerehabilitation, with the cost/quality and precision/accessibility relations optimized. These characteristics, achieved by efficiently using the state of the art of accessible generic technology in sensors and hardware, together with adequate software for capture, transmission, analysis and visualization, provide the capacity to offer good telerehabilitation services, reaching large, more or less marginal populations where technologies and specialists are not available but which are reachable through basic communication networks.
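
As a small illustration of the error metric used above, the sketch below computes the RMSE between a reference joint-angle trajectory and a lagged, noisy sensor estimate. The trajectory, lag, and noise level are assumptions intended only to mimic the first-order behaviour of inertial sensors during changes of direction, not the platform's actual data.

```python
import numpy as np

def rmse(estimate: np.ndarray, reference: np.ndarray) -> float:
    """Root-mean-square error between a platform estimate and a reference system."""
    return float(np.sqrt(np.mean((np.asarray(estimate) - np.asarray(reference)) ** 2)))

# Illustrative joint-angle traces (degrees): a reference trajectory from a robotic
# arm and a sensor estimate with a small lag and noise, mimicking the first-order
# response of inertial sensors during sense inversion.
t = np.linspace(0, 10, 1000)
reference = 45 * np.sin(2 * np.pi * 0.25 * t)          # reference joint amplitude
lag = 0.05                                             # assumed sensor time constant (s)
noise = np.random.default_rng(3).normal(0, 1.0, t.size)
estimate = 45 * np.sin(2 * np.pi * 0.25 * (t - lag)) + noise

print(f"Joint amplitude RMSE: {rmse(estimate, reference):.2f} deg")
```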

Keywords: inertial sensors, joint amplitude measurement, MEMS, telerehabilitation

Procedia PDF Downloads 233