Search results for: speed bump
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2942

212 Analysis of Thermal Comfort in Educational Buildings Using Computer Simulation: A Case Study in Federal University of Parana, Brazil

Authors: Ana Julia C. Kfouri

Abstract:

A prerequisite of any building design is to provide security to its users, taking the climate and its physical and physical-geometrical variables into account. It is also important to highlight the relevance of the right material elements, which mediate between the person and the climatic agent and must provide improved thermal comfort conditions and low environmental impact. Furthermore, technology is constantly advancing, as are computational simulations for building projects, and they should be used to develop sustainable buildings and to provide a higher quality of life for their users. In relation to comfort, the more satisfied the building users are, the better their intellectual performance will be. On this basis, the study of thermal comfort in educational buildings is of particular relevance, since the thermal characteristics of these environments are of vital importance to all users. Moreover, educational buildings are large constructions, and when they are poorly planned and executed, they have negative impacts on the surrounding environment and on user satisfaction throughout their whole life cycle. With this in mind, a detailed case study of the thermal comfort situation at the Federal University of Parana (UFPR) was conducted to evaluate university classroom conditions. The main goal of the study is to perform a thermal analysis of three classrooms at UFPR in order to address the subjective and physical variables that influence thermal comfort inside the classroom. For the assessment of the subjective components, a questionnaire was applied in order to evaluate the users' perception of the local thermal conditions. Regarding the physical variables, on-site measurements were carried out, consisting of measurements of air temperature and air humidity both inside and outside the building, as well as meteorological variables, such as wind speed and direction, solar radiation, and rainfall, collected from a weather station. Then, a computer simulation was conducted in the EnergyPlus software to reproduce the air temperature and air humidity values of the three classrooms studied. The EnergyPlus outputs were analyzed and compared with the on-site measurement results in order to draw conclusions about the local thermal conditions. The methodological approach of the study allowed a distinct perspective on an educational building, leading to a better understanding of classroom thermal performance and of the reasons for such behavior. Finally, the study prompts reflection on the importance of thermal comfort for educational buildings and proposes thermal alternatives for future projects, as well as a discussion of the significant impact of using computer simulation on engineering solutions in order to improve the thermal performance of UFPR's buildings.
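
The comparison step described above, simulated versus measured series, reduces in essence to computing error statistics between two time series. The sketch below is a minimal illustration of that idea with placeholder hourly temperature data; the series values and error metrics are illustrative, as the abstract does not specify the exact comparison procedure used:

```python
import numpy as np

rng = np.random.default_rng(0)
hours = 24 * 7                                                     # one illustrative week

measured_t = 24 + 3 * np.sin(np.linspace(0, 14 * np.pi, hours))   # placeholder measured series (°C)
simulated_t = measured_t + rng.normal(0.0, 0.6, hours)            # placeholder EnergyPlus output (°C)

error = simulated_t - measured_t
rmse = np.sqrt(np.mean(error ** 2))     # root-mean-square error
mbe = np.mean(error)                    # mean bias error (simulation minus measurement)
print(f"RMSE = {rmse:.2f} °C, MBE = {mbe:+.2f} °C")
```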

Keywords: computer simulation, educational buildings, EnergyPlus, humidity, temperature, thermal comfort

Procedia PDF Downloads 388
211 A Comparison of Tsunami Impact to Sydney Harbour, Australia at Different Tidal Stages

Authors: Olivia A. Wilson, Hannah E. Power, Murray Kendall

Abstract:

Sydney Harbour is an iconic location with a dense population and low-lying development. On the east coast of Australia, facing the Pacific Ocean, it is exposed to several tsunamigenic trenches. This paper presents a component of the most detailed assessment to date of the potential for earthquake-generated tsunami impact on Sydney Harbour. Models in this study use dynamic tides to account for tide-tsunami interaction. Sydney Harbour's tidal range is 1.5 m, and the spring tides from January 2015 that are used in the modelling for this study are close to the full tidal range. The tsunami wave trains modelled include hypothetical tsunami generated by earthquakes of magnitude Mw 7.5, 8.0, 8.5, and 9.0 from the Puysegur and New Hebrides trenches, as well as representations of the historical 1960 Chilean and 2011 Tohoku events. All wave trains are modelled so that the peak wave coincides with both a low tide and a high tide. A single wave train, representing an Mw 9.0 earthquake at the Puysegur trench, is modelled with peak waves coinciding with every hour across a 12-hour tidal phase. Using the hydrodynamic model ANUGA, results are compared according to the impact parameters of inundation area, depth variation, and current speeds. Results show that both maximum inundation area and depth variation are tide dependent. Maximum inundation area increases when the peak coincides with a higher tide; however, hazardous inundation is only observed for the larger waves modelled: NH90high and P90high. The maximum and minimum depths are deeper on higher tides and shallower on lower tides. The difference between maximum and minimum depths varies across different tidal phases, although the differences are slight. Maximum current speeds are shown to be a significant hazard for Sydney Harbour; however, they do not show consistent patterns according to tide-tsunami phasing. The maximum current speed hazard is greater in specific locations such as Spit Bridge, a narrow channel with extensive marine infrastructure. The results presented for Sydney Harbour are novel, and the conclusions are consistent with previous modelling efforts in the greater area. It is shown that tide must be a consideration for both tsunami modelling and emergency management planning. Modelling with peak tsunami waves coinciding with a high tide would be a conservative approach; however, it must be considered that maximum current speeds may be higher on other tides.
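
The tide dependence of maximum inundation area described above can be illustrated with a simple grid computation. A minimal sketch using synthetic depth grids in place of real ANUGA output; the cell size, wet threshold, and the crude high-tide offset are illustrative assumptions:

```python
import numpy as np

cell_area = 10.0 * 10.0   # assumed grid cell size in m^2
wet_threshold = 0.01      # depth (m) above which a cell counts as inundated

rng = np.random.default_rng(0)
depth_low_tide = rng.uniform(0.0, 0.5, size=(500, 500))   # placeholder max-depth grid
depth_high_tide = depth_low_tide + 0.2                    # crude high-tide offset, illustrative

def inundation_area(depth):
    """Total area of cells whose maximum tsunami depth exceeds the threshold."""
    return np.count_nonzero(depth > wet_threshold) * cell_area

print("low tide :", inundation_area(depth_low_tide), "m^2")
print("high tide:", inundation_area(depth_high_tide), "m^2")
```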

Keywords: emergency management, Sydney, tide-tsunami interaction, tsunami impact

Procedia PDF Downloads 242
210 In Vivo Evaluation of Exposure to Electromagnetic Fields at 27 GHz (5G) of Danio Rerio: A Preliminary Study

Authors: Elena Maria Scalisi, Roberta Pecoraro, Martina Contino, Sara Ignoto, Carmelo Iaria, Santi Concetto Pavone, Gino Sorbello, Loreto Di Donato, Maria Violetta Brundo

Abstract:

5G technology is evolving to satisfy a variety of service requirements, allowing high data-rate connections (1 Gbps) and lower latency times than current networks (<1 ms). In order to support high data transmission speeds and high traffic for eMBB (enhanced mobile broadband) use cases, 5G systems use different frequency bands of the radio spectrum (700 MHz, 3.6-3.8 GHz, and 26.5-27.5 GHz), thus taking advantage of higher frequencies than previous mobile radio generations (1G-4G). However, waves at higher frequencies have a lower capacity to propagate in free space; therefore, in order to guarantee capillary coverage of the territory for high-reliability applications, it will be necessary to install a large number of repeaters. Following the introduction of this new technology, there has been growing concern in recent months about possible harmful effects on human health. The aim of this preliminary study is to evaluate possible short-term effects induced by 5G millimeter waves on the embryonic development and early life stages of Danio rerio using the Z-FET. We exposed developing zebrafish to a frequency of 27 GHz, with a standard pyramidal horn antenna placed 15 cm from the samples holder, ensuring an incident power density of 10 mW/cm². During the exposure cycle, from 6 h post fertilization (hpf) to 96 hpf, we measured different morphological endpoints every 24 hours. The zebrafish embryo toxicity test (Z-FET) is a short-term test carried out on fertilized zebrafish eggs, and it represents an effective alternative to the acute test with adult fish (OECD, 2013). We observed that 5G exposure did not reveal significant impacts on mortality or morphology, as exposed larvae showed normal detachment of the tail, presence of heartbeat, and well-organized somites; however, the hatching rate was lower than in untreated larvae even at 48 h of exposure. Moreover, immunohistochemical analysis performed on the larvae showed negativity for the expression of HSP-70, used as a biomarker. This is a preliminary study on the evaluation of potential toxicity induced by 5G, and it seems appropriate to underline the importance of further studies aimed at clarifying the real risk of exposure to electromagnetic fields.
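
The exposure setup described above (horn antenna 15 cm from the samples holder, 10 mW/cm²) can be sanity-checked with the standard far-field power density relation S = P·G/(4πd²). A minimal sketch; the transmit power and antenna gain below are hypothetical values chosen to reproduce the quoted density, and the far-field approximation is itself an assumption at this distance:

```python
import math

distance_m = 0.15        # samples holder placed 15 cm from the antenna (from the abstract)
tx_power_w = 0.5         # hypothetical transmit power (W)
antenna_gain = 56.5      # hypothetical linear gain of the pyramidal horn (~17.5 dBi)

# Far-field power density: S = P * G / (4 * pi * d^2), in W/m^2
s_w_per_m2 = tx_power_w * antenna_gain / (4 * math.pi * distance_m ** 2)
s_mw_per_cm2 = s_w_per_m2 * 1000 / 10000   # convert W/m^2 -> mW/cm^2

print(f"incident power density ≈ {s_mw_per_cm2:.1f} mW/cm^2")   # ≈ 10 mW/cm^2 here
```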

Keywords: biomarker of exposure, embryonic development, 5G waves, zebrafish embryo toxicity test

Procedia PDF Downloads 131
209 A Comparison of Proxemics and Postural Head Movements during Pop Music versus Matched Music Videos

Authors: Harry J. Witchel, James Ackah, Carlos P. Santos, Nachiappan Chockalingam, Carina E. I. Westling

Abstract:

Introduction: Proxemics is the study of how people perceive and use space. It is commonly proposed that when people like or engage with a person or object, they will move slightly closer to it, often quite subtly and subconsciously. Music videos are known to add entertainment value to a pop song. Our hypothesis was that adding an appropriately matched video to a pop song would lead to a net approach of the head toward the monitor screen compared to simply listening to an audio-only version of the song. Methods: We presented two musical stimuli, in counterbalanced order, to 27 participants (ages 21.00 ± 2.89, 15 female) seated in front of a 47.5 x 27 cm monitor; all stimuli were based on music videos by the band OK Go: Here It Goes Again (HIGA, boredom ratings (0-100) = 15.00 ± 4.76, mean ± SEM, standard error of the mean) and Do What You Want (DWYW, boredom ratings = 23.93 ± 5.98), which did not differ in boredom elicited (P = 0.21, rank-sum test). Each participant experienced each song only once, one song (counterbalanced) as audio-only and the other as a music video. Movement was measured by video tracking using Kinovea 0.8, based on recordings from the lateral aspect; before beginning, each participant had a reflective motion-tracking marker placed on the outer canthus of the left eye. Analysis of the Kinovea X-Y coordinate output in comma-separated-values format was performed in Matlab, as were non-parametric statistical tests. Results: We found that the audio-only stimuli (combined for both HIGA and DWYW, mean ± SEM, 35.71 ± 5.36) were significantly more boring than the music video versions (19.46 ± 3.83, P = 0.0066, Wilcoxon signed-rank test (WSRT), Cohen's d = 0.658, N = 28). We also found that participants' heads moved around twice as much during the audio-only versions (speed = 0.590 ± 0.095 mm/sec) as during the video versions (0.301 ± 0.063 mm/sec, P = 0.00077, WSRT). However, the participants' mean head-to-screen distances were not detectably smaller (i.e. head closer to the screen) during the music videos (74.4 ± 1.8 cm) compared to the audio-only stimuli (73.9 ± 1.8 cm, P = 0.37, WSRT); if anything, they were slightly closer during the audio-only condition. Interestingly, the ranges of the head-to-screen distances were smaller during the music video (8.6 ± 1.4 cm) compared to the audio-only condition (12.9 ± 1.7 cm, P = 0.0057, WSRT), the standard deviations were also smaller (P = 0.0027, WSRT), and head height differed by 7 mm between conditions (video 116.1 ± 0.8 vs. audio-only 116.8 ± 0.8 cm above floor, P = 0.049, WSRT). Discussion: As predicted, sitting and listening to experimenter-selected pop music was more boring than when the music was accompanied by a matched, professionally made video. However, we did not find that the proxemics of the situation led to approaching the screen. Instead, adding video led to efforts to hold the head in a more central and upright viewing position and to suppress head fidgeting.
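
The movement analysis described above can be sketched in two steps: mean head speed from tracked X-Y marker coordinates, then a paired Wilcoxon signed-rank comparison between conditions. The original analysis was performed in Matlab on Kinovea output; this Python sketch uses an assumed frame rate and placeholder data rather than the study's recordings:

```python
import numpy as np
from scipy.stats import wilcoxon

fps = 25.0  # assumed video frame rate

def mean_speed(xy_mm):
    """Mean speed (mm/s) from an (N, 2) array of marker positions in mm."""
    steps = np.diff(xy_mm, axis=0)           # per-frame displacement vectors
    dist = np.linalg.norm(steps, axis=1)     # Euclidean step lengths
    return dist.mean() * fps

rng = np.random.default_rng(1)
track = rng.normal(0.0, 0.2, size=(1500, 2)).cumsum(axis=0)   # placeholder 60 s track
print(f"example mean head speed: {mean_speed(track):.2f} mm/s")

# Per-participant mean speeds in each condition (placeholder values).
speeds_audio = rng.uniform(0.3, 0.9, size=27)
speeds_video = speeds_audio * rng.uniform(0.3, 0.8, size=27)

stat, p = wilcoxon(speeds_audio, speeds_video)   # paired, non-parametric
print(f"Wilcoxon W = {stat:.1f}, p = {p:.4f}")
```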

Keywords: boredom, engagement, music videos, posture, proxemics

Procedia PDF Downloads 167
208 AAV-Mediated Human α-Synuclein Expression in a Rat Model of Parkinson's Disease – Further Characterization of PD Phenotype, Fine Motor Functional Effects as Well as Neurochemical and Neuropathological Changes over Time

Authors: R. Pussinen, V. Jankovic, U. Herzberg, M. Cerrada-Gimenez, T. Huhtala, A. Nurmi, T. Ahtoniemi

Abstract:

Targeted over-expression of human α-synuclein using viral-vector-mediated gene delivery into the substantia nigra of rats and non-human primates has been reported to lead to dopaminergic cell loss and the formation of α-synuclein aggregates reminiscent of Lewy bodies. We have previously shown how AAV-mediated expression of α-synuclein is seen in the chronic phenotype of rats over a 16-week follow-up period. In the context of these findings, we attempted to further characterize the long-term PD-related functional and motor deficits, as well as the neurochemical and neuropathological changes, in the AAV-mediated α-synuclein transfection model in rats during a chronic follow-up period. Different titers of recombinant AAV expressing human α-synuclein (A53T) were stereotaxically injected unilaterally into the substantia nigra of Wistar rats. Rats were allowed to recover for 3 weeks prior to initial baseline behavioral testing with the rotational asymmetry test, stepping test, and cylinder test. A similar behavioral test battery was applied again at weeks 5, 9, 12, and 15. In addition to traditionally used rat PD model tests, the MotoRater test system, a high-speed kinematic gait performance monitoring platform, was applied during the follow-up period, with the evaluation focused on gait differences between groups. Tremor analysis was performed on weeks 9, 12, and 15. In addition to the behavioral endpoints, dopamine and its metabolites were evaluated neurochemically in the striatum. Furthermore, the integrity of the dopamine active transporter (DAT) system was evaluated by using ¹²³I-β-CIT and SPECT/CT imaging on weeks 3, 8, and 12 after AAV-α-synuclein transfection. Histopathology was examined from endpoint samples at 3 or 12 weeks after AAV-α-synuclein transfection to evaluate dopaminergic cell viability and microglial (Iba-1) activation status in the substantia nigra by using stereological analysis techniques. This study focused on the characterization and validation of the previously published AAV-α-synuclein transfection model in rats, but with the addition of novel endpoints. We present the long-term phenotype of AAV-α-synuclein-transfected rats assessed with traditionally used behavioral tests, but also with novel fine motor analysis techniques and tremor analysis, which provide new insight into the unilateral effects of AAV-α-synuclein transfection. We also present data on the neurochemical and neuropathological endpoints for the dopaminergic system in the model and how well they correlate with the behavioral phenotype.

Keywords: adeno-associated virus, alpha-synuclein, animal model, Parkinson’s disease

Procedia PDF Downloads 295
207 Production of Ferroboron by SHS-Metallurgy from Iron-Containing Rolled Production Wastes for Alloying of Cast Iron

Authors: G. Zakharov, Z. Aslamazashvili, M. Chikhradze, D. Kvaskhvadze, N. Khidasheli, S. Gvazava

Abstract:

Traditional technologies for processing iron-containing industrial waste, including that from steel-rolling production, are associated with significant energy costs, long process durations, and the need for complex and expensive equipment. Waste generated during industrial processes negatively affects the environment, but at the same time it is a valuable raw material and can be used to produce new marketable products. The study of the effectiveness of self-propagating high-temperature synthesis (SHS) methods, which are characterized by the simplicity of the necessary equipment, the purity of the final product, and the high processing speed, is therefore of wide scientific and practical interest. This work presents technological aspects of the production of ferroboron by SHS metallurgy from iron-containing wastes of rolled production for the alloying of cast iron, along with results on the effect of the alloying element on the degree of boron assimilation by liquid cast iron. The combustion features of the Fe-B system have been investigated, and the main parameters for controlling the phase composition of the synthesis products have been experimentally established. The effect of overloads on the formation patterns of cast ligatures and the structure-formation mechanisms of SHS products was studied. It has been shown that an increase in the content of hematite Fe₂O₃ in the iron-containing waste leads to an increase in the content of the FeB phase and, accordingly, the amount of boron in the ligature. The boron content in the ligature is within 3-14%, and the phase composition of the obtained ligatures consists of Fe₂B and FeB phases. Depending on the initial composition of the wastes, the yield of the end product reaches 91-94%, and the extraction of boron is 70-88%. Combustion processes of highly exothermic mixtures make it possible to obtain a wide range of boron-containing ligatures from industrial wastes. In view of the relatively low melting point of the obtained SHS ligature, positive dynamics of boron absorption by liquid iron are established. According to the obtained data, the degree of absorption of the ligature when alloying gray cast iron at 1450°C is 80-85%. When combined with the treatment of liquid cast iron with magnesium, followed by alloying with the developed ligature, boron losses are reduced by 5-7%, and a uniform distribution of boron micro-additives in the volume of the treated liquid metal is ensured. Acknowledgment: This work was supported by the Shota Rustaveli Georgian National Science Foundation of Georgia (SRGNSFG) under the GENIE project (grant number CARYS-19-802).

Keywords: self-propagating high-temperature synthesis, cast iron, industrial waste, ductile iron, structure formation

Procedia PDF Downloads 123
206 Reading and Writing of Biscriptal Children with and Without Reading Difficulties in Two Alphabetic Scripts

Authors: Baran Johansson

Abstract:

This PhD dissertation aimed to explore children's writing and reading in L1 (Persian) and L2 (Swedish). It adds new perspectives to reading and writing studies of bilingual biscriptal children with and without reading and writing difficulties (RWD). The study used standardised tests to examine linguistic and cognitive skills related to word reading and writing fluency in both languages. Furthermore, all participants produced two texts (one descriptive and one narrative) in each language. The writing processes and written products of these children were explored using logging methodologies (Eye and Pen) for both languages. Furthermore, this study investigated how two bilingual children with RWD presented themselves through writing across their languages. To my knowledge, studies utilizing standardised tests and logging tools to investigate bilingual children's word reading and writing fluency across two different alphabetic scripts are scarce. There have been few studies analysing how bilingual children construct meaning in their writing, and none have focused on children who write in two different alphabetic scripts or on those with RWD. Therefore, some aspects of the systemic functional linguistics (SFL) perspective were employed to examine how two participants with RWD created meaning in their written texts in each language. The results revealed that children with and without RWD had higher writing fluency on all measures (e.g. text length, writing speed) in their L2 compared to their L1. Word reading abilities in both languages were found to influence their writing fluency. The findings also showed that bilingual children without reading difficulties performed 1 standard deviation below the mean when reading words in Persian. However, their reading performance in Swedish aligned with the expected age norms, suggesting greater efficiency in reading Swedish than in Persian. Furthermore, the results showed that the level of orthographic depth, the consistency between graphemes and phonemes, and orthographic features can probably explain these differences across languages. The analysis of meaning-making indicated that the participants with RWD exhibited varying levels of difficulty, which influenced their knowledge and use of writing across languages. For example, the participant with poor word recognition (PWR) presented himself similarly across genres, irrespective of the language in which he wrote, and employed the listing technique similarly across his L1 and L2. However, the participant with mixed reading difficulties (MRD) had difficulties with both transcription and text production: he produced spelling errors and paused frequently in both languages, and he also struggled with word retrieval and with producing coherent texts, consistent with studies of monolingual children with poor comprehension or with developmental language disorder. The results suggest that the mother tongue instruction provided to the participants has not been sufficient for them to become balanced biscriptal readers and writers in both languages. Therefore, increasing the number of hours dedicated to mother tongue instruction and motivating the children to participate in these classes could be potential strategies to address this issue.

Keywords: reading, writing, reading and writing difficulties, bilingual children, biscriptal

Procedia PDF Downloads 72
205 Comparison of Gait Variability in Individuals with Trans-Tibial and Trans-Femoral Lower Limb Loss: A Pilot Study

Authors: Hilal Keklicek, Fatih Erbahceci, Elif Kirdi, Ali Yalcin, Semra Topuz, Ozlem Ulger, Gul Sener

Abstract:

Objectives and Goals: The stride-to-stride fluctuation in gait, known as gait variability, is a determinant of qualified locomotion. Gait variability is an important predictor of fall risk and is useful for monitoring the effects of therapeutic interventions and rehabilitation. The aim of the study was to compare gait variability in individuals with trans-tibial and trans-femoral lower limb loss. Methods: Ten individuals with traumatic unilateral trans-femoral limb loss (TF), 12 individuals with traumatic trans-tibial lower limb loss (TT), and 12 healthy individuals (HI) participated in the study. Gait characteristics, including mean step length, step length variability, ambulation index, and time on each foot, were evaluated on a treadmill. Participants walked at their preferred speed for six minutes. Data from the 4th to the 6th minute were selected for statistical analysis to eliminate learning effects. Results: There were differences between the groups in intact limb step length variation, time on each foot, ambulation index, and mean age (p < .05) according to the Kruskal-Wallis test. Pairwise analyses showed differences between TT and TF in residual limb variation (p = .041), time on the intact foot (p = .024), time on the prosthetic foot (p = .024), and ambulation index (p = .003), in favor of the TT group. There were differences between the TT and HI groups in intact limb variation (p = .002), time on the intact foot (p < .001), time on the prosthetic foot (p < .001), and ambulation index (p < .001), in favor of the HI group. There were differences between the TF and HI groups in intact limb variation (p = .001), time on the intact foot (p = .01), and ambulation index (p < .001), in favor of the HI group. The groups differed in mean age, the HI group being younger (p < .05). The groups were similar in step length (p > .05), and the individuals with lower limb loss were similar in duration of prosthesis use (p > .05). Conclusions: This pilot study provided basic data about gait stability in individuals with traumatic lower limb loss. The results showed that, to evaluate gait differences between different amputation levels, long-duration gait analysis methods may be useful for obtaining more valuable information. On the other hand, the similarity in step length may result from effective prosthesis use or effective gait rehabilitation, as all participants with lower limb loss had already been trained. The differences between TT and HI and between TF and HI may result from age-related factors; therefore, an age-matched HI population is recommended for future studies. Increasing the number of participants and comparing age-matched groups are also recommended in order to generalize these results.
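
The group comparisons described above follow a standard non-parametric pattern: a Kruskal-Wallis test across the three groups, with pairwise follow-up tests. A minimal sketch with placeholder data, not the study's measurements, including step-length variability expressed as a coefficient of variation:

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

def step_length_cv(step_lengths):
    """Step-length variability as coefficient of variation (%)."""
    step_lengths = np.asarray(step_lengths, dtype=float)
    return 100.0 * step_lengths.std(ddof=1) / step_lengths.mean()

rng = np.random.default_rng(2)
steps = rng.normal(600, 25, size=120)   # one participant's step lengths (mm), placeholder
print(f"example step-length CV = {step_length_cv(steps):.1f}%")

# Placeholder per-participant outcome values for the three groups.
tf = rng.normal(55, 6, size=10)   # trans-femoral group
tt = rng.normal(57, 4, size=12)   # trans-tibial group
hi = rng.normal(60, 2, size=12)   # healthy individuals

h, p = kruskal(tf, tt, hi)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.3f}")
if p < 0.05:                      # pairwise follow-up, as in the study
    for name, a, b in [("TF vs TT", tf, tt), ("TT vs HI", tt, hi), ("TF vs HI", tf, hi)]:
        u, pu = mannwhitneyu(a, b)
        print(name, f"U = {u:.1f}, p = {pu:.3f}")
```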

Keywords: lower limb loss, amputee, gait variability, gait analyses

Procedia PDF Downloads 280
204 Event Data Representation Based on Time Stamp for Pedestrian Detection

Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita

Abstract:

In association with the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution, which can achieve 1 Mframe/s, and a high dynamic range (120 dB). However, the property that can contribute most to low energy consumption is its sparsity; to be more specific, this sensor only captures the pixels that have an intensity change. In other words, there is no signal in areas without any intensity change. This makes the sensor more energy efficient than conventional sensors such as RGB cameras, because redundant data can be removed. On the other side of these advantages, the data are difficult to handle because the data format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1), and a timestamp; it does not include intensity such as RGB values. Therefore, as existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. In order to solve the difficulties caused by the data format differences, most prior art builds frame data and feeds it to deep learning models such as convolutional neural networks (CNNs) for object detection and recognition purposes. However, even when the data can be fed in this way, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of an RGB pixel value, it is apparent that polarity information is not rich enough. Considering this context, we propose to use the timestamp information as the data representation that is fed to deep learning. Concretely, we first build frame data divided by a certain time period, then assign an intensity value according to the timestamp within each frame; for example, a high value is given to a recent signal. We expected that this data representation could capture the features, especially of moving objects, because the timestamps represent the movement direction and speed. Using this proposed method, we built our own dataset with a DVS fixed on a parked car in order to develop an application for a surveillance system that can detect persons around the car. We consider the DVS one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a mostly static scene. For comparison purposes, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way as ours but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
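
A minimal sketch of the timestamp-based representation described above, assuming an (N, 4) event array of (x, y, polarity, timestamp) rows as given in the abstract; the frame size, window bounds, and the linear recency mapping are illustrative choices rather than the paper's exact parameters:

```python
import numpy as np

def timestamp_frame(events, height, width, t_start, t_end):
    """Build one frame in which pixel intensity encodes event recency:
    timestamps near t_end map to values near 1.0, near t_start to ~0.0."""
    frame = np.zeros((height, width), dtype=np.float32)
    sel = events[(events[:, 3] >= t_start) & (events[:, 3] < t_end)]
    sel = sel[np.argsort(sel[:, 3])]          # so later events overwrite earlier ones
    for x, y, _pol, t in sel:                 # polarity is ignored in this variant
        frame[int(y), int(x)] = (t - t_start) / (t_end - t_start)
    return frame

# Example: three synthetic events on a 4x4 sensor within a 10 ms window.
ev = np.array([[0.0, 0.0, 1.0, 0.001],
               [1.0, 2.0, -1.0, 0.004],
               [1.0, 2.0, 1.0, 0.009]])
print(timestamp_frame(ev, 4, 4, 0.0, 0.010))
```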

Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption

Procedia PDF Downloads 101
203 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model

Authors: Mohammad Zamani, Ramin Mansouri

Abstract:

Spillways are among the most important hydraulic structures of dams, providing for the stability of the dam and downstream areas at the time of flood. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the finite volume method. The PISO scheme was applied for velocity-pressure coupling. The most commonly used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. In this study, three computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. In order to simulate the flow, the k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used. Also, in order to find the best wall function, two types, the standard wall function and the non-equilibrium wall function, were investigated. The laminar model did not produce satisfactory flow depth and velocity along the morning-glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were identical. Furthermore, the standard wall function produced better results than the non-equilibrium wall function. Thus, for the remaining simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study is the trajectory profile of the water jet. The results show that the fine computational grid, a velocity condition at the flow inlet boundary, and a pressure condition at the boundaries in contact with air provide the best possible results. Also, the standard wall function is chosen for the wall treatment, and the standard k-ε turbulence model gives the results most consistent with the experiments. As the jet gets closer to the end of the basin, the difference between the computational and experimental results increases. The mesh with 10,602 nodes, the standard k-ε turbulence model, and the standard wall function provide the best results for modeling the flow in a circular vertical spillway. There was good agreement between numerical and experimental results in the upper and lower nappe profiles. In the study of water level over the crest and discharge, at low water levels the numerical results are in good agreement with the experimental ones, but as the water level increases, the difference between the numerical and experimental discharge grows. In the study of the flow coefficient, as the P/R ratio decreases, the difference between the numerical and experimental results increases.
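
The flow-coefficient study mentioned above rests on the free-flow (crest control) discharge relation commonly used for circular-crested spillways, Q = C·(2πR)·H^(3/2). A minimal sketch with illustrative numbers, not the study's experimental values; the coefficient here is dimensional and assumed:

```python
import math

def free_flow_discharge(c, radius_m, head_m):
    """Discharge (m^3/s) over a circular crest of radius R under head H,
    using Q = C * (2*pi*R) * H^(3/2)."""
    return c * (2.0 * math.pi * radius_m) * head_m ** 1.5

c = 1.8      # illustrative discharge coefficient (dimensional, m^0.5/s)
R = 0.25     # crest radius (m), illustrative
for H in (0.02, 0.05, 0.10):     # increasing head over the crest
    print(f"H = {H:.2f} m -> Q = {free_flow_discharge(c, R, H):.4f} m^3/s")
```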

Keywords: circular vertical, spillway, numerical model, boundary conditions

Procedia PDF Downloads 88
202 Through Additive Manufacturing: A New Perspective for the Mass Production of Made in Italy Products

Authors: Elisabetta Cianfanelli, Paolo Pupparo, Maria Claudia Coppola

Abstract:

Recent evolutions in innovation processes and in the intrinsic tendencies of the product development process lead to new considerations about the design flow. The instability and complexity that contemporary life describes define new problems in the production of products, stimulating at the same time the adoption of new solutions across the entire design process. The advent of additive manufacturing, but also of IoT and AI technologies, continuously puts us in front of new paradigms regarding design as a social activity. The totality of these technologies, from the point of view of application, raises a whole series of problems and considerations immanent to design thinking. Addressing these problems may require some initial intuition and the use of a provisional set of rules or plausible strategies, i.e., heuristic reasoning. At the same time, however, the evolution of digital technology and the computational speed of new design tools describe a new and contrasting design framework in which to operate. It is therefore interesting to understand the opportunities and boundaries of the new man-algorithm relationship. The contribution investigates the man-algorithm relationship starting from the state of the art of the Made in Italy model: the best-known fields of application are described, and the focus then turns to specific cases in which the mutual relationship between man and AI becomes a new driving force of innovation for entire production chains. On the other hand, the use of algorithms could encompass many design phases, such as the definition of shape, dimensions, proportions, materials, static verifications, and simulations. Operating in this context therefore becomes a strategic action, capable of defining fundamental choices for the design of product systems in the near future. If there is a human-algorithm combination within a new integrated system, quantitative values can be controlled in relation to qualitative and material values. The trajectory described thus becomes a new design horizon in which to operate, where it is interesting to highlight the good practices that already exist. In this context, the designer developing new forms can experiment with ways still unexpressed in the project and can define a new synthesis and simplification of algorithms, so that each artifact carries a signature that defines it in all its parts, emotional and structural. This signature of the designer, a combination of values and design culture, will be internal to the algorithms and able to relate to digital technologies, creating a generative dialogue for design purposes. The envisaged result indicates a new vision of digital technologies, no longer understood only as custodians of vast quantities of information, but also as valid integrated tools in close relationship with the design culture.

Keywords: decision making, design heuristics, product design, product design process, design paradigms

Procedia PDF Downloads 119
201 An A-Star Approach for the Quickest Path Problem with Time Windows

Authors: Christofas Stergianos, Jason Atkin, Herve Morvan

Abstract:

As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to have a reliable solution that can be implemented for ground movement operations. The ground movement of aircraft at an airport, allocating a path for each aircraft to follow in order to reach its destination (e.g. runway or gate), is one process that can be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm was developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase its accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which accounts for the extra time needed to push back an aircraft and turn its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the take-off sequence of the aircraft is optimized and has to be respected. QPPTW searches for the quickest path by expanding the search in all directions, similarly to Dijkstra's algorithm. Finding a way to direct the expansion can assist the search and achieve better performance. We have further modified the QPPTW algorithm to use a heuristic approach to guide the search. This new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far away the target is. It is important to consider the remaining time needed to reach the target, so that delays caused by other aircraft can be part of the optimization. All of the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft. In this way, the quickest path is found for each aircraft while taking into account the movements of the previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, the routing of a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application the algorithm needs to be very fast, and this speed increase will allow us to add additional features and complexity, allowing further integration with other processes at airports and leading to more optimized and environmentally friendly airports.
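
The core idea of A-star QPPTW, Dijkstra-style expansion guided by an estimate of the remaining travel time, can be sketched as follows. Time windows, pushback, and runway sequencing are omitted; the graph, coordinates, and speed are illustrative. The heuristic stays admissible as long as nothing moves faster than max_speed:

```python
import heapq
import math

def a_star_time(graph, pos, start, goal, max_speed):
    """graph: node -> [(neighbor, travel_time_s)]; pos: node -> (x, y) in metres."""
    def h(n):  # remaining-time estimate: straight-line distance / max speed
        return math.dist(pos[n], pos[goal]) / max_speed
    open_set = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return g, path                      # quickest arrival time and route
        for nbr, dt in graph[node]:
            ng = g + dt
            if ng < best.get(nbr, float("inf")):
                best[nbr] = ng
                heapq.heappush(open_set, (ng + h(nbr), ng, nbr, path + [nbr]))
    return None

graph = {"A": [("B", 30), ("C", 10)], "B": [("D", 10)], "C": [("D", 40)], "D": []}
pos = {"A": (0, 0), "B": (200, 0), "C": (0, 150), "D": (200, 150)}
print(a_star_time(graph, pos, "A", "D", max_speed=20.0))   # -> (40.0, ['A', 'B', 'D'])
```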

Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling

Procedia PDF Downloads 231
200 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution

Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino

Abstract:

This paper presents a design methodology in which stakeholders are assisted in the exploration of a so-called negotiation space, aiming at the maximization of both group social welfare and each stakeholder's perceived utility. The outcome is fewer design iterations needed for design convergence while obtaining higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken in this stage carry delayed costs. Hence, it is necessary to have a clear definition of the problem under analysis, especially in its initial definition. This can be obtained through a robust generation and exploration of design alternatives. The process must consider that design usually involves various individuals who take decisions affecting one another, so effective coordination among these decision-makers is critical: finding a mutually agreeable solution will reduce the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology that aims to speed up the maturation of the mission concept. This push is obtained through a guided exploration of the negotiation space, which involves the autonomous exploration and optimization of trade opportunities among stakeholders via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method infused with game theory and multi-attribute utility theory. In particular, game theory models the negotiation process in order to reach equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process in efficiently and rapidly searching for the Pareto equilibria among stakeholders. Finally, the concept of utility constitutes the mechanism that bridges the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders' needs while guaranteeing the effectiveness of the selected mission concept thanks to its robustness to change. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
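
The balance between group social welfare and individual stakeholder utility described above can be illustrated by comparing a utilitarian choice (sum of utilities) with a Nash bargaining choice (product of utilities) over a discrete set of alternatives. A minimal sketch with an illustrative utility table, not the paper's actual framework:

```python
import math

# rows: design alternatives; columns: stakeholder utilities in (0, 1]
utilities = {
    "design_A": [0.9, 0.2, 0.8],
    "design_B": [0.6, 0.6, 0.6],
    "design_C": [0.7, 0.5, 0.4],
}

utilitarian = max(utilities, key=lambda d: sum(utilities[d]))
nash = max(utilities, key=lambda d: math.prod(utilities[d]))

print("utilitarian optimum:", utilitarian)   # may favor lopsided outcomes (design_A here)
print("Nash optimum:      ", nash)           # favors balanced outcomes (design_B here)
```

The Nash product penalizes alternatives that leave any single stakeholder poorly served, which is one way of capturing the mutual-agreement goal the methodology pursues.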

Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization

Procedia PDF Downloads 137
199 In-Plume H₂O, CO₂, H₂S and SO₂ in the Fumarolic Field of La Fossa Cone (Vulcano Island, Aeolian Archipelago)

Authors: Cinzia Federico, Gaetano Giudice, Salvatore Inguaggiato, Marco Liuzzo, Maria Pedone, Fabio Vita, Christoph Kern, Leonardo La Pica, Giovannella Pecoraino, Lorenzo Calderone, Vincenzo Francofonte

Abstract:

The periods of increased fumarolic activity at La Fossa volcano have been characterized, since the early '80s, by changes in gas chemistry and in the output rate of the fumaroles. Except for the direct measurements of the steam output from fumaroles performed from 1983 to 1995, the mass output of the single gas species has only recently been measured, with various methods, sporadically or for short periods. Since 2008, a scanning DOAS system has been operating in the Palizzi area for the remote measurement of the in-plume SO₂ flux. On these grounds, the need for a cross-comparison of different methods for the in situ measurement of the output rates of different gas species is envisaged. In 2015, two field campaigns were carried out, aimed at: 1. the mapping of the concentrations of CO₂, H₂S, and SO₂ in the fumarolic plume at 1 m from the surface, by using specific open-path tunable diode lasers (GasFinder, Boreal Europe Ltd.) and an active DOAS for SO₂, respectively; these measurements, coupled with simultaneous ultrasonic wind speed and meteorological data, have been processed to obtain the dispersion map and the output rate of the single species over the whole fumarolic field; 2. the mapping of the concentrations of CO₂, H₂S, SO₂, and H₂O in the fumarolic plume at 0.5 m from the soil, by using an integrated system including IR spectrometers and specific electrochemical sensors; this has provided the concentration ratios of the analysed gas species and their distribution in the fumarolic field; 3. the in-fumarole sampling of vapour and measurement of the steam output, to validate the remote measurements. The dispersion map of CO₂, obtained from the tunable laser measurements, shows a maximum CO₂ concentration at 1 m from the soil of 1000 ppmv along the rim and 1800 ppmv on the inner slopes. As observed, the largest contribution derives from a wide fumarole on the inner slope, despite its present outlet temperature of 230°C, almost 200°C lower than that measured at the rim fumaroles. Indeed, the fumaroles on the inner slopes are among those emitting the largest amounts of magmatic vapour and, during the 1989-1991 crisis, reached a temperature of 690°C. The estimated CO₂ and H₂S fluxes are 400 t/d and 4.4 t/d, respectively. The coeval SO₂ flux, measured by the scanning DOAS system, is 9±1 t/d. The steam output, recomputed from the CO₂ flux measurements, is about 2000 t/d. The various direct and remote methods (as described at points 1-3) have produced coherent results, which encourage the use of daily, automatic DOAS SO₂ data, coupled with periodic in-plume measurements of the different acidic gases, to obtain the total mass rates.
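
The recomputation of steam output from the CO₂ flux mentioned above amounts to scaling a mass flux by a molar ratio. A minimal sketch: the H₂O/CO₂ molar ratio below is chosen so the result matches the ~2000 t/d quoted in the abstract, and is illustrative rather than a reported value:

```python
M_H2O, M_CO2 = 18.02, 44.01        # molar masses (g/mol)

co2_flux_t_d = 400.0               # measured CO2 mass flux (t/d), from the abstract
h2o_co2_molar = 12.2               # assumed in-plume H2O/CO2 molar ratio (illustrative)

# Convert CO2 mass flux -> molar flux -> H2O molar flux -> H2O mass flux.
steam_flux_t_d = co2_flux_t_d * h2o_co2_molar * (M_H2O / M_CO2)
print(f"steam output ≈ {steam_flux_t_d:.0f} t/d")   # ≈ 2000 t/d with these numbers
```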

Keywords: DOAS, fumaroles, plume, tunable laser

Procedia PDF Downloads 399
198 Movie and Theater Marketing Using the Potentials of Social Networks

Authors: Seyed Reza Naghibulsadat

Abstract:

The nature of communication includes various forms of media production, among them film and theater. Since social networks have emerged, they have brought their own communication capabilities: speed, public access, the absence of a media organization, the production of extensive content, and the development of critical thinking. They also offer the capability to broaden access to all kinds of media productions, including movies and theater shows, although this works differently under different conditions and in different communities. In terms of the scale of exploitation, film has a more general audience, while theater has a special audience. The film industry is more developed, being based on more modern technologies, whereas theater, based on older forms of communication, contains more intimate and emotional aspects. In general, however, the main focus is the development of access to movies and theater shows, which those involved in this field emphasize because of the capabilities of social networks. In this research, we look at these two areas and the relevant components for each through social networks, as well as the points they have in common. The main goal of this research is to identify the strengths and weaknesses of using social networks for the marketing of movies and theater shows, while also considering the opportunities and threats in this field. With the emergence of social networks, the attractions of these two types of media production, and the ability to change positions, can provide the opportunity to become media with greater exploitation and higher profitability; the main consideration, however, is the set of opinions about these capabilities and the ability to use them for film and theater marketing. The main questions of the research are: what are the marketing components for movies and theaters using social media capabilities? What are their strengths and weaknesses? And what opportunities and threats face this market? This research was carried out with two methods, SWOT and meta-analysis, using non-probability purposive sampling. The results show that the recent approach is one based on eliminating threats and weaknesses, emphasizing strengths, and exploiting opportunities in order to develop film and theater marketing based on the capabilities of social networks, within the framework of local cultural values, while presenting achievements on an international or universal scale. This leads to the introduction of authentic Iranian culture to foreign enthusiasts within the framework of movies and theater art. Therefore, according to the results obtained from the respondents, the model for using the capabilities of social networks for movie or theater marketing is one based on SO strategies, in other words offensive strategies, so that it can take advantage of internal strengths and make maximum use of external situations and opportunities to develop the use of movies and theater performances.

Keywords: marketing, movies, theatrical show, social network potentials

Procedia PDF Downloads 77
197 Marine Environmental Monitoring Using an Open Source Autonomous Marine Surface Vehicle

Authors: U. Pruthviraj, Praveen Kumar R. A. K. Athul, K. V. Gangadharan, S. Rao Shrikantha

Abstract:

An open-source-based autonomous unmanned marine surface vehicle (UMSV) is developed for marine applications such as pollution control, environmental monitoring, and thermal imaging. A double rotomoulded-hull boat is deployed, which is rugged, tough, quick to deploy, and fast-moving. It is suitable for environmental monitoring and is designed for easy maintenance. A 2 HP electric outboard marine motor is used, powered by a lithium-ion battery that can also be charged from a solar charger. All connections are completely waterproof to IP67 rating. At full throttle, the marine motor is capable of up to 7 km/h. The motor is integrated with an open-source controller based on a Cortex-M4F for adjusting the direction of the motor. The UMSV can be operated in three modes: semi-autonomous, manual, and fully automated. One channel of a 2.4 GHz radio-link 8-channel transmitter is used for toggling between the different modes of the UMSV. An onboard GPS system has been fitted to the electric outboard marine motor for range finding and GPS positioning. The entire system can be assembled in the field in less than 10 minutes. A FLIR Lepton thermal camera core is integrated with a 64-bit quad-core Linux-based open-source processor, facilitating real-time capture of thermal images, and the results are stored on a micro SD card, which serves as the data storage device of the system. The thermal camera is interfaced to the processor through the SPI protocol. These thermal images are used for finding oil spills and for looking for people who are drowning in low visibility during the night. A real-time clock (RTC) module is attached to the battery to provide the date and time of the thermal images captured. For the live video feed, a 900 MHz long-range video transmitter and receiver are set up, by which, at a higher power output, a range of 40 miles has been achieved. A multi-parameter probe is used to measure the following parameters: conductivity, salinity, resistivity, density, dissolved oxygen content, ORP (oxidation-reduction potential), pH level, temperature, water level, and pressure (absolute). It can withstand a maximum pressure of 160 psi, down to 100 m. This work represents a field demonstration of an open-source-based autonomous navigation system for a marine surface vehicle.
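
The thermal camera interface described above, a Lepton core read over SPI, might look roughly like the following on a Linux single-board computer using the spidev module. The bus numbers, clock speed, and the 80x60 Lepton 2.x packet layout are assumptions about this particular build, not details given in the abstract:

```python
import spidev

PACKET_BYTES = 164          # VoSPI packet: 4-byte header + 80 16-bit pixels per line
LINES_PER_FRAME = 60        # assumed Lepton 2.x sensor height (80x60)

spi = spidev.SpiDev()
spi.open(0, 0)              # assumed SPI bus/device for this wiring
spi.max_speed_hz = 10_000_000
spi.mode = 0b11             # the Lepton uses SPI mode 3

frame = []
while len(frame) < LINES_PER_FRAME:
    pkt = spi.readbytes(PACKET_BYTES)
    if (pkt[0] & 0x0F) == 0x0F:     # discard packet: no valid line number
        continue
    line_no = ((pkt[0] & 0x0F) << 8) | pkt[1]
    if line_no < LINES_PER_FRAME:
        frame.append(pkt[4:])       # keep the 160 payload bytes of the video line
spi.close()
print("captured one", LINES_PER_FRAME, "-line thermal frame")
```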

Keywords: open source, autonomous navigation, environmental monitoring, UMSV, outboard motor, multi-parameter probe

Procedia PDF Downloads 242
196 Upward Spread Forced Smoldering Phenomenon: Effects and Applications

Authors: Akshita Swaminathan, Vinayak Malhotra

Abstract:

Smoldering is one of the most persistent types of combustion, which can continue for very long periods (hours, days, months) if there is an abundance of fuel. It causes a notable number of accidents and is one of the prime suspects in fire and safety hazards. It can be ignited by weaker ignition sources and is more difficult to suppress than flaming combustion. Upward spread smoldering is the case in which the air flow is parallel to the direction of the smoldering front. This type of smoldering is quite uncontrollable, and hence there is a need to study the phenomenon. Compared to flaming combustion, smoldering often goes unrecognised and hence is a cause of various fire accidents. A simplified experimental setup was built to study upward spread smoldering, its behavior under varying forced flow, and its behavior in the presence of external heat sources and alternative energy sources such as acoustic energy. Linear configurations were studied for varying forced flow effects on upward spread smoldering. The effect of varying forced flow on upward spread smoldering was observed and studied (i) in the presence of an external heat source and (ii) in the presence of an external alternative energy source (acoustic energy). The role of ash removal was also observed and studied. Results indicate that upward spread forced smoldering was affected by key controlling parameters such as the speed of the forced flow, the surface orientation, and the interspace distance (distance between the forced flow and the pilot fuel). When an external heat source was placed on either side of the pilot fuel, the smoldering phenomenon was observed to be affected, and the surface orientation and the interspace distance between the external heat sources and the pilot fuel were found to play a large role in altering the regression rate. Lastly, by impinging an alternative energy source in the form of acoustic energy on the smoldering front, it was observed that varying frequencies affected the smoldering phenomenon in different ways, with the surface orientation again playing an important role. This project highlights the importance of fire and safety hazards and of means of better combustion for scientific research and practical applications. The knowledge acquired from this work can be applied to engineering systems ranging from aircraft and spacecraft to building fires and wildfires, and can help us better understand and hence avoid such widespread fires. Various fire disasters have been recorded in aircraft due to small electrical short circuits that led to smoldering fires; these eventually caused the engines to catch fire, at a cost of damage to life and property. Studying this phenomenon can help us to control, if not prevent, such disasters.

Keywords: alternative energy sources, flaming combustion, ignition, regression rate, smoldering

Procedia PDF Downloads 145
195 God, The Master Programmer: The Relationship Between God and Computers

Authors: Mohammad Sabbagh

Abstract:

Anyone who reads the Torah or the Quran learns that GOD created everything around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what are known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands in words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything around us in six days, just as we can program a virtual world on the computer. GOD did mention in the Quran that one day, where GOD's throne is, is 1000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6000 years of what we count, and gave everything its function, attributes, class, methods, and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because any input, whether physical, spiritual, or by thought, that is outputted by any of HIS creatures has its answer already programmed. Any path, any thought, any idea has already been laid out, with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as the Fastest Accountant in the Quran; the Arabic word that was used is close to 'processor' or 'calculator'. If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, then in 2022 you are going to require one of the strongest, fastest, most capable supercomputers in the world, with a theoretical speed of 50 petaFLOPS, to accomplish that; in other words, the ability to perform one quadrillion (10¹⁵) floating-point operations per second, a number a human cannot even fathom. To put this in perspective: while the computer is going through those 50 petaFLOPS of calculations per second, GOD is calculating that too, and HE is also calculating all the physics of every atom, and of what is smaller than that, in the actual explosion, and it is all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand it, and whatever programming or science you deduce from whatever event you observed can relate to other similar events. That is why GOD might have said in the Quran that it is the people of knowledge, scholars, or scientists who fear GOD the most! One thing that is essential for us, in order to keep up with what the computer is doing and to track our progress along with any errors, is that we incorporate logging mechanisms and backups. GOD said in the Quran that 'WE used to copy what you used to do'. Essentially, as the world is running, think of it as an interactive movie that is being played out in front of you, in a fully immersive, non-virtual reality setting; GOD is recording it, from every angle, to every thought, to every action. This brings the idea of how overwhelming the Day of Judgment will be, when one might realize that it is going to be a fully immersive video as we receive and read our book.

Keywords: programming, the Quran, object orientation, computers and humans, GOD

Procedia PDF Downloads 107
194 Music as Source Domain: A Cross-Linguistic Exploration of Conceptual Metaphors

Authors: Eleanor Sweeney, Chunyuan Di

Abstract:

The metaphors people use in everyday discourse do not arise randomly; rather, they develop from our physical experiences in our social and cultural environments. Conceptual Metaphor Theory (CMT) explains that, through metaphor, we apply our embodied understanding of the physical world to non-material concepts in order to understand and express abstract concepts. Our most productive source domains derive from embodied understanding and allow us to develop primary metaphors, and from primary metaphors an elaborate, creative world of culturally constructed complex metaphors. Cognitive linguistics researchers draw upon individual embodied experience for primary metaphors. Socioculturally embodied experience through music has long furnished linguistic expressions in diverse languages, as conceptual metaphors or everyday expressions. Can a socially embodied experience function in the same way as an individually embodied experience in the creation of conceptual metaphors? The authors argue that, since music is inherently social and embodied, musical experiences function as a richly motivated source domain. The focus of this study is socially embodied musical experience, which is then reflected and expressed through metaphors. This cross-linguistic study explores music as a source domain for metaphors of social alignment in English, French, and Chinese. The authors explored two public discourse sites, Facebook and Linguée, in order to collect linguistic metaphors from the three languages. By conducting this cross-linguistic study, cross-cultural similarities and differences in metaphors for which music is the source domain can be examined. Different musical elements, such as melody, speed, rhythm, and harmony, are analyzed for their possible metaphoric meanings of social alignment. Our findings suggest that the general metaphor COOPERATION IS MUSIC is a productive metaphor with several subcases, and that correlated social behaviors can be metaphorically expressed with certain elements in music. For example, since performance is a subset of the category behavior, there is a natural mapping from performance in music to behavior in social settings: SOCIAL ALIGNMENT IS MUSICAL PERFORMANCE. Musical performance entails a collective social expectation that exerts control over individual behavior. When individual behavior does not align with the collective social expectation, music-related expressions are often used to express how the individual is violating social norms; moreover, when individuals do align their behavior with social norms, similar musical expressions are used. Cooperation is a crucial social value in all cultures, indeed a key element of survival, and music provides a coherent, consistent, and rich source domain, one based upon a universal and definitive cultural practice.

Keywords: Chinese, Conceptual Metaphor Theory, cross-linguistic, culturally embodied experience, English, French, metaphor, music

Procedia PDF Downloads 173
193 Public-Private Partnership for Critical Infrastructure Resilience

Authors: Anjula Negi, D. T. V. Raghu Ramaswamy, Rajneesh Sareen

Abstract:

Road infrastructure is emphatically among the most critical infrastructure for the Indian economy. The road network in the country, around 3.3 million km, is the second largest in the world. Nationwide statistics released by the Ministry of Road Transport and Highways reveal that an accident happens every minute and a death every 3.7 minutes. This reported scale of safety failure is a matter of grave concern and economically represents a national loss of 3% of GDP. The Union Budget 2016-17 allocated USD 12 billion annually for the development and strengthening of roads, an increase of 56% over the previous year, highlighting the importance of roads as critical infrastructure. National highways alone represent only 1.7% of total road linkages but carry over 40% of traffic. Further, trends analysed from 2002 to 2011 on national highways indicate that in less than a decade, a 22% increase in accidents was reported, but a 68% increase in death fatalities. The paramount inference is that accident severity has increased with time. Over these years, many measures have been taken to increase road safety, lessen damage to physical assets, and reduce vulnerabilities, building toward resilient road infrastructure. In the context of the national highway development program, policy makers proposed implementation of around 20% of such road length on PPP mode. These roads were taken up on high-density traffic considerations and for qualitative implementation. In order to understand the resilience impacts and safety parameters enshrined in the various PPP concession agreements executed with private sector partners, such highway-specific projects are appraised. This research paper attempts to assess the safety measures taken and the possible reasons behind the increase in accident severity through these PPP case study projects. It delves further into safety features to understand the policy measures adopted in these cases, and offers an introspection on the reasons for severity: whether it is an outcome of increased speeds, faulty road design and geometrics, driver negligence, or a lack of discipline in following lane traffic at increased speed. The assessment studies these aspects in both pre-PPP and post-PPP project structures, based on literature review and opinion surveys with sectoral experts. Looking forward, the Ministry of Road Transport and Highways' estimate for strengthening the national highway network is USD 77 billion within the next five years. The outcome of this paper provides policy makers with an understanding of the resilience measures adopted and possible options for an accessible, safe, and expandable road network, to inform policy initiatives and funding allocation in securing critical infrastructure.

Keywords: national highways, policy, PPP, safety

Procedia PDF Downloads 258
192 Co-Seismic Deformation Using InSAR Sentinel-1A: Case Study of the 6.5 Mw Pidie Jaya, Aceh, Earthquake

Authors: Jefriza, Habibah Lateh, Saumi Syahreza

Abstract:

The 2016 Mw 6.5 Pidie Jaya earthquake is one of the biggest disasters to have occurred in Aceh within the last five years. This earthquake caused severe damage to many infrastructures such as schools, hospitals, mosques, and houses in the district of Pidie Jaya and surrounding areas. Earthquakes commonly occur in Aceh Province because the Aceh-Sumatra region lies on the convergent boundary where the Indo-Australian Plate subducts beneath the Sunda Plate. This convergence is responsible for the intensification of seismicity in the region. The plates converge at a rate of 63 mm per year, and the right-lateral component is accommodated by strike-slip faulting within Sumatra, mainly along the great Sumatran fault. This paper presents preliminary findings of an InSAR study aimed at investigating the co-seismic surface deformation pattern in Pidie Jaya, Aceh, Indonesia. Co-seismic surface deformation is the rapid displacement that occurs at the time of an earthquake, and co-seismic displacement mapping is required to study the behavior of seismic faults. InSAR is a powerful tool for measuring Earth surface deformation to a precision of a few centimetres. In this study, two radar images of the same area acquired at two different times are required to detect changes in the Earth's surface. Ascending and descending Sentinel-1A (S1A) synthetic aperture radar (SAR) data and the Sentinel Application Platform (SNAP) toolbox were used to generate the SAR interferogram. To visualize the interferometric result, the S1A master (26 Nov 2016) and slave (26 Dec 2016) data-sets were utilized as the main data source for mapping the co-seismic surface deformation. The results show that fringes of phase difference appear in the border region as a result of movement detected with the interferometric technique. The dominant fringe pattern also appears near the coastal area, which is consistent with field investigations conducted two days after the earthquake. However, the study also has limitations arising from resolution and atmospheric artefacts in the SAR interferograms. Atmospheric artefacts are caused by changes in the atmospheric refractive index of the medium and, as a result, limit the ability to produce a coherent image; low coherence degrades the fringes from which movement is detected. In addition, the spatial resolution of the Sentinel satellite has not been sufficient for studying land surface deformation in this area. Further studies will therefore investigate both ALOS and TerraSAR-X, which offer improved spatial resolution.
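
As a minimal sketch of the interferogram-formation step described above (illustrative NumPy, not the authors' SNAP workflow), the wrapped phase difference between two co-registered complex SAR scenes can be computed as follows; the arrays are synthetic stand-ins for the master and slave acquisitions:

```python
import numpy as np

# Minimal illustration of interferogram formation: the interferometric
# phase is the argument of the master scene times the complex conjugate
# of the co-registered slave scene. Synthetic data stands in for real SLCs.

rng = np.random.default_rng(0)
shape = (512, 512)

# Synthetic single-look complex (SLC) scenes, with a simulated
# deformation phase ramp added to the slave acquisition.
amplitude = rng.rayleigh(1.0, shape)
phase = rng.uniform(-np.pi, np.pi, shape)
master = amplitude * np.exp(1j * phase)

yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
deformation_phase = 2 * np.pi * xx / 200.0      # one fringe every 200 px
slave = amplitude * np.exp(1j * (phase + deformation_phase))

# Interferogram: wrapped phase in (-pi, pi]; each full 2*pi cycle
# ("fringe") corresponds to half a wavelength of motion along the radar
# line of sight (~2.8 cm for Sentinel-1 C-band).
interferogram = master * np.conj(slave)
wrapped_phase = np.angle(interferogram)

print(wrapped_phase.min(), wrapped_phase.max())
```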

Keywords: earthquake, InSAR, interferometric, Sentinel-1A

Procedia PDF Downloads 197
191 Cloud Based Supply Chain Traceability

Authors: Kedar J. Mahadeshwar

Abstract:

Concept introduction: This paper discusses an innovative cloud-based, analytics-enabled solution that could address a major industry challenge approaching all of us globally faster than one would think. The world of the supply chain for drugs and devices is changing rapidly. In the US, the Drug Supply Chain Security Act (DSCSA) is a new law for tracing, verification, and serialization, phasing in starting January 1, 2015 for manufacturers, repackagers, wholesalers, and pharmacies/clinics. Similarly, we are seeing pressures building up in Europe, China, and many other countries that would require absolute traceability of every drug and device end to end. Companies (both manufacturers and distributors) can use this opportunity not only to be compliant but to differentiate themselves from the competition. Moreover, a country such as the UAE can be the leader in coming up with a global solution that brings innovation to this industry. Problem definition and timing: The counterfeit drug market, recognized by the FDA, causes billions of dollars in losses every year. Even in the UAE, the prevalence of counterfeit drugs, which enter through ports such as Dubai, remains a big concern, as per the UAE pharma and healthcare report, Q1 2015. Distribution of drugs and devices involves multiple processes and systems that do not talk to each other. Consumer confidence is at risk due to this lack of traceability, and any leading provider is at risk of losing its reputation. Globally, there is increasing pressure from governments and regulatory bodies to trace the serial numbers and lot numbers of every drug and medical device throughout a supply chain. Though many large corporations use some form of ERP (enterprise resource planning) software, such software is far from capable of tracing a lot or serial number beyond the enterprise and making this information easily available in real time. Solution: The proposed solution is a service provider that allows all subscribers to take advantage of this service. It allows a service provider, regardless of its physical location, to host this cloud-based traceability and analytics solution covering millions of distribution transactions that capture the lots of each drug and device. The platform captures the movement of every medical device and drug end to end, from its manufacturer to a hospital or a doctor, through a series of distributor or retail networks. The platform also provides an advanced analytics solution for intelligent online reporting. Why Dubai? Opportunity exists with the huge investment made in Dubai Healthcare City, along with the technology and infrastructure to attract more FDI to provide such a service. The UAE and similar countries will face this pressure from regulators globally in the near future. More interestingly, Dubai can attract such innovators and companies to run and host such a cloud-based solution and become a global hub of traceability.
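
To make the traceability idea concrete, a hypothetical sketch of the kind of record such a platform might capture is shown below in Python; all field names are invented for illustration and do not represent an actual DSCSA schema or any vendor's API:

```python
from dataclasses import dataclass, field
from datetime import datetime

# A hypothetical sketch of the record a cloud traceability platform might
# capture for each hand-off in the supply chain. Field names are
# illustrative only, not an actual DSCSA schema.

@dataclass
class TraceEvent:
    product_id: str         # drug or device identifier
    lot_number: str
    serial_number: str
    from_party: str         # e.g., manufacturer, repackager, wholesaler
    to_party: str           # e.g., pharmacy, hospital, doctor
    timestamp: datetime = field(default_factory=datetime.utcnow)

def chain_of_custody(events: list[TraceEvent], serial: str) -> list[TraceEvent]:
    """Reconstruct the end-to-end path of one serialized unit."""
    return sorted(
        (e for e in events if e.serial_number == serial),
        key=lambda e: e.timestamp,
    )
```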

Keywords: cloud, pharmaceutical, supply chain, tracking

Procedia PDF Downloads 528
190 Numerical and Experimental Investigation of Air Distribution System of Larder Type Refrigerator

Authors: Funda Erdem Şahnali, Ş. Özgür Atayılmaz, Tolga N. Aynur

Abstract:

Almost all domestic refrigerators operate on the principle of the vapor compression refrigeration cycle, and removal of heat from the refrigerator cabinets is done via one of two methods: natural convection or forced convection. In this study, airflow and temperature distributions inside a 375 L no-frost larder cabinet, in which cooling is provided by forced convection, are evaluated both experimentally and numerically. Airflow rate, compressor capacity, and temperature distribution in the cooling chamber are known to be some of the most important factors affecting the cooling performance and energy consumption of a refrigerator. The objective of this study is to evaluate the original temperature distribution in the larder cabinet and to investigate system optimizations that could provide a more uniform temperature distribution throughout the refrigerator domain. Flow visualization and airflow velocity measurements inside the original refrigerator are performed via Stereoscopic Particle Image Velocimetry (SPIV). In addition, airflow and temperature distributions are investigated numerically with Ansys Fluent. In order to study the heat transfer inside the refrigerator, forced convection theory is applied to a closed rectangular cavity representing the refrigerating compartment. The cavity volume is represented with finite-volume elements and solved computationally with the appropriate momentum and energy equations (the Navier-Stokes equations). The 3D model is analyzed as transient, with the k-ε turbulence model and SIMPLE pressure-velocity coupling for the turbulent flow. The results obtained with the 3D numerical simulations are in quite good agreement with the experimental airflow measurements using the SPIV technique. After Computational Fluid Dynamics (CFD) analysis of the baseline case, the effects of three parameters, compressor capacity, fan rotational speed, and type of shelf (glass or wire), are studied on energy consumption, pull-down time, and temperature distribution in the cabinet. For each case, energy consumption is calculated based on experimental results. After the analysis, the main parameters affecting temperature distribution inside the cabinet and energy consumption are determined from the CFD simulations, and the simulation results are supplied to a Design of Experiments (DOE) study as input data for optimization, as sketched below. The best configuration, with minimum energy consumption and minimum temperature difference between the shelves inside the cabinet, is determined.
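
A minimal sketch (in Python, with invented factor levels) of how the three-parameter study could be laid out as a full-factorial DOE input table; the actual levels, and the CFD runs themselves, are the authors' and are not reproduced here:

```python
from itertools import product

# Illustrative full-factorial DOE table over the three parameters studied.
# The factor levels below are invented placeholders, not the study's values;
# each row would be run in the CFD model to obtain energy consumption,
# pull-down time, and shelf-to-shelf temperature difference.

compressor_capacities = [80, 100, 120]   # e.g., % of baseline capacity
fan_speeds = [1200, 1500, 1800]          # rpm
shelf_types = ["glass", "wire"]

doe_table = [
    {"capacity": c, "fan_rpm": f, "shelf": s}
    for c, f, s in product(compressor_capacities, fan_speeds, shelf_types)
]

print(len(doe_table))  # 3 x 3 x 2 = 18 simulation cases
```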

Keywords: air distribution, CFD, DOE, energy consumption, experimental, larder cabinet, refrigeration, uniform temperature

Procedia PDF Downloads 110
189 Hybrid Manufacturing System to Produce 3D Structures for Osteochondral Tissue Regeneration

Authors: Pedro G. Morouço

Abstract:

One of the utmost challenges in Tissue Engineering is the production of 3D constructs capable of mimicking the functional hierarchy of native tissues. This is especially true for osteochondral tissue, due to the complex mechanical functional unit formed at the junction of articular cartilage and bone. Thus, the aim of the present study was to develop a new additive manufacturing system coupling micro-extrusion with hydrogel printing. An integrated system was developed with two main features: (i) the printing of up to three distinct hydrogels, (ii) in coordination with the printing of a thermoplastic structural support. The hydrogel printing module was designed as a 'revolver-like' system, where hydrogel selection is made by a rotating mechanism and hydrogel deposition is controlled by pressurized air input. Specific components approved for medical use were incorporated into the material dispensing system (Nordson EFD Optimum® fluid dispensing system). The thermoplastic extrusion module enables control of the required extrusion temperature through electric resistances in the polymer reservoir and the extrusion system. After testing and upgrades, a hydrogel module with three syringes (3 cm³ capacity each), a pressure range of 0-2.5 bar, a rotational speed of 0-5 rpm, and needles from 200-800 µm was obtained. This module was successfully coupled to the extrusion system, which provides temperatures up to 300 ˚C, a pressure range of 0-12 bar, and nozzles from 200-500 µm. The applied motor provides a velocity range of 0-2000 mm/min. Although there are distinct printing requirements for hydrogels and polymers, the novel system could produce hybrid scaffolds combining the two modules. Morphological analysis showed high reliability (n=5) between the theoretical and obtained filament and pore sizes of the polymer (350 µm and 300 µm vs. 342±4 µm and 302±3 µm, p>0.05, respectively), and multi-material 3D constructs were successfully obtained; a sketch of this comparison follows. Human tissues present very distinct and complex structures regarding their mechanical properties, organization, composition, and dimensions. For osteochondral regenerative medicine, a multiphasic scaffold is required, as subchondral bone and the overlying cartilage must regenerate at the same time. Thus, a scaffold with three layers (bone, intermediate, and cartilage parts) can be a promising approach. The developed system may give a suitable solution for constructing such hybrid scaffolds with enhanced properties. The present novel system is a step forward for osteochondral tissue engineering due to its ability to generate layered, mechanically stable implants through the double-printing of hydrogels with thermoplastics.
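
The theoretical-versus-measured comparison can be sketched with a one-sample t-test in SciPy; the five measurements below are invented for illustration and are not the authors' data:

```python
import numpy as np
from scipy import stats

# Illustrative one-sample t-test comparing measured filament diameters
# against the theoretical target. The five values below are invented;
# they do not reproduce the study's measurements.

target_um = 350.0
measured_um = np.array([348.0, 351.5, 349.2, 352.1, 350.4])  # n = 5

t_stat, p_value = stats.ttest_1samp(measured_um, popmean=target_um)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# p > 0.05 -> no significant difference from the target diameter
```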

Keywords: 3D bioprinting, bone regeneration, cartilage regeneration, regenerative medicine, tissue engineering

Procedia PDF Downloads 167
188 Social Value of Travel Time Savings in Sub-Saharan Africa

Authors: Richard Sogah

Abstract:

The significance of transport infrastructure investments for economic growth and development has been central to the World Bank's strategy for poverty reduction. Among conventional surface transport infrastructures, road infrastructure is significant in facilitating the movement of human capital, goods, and services. When transport projects (i.e., roads, super-highways) are implemented, they come along with some negative social values (costs), such as increased noise and air pollution for local residents living near these facilities, displaced individuals, etc. However, these projects also facilitate better utilization of the existing capital stock and generate other observable benefits that can be easily quantified: the improvement or construction of roads creates employment, stimulates revenue generation (tolls), reduces vehicle operating costs and accidents, increases accessibility, expands trade, and improves safety. Aside from these benefits, travel time savings (TTSs), which are the major economic benefit of urban and inter-urban transport projects and therefore integral to their economic assessment, are often overlooked and omitted when estimating the benefits of transport projects, especially in developing countries. The absence of current and reliable domestic travel data, and the inability of models replicated from the developed world to capture the actual value of travel time savings given the large unemployment, underemployment, and other labor-induced distortions, have contributed to the failure to assign value to travel time savings when estimating the benefits of transport schemes in developing countries. This omission poses problems for investors and stakeholders, who may accept or dismiss projects based on schemes that favor reduced vehicle operating costs and other parameters rather than those that ease congestion, increase average speed, facilitate walking and handloading, and thus save travel time. Given the complex reality of estimating the value of travel time savings and the presence of widespread informal labour activities in Sub-Saharan Africa, we construct a 'nationally ranked distribution of time values' and estimate the value of travel time savings from the area beneath the distribution, as illustrated below. Compared with other approaches, our method captures both formal-sector workers and people who work outside the formal sector, for whom changes in time allocation occur in the informal economy and household production activities. The dataset for the estimations is sourced from the World Bank, the International Labour Organization, etc.
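
The "area beneath the distribution" step can be illustrated numerically in Python; all figures below are invented for illustration and do not come from the paper's dataset:

```python
import numpy as np

# Illustrative computation of an aggregate value of travel time savings
# from a "nationally ranked distribution of time values". All figures
# are invented.

# Hourly values of time (currency units per hour) for a small sample
# spanning formal workers, informal workers, and household production.
time_values = np.array([4.0, 1.2, 2.5, 0.8, 3.1, 0.5, 1.9, 2.2])

ranked = np.sort(time_values)[::-1]        # rank from highest to lowest

# The area beneath the ranked distribution (a discrete sum here) gives
# the total value of one hour saved across the whole sample; dividing by
# the sample size gives the average value of time, covering people whose
# time reallocations occur outside the formal wage economy.
area = ranked.sum()
average_value_of_time = area / ranked.size

hours_saved_per_person = 0.25              # hypothetical project saving
aggregate_benefit = average_value_of_time * hours_saved_per_person * ranked.size
print(f"average value of time: {average_value_of_time:.2f} per hour")
print(f"aggregate benefit for this sample: {aggregate_benefit:.2f}")
```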

Keywords: road infrastructure, transport projects, travel time savings, congestion, Sub-Sahara Africa

Procedia PDF Downloads 110
187 Ocean Planner: A Web-Based Decision Aid to Design Measures to Best Mitigate Underwater Noise

Authors: Thomas Folegot, Arnaud Levaufre, Léna Bourven, Nicolas Kermagoret, Alexis Caillard, Roger Gallou

Abstract:

Concern over the negative impacts of anthropogenic noise on the ocean's ecosystems has increased over recent decades. This concern has led to a similarly increased willingness to regulate noise-generating activities, of which shipping is one of the most significant. Dealing with ship noise requires knowledge not only of the noise from individual ships, but also of how ship noise is distributed in time and space within the habitats of concern. Marine mammals, as well as fish, sea turtles, larvae, and invertebrates, largely depend on sound to hunt, feed, avoid predators, socialize and communicate during reproduction, and defend a territory. In the marine environment, sight is only useful up to a few tens of meters, whereas sound can propagate over hundreds or even thousands of kilometers. Directive 2008/56/EC of the European Parliament and of the Council of June 17, 2008, called the Marine Strategy Framework Directive (MSFD), requires the Member States of the European Union to take the necessary measures to reduce the impacts of maritime activities in order to achieve and maintain a good environmental status of the marine environment. Ocean-Planner is a web-based platform that provides regulators, managers of protected or sensitive areas, and similar stakeholders with a decision-support tool that enables them to anticipate and quantify the effectiveness of management measures in terms of reducing or modifying the distribution of underwater noise, in response to Descriptor 11 of the MSFD and to the Marine Spatial Planning Directive. Based on the operational sound modelling tool Quonops Online Service, Ocean-Planner allows the user, via an intuitive geographical interface, to define management measures at local (Marine Protected Area, Natura 2000 site, harbor, etc.) or global (Particularly Sensitive Sea Area) scales, seasonal (regulation over a period of time) or permanent, partial (focused on certain maritime activities) or complete (all maritime activities), etc. Speed limits, exclusion areas, traffic separation schemes (TSS), and vessel sound level limitation are among the measures supported by the tool; a hypothetical encoding of one such measure is sketched below. Ocean-Planner helps decide on the most effective measure to apply to maintain or restore the biodiversity and functioning of the ecosystems of the coastal seabed, maintain a good state of conservation of sensitive areas, and maintain or restore the populations of marine species.
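
Purely as an illustration of the kinds of parameters listed above, a management measure might be encoded as follows; every field name here is invented and does not reflect Ocean-Planner's actual interface:

```python
# A hypothetical encoding of a noise-management measure, illustrating the
# parameters the abstract lists (speed limits, areas, seasonal scope,
# targeted activities). Field names are invented, not Ocean-Planner's API.

speed_limit_measure = {
    "type": "speed_limit",
    "area": "natura_2000_site_example",            # local scale; could be a PSSA
    "limit_knots": 10,
    "season": {"start": "05-01", "end": "09-30"},  # seasonal, not permanent
    "activities": ["commercial_shipping"],         # partial, not all activities
}
```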

Keywords: underwater noise, marine biodiversity, marine spatial planning, mitigation measures, prediction

Procedia PDF Downloads 123
186 Chemical Technology Approach for Obtaining Carbon Structures Containing Reinforced Ceramic Materials Based on Alumina

Authors: T. Kuchukhidze, N. Jalagonia, T. Archuadze, G. Bokuchava

Abstract:

The growing scientific-technological progress of modern civilization makes it urgent to produce construction materials which can work successfully in conditions of high temperature, radiation, pressure, speed, and chemically aggressive environments. Very few types of materials can withstand such extreme conditions, and among them ceramic materials are in first place. Corundum ceramics is the most useful material for the creation of structural nodes and products of various purposes, owing to its low cost, easily accessible raw materials, and good combination of physical-chemical properties. However, ceramic composite materials have one disadvantage: they are less plastic and have lower toughness. In order to increase plasticity, the ceramics are reinforced with various dopants that reduce the growth of cracks. It has been shown that adding even a small amount of carbon fibers and carbon nanotubes (CNT) as reinforcing material significantly improves the mechanical properties of the products while keeping the advantages of alumina ceramics. Graphene in a composite material acts in the same way as inorganic dopants (MgO, ZrO2, SiC, and others) and performs the role of an aluminum oxide inhibitor, as it creates a shell that makes it possible to reduce the sintering temperature; at the same time, it acts as a damper, because shock waves scatter on the carbon structures. Application of different structural modifications of carbon (graphene, nanotubes, and others) as reinforcing material makes it possible to create multi-purpose, highly demanded composite materials based on alumina ceramics. The present work offers a simplified technology for obtaining aluminum oxide ceramics reinforced with carbon nanostructures, in which chemical modification with doping carbon nanostructures is implemented during the synthesis of the final powdery composite, alumina. In the charge, the doping carbon nanostructures are connected to the matrix substance by C-O-Al bonds, which provide their homogeneous spatial distribution. In ceramics obtained by consolidating such powders, the carbon fragments are equally distributed throughout the entire aluminum oxide matrix, which increases bending strength and crack resistance. The proposed way of preparing the charge simplifies the technological process and decreases energy consumption and synthesis duration, and therefore requires less financial expense. In the implementation of this work, modern instrumental methods were used: electron and optical microscopy, X-ray structural and granulometric analysis, and UV, IR, and Raman spectroscopy.

Keywords: ceramic materials, α-Al₂O₃, carbon nanostructures, composites, characterization, hot-pressing

Procedia PDF Downloads 121
185 Electret: A Solution of Partial Discharge in High Voltage Applications

Authors: Farhina Haque, Chanyeop Park

Abstract:

The high efficiency, high field, and high power density provided by wide bandgap (WBG) semiconductors and advanced power electronic converter (PEC) topologies have enabled the dynamic control of power in medium- to high-voltage systems. Although WBG semiconductors outperform conventional silicon-based devices in terms of voltage rating, switching speed, and efficiency, the increased voltage handling, high dv/dt, and compact device packaging increase local electric fields, which are the main causes of partial discharge (PD) in advanced medium- and high-voltage applications. PD, which occurs actively in voids, triple points, and air gaps, is an inevitable dielectric challenge that causes insulation and device aging. The aging process accelerates over time and eventually leads to the complete failure of the application. Hence, it is critical to mitigate PD. Sharp edges, air gaps, triple points, and bubbles are common defects in any medium- to high-voltage device. The defects are created during the manufacturing processes of the devices and are prone to high-electric-field-induced PD due to the low permittivity and low breakdown strength of the gaseous medium filling the defects. A contemporary approach to mitigating PD by neutralizing electric fields in high-power-density applications is introduced in this study. To neutralize the locally enhanced electric fields that occur around triple points, air gaps, sharp edges, and bubbles, electrets are developed and incorporated into high-voltage applications. Electrets are electric-field-emitting dielectric materials that carry embedded electrical charge on the surface and in the bulk. In this study, electrets are fabricated by electrically charging polyvinylidene difluoride (PVDF) films using the widely used triode corona discharge method. To investigate the PD mitigation performance of the fabricated electret films, a series of PD experiments is conducted on both charged and uncharged PVDF films under square voltage stimuli that represent PWM waveforms. In addition to single-layer electrets, multiple layers of electrets are also tested to mitigate PD caused by higher system voltages. The electret-based approach shows great promise in mitigating PD by neutralizing the local electric field. The results of the PD measurements suggest that an ultimate solution to this decades-long dielectric challenge would be possible with further developments in the fabrication process of electrets.

Keywords: electrets, high power density, partial discharge, triode corona discharge

Procedia PDF Downloads 203
184 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications

Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini

Abstract:

This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular the errors, uncertainties, and constraints imposed by the mission, the spacecraft, and onboard processing capabilities. Space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. Constraints are classified into two categories: physical and geometric. Real-time implementation capability is then discussed with respect to the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key elements of MPC design are discussed: the prediction model, the constraint formulation, and the objective cost function. The prediction models can be linear time-invariant or time-varying depending on the geometry of the orbit, whether circular or elliptic. The constraints can be given as linear inequalities for inputs or outputs, which can be written in the same form. Moreover, recent convexification techniques for non-convex geometrical constraints (e.g., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly; a minimal formulation is sketched below. Third, because MPC implementation relies on solving constrained optimization problems in real time, computational aspects are also examined. In particular, high-speed implementation capabilities and HIL challenges are presented for representative space avionics. This covers an analysis of future space processors as well as the requirements of sensors and actuators on the HIL experiment outputs. The HIL tests cover kinematic and dynamic cases, using robotic arms and floating robots, respectively. Eventually, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with the conjecture that the MPC paradigm is a promising framework at the crossroads of space applications and could be further advanced based on the challenges and unaddressed gaps mentioned throughout the paper.
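
A hedged, minimal sketch of the generic constrained MPC formulation summarized above: a linear time-invariant double-integrator prediction model, box input and state constraints, and a quadratic cost, solved here with the open-source cvxpy package (our choice for illustration, not the authors' toolchain; all weights and bounds are invented):

```python
import cvxpy as cp
import numpy as np

# Minimal constrained-MPC sketch for a 1-axis double integrator
# (position/velocity), illustrating the three design elements named in
# the abstract: prediction model, constraints, and quadratic cost.
# Dynamics, weights, and bounds are illustrative, not mission values.

dt, N = 1.0, 20                              # step [s], horizon length
A = np.array([[1.0, dt], [0.0, 1.0]])        # LTI prediction model
B = np.array([[0.5 * dt**2], [dt]])

x = cp.Variable((2, N + 1))                  # state trajectory
u = cp.Variable((1, N))                      # thrust acceleration

x0 = np.array([10.0, 0.0])                   # 10 m offset, at rest
Q = np.diag([1.0, 0.1])                      # state weights
R = np.array([[10.0]])                       # input weight

cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constraints += [
        x[:, k + 1] == A @ x[:, k] + B @ u[:, k],   # dynamics
        cp.abs(u[:, k]) <= 0.1,                     # actuator limit
        cp.abs(x[1, k + 1]) <= 0.5,                 # velocity limit
    ]

problem = cp.Problem(cp.Minimize(cost), constraints)
problem.solve()
print(f"optimal cost: {problem.value:.2f}, first input: {u.value[0, 0]:.4f}")
```

In receding-horizon operation, only the first input of the optimized sequence would be applied before re-solving the problem at the next sampling instant.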

Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy

Procedia PDF Downloads 111
183 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to solve that issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against the measured NOx, which limits the predictions of purely empirical models to the region where they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed based on steady-state data collected over the entire operating region of the engine and on a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct-Injection (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in developing the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods; a sketch of this step is given below. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases is tested for different engine configurations over a large span of speed and load points. Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. The various advantages, such as high accuracy and robustness at different operating conditions, low computational time, and the lower number of data points required for calibration, establish a platform where the model-based approach can be used for engine calibration and development. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), NO2/NOx ratio, etc.
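
The ensemble-learning step can be sketched with scikit-learn: a gradient-boosted regressor trained on physically meaningful combustion features of the kind the abstract names (burned-zone temperature, oxygen concentration, trapped fuel mass). All data below is synthetic; this is an illustrative sketch, not the authors' model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Illustrative ensemble-learning step for a semi-empirical NOx model:
# physical in-cylinder features -> NOx. All data is synthetic; a real
# workflow would use DI-Pulse combustion outputs and measured NOx.

rng = np.random.default_rng(42)
n = 500
burned_zone_temp = rng.uniform(1800, 2600, n)     # K
o2_concentration = rng.uniform(0.05, 0.21, n)     # mole fraction
trapped_fuel_mass = rng.uniform(10, 60, n)        # mg/cycle

# Synthetic thermal-NOx-like target: strongly temperature driven,
# with small multiplicative measurement noise.
nox = (np.exp(burned_zone_temp / 450.0) * o2_concentration
       * trapped_fuel_mass * 1e-3) * rng.normal(1.0, 0.05, n)

X = np.column_stack([burned_zone_temp, o2_concentration, trapped_fuel_mass])
X_train, X_test, y_train, y_test = train_test_split(X, nox, random_state=0)

model = GradientBoostingRegressor(n_estimators=300, max_depth=3,
                                  learning_rate=0.05, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out points: {model.score(X_test, y_test):.3f}")
```

In practice, several such individual learners and their ensembles would be compared, and the held-out validation would span the engine configurations and operating sweeps described above.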

Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical

Procedia PDF Downloads 114