Search results for: traveling speed
216 Compression-Extrusion Test to Assess Texture of Thickened Liquids for Dysphagia
Authors: Jesus Salmeron, Carmen De Vega, Maria Soledad Vicente, Mireia Olabarria, Olaia Martinez
Abstract:
Dysphagia, or difficulty in swallowing, mostly affects elderly people: 56-78% of the institutionalized and 44% of the hospitalized. Thickening liquid food is a necessary measure in this situation because it reduces the risk of penetration-aspiration. Until now, and as proposed by the American Dietetic Association in 2002, possible consistencies have been categorized into three groups according to their viscosity: nectar (50-350 mPa•s), honey (350-1750 mPa•s) and pudding (>1750 mPa•s). The adequate viscosity level should be identified for every patient, according to her/his impairment. Nevertheless, a recent systematic review on dysphagia diets indicated that there is no evidence of any clinically relevant transition between the three levels proposed. It also stated that other physical properties of the bolus (slipperiness, density or cohesiveness, among others) could influence swallowing in affected patients and could contribute to the amount of remaining residue. Texture parameters therefore need to be evaluated as a possible alternative to viscosity. The aim of this study was to evaluate the instrumental extrusion-compression test as a possible tool to characterize changes over time in water thickened with various products at the three theoretical consistencies. Six commercial thickeners were used: NM® (NM), Multi-thick® (M), Nutilis Powder® (Nut), Resource® (R), Thick&Easy® (TE) and Vegenat® (V), all with a modified starch base. Only one of them, Nut, also contained 6.4% gum (guar, tara and xanthan). They were prepared as indicated in the instructions of each product, dispensing the corresponding amount for nectar, honey and pudding consistencies into 300 mL of tap water at 18ºC-20ºC. The mixture was stirred for about 30 s. Once it was homogeneously dispersed, it was dispensed into 30 mL plastic glasses, always to the same height. Each of these glasses was used as a measuring point. Viscosity was measured using a rotational viscometer (ST-2001, Selecta, Barcelona). The extrusion-compression test was performed using a TA.XT2i texture analyzer (Stable Micro Systems, UK) with a 25 mm diameter cylindrical probe (SMSP/25). Penetration distance was set at 10 mm and speed at 3 mm/s. Measurements were made at 1, 5, 10, 20, 30, 40, 50 and 60 minutes from the moment the samples were mixed. From the force (g)–time (s) curves obtained in the instrumental assays, the maximum force peak (F) was chosen as the reference parameter. Viscosity (mPa•s) and F (g) were highly correlated and developed similarly over time, following time-dependent quadratic models. It was possible to predict viscosity using F as an independent variable, as they were linearly correlated. In conclusion, the compression-extrusion test could be an alternative and useful tool to assess the physical characteristics of thickened liquids.
Keywords: compression-extrusion test, dysphagia, texture analyzer, thickener
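The reported relationships lend themselves to a simple regression check. Below is a minimal sketch, with purely hypothetical numbers standing in for the study's measurements, of the two fits described: a linear prediction of viscosity from the maximum force peak F, and a quadratic model of F over time.

```python
# Minimal sketch (illustrative only): predicting viscosity from the maximum
# extrusion force F via a linear fit, as the abstract reports a linear
# correlation. Data values are hypothetical, not from the study.
import numpy as np
from scipy import stats

# Hypothetical paired measurements: max force peak F (g) and viscosity (mPa*s)
F = np.array([35, 80, 160, 310, 520, 900])
viscosity = np.array([120, 340, 700, 1400, 2300, 4100])

fit = stats.linregress(F, viscosity)
print(f"viscosity ≈ {fit.slope:.1f} * F + {fit.intercept:.1f}  (r = {fit.rvalue:.3f})")

# Time-dependent quadratic model for F, as described in the abstract
t = np.array([1, 5, 10, 20, 30, 40, 50, 60])            # minutes after mixing
F_t = np.array([40, 85, 150, 260, 350, 410, 450, 470])  # hypothetical F values
a, b, c = np.polyfit(t, F_t, 2)                         # F(t) = a*t^2 + b*t + c
print(f"F(t) ≈ {a:.3f}t² + {b:.2f}t + {c:.1f}")
```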
215 Exploring the Correlation between Population Distribution and Urban Heat Island under Urban Data: Taking Shenzhen Urban Heat Island as an Example
Authors: Wang Yang
Abstract:
Shenzhen is a modern city shaped by China's reform and opening-up policy, and the development of its urban morphology has rested on the administration of the Chinese government. The city's planning paradigm is primarily affected by spatial structure and human behavior: the urban agglomeration is subjectively divided into several groups and centers, and under this approach the intrinsic laws of city development tend to be neglected. With the continuous development of the internet, big data technology has been introduced in China, and data mining and data analysis have become important tools in municipal research. Data mining has been utilized to improve data cleaning when receiving business data, traffic data and population data. Before data mining, government data were collected by traditional means and then analyzed in city-relationship research, which delayed the timeliness of urban development studies; this is especially limiting for the contemporary city, where internet-based data are updated very quickly. The city's points of interest (POI) obtained by mining serve as a data source informing city design, while satellite remote sensing is used as a reference object; with the analysis conducted in both directions, the administrative paradigm of government is broken and urban research is restored. The use of data mining in urban analysis is therefore very important. The satellite remote sensing data of Shenzhen from July 2018, acquired by the MODIS sensor, were used to perform land surface temperature inversion and to analyze the heat island distribution of Shenzhen. This article acquired and classified data on Shenzhen using web-crawler technology. Data on the Shenzhen heat island and points of interest were simulated and analyzed on a GIS platform to discover the main features of the distribution of functional areas. Shenzhen extends along an east-west axis, and the city's main streets follow the direction of city development; accordingly, the functional areas of the city are also distributed in the east-west direction. The urban heat island can be mapped according to the functional urban areas, and regional POIs show a corresponding pattern. The research results clearly show that the distribution of the urban heat island and the distribution of urban POIs are in one-to-one correspondence. The urban heat island is primarily influenced by the properties of the underlying surface, setting aside the impact of the urban climate. With urban POIs as the object of analysis, the distribution of municipal POIs and population aggregation are closely connected, so that the distribution of the population corresponds with the distribution of the urban heat island.
Keywords: POI, satellite remote sensing, population distribution, urban heat island thermal map
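The core claim, that POI density and land surface temperature co-vary across the city grid, can be illustrated with a simple spatial correlation. A minimal sketch follows, using synthetic grids in place of the MODIS inversion and the crawled POI data.

```python
# Minimal sketch (hypothetical data): correlating a gridded land-surface
# temperature (LST) map with POI density over the same grid, in the spirit
# of the POI / heat-island comparison described above.
import numpy as np

rng = np.random.default_rng(0)
lst = rng.normal(30, 3, size=(50, 50))                            # hypothetical LST grid (°C)
poi_density = 0.8 * (lst - 30) + rng.normal(0, 1, size=(50, 50))  # POIs per cell

r = np.corrcoef(lst.ravel(), poi_density.ravel())[0, 1]
print(f"Pearson correlation between LST and POI density: r = {r:.2f}")
```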
214 The Development of Traffic Devices Using Natural Rubber in Thailand
Authors: Weeradej Cheewapattananuwong, Keeree Srivichian, Godchamon Somchai, Wasin Phusanong, Nontawat Yoddamnern
Abstract:
Natural rubber used for traffic devices in Thailand has been developed and researched for several years. Compared with 60% Dry Rubber Content (DRC60%) rubber, the quality of Ribbed Smoked Sheet (RSS) is better; however, the cost of admixtures, especially CaCO₃ and sulphur, is higher than the cost of the RSS itself. In this research, flexible guideposts and Rubber Fender Barriers (RFB) are taken into consideration. For the flexible guideposts, both RSS and DRC60% are used as materials, but for RFB only RSS is used, owing to the controlled performance tests. The objective of the flexible guideposts and RFB is to decrease the number of accidents, fatality rates, and serious injuries. Both devices function to protect road users and vehicles and to absorb impact forces from vehicles so as to reduce the severity of road accidents; this provides mitigation measures that reduce motorists' injuries from severe to moderate. The solution is to find the best practice for traffic devices using natural rubber under engineering concepts. In addition, the performance of the materials, such as tensile strength and durability, is characterized through the modulus of elasticity and related properties. In the laboratory, crash simulations, finite element analysis of the materials, LRFD, and concrete technology methods are taken into account. After calculation, trial compositions of the materials are mixed and tested in the laboratory. Tensile, compressive, and weathering (durability) tests are carried out in accordance with ASTM standards. Furthermore, a cycle-repetition test of the flexible guideposts is also considered. The final step is to fabricate all materials and build a real test section in the field. The RFB test program comprises 13 crash tests: 7 pickup truck tests and 6 motorcycle tests. Full-scale vehicular crash testing is taking place for the first time in Thailand, applying trial-and-error methods; for example, the road crash test under the NCHRP TL-3 standard (100 kph) was changed to MASH 2016, since MASH 2016 is better than NCHRP in terms of the speed, types, and weight of vehicles and the crash angle. Within the MASH procedures, Test Level 6 (TL-6) is selected, composed of a 2,270 kg pickup truck at 100 kph and a 25-degree crash angle. The final real crash test will then be performed, and the whole system will be evaluated again in Korea. The researchers hope that the number of road accidents will decrease and that Thailand will no longer rank in the top ten worldwide for road accidents.
Keywords: LRFD, load and resistance factor design; ASTM, American Society for Testing and Materials; NCHRP, National Cooperative Highway Research Program; MASH, Manual for Assessing Safety Hardware
213 Flow-Induced Vibration Marine Current Energy Harvesting Using a Symmetrical Balanced Pair of Pivoted Cylinders
Authors: Brad Stappenbelt
Abstract:
The phenomenon of vortex-induced vibration (VIV) for elastically restrained cylindrical structures in cross-flows is relatively well investigated. The utility of this mechanism in harvesting energy from marine current and tidal flows is, however, arguably still in its infancy. With relatively few moving components, a flow-induced vibration-based energy conversion device augurs low complexity compared to the commonly employed turbine design. Despite the interest in this concept, a practical device has yet to emerge. For optimal system performance, it is desirable to design for a very low mass or mass-moment-of-inertia ratio. The device operating range, in particular, is maximized below the vortex-induced vibration critical point, where an infinite resonant response region is realized. An unfortunate consequence of this requirement is large buoyancy forces that need to be mitigated by gravity-based, suction-caisson or anchor mooring systems. The focus of this paper is the testing of a novel VIV marine current energy harvesting configuration that utilizes a symmetrical and balanced pair of horizontal pivoted cylinders. The results of several years of experimental investigation, utilizing the University of Wollongong fluid mechanics laboratory towing tank, are analyzed and presented. A reduced velocity test range of 0 to 60 was covered across a large array of device configurations. In particular, power take-off damping ratios spanning from 0.044 to critical damping were examined in order to determine the optimal conditions and hence the maximum device energy conversion efficiency. The experiments revealed acceptable energy conversion efficiencies of around 16% and desirably low flow-speed operating ranges when compared to traditional turbine technology. The potentially out-of-phase spanwise VIV cells on each arm of the device synchronized naturally, as no decrease in amplitude response was observed and the energy conversion efficiencies were comparable to the single-cylinder arrangement. In addition to the spatial design benefits related to the horizontal device orientation, the main advantage demonstrated by the present symmetrical horizontal configuration is that it allows large-velocity-range resonant response conditions without the excessive buoyancy. The novel configuration proposed shows clear promise in overcoming many of the practical implementation issues related to flow-induced vibration marine current energy harvesting.
Keywords: flow-induced vibration, vortex-induced vibration, energy harvesting, tidal energy
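For context, an energy conversion efficiency like the quoted ~16% is typically defined against the kinetic power flux through the cylinder's projected frontal area. A minimal sketch with hypothetical numbers (not the study's data):

```python
# Minimal sketch (hypothetical numbers): estimating the energy-conversion
# efficiency of a VIV harvester as extracted power over the kinetic power
# flux through the cylinder's projected area, a common definition in VIV
# energy-harvesting studies.
RHO = 1000.0       # water density (kg/m^3)
U = 0.8            # free-stream current speed (m/s)
D = 0.1            # cylinder diameter (m)
L = 1.0            # cylinder span (m)
P_harvested = 4.0  # power measured at the power take-off (W), hypothetical

P_fluid = 0.5 * RHO * U**3 * D * L   # kinetic power passing the frontal area
efficiency = P_harvested / P_fluid
print(f"Conversion efficiency: {efficiency:.1%}")   # ~15.6% for these numbers
```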
212 Analyzing Bridge Response to Wind Loads and Optimizing Design for Wind Resistance and Stability
Authors: Abdul Haq
Abstract:
The goal of this research is to better understand how wind loads affect bridges and to develop strategies for designing bridges that are more stable and resistant to wind. The effect of wind on bridges is critical to their safety and functionality, especially in areas prone to high wind speeds or violent wind conditions. The study examines the aerodynamic forces and vibrations caused by wind and how they affect bridge construction. The research method first builds an understanding of the underlying principles governing wind flow near bridges. Computational fluid dynamics (CFD) simulations are used to model and forecast the aerodynamic behaviour of bridges under different wind conditions. These models incorporate several factors, such as wind directionality, wind speed, turbulence intensity, and the influence of nearby structures or topography. The results provide significant new insights into the loads and pressures that wind places on different bridge elements, such as decks, pylons, and connections. Following the determination of the wind loads, the structural response of the bridge is assessed. Finite Element Analysis (FEA) is used to model the bridge's component parts and simulate their dynamic behavior under wind-induced forces. This work helps identify which areas are at risk of excessive stresses, vibrations, or oscillations due to wind excitation. Because a bridge has inherent modes and frequencies, the study considers both static and dynamic responses. Various strategies are examined to optimize bridge designs to withstand wind: altering the bridge's geometry, adding aerodynamic components, adding dampers or tuned mass dampers to lessen vibrations, and increasing structural rigidity. Through an analysis of several design modifications and their effectiveness, the study aims to offer guidelines and recommendations for wind-resistant bridge design. The numerical simulations and analyses are complemented by experimental studies: scaled bridge models are tested in a wind tunnel to validate the computational models and assess the practicality of the proposed design strategies. These investigations provide valuable information on wind-induced forces, pressures, and flow patterns, helping to improve the numerical models and their prediction precision. Using a combination of numerical models, physical testing, and long-term performance evaluation, the project aims to offer practical insights and recommendations for building wind-resistant bridges that are secure, long-lasting, and comfortable for users.
Keywords: wind effects, aerodynamic forces, computational fluid dynamics, finite element analysis
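Two quantities usually anchor such an assessment: the quasi-static drag load on the deck and the vortex-shedding frequency relative to the structure's natural frequency. A minimal sketch with assumed values (the coefficients and frequencies are illustrative, not from the study):

```python
# Minimal sketch (hypothetical values): the quasi-static drag load on a bridge
# deck and the vortex-shedding frequency, the two quantities a wind-response
# check typically starts from.
RHO_AIR = 1.225   # air density (kg/m^3)
U = 35.0          # design wind speed (m/s)
CD = 1.1          # drag coefficient of the deck section (assumed)
H = 3.0           # deck depth facing the wind (m)
ST = 0.11         # Strouhal number for a bluff deck section (assumed)
F_NAT = 0.9       # first natural frequency of the deck (Hz), hypothetical

drag_per_metre = 0.5 * RHO_AIR * U**2 * CD * H           # N/m along the span
f_shedding = ST * U / H                                  # Hz
print(f"Drag load: {drag_per_metre / 1000:.1f} kN/m")
print(f"Shedding {f_shedding:.2f} Hz vs natural {F_NAT:.2f} Hz "
      f"-> resonance risk: {abs(f_shedding - F_NAT) < 0.15 * F_NAT}")
```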
211 Tribological Behaviour of the Degradation Process of Additive Manufactured Stainless Steel 316L
Authors: Yunhan Zhang, Xiaopeng Li, Zhongxiao Peng
Abstract:
Additive manufacturing (AM) possesses several key characteristics, including high design freedom, an energy-efficient manufacturing process, reduced material waste, high resolution of finished products, and excellent performance of finished products. These advantages have garnered widespread attention and fueled rapid development in recent decades. AM has significantly broadened the spectrum of available materials in the manufacturing industry and is gradually replacing some traditionally manufactured parts. Like components produced via traditional methods, products manufactured through AM are susceptible to degradation caused by wear during their service life. Given the prevalence of 316L stainless steel (SS) parts and the limited research on the tribological behavior of 316L SS samples or products fabricated using AM technology, this study aims to investigate the degradation process and wear mechanisms of 316L SS disks fabricated using AM technology. The wear mechanisms and tribological performance of these AM-manufactured samples are compared with those of commercial 316L SS samples made using conventional methods, and methods to enhance the tribological performance of additively manufactured SS samples are explored. Four disk samples with a diameter of 75 mm and a thickness of 10 mm were prepared. Two of them (Group A) were machined from a purchased SS bar by milling. The other two disks (Group B), with the same dimensions, were made of gas-atomized 316L stainless steel powder (size range: 15-45 µm) purchased from Carpenter Additive and produced using Laser Powder Bed Fusion (LPBF). Pin-on-disk tests are conducted on these disks, which have similar surface roughness and hardness levels. Multiple tests are carried out under various operating conditions, including varying loads and/or speeds, and the friction coefficients are measured during these tests. In addition, the evolution of the surface degradation process is monitored by creating moulds of the wear tracks and quantitatively analyzing the surface morphologies of the mould images. This analysis involves quantifying the depth and width of the wear tracks and analyzing the wear debris generated during the wear process. The wear mechanisms and wear performance of the two groups of SS samples are compared, and the effects of load and speed on the friction coefficient and wear rate are investigated. The ultimate goal is to gain a better understanding of the surface degradation of additively manufactured SS samples. This knowledge is crucial for enhancing their anti-wear performance and extending their service life.
Keywords: degradation process, additive manufacturing, stainless steel, surface features
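From track width and depth measurements like these, a specific wear rate is commonly derived as k = V/(F·s). A minimal sketch with hypothetical test values (the groove geometry assumption is labelled in the comments):

```python
# Minimal sketch (hypothetical measurements): the specific wear rate commonly
# reported for pin-on-disk tests, k = V / (F * s), with the worn volume
# estimated from the measured track width and depth.
import math

F = 20.0             # normal load (N), hypothetical
v = 0.10             # sliding speed (m/s), hypothetical
t = 3600.0           # test duration (s)
track_radius = 0.02  # wear-track radius on the disk (m)
width = 0.8e-3       # measured track width (m)
depth = 12e-6        # measured track depth (m)

s = v * t                                    # total sliding distance (m)
area = (2.0 / 3.0) * width * depth           # assumed parabolic groove cross-section (m^2)
volume = area * 2 * math.pi * track_radius   # worn volume around the track (m^3)
k = volume / (F * s)                         # specific wear rate (m^3 / (N*m))
print(f"Specific wear rate: {k:.2e} m^3/(N*m)")
```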
210 Analysis of Thermal Comfort in Educational Buildings Using Computer Simulation: A Case Study in Federal University of Parana, Brazil
Authors: Ana Julia C. Kfouri
Abstract:
A prerequisite of any building design is to provide security to the users, taking the climate and its physical and physical-geometrical variables into account. It is also important to highlight the relevance of the right material elements, which mediate between the occupant and the environment and must provide improved thermal comfort conditions with low environmental impact. Furthermore, technology is constantly advancing, as are computational simulations for building projects, and these should be used to develop sustainable buildings and to provide a higher quality of life for their users. In relation to comfort, the more satisfied the building users are, the better their intellectual performance will be. On this basis, the study of thermal comfort in educational buildings is particularly relevant, since the thermal characteristics of these environments are of vital importance to all users. Moreover, educational buildings are large constructions, and when they are poorly planned and executed they have negative impacts on the surrounding environment, as well as on user satisfaction, throughout their whole life cycle. In this line of thought, a detailed case study of the thermal comfort situation at the Federal University of Parana (UFPR) was carried out to evaluate university classroom conditions. The main goal of the study is to perform a thermal analysis of three classrooms at UFPR, in order to address the subjective and physical variables that influence thermal comfort inside the classroom. For the assessment of the subjective components, a questionnaire was applied in order to evaluate the perception of the local thermal conditions. Regarding the physical variables, on-site measurements were carried out, consisting of measurements of air temperature and air humidity, both inside and outside the building, as well as meteorological variables, such as wind speed and direction, solar radiation and rainfall, collected from a weather station. Then, a computer simulation was conducted using the EnergyPlus software to reproduce the air temperature and air humidity values of the three classrooms studied. The EnergyPlus outputs were analyzed and compared with the on-site measurement results so that conclusions could be drawn about the local thermal conditions. The methodological approach of the study allowed a distinct perspective on an educational building, leading to a better understanding of classroom thermal performance and the reasons behind it. Finally, the study prompts a reflection on the importance of thermal comfort for educational buildings and proposes thermal alternatives for future projects, as well as a discussion of the significant impact of using computer simulation in engineering solutions in order to improve the thermal performance of UFPR's buildings.
Keywords: computer simulation, educational buildings, EnergyPlus, humidity, temperature, thermal comfort
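Validating the EnergyPlus model against the on-site data boils down to comparing the two series. A minimal sketch with hypothetical temperature values:

```python
# Minimal sketch (hypothetical series): comparing simulated and measured
# indoor air temperature with RMSE and bias, a typical way to check a
# building-simulation model against on-site measurements.
import numpy as np

measured = np.array([22.1, 23.0, 24.2, 25.1, 24.8, 23.9])   # °C, hypothetical
simulated = np.array([21.8, 22.7, 24.6, 25.5, 24.4, 23.5])  # °C, hypothetical

rmse = np.sqrt(np.mean((simulated - measured) ** 2))
bias = np.mean(simulated - measured)
print(f"RMSE = {rmse:.2f} °C, mean bias = {bias:+.2f} °C")
```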
209 A Comparison of Tsunami Impact to Sydney Harbour, Australia at Different Tidal Stages
Authors: Olivia A. Wilson, Hannah E. Power, Murray Kendall
Abstract:
Sydney Harbour is an iconic location with a dense population and low-lying development. On the east coast of Australia, facing the Pacific Ocean, it is exposed to several tsunamigenic trenches. This paper presents a component of the most detailed assessment to date of the potential for earthquake-generated tsunami impact on Sydney Harbour. Models in this study use dynamic tides to account for tide-tsunami interaction. Sydney Harbour's tidal range is 1.5 m, and the spring tides from January 2015 used in the modelling are close to the full tidal range. The tsunami wave trains modelled include hypothetical tsunami generated by earthquakes of magnitude 7.5, 8.0, 8.5, and 9.0 MW from the Puysegur and New Hebrides trenches, as well as representations of the historical 1960 Chilean and 2011 Tohoku events. All wave trains are modelled with the peak wave coinciding with both a low tide and a high tide. A single wave train, representing a 9.0 MW earthquake at the Puysegur trench, is modelled with peak waves coinciding with every hour across a 12-hour tidal phase. Using the hydrodynamic model ANUGA, results are compared according to the impact parameters of inundation area, depth variation and current speed. Results show that both maximum inundation area and depth variation are tide dependent. Maximum inundation area increases when coincident with a higher tide; however, hazardous inundation is only observed for the larger waves modelled: NH90high and P90high. The maximum and minimum depths are deeper on higher tides and shallower on lower tides. The difference between maximum and minimum depths varies across different tidal phases, although the differences are slight. Maximum current speeds are shown to be a significant hazard for Sydney Harbour; however, they do not show consistent patterns according to tide-tsunami phasing. The maximum current speed hazard is greater in specific locations such as Spit Bridge, a narrow channel with extensive marine infrastructure. The results presented for Sydney Harbour are novel, and the conclusions are consistent with previous modelling efforts in the greater area. It is shown that tide must be a consideration for both tsunami modelling and emergency management planning. Modelling with peak tsunami waves coinciding with a high tide would be a conservative approach; however, it must be considered that maximum current speeds may be higher on other tides.
Keywords: emergency management, Sydney, tide-tsunami interaction, tsunami impact
208 In Vivo Evaluation of Exposure to Electromagnetic Fields at 27 GHz (5G) of Danio Rerio: A Preliminary Study
Authors: Elena Maria Scalisi, Roberta Pecoraro, Martina Contino, Sara Ignoto, Carmelo Iaria, Santi Concetto Pavone, Gino Sorbello, Loreto Di Donato, Maria Violetta Brundo
Abstract:
5G technology is evolving to satisfy a variety of service requirements that allow high data-rate connections (1 Gbps) and lower latency times than current networks (<1 ms). In order to support high data transmission speeds and high traffic for eMBB (enhanced mobile broadband) use cases, 5G systems use different frequency bands of the radio spectrum (700 MHz, 3.6-3.8 GHz and 26.5-27.5 GHz), thus exploiting higher frequencies than previous mobile radio generations (1G-4G). However, waves at higher frequencies have a lower capacity to propagate in free space, and therefore, in order to guarantee capillary coverage of the territory for high-reliability applications, it will be necessary to install a large number of repeaters. Following the introduction of this new technology, there has been growing concern in recent months about possible harmful effects on human health. The aim of this preliminary study is to evaluate possible short-term effects induced by 5G millimeter waves on the embryonic development and early life stages of Danio rerio using the Z-FET. We exposed developing zebrafish at a frequency of 27 GHz, with a standard pyramidal horn antenna placed 15 cm from the sample holder, ensuring an incident power density of 10 mW/cm². During the exposure cycle, from 6 h post fertilization (hpf) to 96 hpf, we measured different morphological endpoints every 24 hours. The zebrafish embryo toxicity test (Z-FET) is a short-term test carried out on fertilized zebrafish eggs, and it represents an effective alternative to acute tests on adult fish (OECD, 2013). We observed that 5G exposure did not have significant impacts on mortality or morphology: exposed larvae showed normal detachment of the tail, presence of heartbeat, and well-organized somites, although the hatching rate was lower than in untreated larvae even at 48 h of exposure. Moreover, the immunohistochemical analysis performed on larvae was negative for HSP-70 expression, used as a biomarker. This is a preliminary study on the evaluation of potential 5G-induced toxicity, and it seems appropriate to underline the importance of further studies aimed at clarifying the real risk of exposure to these electromagnetic fields.
Keywords: biomarker of exposure, embryonic development, 5G waves, zebrafish embryo toxicity test
207 A Comparison of Proxemics and Postural Head Movements during Pop Music versus Matched Music Videos
Authors: Harry J. Witchel, James Ackah, Carlos P. Santos, Nachiappan Chockalingam, Carina E. I. Westling
Abstract:
Introduction: Proxemics is the study of how people perceive and use space. It is commonly proposed that when people like or engage with a person/object, they will move slightly closer to it, often quite subtly and subconsciously. Music videos are known to add entertainment value to a pop song. Our hypothesis was that adding an appropriately matched video to a pop song would lead to a net approach of the head towards the monitor screen, compared to simply listening to an audio-only version of the song. Methods: We presented two musical stimuli, in counterbalanced order, to 27 participants (ages 21.00 ± 2.89, 15 female) seated in front of a 47.5 x 27 cm monitor; all stimuli were based on music videos by the band OK Go: Here It Goes Again (HIGA, boredom ratings (0-100) = 15.00 ± 4.76, mean ± SEM, standard error of the mean) and Do What You Want (DWYW, boredom ratings = 23.93 ± 5.98), which did not differ in the boredom elicited (P = 0.21, rank-sum test). Each participant experienced each song only once, one song (counterbalanced) as audio-only and the other as a music video. Movement was measured by video tracking using Kinovea 0.8, recording from a lateral aspect; before beginning, each participant had a reflective motion-tracking marker placed on the outer canthus of the left eye. Analysis of the Kinovea X-Y coordinate output in comma-separated-values format was performed in Matlab, as were non-parametric statistical tests. Results: We found that the audio-only stimuli (combined for both HIGA and DWYW, mean ± SEM, 35.71 ± 5.36) were significantly more boring than the music video versions (19.46 ± 3.83, P = 0.0066, Wilcoxon Signed Rank Test (WSRT), Cohen's d = 0.658, N = 28). We also found that participants' heads moved around twice as much during the audio-only versions (speed = 0.590 ± 0.095 mm/sec) as during the video versions (0.301 ± 0.063 mm/sec, P = 0.00077, WSRT). However, the participants' mean head-to-screen distances were not detectably smaller (i.e. head closer to the screen) during the music videos (74.4 ± 1.8 cm) compared to the audio-only stimuli (73.9 ± 1.8 cm, P = 0.37, WSRT). If anything, during the audio-only condition, they were slightly closer. Interestingly, the ranges of the head-to-screen distances were smaller during the music videos (8.6 ± 1.4 cm) than during the audio-only stimuli (12.9 ± 1.7 cm, P = 0.0057, WSRT), the standard deviations were also smaller (P = 0.0027, WSRT), and head height differed by 7 mm (video 116.1 ± 0.8 vs. audio-only 116.8 ± 0.8 cm above floor, P = 0.049, WSRT). Discussion: As predicted, sitting and listening to experimenter-selected pop music was more boring than when the music was accompanied by a matched, professionally made video. However, we did not find that the proxemics of the situation led to approaching the screen. Instead, adding video led to efforts to hold the head in a more central and upright viewing position and to suppress head fidgeting.
Keywords: boredom, engagement, music videos, posture, proxemics
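The paired non-parametric comparison reported here (WSRT plus Cohen's d) is straightforward to reproduce. The original analysis ran in Matlab; below is an equivalent Python sketch on invented paired ratings.

```python
# Minimal sketch (hypothetical paired ratings): the Wilcoxon signed-rank test
# and Cohen's d used in the abstract. The original analysis was run in Matlab;
# this is an equivalent Python illustration.
import numpy as np
from scipy.stats import wilcoxon

audio_only = np.array([40, 55, 20, 65, 35, 50, 30, 45])  # boredom, hypothetical
with_video = np.array([15, 30, 10, 40, 20, 25, 18, 22])

stat, p = wilcoxon(audio_only, with_video)
diff = audio_only - with_video
d = diff.mean() / diff.std(ddof=1)   # Cohen's d for paired samples
print(f"W = {stat}, p = {p:.4f}, Cohen's d = {d:.2f}")
```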
206 AAV-Mediated Human α-Synuclein Expression in a Rat Model of Parkinson's Disease – Further Characterization of PD Phenotype, Fine Motor Functional Effects as Well as Neurochemical and Neuropathological Changes over Time
Authors: R. Pussinen, V. Jankovic, U. Herzberg, M. Cerrada-Gimenez, T. Huhtala, A. Nurmi, T. Ahtoniemi
Abstract:
Targeted over-expression of human α-synuclein using viral-vector-mediated gene delivery into the substantia nigra of rats and non-human primates has been reported to lead to dopaminergic cell loss and the formation of α-synuclein aggregates reminiscent of Lewy bodies. We have previously shown how AAV-mediated expression of α-synuclein manifests in the chronic phenotype of rats over a 16-week follow-up period. In the context of these findings, we attempted to further characterize the long-term PD-related functional and motor deficits, as well as the neurochemical and neuropathological changes, in the AAV-mediated α-synuclein transfection model in rats during a chronic follow-up period. Different titers of recombinant AAV expressing human α-synuclein (A53T) were stereotaxically injected unilaterally into the substantia nigra of Wistar rats. Rats were allowed to recover for 3 weeks prior to initial baseline behavioral testing with the rotational asymmetry test, stepping test and cylinder test. A similar behavioral test battery was applied again at weeks 5, 9, 12 and 15. In addition to traditionally used rat PD model tests, the MotoRater system, a high-speed kinematic gait-monitoring platform, was applied during the follow-up period, with the evaluation focused on gait differences between groups. Tremor analysis was performed at weeks 9, 12 and 15. In addition to the behavioral endpoints, dopamine and its metabolites were evaluated neurochemically in the striatum. Furthermore, the integrity of the dopamine active transport (DAT) system was evaluated using ¹²³I-β-CIT and SPECT/CT imaging at weeks 3, 8 and 12 after AAV-α-synuclein transfection. Histopathology was examined from end-point samples at 3 or 12 weeks after AAV-α-synuclein transfection to evaluate dopaminergic cell viability and microglial (Iba-1) activation status in the substantia nigra using stereological analysis techniques. This study focused on the characterization and validation of the previously published AAV-α-synuclein transfection model in rats, but with the addition of novel endpoints. We present the long-term phenotype of AAV-α-synuclein-transfected rats assessed with traditionally used behavioral tests, but also with novel fine motor analysis techniques and tremor analysis, which provide new insight into the unilateral effects of AAV-α-synuclein transfection. We also present data on the neurochemical and neuropathological endpoints for the dopaminergic system in the model and how well they correlate with the behavioral phenotype.
Keywords: adeno-associated virus, alpha-synuclein, animal model, Parkinson's disease
205 Production of Ferroboron by SHS-Metallurgy from Iron-Containing Rolled Production Wastes for Alloying of Cast Iron
Authors: G. Zakharov, Z. Aslamazashvili, M. Chikhradze, D. Kvaskhvadze, N. Khidasheli, S. Gvazava
Abstract:
Traditional technologies for processing iron-containing industrial waste, including steel-rolling production waste, are associated with significant energy costs, long process durations, and the need for complex and expensive equipment. Waste generated during industrial processes negatively affects the environment, but at the same time it is a valuable raw material and can be used to produce new marketable products. The study of the effectiveness of self-propagating high-temperature synthesis (SHS) methods, which are characterized by the simplicity of the necessary equipment, the purity of the final product, and high processing speed, is therefore of wide scientific and practical interest. This work presents technological aspects of the production of ferroboron by SHS metallurgy from iron-containing wastes of rolled production for the alloying of cast iron, together with results on the effect of the alloying element on the degree of boron assimilation by liquid cast iron. The combustion features of the Fe-B system have been investigated, and the main parameters controlling the phase composition of the synthesis products have been established experimentally. The effect of overloads on the formation of cast master alloys (ligatures) and on the structure-formation mechanisms of the SHS products was also studied. It has been shown that an increase in the content of hematite Fe₂O₃ in the iron-containing waste leads to an increase in the content of the FeB phase and, accordingly, in the amount of boron in the ligature. The boron content in the ligature is within 3-14%, and the phase composition of the obtained ligatures consists of Fe₂B and FeB phases. Depending on the initial composition of the wastes, the yield of the end product reaches 91-94%, and the extraction of boron is 70-88%. The combustion of highly exothermic mixtures thus makes it possible to obtain a wide range of boron-containing ligatures from industrial wastes. In view of the relatively low melting point of the obtained SHS ligature, positive dynamics of boron absorption by liquid iron were established. According to the data obtained, the degree of absorption of the ligature when alloying gray cast iron at 1450°C is 80-85%. When combined with treatment of the liquid cast iron with magnesium, followed by alloying with the developed ligature, boron losses are reduced by 5-7%, and a uniform distribution of boron micro-additives throughout the volume of the treated liquid metal is ensured. Acknowledgment: This work was supported by the Shota Rustaveli Georgian National Science Foundation of Georgia (SRGNSFG) under the GENIE project (grant number № CARYS-19-802).
Keywords: self-propagating high-temperature synthesis, cast iron, industrial waste, ductile iron, structure formation
204 Reading and Writing of Biscriptal Children with and Without Reading Difficulties in Two Alphabetic Scripts
Authors: Baran Johansson
Abstract:
This PhD dissertation aimed to explore children's writing and reading in L1 (Persian) and L2 (Swedish). It adds new perspectives to reading and writing studies of bilingual biscriptal children with and without reading and writing difficulties (RWD). The study used standardised tests to examine linguistic and cognitive skills related to word reading and writing fluency in both languages. Furthermore, all participants produced two texts (one descriptive and one narrative) in each language. The writing processes and the written products of these children were explored using logging methodologies (Eye and Pen) for both languages. The study also investigated how two bilingual children with RWD presented themselves through writing across their languages. To my knowledge, studies utilizing standardised tests and logging tools to investigate bilingual children's word reading and writing fluency across two different alphabetic scripts are scarce. Few studies have analysed how bilingual children construct meaning in their writing, and none have focused on children who write in two different alphabetic scripts or on those with RWD. Therefore, some aspects of the systemic functional linguistics (SFL) perspective were employed to examine how two participants with RWD created meaning in their written texts in each language. The results revealed that children with and without RWD had higher writing fluency on all measures (e.g. text length, writing speed) in their L2 compared to their L1. Word reading abilities in both languages were found to influence their writing fluency. The findings also showed that bilingual children without reading difficulties performed 1 standard deviation below the mean when reading words in Persian, whereas their reading performance in Swedish was in line with the expected age norms, suggesting greater efficiency in reading Swedish than in Persian. The level of orthographic depth, the consistency between graphemes and phonemes, and other orthographic features can probably explain these differences across languages. The analysis of meaning-making indicated that the participants with RWD exhibited varying levels of difficulty, which influenced their knowledge and use of writing across languages. For example, the participant with poor word recognition (PWR) presented himself similarly across genres, irrespective of the language in which he wrote; he employed the listing technique similarly across his L1 and L2. However, the participant with mixed reading difficulties (MRD) had difficulties with both transcription and text production: he produced spelling errors and paused frequently in both languages, and he struggled with word retrieval and with producing coherent texts, consistent with studies of monolingual children with poor comprehension or with developmental language disorder. The results suggest that the mother tongue instruction provided to the participants has not been sufficient for them to become balanced biscriptal readers and writers in both languages. Therefore, increasing the number of hours dedicated to mother tongue instruction and motivating the children to participate in these classes could be potential strategies to address this issue.
Keywords: reading, writing, reading and writing difficulties, bilingual children, biscriptal
203 Comparison of Gait Variability in Individuals with Trans-Tibial and Trans-Femoral Lower Limb Loss: A Pilot Study
Authors: Hilal Keklicek, Fatih Erbahceci, Elif Kirdi, Ali Yalcin, Semra Topuz, Ozlem Ulger, Gul Sener
Abstract:
Objectives and Goals: Stride-to-stride fluctuation in gait, known as gait variability, is a determinant of skilled locomotion. Gait variability is an important predictor of fall risk and is useful for monitoring the effects of therapeutic interventions and rehabilitation. The aim of the study was to compare gait variability in individuals with trans-tibial and trans-femoral lower limb loss. Methods: Ten individuals with traumatic unilateral trans-femoral limb loss (TF), 12 individuals with traumatic trans-tibial lower limb loss (TT) and 12 healthy individuals (HI) participated in the study. All participants were evaluated on a treadmill; gait characteristics including mean step length, step-length variability, ambulation index and time on each foot were recorded. Participants walked at their preferred speed for six minutes, and data from the 4th to the 6th minute were selected for statistical analysis to eliminate learning effects. Results: The Kruskal-Wallis test showed differences between the groups in intact-limb step-length variation, time on each foot, ambulation index and mean age (p < .05). Pairwise analyses showed differences between TT and TF in residual-limb variation (p = .041), time on intact foot (p = .024), time on prosthetic foot (p = .024) and ambulation index (p = .003), in favor of the TT group. There were differences between TT and HI in intact-limb variation (p = .002), time on intact foot (p < .001), time on prosthetic foot (p < .001) and ambulation index (p < .001), in favor of the HI group. There were differences between TF and HI in intact-limb variation (p = .001), time on intact foot (p = .01) and ambulation index (p < .001), in favor of the HI group. The groups differed in mean age, the HI group being younger (p < .05). The groups were similar in step length (p > .05), and the individuals with lower limb loss were similar in duration of prosthesis use (p > .05). Conclusions: This pilot study provided basic data about gait stability in individuals with traumatic lower limb loss. The results showed that, to evaluate gait differences between different amputation levels, long-range gait analysis methods may be useful for obtaining more valuable information. On the other hand, the similarity in step length may result from effective prosthesis use or effective gait rehabilitation; indeed, all participants with lower limb loss had already been trained. The differences between TT and HI, and between TF and HI, may result from age-related features; therefore, an age-matched HI population is recommended for future studies. Increasing the number of participants and comparing age-matched groups are also recommended to generalize these results.
Keywords: lower limb loss, amputee, gait variability, gait analyses
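The group comparison described here is a standard non-parametric workflow. A minimal Python sketch with invented step-length-variability values (the pairwise test is assumed to be Mann-Whitney, which the abstract does not name explicitly):

```python
# Minimal sketch (hypothetical values): Kruskal-Wallis across the three groups
# (TF, TT, HI) followed by a pairwise comparison, mirroring the nonparametric
# analysis described above.
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical step-length variability (%) per participant
tf = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4, 5.0, 4.7, 5.3, 4.1]
tt = [3.1, 2.8, 3.5, 3.0, 2.6, 3.3, 2.9, 3.4, 3.2, 2.7, 3.6, 3.0]
hi = [1.9, 2.1, 1.7, 2.3, 2.0, 1.8, 2.2, 1.6, 2.4, 2.0, 1.9, 2.1]

h, p = kruskal(tf, tt, hi)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

u, p_pair = mannwhitneyu(tf, tt, alternative="two-sided")
print(f"TF vs TT: U = {u:.1f}, p = {p_pair:.4f}")
```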
202 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EV), low-energy-consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution, which can reach 1 Mframe/s, and a high dynamic range (120 dB). However, the property that can contribute most to low energy consumption is its sparsity; to be more specific, this sensor only captures pixels that have an intensity change. In other words, there is no signal in areas without any intensity change. That is to say, this sensor is more energy efficient than conventional sensors such as RGB cameras because redundant data can be removed. On the other hand, the data are difficult to handle because the data format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1) and a timestamp; it does not include intensity such as RGB values. Therefore, as existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. In order to overcome the difficulties caused by the data format differences, most prior art accumulates the events into frames and feeds them to deep learning models such as convolutional neural networks (CNNs) for object detection and recognition purposes. However, even when the data can be fed in this way, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of an RGB pixel value, polarity information is clearly not rich enough. Considering this context, we proposed to use the timestamp information as the data representation fed to deep learning. Concretely, we first build frames over fixed time periods and then assign each pixel an intensity value according to its timestamp within the frame; for example, a high value is given to a recent signal. We expected this data representation to capture the features of moving objects in particular, because the timestamps encode movement direction and speed. Using the proposed method, we built our own dataset with a DVS fixed on a parked car, in order to develop an application for a surveillance system that can detect persons around the car. We consider the DVS one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a largely static scene. For comparison purposes, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way as ours but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method exceeded the benchmark by up to 7 points in F1 score.
Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
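A minimal sketch of the timestamp-based representation described above (often called a "time surface"): events within a window are rendered so that more recent events get higher pixel values. The event layout, resolution and linear decay are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch (assumed event format): building a timestamp-based frame
# from DVS events, where more recent events get higher intensity. The
# resolution, window length and linear decay are illustrative assumptions.
import numpy as np

H, W = 260, 346   # sensor resolution (assumed)
WINDOW = 0.033    # frame period in seconds (~30 fps)

def events_to_time_frame(events, t_end):
    """events: array of (x, y, polarity, timestamp) within one window."""
    frame = np.zeros((H, W), dtype=np.float32)
    for x, y, pol, ts in events:
        # Linear ramp: 0 at the window start, 1 at the most recent instant
        frame[int(y), int(x)] = max(0.0, 1.0 - (t_end - ts) / WINDOW)
    return frame

# Hypothetical events: (x, y, polarity, timestamp)
events = np.array([[10, 20, 1, 0.010], [10, 21, -1, 0.030], [50, 60, 1, 0.032]])
frame = events_to_time_frame(events, t_end=0.033)
print(frame[20, 10], frame[21, 10], frame[60, 50])  # older events -> smaller values
```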
201 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model
Authors: Mohammad Zamani, Ramin Mansouri
Abstract:
Spillways are among the most important hydraulic structures of dams, ensuring the safety of the dam and downstream areas during floods. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. The hydraulic flow in a vertical circular spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. The two-dimensional unsteady RANS equations were solved numerically using the finite volume method. The PISO scheme was applied for the velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear-stress term. The power-law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. Three computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. The flow was simulated with the k-ε models (Standard, RNG, Realizable) and the k-ω models (Standard and SST). Also, in order to find the best wall treatment, two wall functions, the standard and the non-equilibrium wall function, were investigated. The laminar model did not produce satisfactory flow depth and velocity along the morning-glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were identical, and the standard wall function produced better results than the non-equilibrium wall function. Thus, for the remaining simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study is the trajectory profile of the water jet. The results show that the fine computational grid, a velocity condition at the flow inlet boundary, and a pressure condition at the boundaries in contact with air provide the best possible results; the standard wall function is chosen for the wall treatment, and the Standard k-ε turbulence model gives the results most consistent with the experiments. As the jet approaches the end of the basin, the difference between the computational and experimental results increases. The mesh with 10602 nodes, the standard k-ε turbulence model and the standard wall function provide the best results for modeling the flow in a vertical circular spillway. There was good agreement between numerical and experimental results for the upper and lower nappe profiles. Regarding the water level over the crest and the discharge, at low water levels the numerical results are in good agreement with the experimental ones, but as the water level increases, the difference between the numerical and experimental discharge grows. Regarding the flow coefficient, as the P/R ratio decreases, the difference between the numerical and experimental results increases.
Keywords: circular vertical spillway, numerical model, boundary conditions
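For the crest-control (free-flow) regime, the discharge-head relation behind the flow-coefficient comparison can be written as Q = C·(2πR)·H^1.5. A minimal sketch back-calculating C from hypothetical head-discharge pairs (the values are illustrative only, not the study's data):

```python
# Minimal sketch (textbook relation, hypothetical numbers): crest-control
# discharge of a circular (morning-glory) spillway, Q = C * (2*pi*R) * H**1.5,
# from which a flow coefficient C is back-calculated for each measured head.
import math

R = 0.15                                    # crest radius (m), hypothetical
heads = [0.02, 0.04, 0.06, 0.08]            # water levels over the crest (m)
discharges = [0.004, 0.011, 0.019, 0.028]   # measured discharges (m^3/s), hypothetical

for H, Q in zip(heads, discharges):
    C = Q / (2 * math.pi * R * H ** 1.5)
    print(f"H = {H:.2f} m -> C = {C:.2f}")
```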
200 Through Additive Manufacturing. A New Perspective for the Mass Production of Made in Italy Products
Authors: Elisabetta Cianfanelli, Paolo Pupparo, Maria Claudia Coppola
Abstract:
The recent evolution of innovation processes and the intrinsic tendencies of the product development process lead to new considerations on the design flow. The instability and complexity that describe contemporary life define new problems in the production of products, stimulating at the same time the adoption of new solutions across the entire design process. The advent of additive manufacturing, but also of IoT and AI technologies, continuously confronts us with new paradigms regarding design as a social activity. Taken together, from the point of view of application, these technologies raise a whole series of problems and considerations immanent to design thinking. Addressing these problems may require some initial intuition and the use of a provisional set of rules or plausible strategies, i.e., heuristic reasoning. At the same time, however, the evolution of digital technology and the computational speed of new design tools describe a new and contrasting design framework in which to operate. It is therefore interesting to understand the opportunities and boundaries of the new human-algorithm relationship. This contribution investigates the human-algorithm relationship starting from the state of the art of the Made in Italy model; the best-known fields of application are described, with a focus on specific cases in which the mutual relationship between humans and AI becomes a new driving force of innovation for entire production chains. The use of algorithms could absorb many design phases, such as the definition of shape, dimensions, proportions, materials, static verifications, and simulations. Operating in this context therefore becomes a strategic action, capable of defining fundamental choices for the design of product systems in the near future. If there is a human-algorithm combination within a new integrated system, quantitative values can be controlled in relation to qualitative and material values. The trajectory described therefore becomes a new design horizon in which to operate, where it is interesting to highlight the good practices that already exist. In this context, the designer developing new forms can experiment with ways still unexpressed in the project and can define a new synthesis and simplification of algorithms, so that each artifact carries a signature defining all its parts, emotional and structural. This signature of the designer, a combination of values and design culture, will be internal to the algorithms and able to relate to digital technologies, creating a generative dialogue for design purposes. The envisaged result indicates a new vision of digital technologies, no longer understood only as custodians of vast quantities of information, but also as valid integrated tools in close relationship with design culture.
Keywords: decision making, design heuristics, product design, product design process, design paradigms
199 An A-Star Approach for the Quickest Path Problem with Time Windows
Authors: Christofas Stergianos, Jason Atkin, Herve Morvan
Abstract:
As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to obtain a reliable solution that can be implemented for ground movement operations. Ground movement of aircraft at an airport, allocating a path for each aircraft to follow in order to reach its destination (e.g. runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm was developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase its accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which accounts for the extra time needed to push an aircraft back and start its engines; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the optimized take-off sequence has to be respected. QPPTW searches for the quickest path by expanding the search in all directions, similarly to Dijkstra's algorithm. Finding a way to direct the expansion can potentially assist the search and achieve better performance. We have therefore further modified the QPPTW algorithm to use a heuristic approach to guide the search. The new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far the target is. It is important to consider the remaining time needed to reach the target so that delays caused by other aircraft can be part of the optimization. All the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft. In this way, the quickest path is found for each aircraft while taking into account the movements of the previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, routing a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application, the algorithm needs to be very fast, and this speed increase will allow us to add features and complexity, enabling further integration with other airport processes and leading to more optimized and environmentally friendly airports.
Keywords: A-star search, airport operations, ground movement optimization, routing and scheduling
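To make the idea concrete, here is a minimal sketch, not the authors' implementation, of time-window-aware A* on a toy taxiway graph: edges carry reserved intervals left by previously routed aircraft, waiting is allowed, and the heuristic is the free-flow time to the target. (A full QPPTW keeps multiple labels per node; the single-label dominance check here is a simplification.)

```python
# Minimal sketch (illustrative): A* over a taxiway graph with reserved time
# windows on edges. Graph, times and reservations are hypothetical.
import heapq

graph = {  # node -> list of (neighbour, traversal_time_s)
    "gate": [("A", 30)], "A": [("B", 40), ("C", 60)],
    "B": [("runway", 20)], "C": [("runway", 20)], "runway": [],
}
h = {"gate": 90, "A": 60, "B": 20, "C": 20, "runway": 0}  # free-flow time-to-go
reserved = {("A", "B"): [(30, 90)]}  # edge blocked during these intervals (s)

def earliest_entry(edge, t, dur):
    """Delay entry until the edge is free for the whole traversal."""
    for lo, hi in reserved.get(edge, []):
        if t < hi and t + dur > lo:   # overlap -> wait until the window ends
            t = hi
    return t

def astar_qpptw(start, goal):
    pq = [(h[start], 0.0, start, [start])]   # (f = g + h, g, node, path)
    best = {}
    while pq:
        f, g, node, path = heapq.heappop(pq)
        if node == goal:
            return g, path
        if best.get(node, float("inf")) <= g:
            continue
        best[node] = g
        for nxt, dur in graph[node]:
            t0 = earliest_entry((node, nxt), g, dur)   # waiting is allowed
            g2 = t0 + dur
            heapq.heappush(pq, (g2 + h[nxt], g2, nxt, path + [nxt]))
    return None

print(astar_qpptw("gate", "runway"))  # routes via C, avoiding the reserved edge
```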
198 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution
Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino
Abstract:
This paper presents a design methodology in which stakeholders are assisted in the exploration of a so-called negotiation space, aiming at the maximization of both group social welfare and each stakeholder's perceived utility. The outcome is that fewer design iterations are needed for design convergence while a higher solution effectiveness is obtained. During the early stage of a space project, not only the knowledge about the system but often also the decision outcomes are unknown. The scenario is exacerbated by the fact that decisions taken at this stage imply delayed costs. Hence, a clear definition of the problem under analysis is necessary, especially in the initial definition, and this can be obtained through a robust generation and exploration of design alternatives. This process must take into account that design usually involves various individuals who take decisions affecting one another; an effective coordination among these decision-makers is critical, and finding a mutually agreed solution reduces the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology that aims to speed up the maturation of the mission concept. This speed-up is obtained through a guided exploration of the negotiation space, which involves the autonomous exploration and optimization of trade opportunities among stakeholders via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method infused with game theory and multi-attribute utility theory. In particular, game theory is able to model the negotiation process so as to reach equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process in searching efficiently and rapidly for the Pareto equilibria among stakeholders. Finally, the concept of utility constitutes the mechanism that bridges the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders' needs and to guarantee the effectiveness of the selected mission concept thanks to its robustness to changes in stakeholder valuations. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization
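The two objectives being negotiated, group welfare and individual utility, can be illustrated with a toy example: score candidate designs by total utility and keep the Pareto-optimal set. The utilities below are invented for illustration and do not come from the paper.

```python
# Minimal sketch (toy utilities): scoring candidate mission designs by social
# welfare (sum of stakeholder utilities) and filtering the Pareto-optimal
# ones, the two criteria the methodology above negotiates between.
designs = {  # design -> (utility_science, utility_cost, utility_risk)
    "A": (0.9, 0.3, 0.5),
    "B": (0.6, 0.7, 0.6),
    "C": (0.5, 0.8, 0.4),
    "D": (0.4, 0.4, 0.3),
}

def dominates(u, v):
    """u dominates v if it is at least as good everywhere and better somewhere."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

pareto = [d for d, u in designs.items()
          if not any(dominates(v, u) for e, v in designs.items() if e != d)]
welfare = {d: sum(u) for d, u in designs.items()}
print("Pareto set:", pareto)                           # ['A', 'B', 'C']
print("Best social welfare:", max(welfare, key=welfare.get))  # 'B'
```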
Procedia PDF Downloads 137
197 In-Plume H₂O, CO₂, H₂S and SO₂ in the Fumarolic Field of La Fossa Cone (Vulcano Island, Aeolian Archipelago)
Authors: Cinzia Federico, Gaetano Giudice, Salvatore Inguaggiato, Marco Liuzzo, Maria Pedone, Fabio Vita, Christoph Kern, Leonardo La Pica, Giovannella Pecoraino, Lorenzo Calderone, Vincenzo Francofonte
Abstract:
The periods of increased fumarolic activity at La Fossa volcano have been characterized, since the early 1980s, by changes in gas chemistry and in the output rate of fumaroles. Except for the direct measurements of the steam output from fumaroles performed from 1983 to 1995, the mass output of the individual gas species has been measured, with various methods, only sporadically or for short periods. Since 2008, a scanning DOAS system has been operating in the Palizzi area for the remote measurement of the in-plume SO₂ flux. On these grounds, a cross-comparison of different methods for the in situ measurement of the output rates of the different gas species is needed. In 2015, two field campaigns were carried out, aimed at: 1. Mapping the concentrations of CO₂ and H₂S in the fumarolic plume at 1 m from the surface using specific open-path tunable diode lasers (GasFinder, Boreal Europe Ltd.), and of SO₂ using an active DOAS; these measurements, coupled with simultaneous ultrasonic wind-speed and meteorological data, were processed to obtain the dispersion map and the output rate of each species over the whole fumarolic field; 2. Mapping the concentrations of CO₂, H₂S, SO₂ and H₂O in the fumarolic plume at 0.5 m from the soil using an integrated system including IR spectrometers and specific electrochemical sensors; this provided the concentration ratios of the analysed gas species and their distribution in the fumarolic field; 3. In-fumarole sampling of vapour and measurement of the steam output, to validate the remote measurements. The dispersion map of CO₂, obtained from the tunable laser measurements, shows a maximum CO₂ concentration at 1 m from the soil of 1000 ppmv along the rim and 1800 ppmv on the inner slopes. The largest contribution derives from a wide fumarole on the inner slope, despite its present outlet temperature of 230°C, almost 200°C lower than those measured at the rim fumaroles. Indeed, the fumaroles on the inner slopes are among those emitting the largest amount of magmatic vapour and, during the 1989-1991 crisis, reached a temperature of 690°C. The estimated CO₂ and H₂S fluxes are 400 t/d and 4.4 t/d, respectively. The coeval SO₂ flux, measured by the scanning DOAS system, is 9±1 t/d. The steam output, recomputed from the CO₂ flux measurements, is about 2000 t/d. The various direct and remote methods (described at points 1-3) have produced coherent results, which encourages the use of daily, automatic DOAS SO₂ data, coupled with periodic in-plume measurements of the different acidic gases, to obtain the total mass rates.
Keywords: DOAS, fumaroles, plume, tunable laser
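The recomputation of the steam output is a simple scaling of the measured CO₂ flux by the H₂O/CO₂ ratio obtained from in-fumarole sampling, as in this sketch (the mass ratio shown is an assumed value, chosen only to reproduce the order of magnitude reported above):

```python
# Minimal sketch of recomputing a steam output from a measured CO2 flux
# using the H2O/CO2 ratio measured in the fumarolic vapour.
# The mass ratio below is hypothetical, for illustration only.

co2_flux_t_per_day = 400.0      # measured CO2 flux, t/d
h2o_co2_mass_ratio = 5.0        # assumed H2O/CO2 mass ratio from fumarole sampling

steam_output = co2_flux_t_per_day * h2o_co2_mass_ratio
print(f"Recomputed steam output: {steam_output:.0f} t/d")   # ~2000 t/d
```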
Procedia PDF Downloads 399
196 Movie and Theater Marketing Using the Potentials of Social Networks
Authors: Seyed Reza Naghibulsadat
Abstract:
The nature of communication includes various forms of media production, among them film and theater. Since social networks emerged, they have brought their own communication capabilities: speed, public access, the absence of a media organization, the production of extensive content, and the development of critical thinking. They also offer the capability to broaden access to all kinds of media productions, including movies and theater shows, although this works differently in different conditions and communities. In terms of scale, film reaches a general audience while theater has a special audience. The film industry is built on more modern technologies, whereas theater, based on older forms of communication, retains more intimate and emotional aspects. In general, however, the main focus is the development of access to movies and theater shows, which those involved in this field emphasize because of the capabilities of social networks. In this research, we look at these two areas, the relevant components of each through social networks, and the points the two types of media production have in common. The main goal of this research is to identify the strengths and weaknesses of using social networks for the marketing of movies and theater shows, while also considering the opportunities and threats in this field. With the emergence of social networks and the resulting shifts in position, the attractions of these two types of media production could become media with greater reach and higher profitability; the main consideration, however, is the opinions about these capabilities and the ability to use them for film and theater marketing. The main questions of the research are: what are the marketing components for movies and theaters using social media capabilities? What are their strengths and weaknesses? And what opportunities and threats does this market face? The research was conducted with two methods, SWOT and meta-analysis, using non-probability purposive sampling. The results show that the preferred approach is one that eliminates threats and weaknesses, emphasizes strengths, and exploits opportunities to develop film and theater marketing based on the capabilities of social networks, within the framework of local cultural values, while presenting achievements on an international or universal scale. This would introduce authentic Iranian culture to foreign enthusiasts through movies and theater art. Therefore, according to the respondents, the model for using the capabilities of social networks for movie or theater marketing is one based on SO strategies, in other words offensive strategies, so that internal strengths can be leveraged and maximum use made of external situations and opportunities to develop the use of movies and theater performances.
Keywords: marketing, movies, theatrical show, social network potentials
Procedia PDF Downloads 77
195 Marine Environmental Monitoring Using an Open Source Autonomous Marine Surface Vehicle
Authors: U. Pruthviraj, Praveen Kumar R. A. K. Athul, K. V. Gangadharan, S. Rao Shrikantha
Abstract:
An open-source based autonomous unmanned marine surface vehicle (UMSV) is developed for marine applications such as pollution control, environmental monitoring and thermal imaging. A double rotomoulded-hull boat is deployed, which is rugged, tough, quick to deploy and fast-moving. It is suitable for environmental monitoring and is designed for easy maintenance. A 2 HP electric outboard marine motor is used, powered by a lithium-ion battery that can also be charged from a solar charger. All connections are completely waterproof to IP67 rating. At full throttle, the marine motor is capable of up to 7 km/h. The motor is integrated with an open-source Cortex-M4F based controller for adjusting its direction. The UMSV can be operated in three modes: semi-autonomous, manual and fully automated. One channel of an 8-channel 2.4 GHz radio transmitter is used for toggling between the different modes of the UMSV. An onboard GPS system is fitted to the electric outboard motor for range finding and positioning. The entire system can be assembled in the field in less than 10 minutes. A FLIR Lepton thermal camera core is integrated with a 64-bit quad-core Linux based open-source processor, enabling real-time capture of thermal images; the results are stored on a micro SD card, the system's data storage device. The thermal camera is interfaced to the processor through the SPI protocol. These thermal images are used for finding oil spills and for locating people who are drowning, in low visibility and at night. A real-time clock (RTC) module attached to the battery provides the date and time of the thermal images captured. For the live video feed, a 900 MHz long-range video transmitter and receiver are set up, achieving a range of 40 miles at higher power output. A multi-parameter probe is used to measure the following parameters: conductivity, salinity, resistivity, density, dissolved oxygen content, ORP (oxidation-reduction potential), pH level, temperature, water level and absolute pressure. It withstands a maximum pressure of 160 psi, i.e. depths up to 100 m. This work represents a field demonstration of an open-source based autonomous navigation system for a marine surface vehicle.
Keywords: open source, autonomous navigation, environmental monitoring, UMSV, outboard motor, multi-parameter probe
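A minimal sketch of the kind of on-board logging loop such a system could run is shown below, timestamping probe and GPS readings into a CSV file on the micro SD card. The read_probe and read_gps helpers are hypothetical stand-ins for the actual sensor drivers, and the RTC is assumed to back the system clock.

```python
import csv
import time
import datetime

def read_probe():
    # placeholder: would query the multi-parameter probe over its serial bus
    return {"salinity_psu": 35.1, "do_mg_l": 6.8, "ph": 8.05, "temp_c": 27.3}

def read_gps():
    # placeholder: would parse NMEA sentences from the onboard GPS
    return {"lat": 13.006, "lon": 74.794}

# "probe_log.csv" stands in for a file on the micro SD card mount
with open("probe_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["utc_time", "lat", "lon",
                     "salinity_psu", "do_mg_l", "ph", "temp_c"])
    for _ in range(3):                          # a few samples for illustration
        probe, gps = read_probe(), read_gps()
        writer.writerow([datetime.datetime.utcnow().isoformat(),
                         gps["lat"], gps["lon"], probe["salinity_psu"],
                         probe["do_mg_l"], probe["ph"], probe["temp_c"]])
        time.sleep(1)                           # sampling interval
```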
Procedia PDF Downloads 242
194 Upward Spread Forced Smoldering Phenomenon: Effects and Applications
Authors: Akshita Swaminathan, Vinayak Malhotra
Abstract:
Smoldering is one of the most persistent types of combustion and can continue for very long periods (hours, days, months) if fuel is abundant. It causes a notable number of accidents and is one of the prime suspects in fire and safety hazards. It can be initiated by weaker ignition sources and is more difficult to suppress than flaming combustion. Upward spread smoldering is the case in which the air flow is parallel to the direction of the smoldering front. This type of smoldering is difficult to control, and hence there is a need to study the phenomenon. Compared to flaming combustion, smoldering often goes unrecognised and is hence a cause of various fire accidents. A simplified experimental setup was built to study upward spread smoldering, its behaviour under varying forced flow, and its behaviour in the presence of external heat sources and alternative energy sources such as acoustic energy. Linear configurations were studied under varying forced flow. The effect of varying forced flow on upward spread smoldering was observed and studied: (i) in the presence of an external heat source, and (ii) in the presence of an external alternative energy source (acoustic energy). The role of ash removal was also observed and studied. Results indicate that upward spread forced smoldering is affected by key controlling parameters such as the speed of the forced flow, the surface orientation, and the interspace distance (the distance between the forced flow and the pilot fuel). When an external heat source was placed on either side of the pilot fuel, the smoldering phenomenon was affected; the surface orientation and the interspace distance between the external heat sources and the pilot fuel were found to play a major role in altering the regression rate. Lastly, when an alternative energy source in the form of acoustic energy was impinged on the smoldering front, varying frequencies affected the smoldering phenomenon in different ways, with surface orientation again playing an important role. This project highlights the importance of fire and safety hazards and of means of better combustion for scientific research and practical applications. The knowledge acquired from this work can be applied to engineering systems ranging from aircraft and spacecraft to building fires and wildfires, helping us better understand, and hence avoid, such widespread fires. Various fire disasters have been recorded in aircraft where small electric short circuits led to smoldering fires, eventually causing the engine to catch fire at the cost of damage to life and property. Studying this phenomenon can help us to control, if not prevent, such disasters.
Keywords: alternative energy sources, flaming combustion, ignition, regression rate, smoldering
Procedia PDF Downloads 145
193 God, The Master Programmer: The Relationship Between God and Computers
Authors: Mohammad Sabbagh
Abstract:
Anyone who reads the Torah or the Quran learns that GOD created everything around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what are known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands in words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything in six days, just as we can program a virtual world on a computer. GOD did mention in the Quran that one day where GOD's throne is equals 1000 years of what we count; one might therefore understand that GOD spoke non-stop for 6000 years of what we count, giving everything its functions, attributes, classes, methods and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because any input, whether physical, spiritual or by thought, made by any of HIS creatures already has a programmed answer: any path, any thought, any idea has already been laid out, with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF in the Quran as The Fastest Accountant; the Arabic word used is close to 'processor' or 'calculator'. If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, then in 2022 you would require one of the strongest, fastest, most capable supercomputers in the world, with a theoretical speed of 50 petaFLOPS; one petaFLOPS alone is one quadrillion (10^15) floating-point operations per second, a number a human cannot even fathom. To put this in more perspective, GOD is calculating while the computer is going through those 50 petaFLOPS of calculations per second, and HE is also calculating all the physics of every atom, and whatever is smaller than that, in the actual explosion, and all of it in truth. When GOD said HE created the world in truth, one meaning a person can take is that when events occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand them, and whatever programming or science you deduce from one event can relate to other similar events. That may be why GOD said in the Quran that it is the people of knowledge, scholars and scientists who fear GOD the most. One thing that is essential for us, to keep up with what the computer is doing and to track our progress along with any errors, is that we incorporate logging mechanisms and backups. GOD said in the Quran, 'WE used to copy what you used to do'. Essentially, as the world is running, think of it as an interactive movie being played out in front of you in a fully immersive, non-virtual reality setting, with GOD recording it, from every angle to every thought to every action. This conveys how overwhelming the Day of Judgment will be, when one realizes it will be a fully immersive replay as we receive and read our book.
Keywords: programming, the Quran, object orientation, computers and humans, GOD
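A minimal Python sketch of the object-oriented analogy the author draws, in which a class fixes an object's attributes, methods and interactions so that its response to any input is predetermined by its code; the Tree class and its members are purely illustrative:

```python
# Illustrative sketch of the OOP analogy: a class defines attributes,
# methods and interactions, and an instance can only produce outputs
# it was given the ability to produce.

class Tree:
    def __init__(self, species: str, height_m: float):
        self.species = species      # attribute
        self.height_m = height_m    # attribute

    def grow(self, years: float) -> None:
        # method: behaviour limited to what was programmed
        self.height_m += 0.5 * years

    def interact(self, sunlight_hours: float) -> str:
        # interaction: the response to any input is predetermined by the code
        return "photosynthesis" if sunlight_hours > 0 else "dormant"

oak = Tree("oak", 2.0)
oak.grow(10)
print(oak.height_m, oak.interact(8.0))   # 7.0 photosynthesis
```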
Procedia PDF Downloads 107
192 Music as Source Domain: A Cross-Linguistic Exploration of Conceptual Metaphors
Authors: Eleanor Sweeney, Chunyuan Di
Abstract:
The metaphors people use in everyday discourse do not arise randomly; rather, they develop from our physical experiences in our social and cultural environments. Conceptual Metaphor Theory (CMT) explains that, through metaphor, we apply our embodied understanding of the physical world to non-material concepts in order to understand and express abstract concepts. Our most productive source domains derive from embodied understanding and allow us to develop primary metaphors, and from primary metaphors an elaborate, creative world of culturally constructed complex metaphors. Cognitive Linguistics researchers draw upon individual embodied experience for primary metaphors. Socioculturally embodied experience through music has long furnished linguistic expressions in diverse languages, as conceptual metaphors or everyday expressions. Can a socially embodied experience function in the same way as an individually embodied experience in the creation of conceptual metaphors? The authors argue that since music is inherently social and embodied, musical experiences function as a richly motivated source domain. The focus of this study is socially embodied musical experience as reflected and expressed through metaphors. This cross-linguistic study explores music as a source domain for metaphors of social alignment in English, French and Chinese. The authors explored two public discourse sites, Facebook and Linguée, to collect linguistic metaphors from the three languages. Conducting a cross-linguistic study makes it possible to examine cross-cultural similarities and differences in metaphors for which music is the source domain. Different musical elements, such as melody, speed, rhythm and harmony, are analyzed for their possible metaphoric meanings of social alignment. The findings suggest that the general metaphor COOPERATION IS MUSIC is productive, with several subcases, and that correlated social behaviors can be metaphorically expressed through certain elements in music. For example, since performance is a subset of the category behavior, there is a natural mapping from performance in music to behavior in social settings: SOCIAL ALIGNMENT IS MUSICAL PERFORMANCE. Musical performance entails a collective social expectation that exerts control over individual behavior. When individual behavior does not align with the collective social expectation, music-related expressions are often used to express how the individual is violating social norms; when individuals do align their behavior with social norms, similar musical expressions are used. Cooperation is a crucial social value in all cultures, indeed a key element of survival, and music provides a coherent, consistent and rich source domain, one based upon a universal and definitive cultural practice.
Keywords: Chinese, Conceptual Metaphor Theory, cross-linguistic, culturally embodied experience, English, French, metaphor, music
Procedia PDF Downloads 173
191 Public-Private Partnership for Critical Infrastructure Resilience
Authors: Anjula Negi, D. T. V. Raghu Ramaswamy, Rajneesh Sareen
Abstract:
Road infrastructure is emphatically one of the most critical infrastructures for the Indian economy. The country's road network of around 3.3 million km is the second largest in the world. Nationwide statistics released by the Ministry of Road, Transport and Highways reveal that an accident happens every minute and a death every 3.7 minutes. This scale of loss is a matter of grave concern for safety, and economically represents a national loss of 3% of GDP. The Union Budget 2016-17 has allocated USD 12 billion annually for the development and strengthening of roads, an increase of 56% over the previous year, highlighting the importance of roads as critical infrastructure. National highways represent only 1.7% of total road linkages yet carry over 40% of traffic. Furthermore, trends on national highways analysed from 2002 to 2011 indicate that, in less than a decade, a 22% increase in accidents was reported but a 68% increase in deaths. The paramount inference is that accident severity has increased with time. Over these years, many measures have been taken to increase road safety, lessen damage to physical assets, and reduce vulnerabilities, building towards resilient road infrastructure. In the context of the national highway development program, policy makers proposed implementing around 20% of such road length in PPP mode; these roads were taken up on high-density traffic considerations and for qualitative implementation. In order to understand the resilience impacts and safety parameters enshrined in the various PPP concession agreements executed with private sector partners, such highway-specific projects would be appraised. This research paper attempts to assess the safety measures taken and the possible reasons behind the increase in accident severity through these PPP case study projects. It delves further into the safety features to understand the policy measures adopted in these cases, and introspects on the reasons for severity: whether an outcome of increased speeds, faulty road design and geometrics, driver negligence, or a lack of lane discipline at increased speeds. The assessment would study these aspects prior to PPP and under post-PPP project structures, based on a literature review and opinion surveys with sectoral experts. On the way forward, the Ministry of Road, Transport and Highways' estimate for strengthening the national highway network is USD 77 billion within the next five years. The outcome of this paper would provide policy makers with an understanding of the resilience measures adopted and possible options for an accessible and safe road network and its expansion, informing possible policy initiatives and funding allocation for securing critical infrastructure.
Keywords: national highways, policy, PPP, safety
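The severity inference can be made explicit with a small calculation: if accidents grew 22% while deaths grew 68%, then deaths per accident, a simple severity proxy, rose by roughly 38%. A sketch, normalizing the 2002 baseline to 1.0 since only percentage changes are reported:

```python
# Minimal sketch of the severity inference: deaths grew faster than
# accidents over 2002-2011, so deaths per accident increased.

accidents_growth = 1.22   # +22% accidents
deaths_growth = 1.68      # +68% deaths

severity_change = deaths_growth / accidents_growth - 1.0
print(f"Change in deaths per accident: {severity_change:+.0%}")   # about +38%
```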
Procedia PDF Downloads 258
190 Co-Seismic Deformation Using InSAR Sentinel-1A: Case Study of the 6.5 Mw Pidie Jaya, Aceh, Earthquake
Authors: Jefriza, Habibah Lateh, Saumi Syahreza
Abstract:
The 2016 Mw 6.5 Pidie Jaya earthquake is one of the biggest disasters to have occurred in Aceh within the last five years. This earthquake caused severe damage to many infrastructures, such as schools, hospitals, mosques and houses, in the district of Pidie Jaya and surrounding areas. Earthquakes commonly occur in Aceh Province because the Aceh-Sumatra region lies on the convergent boundary where the Indo-Australian Plate is subducted beneath the Sunda Plate. This convergence is responsible for the intensified seismicity in the region. The plates converge obliquely at about 63 mm per year, and the right-lateral component is accommodated by strike-slip faulting within Sumatra, mainly along the Great Sumatran Fault. This paper presents preliminary findings of an InSAR study aimed at investigating the co-seismic surface deformation pattern in Pidie Jaya, Aceh, Indonesia. Co-seismic surface deformation is the rapid displacement that occurs at the time of an earthquake, and mapping it is required to study the behavior of seismic faults. InSAR is a powerful tool for measuring Earth-surface deformation to a precision of a few centimetres: two radar images of the same area acquired at two different times are compared to detect changes in the Earth's surface. The ascending and descending Sentinel-1A (S1A) synthetic aperture radar (SAR) data and the Sentinel Application Platform (SNAP) toolbox were used to generate the SAR interferogram. To visualize the interferometric measurements, the S1A master (26 Nov 2016) and slave (26 Dec 2016) data-sets were utilized as the main data source for mapping the co-seismic surface deformation. The results show fringes of phase difference in the border region, the signature of movement detected with the interferometric technique. A dominant fringe pattern also appears near the coastal area, consistent with the field investigations carried out two days after the earthquake. However, the study also has limitations from resolution and atmospheric artefacts in the SAR interferograms. Atmospheric artefacts are caused by changes in the atmospheric refractive index of the medium and limit the ability to produce coherent interferograms; low coherence degrades the fringes from which movement is detected. The spatial resolution of the Sentinel satellite has not been sufficient for studying land-surface deformation in this area. Further studies will therefore also use ALOS and TerraSAR-X, which provide improved spatial resolution.
Keywords: earthquake, InSAR, interferometric, Sentinel-1A
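For reference, the standard conversion from unwrapped interferometric phase to line-of-sight (LOS) displacement is d = -(λ/4π)Δφ, so one fringe (2π of phase) corresponds to about half the radar wavelength, roughly 2.8 cm for Sentinel-1's C-band. A minimal sketch with hypothetical phase values:

```python
import numpy as np

# Minimal sketch of converting unwrapped interferometric phase to
# line-of-sight displacement. The phase values are hypothetical; in
# practice the unwrapped phase comes from a SNAP (or similar)
# processing chain on the master/slave image pair.

WAVELENGTH_M = 0.0555   # Sentinel-1 C-band wavelength, ~5.55 cm

def los_displacement(unwrapped_phase_rad):
    """One fringe (2*pi of phase) corresponds to ~2.8 cm of LOS motion."""
    return -(WAVELENGTH_M / (4.0 * np.pi)) * unwrapped_phase_rad

phase = np.array([0.0, np.pi, 2.0 * np.pi])    # hypothetical unwrapped phase
print(los_displacement(phase) * 100.0)         # LOS displacement in cm
```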
Procedia PDF Downloads 197
189 Cloud Based Supply Chain Traceability
Authors: Kedar J. Mahadeshwar
Abstract:
Concept introduction: This paper discusses how an innovative, cloud-based, analytics-enabled solution could address a major industry challenge that is approaching all of us globally faster than one would think. The world of the supply chain for drugs and devices is changing rapidly today. In the US, the Drug Supply Chain Security Act (DSCSA) is a new law for tracing, verification and serialization, phasing in starting Jan 1, 2015 for manufacturers, repackagers, wholesalers and pharmacies/clinics. Similarly, we are seeing pressure building up in Europe, China and many other countries that would require absolute end-to-end traceability of every drug and device. Companies (both manufacturers and distributors) can use this opportunity not only to become compliant but to differentiate themselves from the competition. Moreover, a country such as the UAE can lead by developing a global solution that brings innovation to this industry. Problem definition and timing: The counterfeit drug market, recognized by the FDA, causes billions of dollars in losses every year. Even in the UAE, the prevalence of counterfeit drugs, which enter through ports such as Dubai, remains a big concern, as per the UAE pharma and healthcare report, Q1 2015. Distribution of drugs and devices involves multiple processes and systems that do not talk to each other. Consumer confidence is at risk due to this lack of traceability, and any leading provider risks losing its reputation. Globally, there is increasing pressure from governments and regulatory bodies to trace the serial numbers and lot numbers of every drug and medical device throughout the supply chain. Though many large corporations use some form of ERP (enterprise resource planning) software, such systems are far from capable of tracing lot and serial numbers beyond the enterprise and making this information easily available in real time. Solution: The proposed solution is a service provider, regardless of its physical location, hosting a cloud-based traceability and analytics solution that all subscribers can take advantage of, covering millions of distribution transactions that capture the lots of each drug and device. The platform will capture the movement of every medical device and drug end to end, from its manufacturer to a hospital or a doctor, through a series of distributor or retail networks. It also provides an advanced analytics solution for intelligent online reporting. Why Dubai? An opportunity exists given the huge investment made in Dubai Healthcare City, and given the technology and infrastructure available to attract more FDI to provide such a service. The UAE and similar countries will face this pressure from regulators globally in the near future. More interestingly, Dubai can attract such innovators and companies to run and host such a cloud-based solution and become the global hub of this kind of traceability.
Keywords: cloud, pharmaceutical, supply chain, tracking
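A minimal sketch of the kind of serialized custody record such a platform could store, with a helper that reconstructs the end-to-end chain for one unit. The field names are hypothetical (loosely inspired by GS1 EPCIS-style events), not the schema of any specific product:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative record for one change of custody of a serialized unit.

@dataclass
class CustodyEvent:
    gtin: str            # product identifier (hypothetical value below)
    lot_number: str
    serial_number: str
    from_party: str      # e.g. manufacturer, wholesaler
    to_party: str        # e.g. distributor, pharmacy
    timestamp: datetime

def trace(events, serial_number):
    """Reconstruct the end-to-end chain of custody for one serialized unit."""
    chain = sorted((e for e in events if e.serial_number == serial_number),
                   key=lambda e: e.timestamp)
    return [(e.from_party, e.to_party, e.timestamp.isoformat()) for e in chain]

events = [
    CustodyEvent("00312345678906", "LOT42", "SN-001", "Manufacturer",
                 "Wholesaler", datetime(2015, 3, 1)),
    CustodyEvent("00312345678906", "LOT42", "SN-001", "Wholesaler",
                 "Pharmacy", datetime(2015, 3, 9)),
]
print(trace(events, "SN-001"))
```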
Procedia PDF Downloads 528
188 Numerical and Experimental Investigation of Air Distribution System of Larder Type Refrigerator
Authors: Funda Erdem Şahnali, Ş. Özgür Atayılmaz, Tolga N. Aynur
Abstract:
Almost all domestic refrigerators operate on the principle of the vapor compression refrigeration cycle, and heat removal from the refrigerator cabinet is achieved by one of two methods: natural convection or forced convection. In this study, the airflow and temperature distributions inside a 375 L no-frost larder cabinet, in which cooling is provided by forced convection, are evaluated both experimentally and numerically. Airflow rate, compressor capacity and temperature distribution in the cooling chamber are known to be among the most important factors affecting the cooling performance and energy consumption of a refrigerator. The objective of this study is to evaluate the original temperature distribution in the larder cabinet and to investigate system optimizations that could provide a more uniform temperature distribution throughout the refrigerator domain. Flow visualization and airflow velocity measurements inside the original refrigerator are performed via Stereoscopic Particle Image Velocimetry (SPIV). In addition, the airflow and temperature distributions are investigated numerically with Ansys Fluent. To study the heat transfer inside the refrigerator, forced convection in a closed rectangular cavity, representing the refrigerating compartment, is modeled. The cavity volume is discretized with finite volume elements and solved computationally with the appropriate momentum and energy (Navier-Stokes) equations. The 3D model is analyzed as transient, with the k-ε turbulence model and SIMPLE pressure-velocity coupling for the turbulent flow. The results of the 3D numerical simulations are in good agreement with the experimental airflow measurements obtained using the SPIV technique. After the Computational Fluid Dynamics (CFD) analysis of the baseline case, the effects of three parameters, compressor capacity, fan rotational speed and shelf type (glass or wire), are studied with respect to energy consumption, pull-down time and temperature distribution in the cabinet. For each case, the energy consumption is calculated based on experimental results. From the analysis, the parameters with the main effects on temperature distribution and energy consumption are determined from the CFD simulations, and the simulation results are supplied to a Design of Experiments (DOE) study as input data for optimization. The best configuration, with minimum energy consumption and minimum temperature difference between the shelves inside the cabinet, is determined.
Keywords: air distribution, CFD, DOE, energy consumption, experimental, larder cabinet, refrigeration, uniform temperature
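A minimal sketch of setting up a full-factorial Design of Experiments over the three parameters studied; the factor levels are assumed for illustration, and in the study each run's responses (energy consumption, pull-down time, shelf-to-shelf temperature difference) would come from the CFD simulations and experiments:

```python
from itertools import product

# Full-factorial DOE over three factors; the numeric levels are
# hypothetical, chosen only to illustrate the setup.

factors = {
    "compressor_capacity_W": [60, 80, 100],   # assumed levels
    "fan_speed_rpm": [1500, 2000, 2500],      # assumed levels
    "shelf_type": ["glass", "wire"],
}

runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(f"{len(runs)} runs in the full factorial design")   # 3 * 3 * 2 = 18
for run in runs[:3]:
    print(run)
```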
Procedia PDF Downloads 110
187 Hybrid Manufacturing System to Produce 3D Structures for Osteochondral Tissue Regeneration
Authors: Pedro G. Morouço
Abstract:
One of the utmost challenges in Tissue Engineering is the production of 3D constructs capable of mimicking the functional hierarchy of native tissues. This is especially true for osteochondral tissue, a complex mechanical functional unit based on the junction of articular cartilage and bone. Thus, the aim of the present study was to develop a new additive manufacturing system coupling micro-extrusion with hydrogel printing. An integrated system was developed with two main features: (i) the printing of up to three distinct hydrogels, (ii) in coordination with the printing of a thermoplastic structural support. The hydrogel printing module was designed as a 'revolver-like' system, in which hydrogel selection is made by a rotating mechanism and hydrogel deposition is controlled by a pressurized air input. Specific components approved for medical use were incorporated in the material dispensing system (Nordson EFD Optimum® fluid dispensing system). The thermoplastic extrusion module enabled control of the required extrusion temperature through electric resistances in the polymer reservoir and the extrusion system. After testing and upgrades, a hydrogel module with three syringes (3 cm³ capacity each), a pressure range of 0-2.5 bar, a rotational speed of 0-5 rpm, and needles from 200-800 µm was obtained. This module was successfully coupled to the extrusion system, which provided temperatures up to 300˚C, a pressure range of 0-12 bar, and nozzles from 200-500 µm. The motor provided a velocity range of 0-2000 mm/min. Although the printing requirements for hydrogels and polymers are distinct, the novel system could produce hybrid scaffolds combining the two modules. The morphological analysis showed high reliability (n=5) between the theoretical and obtained filament and pore sizes (350 µm and 300 µm vs. 342±4 µm and 302±3 µm, p>0.05, respectively) of the polymer, and multi-material 3D constructs were successfully obtained. Human tissues present very distinct and complex structures in terms of mechanical properties, organization, composition and dimensions. For osteochondral regenerative medicine, a multiphasic scaffold is required, as the subchondral bone and the overlying cartilage must regenerate at the same time. Thus, a scaffold with three layers (bone, intermediate and cartilage parts) is a promising approach. The developed system may provide a suitable solution for constructing such hybrid scaffolds with enhanced properties. The present novel system is a step forward for osteochondral tissue engineering due to its ability to generate layered, mechanically stable implants through the double printing of hydrogels with thermoplastics.
Keywords: 3D bioprinting, bone regeneration, cartilage regeneration, regenerative medicine, tissue engineering
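A minimal sketch of checking a print job against the working ranges reported above for the two modules; the ranges are taken from the abstract, while the validation function itself is illustrative rather than the system's actual control software:

```python
# Working ranges from the abstract; the validation logic is a sketch.

HYDROGEL_MODULE = {"pressure_bar": (0.0, 2.5), "needle_um": (200, 800),
                   "rotation_rpm": (0.0, 5.0)}
EXTRUSION_MODULE = {"pressure_bar": (0.0, 12.0), "nozzle_um": (200, 500),
                    "temperature_c": (0.0, 300.0)}

def validate(params, limits):
    """Return the parameters that fall outside a module's working range."""
    out_of_range = {}
    for name, value in params.items():
        lo, hi = limits[name]
        if not lo <= value <= hi:
            out_of_range[name] = (value, (lo, hi))
    return out_of_range

# Hypothetical job: a hydrogel layer at 1.8 bar with a 400 um needle
print(validate({"pressure_bar": 1.8, "needle_um": 400, "rotation_rpm": 2.0},
               HYDROGEL_MODULE))   # {} -> all parameters within range
```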
Procedia PDF Downloads 167