Search results for: tool comparison
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9693

993 Towards the Development of Uncertainties Resilient Business Model for Driving the Solar Panel Industry in Nigeria Power Sector

Authors: Balarabe Z. Ahmad, Anne-Lorène Vernay

Abstract:

The emergence of electricity in Nigeria dates back to 1896. The country's power plants have the potential to generate 12,522 MW of electric power, whereas current dispatch is about 4,000 MW; access to electrification is about 60%, with consumption at 0.14 MWh/capita. The government embarked on energy reforms to mitigate energy poverty. The reform targeted the provision of electricity access to 75% of the population by 2020 and 90% by 2030. Total electricity demand is projected to grow by a factor of 5 by 2035, meaning that Nigeria will require almost 530 TWh of electricity, which can be delivered through generators with a capacity of 65 GW. Analogously, Nigeria's geographical location places it in an advantageous position as a source of solar energy; a high-sunshine belt is evident across the country. The implication is that the far North, where energy poverty is high, receives about twice the solar radiation of southern Nigeria. Hence, there is a 66% chance of generating solar electricity at 11,850 × 10³ GWh per year, which is one hundred times the current electricity consumption rate in the country. Harvesting these huge potentials may be a mirage if entrepreneurs in the solar panel business are left with conventional business models that are not uncertainty-resilient. Currently, business entities in RE in Nigeria are uncertain about access to the national grid, the purchasing potential of cooperating organizations, currency fluctuations, and interest rate increases. Uncertainties such as project security and government policy are issues entrepreneurs must navigate to remain sustainable in the solar panel industry in Nigeria. The aim of this paper is to identify how entrepreneurial firms consider uncertainties in developing workable business models for commercializing solar energy projects in Nigeria. 
In an attempt to develop a novel business model, the paper investigated how entrepreneurial firms assess and navigate uncertainties. The roles of key stakeholders in helping entrepreneurs to manage uncertainties in the Nigerian RE sector were probed in the ongoing study. The study explored empirical uncertainties that are peculiar to RE entrepreneurs in Nigeria. A mixed-mode of research was embraced, using qualitative data from face-to-face interviews conducted with solar energy entrepreneurs and experts drawn from key stakeholders. Content analysis of the interviews was done using ATLAS.ti 9, a qualitative analysis tool. The results suggested that all stakeholders are required to synergize in developing an uncertainty-resilient business model. It was opined that RE entrepreneurs need modifications to the business recommendations encapsulated in Nigeria's energy policy to strengthen their capability to deliver solar energy solutions to the yearning Nigerian public.

Keywords: uncertainties, entrepreneurial, business model, solar-panel

Procedia PDF Downloads 140
992 A Comparison of Tsunami Impact to Sydney Harbour, Australia at Different Tidal Stages

Authors: Olivia A. Wilson, Hannah E. Power, Murray Kendall

Abstract:

Sydney Harbour is an iconic location with a dense population and low-lying development. On the east coast of Australia, facing the Pacific Ocean, it is exposed to several tsunamigenic trenches. This paper presents a component of the most detailed assessment to date of the potential for earthquake-generated tsunami impact on Sydney Harbour. Models in this study use dynamic tides to account for tide-tsunami interaction. Sydney Harbour’s tidal range is 1.5 m, and the spring tides from January 2015 used in the modelling for this study are close to the full tidal range. The tsunami wave trains modelled include hypothetical tsunami generated from earthquakes of moment magnitude (Mw) 7.5, 8.0, 8.5, and 9.0 from the Puysegur and New Hebrides trenches, as well as representations of the historical 1960 Chilean and 2011 Tohoku events. All wave trains are modelled for the peak wave to coincide with both a low tide and a high tide. A single wave train, representing an Mw 9.0 earthquake at the Puysegur trench, is modelled for peak waves to coincide with every hour across a 12-hour tidal phase. Using the hydrodynamic model ANUGA, results are compared according to the impact parameters of inundation area, depth variation, and current speeds. Results show that both maximum inundation area and depth variation are tide dependent. Maximum inundation area increases when coincident with a higher tide; however, hazardous inundation is only observed for the larger waves modelled: NH90high and P90high. The maximum and minimum depths are deeper on higher tides and shallower on lower tides. The difference between maximum and minimum depths varies across different tidal phases, although the differences are slight. Maximum current speeds are shown to be a significant hazard for Sydney Harbour; however, they do not show consistent patterns according to tide-tsunami phasing. 
The maximum current speed hazard is shown to be greater in specific locations such as Spit Bridge, a narrow channel with extensive marine infrastructure. The results presented for Sydney Harbour are novel, and the conclusions are consistent with previous modelling efforts in the greater area. It is shown that tide must be a consideration for both tsunami modelling and emergency management planning. Modelling with peak tsunami waves coinciding with a high tide would be a conservative approach; however, it must be considered that maximum current speeds may be higher on other tides.

Keywords: emergency management, sydney, tide-tsunami interaction, tsunami impact

Procedia PDF Downloads 235
991 Application of the State of the Art of Hydraulic Models to Manage Coastal Problems, Case Study: The Egyptian Mediterranean Coast Model

Authors: Alsayed Ibrahim Diwedar, Ahmed ElKut, Mohamed Yossef

Abstract:

Coastal problems are stressing the coastal environment due to its complexity. The dynamic interaction between the sea and the land, in addition to human interventions and activities, results in serious problems that threaten coastal areas worldwide. This makes the coastal environment highly vulnerable to natural processes like flooding and erosion, and to the impact of human activities such as pollution. Protecting and preserving this vulnerable coastal zone with its valuable ecosystems calls for addressing coastal problems, which will ultimately support the sustainability of coastal communities for current and future generations. Consequently, applying suitable management strategies and sustainable development that consider the unique characteristics of the coastal system is a must. The coastal management philosophy aims to resolve the conflicts of interest between human development activities and this dynamic nature. Modeling emerges as a successful tool that supports decision-makers, engineers, and researchers in better management practices, and modeling tools have proven accurate and reliable in prediction. With the capability to integrate data from various sources such as bathymetric surveys, satellite images, and meteorological data, modeling offers engineers and scientists the possibility to understand this complex dynamic system and gain in-depth insight into the interaction between natural and human-induced factors, enabling decision-makers to make informed choices and develop effective strategies for sustainable development and risk mitigation. The application of modeling tools supports the evaluation of various scenarios by affording the possibility to simulate and forecast different coastal processes, from hydrodynamic and wave actions to the resulting flooding and erosion. 
The state-of-the-art application of modeling tools in coastal management allows for better understanding and predicting coastal processes, optimizing infrastructure planning and design, supporting ecosystem-based approaches, assessing climate change impacts, managing hazards, and finally facilitating stakeholder engagement. This paper emphasizes the role of hydraulic models in enhancing the management of coastal problems by discussing the diverse applications of modeling in coastal management. It highlights the role of modelling in understanding complex coastal processes and predicting outcomes, and the importance of informing decision-makers with modeling results, which give technical and scientific support for achieving sustainable coastal development and protection.

Keywords: coastal problems, coastal management, hydraulic model, numerical model, physical model

Procedia PDF Downloads 16
990 The Potential Involvement of Platelet Indices in Insulin Resistance in Morbid Obese Children

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

The association between insulin resistance (IR) and hematological parameters has long been a matter of interest. Within this context, body mass index (BMI), red blood cells, white blood cells, and platelets have been involved in this discussion. Platelet-related parameters associated with IR may be useful indicators for the identification of IR. Platelet indices such as mean platelet volume (MPV), platelet distribution width (PDW), and plateletcrit (PCT) are being questioned for their possible association with IR. The aim of this study was to investigate the association between platelet (PLT) count as well as PLT indices and the surrogate indices used to determine IR in morbid obese (MO) children. A total of 167 children participated in the study. Three groups were constituted; the number of cases was 34, 97, and 36 children in the normal BMI (N-BMI), MO, and metabolic syndrome (MetS) groups, respectively. Sex- and age-dependent BMI-based percentile tables prepared by the World Health Organization were used for the definition of morbid obesity, and MetS criteria were determined. BMI values, homeostatic model assessment for IR (HOMA-IR), alanine transaminase-to-aspartate transaminase ratio (ALT/AST), and diagnostic obesity notation model assessment laboratory (DONMA-lab) index values were computed. PLT count and indices were analyzed using an automated hematology analyzer. Data were collected for statistical analysis using SPSS for Windows. Arithmetic means and standard deviations were calculated. Mean values of PLT-related parameters in the control and study groups were compared by one-way ANOVA followed by Tukey post hoc tests to determine whether a significant difference exists among the groups. Correlation analyses between PLT and the IR indices were performed. A p-value < 0.05 was accepted as statistically significant. Increased values were detected for PLT (p < 0.01) and PCT (p > 0.05) in the MO group compared to those observed in children with N-BMI. 
Significant increases in PLT (p < 0.01) and PCT (p < 0.05) were observed in the MetS group in comparison with the values obtained in children with N-BMI. Significantly lower MPV and PDW values were obtained in the MO group compared to the control group (p < 0.01). HOMA-IR (p < 0.05), DONMA-lab index (p < 0.001), and ALT/AST (p < 0.001) values in the MO and MetS groups were significantly increased compared to the N-BMI group. On the other hand, DONMA-lab index values also differed between the MO and MetS groups (p < 0.001). In the MO group, PLT was negatively correlated with MPV and PDW values. These correlations were not observed in the N-BMI group. None of the IR indices exhibited a correlation with PLT and PLT indices in the N-BMI group. HOMA-IR showed significant correlations with both PLT and PCT in the MO group. All three IR indices were well correlated with each other in all groups. These findings point to a missing link between IR and PLT activation. In conclusion, PLT and PCT may be related to IR, in addition to their identities as hemostasis markers, during morbid obesity. Our findings suggest that the DONMA-lab index appears to be the best surrogate marker for IR due to its ability to discriminate between morbid obesity and MetS.
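The group comparison described in this abstract rests on a one-way ANOVA across three groups. As a minimal sketch of that computation, the snippet below derives the F statistic from first principles on invented toy platelet counts (these numbers are illustrative, not the study's data, and the Tukey post hoc step is omitted):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    group_means = [sum(g) / len(g) for g in groups]
    # Sum of squares between groups, weighted by group size
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    # Sum of squares within groups
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical platelet counts (x10^9/L) for three toy groups
f_stat = one_way_anova_f([[250, 260, 255], [280, 290, 285], [300, 310, 305]])
```

A large F relative to the F distribution with (k-1, n-k) degrees of freedom is what licenses the pairwise Tukey comparisons the abstract reports.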

Keywords: children, insulin resistance, metabolic syndrome, plateletcrit, platelet indices

Procedia PDF Downloads 102
989 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea

Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng

Abstract:

During the last decades, research interest in the planning, scheduling, and control of emergency response operations, especially the rescue and evacuation of people from the dangerous zone of marine accidents, has increased dramatically. Any delay until the survivors (called 'targets') are found and saved may cause loss or damage whose extent depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of people saved and/or minimize the search cost under restrictions on the number of people saved within the allowable response time. We consider a special situation in which the autonomous mobile robots (AMR), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, as they are guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that in unknown environments, the AMR's search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, where a target object is not discovered ('overlooked') by the AMR's sensors even though the AMR is in a close neighborhood of the target, and (ii) a 'false-positive' detection error, also known as 'a false alarm', in which a clean place or area is wrongly classified by the AMR's sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding local-optimal strategies. 
A specificity of the considered operational research problem, in comparison with the traditional Kadane-De Groot-Stone search models, is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before selecting any next location. We provide a fast approximation algorithm for finding the AMR route, adopting a greedy search strategy in which, at each step, the on-board computer computes a current search effectiveness value for each location in the zone and searches the location with the highest search effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and corresponding algorithm.

Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea

Procedia PDF Downloads 166
988 Understanding Evidence Dispersal Caused by the Effects of Using Unmanned Aerial Vehicles in Active Indoor Crime Scenes

Authors: Elizabeth Parrott, Harry Pointon, Frederic Bezombes, Heather Panter

Abstract:

Unmanned aerial vehicles (UAVs) are having a profound effect on policing, forensic, and fire service procedures worldwide. These intelligent devices have already proven useful in photographing and recording large-scale outdoor and indoor sites using orthomosaic and three-dimensional (3D) modelling techniques, for the purpose of capturing and recording sites during and post-incident. UAVs are becoming an established tool, as they extend the reach of the photographer and offer new perspectives without the expense and restrictions of deploying full-scale aircraft. 3D reconstruction quality is directly linked to the resolution of captured images; therefore, close-proximity flights are required for more detailed models. As technology advances, deployment of UAVs in confined spaces is becoming more common. With this in mind, this study investigates the effects of UAV operation within active crime scenes with regard to the dispersal of particulate evidence. To date, little consideration has been given to the potential effects of using UAVs within active crime scenes aside from a legislative point of view. Although the technology can potentially reduce the likelihood of contamination by replacing some of the roles of investigating practitioners, there is a risk of evidence dispersal caused by the strong airflow beneath the UAV from the downwash of the propellers. The initial results of this study are therefore presented to determine the flight height of least effect and the commercial propeller type that generates the smallest amount of disturbance, based on the dataset tested. In this study, a range of commercially available 4-inch propellers was chosen as a starting point due to their common availability and small size, which make them well suited for operation within confined spaces. 
To perform the testing, a rig was configured to support a single motor and propeller, powered by a standalone mains power supply and controlled via a microcontroller. This was to mimic a complete throttle cycle and control the device to ensure repeatability; removing the variances of battery packs and complex UAV structures allowed for a more robust setup. Therefore, the only changing factors were the propeller and operating height. The results were calculated via computer vision analysis of the recorded dispersal of the sample particles placed below the arm-mounted propeller. The aim of this initial study is to give practitioners an insight into the technology to use when operating within confined spaces, as well as to recognize some of the issues caused by UAVs within active crime scenes.

Keywords: dispersal, evidence, propeller, UAV

Procedia PDF Downloads 158
987 An Effective Modification to Multiscale Elastic Network Model and Its Evaluation Based on Analyses of Protein Dynamics

Authors: Weikang Gong, Chunhua Li

Abstract:

Dynamics plays an essential role in the function exertion of proteins. The elastic network model (ENM), a harmonic potential-based and cost-effective computational method, is a valuable and efficient tool for characterizing the intrinsic dynamical properties encoded in biomacromolecule structures and has been widely used to detect the large-amplitude collective motions of proteins. The Gaussian network model (GNM) and the anisotropic network model (ANM) are the two most often used ENM models. In recent years, many ENM variants have been proposed. Here, we propose a small but effective modification (denoted as modified mENM) to the multiscale ENM (mENM), in which the fitting of the weights of the Kirchhoff/Hessian matrices with the least squares method (LSM) is modified, since the original fitting neglects the details of pairwise interactions. We then compare the modified mENM with the original mENM, the traditional ENM, and the parameter-free ENM (pfENM) on reproducing the dynamical properties of six representative proteins whose molecular dynamics (MD) trajectories are available in http://mmb.pcb.ub.es/MoDEL/. In the results, for B-factor prediction, mENM achieves the best performance among the four ENM models. Additionally, it is noted that with the weights of the multiscale Kirchhoff/Hessian matrices modified, the modified mGNM/mANM still performs much better than the corresponding traditional ENM and pfENM models. As to the dynamical cross-correlation map (DCCM) calculation, taking the data obtained from MD trajectories as the standard, mENM performs the worst, while the results produced by the modified mENM and pfENM models are close to those from the MD trajectories, with the latter a little better than the former. Generally, the ANMs perform better than the corresponding GNMs, except for the mENM. Thus, pfANM and the modified mANM, especially the former, have an excellent performance in dynamical cross-correlation calculation. 
Compared with the GNMs (except for mGNM), the corresponding ANMs can capture quite a number of positive correlations for residue pairs separated by nearly the largest distances, which may be due to the consideration of anisotropy in the ANMs. Furthermore, and encouragingly, the modified mANM displays the best performance in capturing the functional motional modes, followed by the pfANM and traditional ANM models, while mANM fails in all the cases. This suggests that the consideration of long-range interactions is critical for ANM models to produce protein functional motions. Based on these analyses, the modified mENM is a promising method for capturing the multiple dynamical characteristics encoded in protein structures. This work is helpful for strengthening the understanding of the elastic network model and provides a valuable guide for researchers utilizing the model to explore protein dynamics.
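As a concrete illustration of the GNM side of this model family, the sketch below builds the Kirchhoff (connectivity) matrix from Cα coordinates and reads predicted relative fluctuations (proportional to B-factors) off the diagonal of its pseudo-inverse. This is the standard single-cutoff GNM, not the authors' modified mENM, and the 7 Å cutoff is just a conventional choice:

```python
import numpy as np

def gnm_bfactors(coords, cutoff=7.0):
    """Relative B-factor profile from the diagonal of the
    pseudo-inverse of the GNM Kirchhoff matrix."""
    n = len(coords)
    # Pairwise Calpha distances
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Off-diagonal: -1 for residue pairs within the cutoff
    gamma = -(d < cutoff).astype(float)
    np.fill_diagonal(gamma, 0.0)
    # Diagonal: number of contacts of each residue
    np.fill_diagonal(gamma, -gamma.sum(axis=1))
    ginv = np.linalg.pinv(gamma)  # pseudo-inverse skips the zero mode
    return np.diag(ginv)          # proportional to predicted fluctuations

# Toy chain of three residues in a line, 5 A apart: chain ends
# are expected to fluctuate more than the middle residue.
b = gnm_bfactors(np.array([[0.0, 0, 0], [5.0, 0, 0], [10.0, 0, 0]]))
```

Multiscale variants such as mENM replace the single binary cutoff with distance-dependent weights; the modification discussed in the abstract concerns how those weights are fitted.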

Keywords: elastic network model, ENM, multiscale ENM, molecular dynamics, parameter-free ENM, protein structure

Procedia PDF Downloads 115
986 Grain Size Statistics and Depositional Pattern of the Ecca Group Sandstones, Karoo Supergroup in the Eastern Cape Province, South Africa

Authors: Christopher Baiyegunhi, Kuiwu Liu, Oswald Gwavava

Abstract:

Grain size analysis is a vital sedimentological tool used to unravel the hydrodynamic conditions, mode of transportation, and deposition of detrital sediments. In this study, detailed grain-size analysis was carried out on thirty-five sandstone samples from the Ecca Group in the Eastern Cape Province of South Africa. Grain-size statistical parameters, bivariate analysis, linear discriminant functions, Passega diagrams, and log-probability curves were used to reveal the depositional processes, sedimentation mechanisms, and hydrodynamic energy conditions and to discriminate different depositional environments. The grain-size parameters show that most of the sandstones are very fine to fine grained, moderately well sorted, mostly near-symmetrical, and mesokurtic in nature. The abundance of very fine to fine grained sandstones indicates the dominance of a low energy environment. The bivariate plots show that the samples are mostly grouped, except for the Prince Albert samples, which show a scattered trend due either to a mixture of two modes in equal proportion in bimodal sediments or to good sorting in unimodal sediments. The linear discriminant function (LDF) analysis is dominantly indicative of turbidity current deposits under shallow marine environments for samples from the Prince Albert, Collingham, and Ripon Formations, while the samples from the Fort Brown Formation are fluvial (deltaic) deposits. The graphic mean values show the dominance of fine sand-size particles, which points to relatively low energy conditions of deposition. In addition, the LDF results point to low energy conditions during the deposition of the Prince Albert, Collingham, and part of the Ripon Formation (Pluto Vale and Wonderfontein Shale Members), whereas the Trumpeters Member of the Ripon Formation and the overlying Fort Brown Formation accumulated under high energy conditions. 
The CM pattern shows a clustered distribution of sediments in the PQ and QR segments, indicating that the sediments were deposited mostly by suspension, rolling/saltation, and graded suspension. Furthermore, the plots also show that the sediments were mainly deposited by turbidity currents. Visher diagrams show the variability of hydraulic depositional conditions for the Permian Ecca Group sandstones. Saltation was the major process of transportation, although suspension and traction also played some role during deposition; the sediments were mainly in saltation and suspension before being deposited.
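Graphic statistics of the kind used here (mean, sorting, skewness, kurtosis) are conventionally computed from phi-scale cumulative percentiles with the Folk and Ward (1957) formulas. The sketch below shows that computation on invented percentile values, not the paper's measured samples:

```python
def folk_ward_stats(phi):
    """Folk & Ward (1957) graphic statistics from a dict mapping the
    cumulative percentiles 5, 16, 25, 50, 75, 84, 95 to phi values."""
    p = phi
    graphic_mean = (p[16] + p[50] + p[84]) / 3
    sorting = (p[84] - p[16]) / 4 + (p[95] - p[5]) / 6.6
    skewness = ((p[16] + p[84] - 2 * p[50]) / (2 * (p[84] - p[16]))
                + (p[5] + p[95] - 2 * p[50]) / (2 * (p[95] - p[5])))
    kurtosis = (p[95] - p[5]) / (2.44 * (p[75] - p[25]))
    return graphic_mean, sorting, skewness, kurtosis

# Toy symmetric grain-size distribution (phi units)
m, s, sk, k = folk_ward_stats(
    {5: 1.0, 16: 2.0, 25: 2.5, 50: 3.0, 75: 3.5, 84: 4.0, 95: 5.0})
```

A graphic mean of 2-3 phi corresponds to the fine sand class, a skewness near zero to the near-symmetrical distributions, and a kurtosis near 1 to the mesokurtic character reported for these sandstones.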

Keywords: grain size analysis, hydrodynamic condition, depositional environment, Ecca Group, South Africa

Procedia PDF Downloads 476
985 Analysis of Non-Conventional Roundabout Performance in Mixed Traffic Conditions

Authors: Guneet Saini, Shahrukh, Sunil Sharma

Abstract:

Traffic congestion is the most critical issue faced by the transportation profession today. Over the past few years, roundabouts have been recognized globally as a measure to promote efficiency at intersections. In developing countries like India, this type of intersection still faces many issues, such as bottleneck situations, long queues, and increased waiting times due to increasing traffic, which in turn affect the performance of the entire urban network. This research is a case study of a roundabout that is non-conventional in terms of geometric design, in a small town in India. Such roundabouts should be analyzed for their functionality in the mixed traffic conditions prevalent in many developing countries. Microscopic traffic simulation is an effective tool to analyze traffic conditions and estimate various measures of the operational performance of intersections, such as capacity, vehicle delay, queue length, and Level of Service (LOS) of an urban roadway network. This study involves the analysis of an unsymmetrical, non-circular, 6-legged roundabout known as “Kala Aam Chauraha” in the small town of Bulandshahr in Uttar Pradesh, India, using the VISSIM simulation package, the most widely used software for microscopic traffic simulation. For coding in VISSIM, data were collected from the site during the morning and evening peak hours of a weekday and then analyzed for base model building. The model was calibrated on driving behavior and vehicle parameters, and an optimal set of calibrated parameters was obtained, followed by validation to obtain a base model that can replicate real field conditions. This calibrated and validated model was then used to analyze the prevailing operational traffic performance of the roundabout, which was then compared with a proposed alternative to improve the efficiency of the roundabout network and to accommodate pedestrians in the geometry. 
The study results show that the proposed alternative is an improvement over the present roundabout, as it considerably reduces congestion, vehicle delay, and queue length and hence successfully improves roundabout performance without compromising pedestrian safety. The study proposes similar designs for the modification of existing non-conventional roundabouts experiencing excessive delays and queues in order to improve their efficiency, especially in developing countries. From this study, it can be concluded that the current geometry of such roundabouts needs to be improved to ensure better traffic performance and the safety of drivers and pedestrians negotiating the intersection; hence, this proposal may be considered a best fit.

Keywords: operational performance, roundabout, simulation, VISSIM

Procedia PDF Downloads 136
984 Design of Photonic Crystal with Defect Layer to Eliminate Interface Corrugations for Obtaining Unidirectional and Bidirectional Beam Splitting under Normal Incidence

Authors: Evrim Colak, Andriy E. Serebryannikov, Pavel V. Usik, Ekmel Ozbay

Abstract:

Working with a dielectric photonic crystal (PC) structure that does not include surface corrugations, unidirectional transmission and dual-beam splitting are observed under normal incidence as a result of the strong diffractions caused by the embedded defect layer. The defect layer has twice the period of the regular PC segments that sandwich it. Although the PC has an even number of rows, the structural symmetry is broken due to the asymmetric placement of the defect layer with respect to the symmetry axis of the regular PC. The simulations verify that efficient splitting and the occurrence of strong diffractions are related to the dispersion properties of the Floquet-Bloch modes of the photonic crystal. Unidirectional and bi-directional splitting, which are associated with asymmetric transmission, arise due to the dominant contribution of the first positive and first negative diffraction orders. The effect of the depth of the defect layer is examined by placing a single defect layer in varying rows, preserving the asymmetry of the PC. Even for a deeply buried defect layer, asymmetric transmission remains valid even if the zeroth order is not coupled. This transmission is due to evanescent waves, which reach the deeply embedded defect layer and couple to higher-order modes. In an additional selected configuration, whichever surface is illuminated, i.e., in both upper and lower surface illumination cases, the incident beam is split into two beams of equal intensity at the output surface. That is, although the structure is asymmetric, symmetric bidirectional transmission with equal transmission values is demonstrated, and the structure mimics the behavior of symmetric structures. 
Finally, simulation studies including the examination of a coupled-cavity defect for two different permittivity values (close to the permittivity values of GaAs or Si, and alumina) reveal unidirectional splitting over a wider band of operation in comparison to the bandwidth obtained in the case of a single embedded defect layer. Since the dielectric materials utilized are low-loss and weakly dispersive over a wide frequency range including microwave and optical frequencies, the studied structures should be scalable to the mentioned ranges.

Keywords: asymmetric transmission, beam deflection, blazing, bi-directional splitting, defect layer, dual beam splitting, Floquet-Bloch modes, isofrequency contours, line defect, oblique incidence, photonic crystal, unidirectionality

Procedia PDF Downloads 179
983 Thinking for Writing: Evidence of Language Transfer in Chinese ESL Learners’ Written Narratives

Authors: Nan Yang, Hye Pae

Abstract:

English as a second language (ESL) learners are often observed to have transferred traits of their first languages (L1) and habits of using their L1s to their use of English (second language, L2), and this phenomenon is coined as language transfer. In addition to the transfer of linguistic features (e.g., grammar, vocabulary, etc.), which are relatively easy to observe and quantify, many cross-cultural theorists emphasized on a much subtle and fundamental transfer existing on a higher conceptual level that is referred to as conceptual transfer. Although a growing body of literature in linguistics has demonstrated evidence of L1 transfer in various discourse genres, very limited studies address the underlying conceptual transfer that is happening along with the language transfer, especially with the extended form of spontaneous discourses such as personal narrative. To address this issue, this study situates itself in the context of Chinese ESL learners’ written narratives, examines evidence of L1 conceptual transfer in comparison with native English speakers’ narratives, and provides discussion from the perspective of the conceptual transfer. It is hypothesized that Chinese ESL learners’ English narrative strategies are heavily influenced by the strategies that they use in Chinese as a result of the conceptual transfer. Understanding language transfer cognitively is of great significance in the realm of SLA, as it helps address challenges that ESL learners around the world are facing; allow native English speakers to develop a better understanding about how and why learners’ English is different; and also shed light in ESL pedagogy by providing linguistic and cultural expectations in native English-speaking countries. 
To achieve these goals, 40 college students were recruited (20 Chinese ESL learners and 20 native English speakers) in the United States, and their written narratives on the prompt 'The most frightening experience' were collected for quantitative discourse analysis. 40 written narratives (20 in Chinese and 20 in English) were collected from the Chinese ESL learners, and 20 written narratives were collected from the native English speakers. All written narratives were coded according to a coding scheme developed by the authors prior to data collection. Descriptive statistical analyses were conducted, and the preliminary results revealed that native English speakers included more narrative elements, such as events and explicit evaluation, than Chinese ESL students did in both their English and Chinese writings; the English group also utilized more evaluation devices (i.e., physical state expressions, indirectly reported speech, delineation) than Chinese ESL students did in either language. It was also observed that Chinese ESL students included more orientation elements (i.e., the introduction of time/place, the introduction of characters) in their Chinese and English writings than the native English-speaking participants did. The findings suggest that a similar narrative strategy was used in Chinese ESL learners’ Chinese narratives and English narratives, which is taken as evidence of conceptual transfer from Chinese (L1) to English (L2). The results also indicate that distinct narrative strategies were used by Chinese ESL learners and native English speakers as a result of cross-cultural differences.

Keywords: Chinese ESL learners, language transfer, thinking-for-speaking, written narratives

Procedia PDF Downloads 114
982 A Comparison of Proxemics and Postural Head Movements during Pop Music versus Matched Music Videos

Authors: Harry J. Witchel, James Ackah, Carlos P. Santos, Nachiappan Chockalingam, Carina E. I. Westling

Abstract:

Introduction: Proxemics is the study of how people perceive and use space. It is commonly proposed that when people like or engage with a person or object, they move slightly closer to it, often quite subtly and subconsciously. Music videos are known to add entertainment value to a pop song. Our hypothesis was that adding an appropriately matched video to a pop song would lead to a net approach of the head towards the monitor screen, compared to simply listening to an audio-only version of the song. Methods: We presented two musical stimuli, in counterbalanced order, to 27 participants (ages 21.00 ± 2.89, 15 female) seated in front of a 47.5 x 27 cm monitor; all stimuli were based on music videos by the band OK Go: Here It Goes Again (HIGA, boredom ratings (0-100) = 15.00 ± 4.76, mean ± standard-error-of-the-mean, SEM) and Do What You Want (DWYW, boredom ratings = 23.93 ± 5.98), which did not differ in the boredom elicited (P = 0.21, rank-sum test). Each participant experienced each song only once: one song (counterbalanced) as audio-only and the other as a music video. Movement was measured by video-tracking using Kinovea 0.8, recorded from a lateral aspect; before beginning, each participant had a reflective motion-tracking marker placed on the outer canthus of the left eye. Analysis of the Kinovea X-Y coordinate output in comma-separated-variables format was performed in Matlab, as were non-parametric statistical tests. Results: We found that the audio-only stimuli (combined for both HIGA and DWYW, mean ± SEM, 35.71 ± 5.36) were significantly more boring than the music video versions (19.46 ± 3.83, P = 0.0066, Wilcoxon Signed Rank Test (WSRT), Cohen's d = 0.658, N = 28). We also found that participants' heads moved around twice as much during the audio-only versions (speed = 0.590 ± 0.095 mm/sec) as during the video versions (0.301 ± 0.063 mm/sec, P = 0.00077, WSRT). 
However, the participants' mean head-to-screen distances were not detectably smaller (i.e. head closer to the screen) during the music videos (74.4 ± 1.8 cm) compared to the audio-only stimuli (73.9 ± 1.8 cm, P = 0.37, WSRT); if anything, participants were slightly closer during the audio-only condition. Interestingly, the ranges of the head-to-screen distances were smaller during the music video (8.6 ± 1.4 cm) than during the audio-only version (12.9 ± 1.7 cm, P = 0.0057, WSRT), the standard deviations were also smaller (P = 0.0027, WSRT), and head height differed by 7 mm (video 116.1 ± 0.8 vs. audio-only 116.8 ± 0.8 cm above floor, P = 0.049, WSRT). Discussion: As predicted, sitting and listening to experimenter-selected pop music was more boring than when the music was accompanied by a matched, professionally made video. However, we did not find that the proxemics of the situation led to an approach towards the screen. Instead, adding video led to efforts to hold the head in a more central and upright viewing position and to suppress head fidgeting.
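The paired, non-parametric analysis described above (a Wilcoxon signed-rank test, with Cohen's d computed on the paired differences) can be sketched as follows; the ratings here are synthetic placeholders shaped to resemble the reported group means, not the study's data:

```python
# Sketch of a paired Wilcoxon signed-rank test with a paired-samples
# Cohen's d, as used in the abstract. The data are synthetic placeholders
# shaped to resemble the reported means, NOT the authors' measurements.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
audio_only = rng.normal(36, 12, 28)                # hypothetical boredom ratings (0-100)
music_video = audio_only - rng.normal(16, 8, 28)   # videos rated less boring

stat, p = wilcoxon(audio_only, music_video)

diff = audio_only - music_video
cohens_d = diff.mean() / diff.std(ddof=1)          # effect size on the paired differences

print(f"W = {stat:.1f}, p = {p:.4g}, d = {cohens_d:.2f}")
```

A paired test is the natural choice here because each participant experienced both conditions.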

Keywords: boredom, engagement, music videos, posture, proxemics

Procedia PDF Downloads 164
981 Studying Second Language Learners' Language Behavior from Conversation Analysis Perspective

Authors: Yanyan Wang

Abstract:

This paper on second language teaching and learning uses a conversation analysis (CA) approach and focuses on how second language learners of Chinese do repair when making clarification requests. In order to demonstrate their behavior in interaction, a comparison was made between native speakers of Chinese and non-native speakers of Chinese. The significance of the research is to make second language teachers and learners aware of repair and of how to seek clarification. Utilizing the methodology of CA, the research involved two sets of naturally occurring recordings, one of native speaker students and the other of non-native speaker students. Both sets of recordings were telephone talks between students and teachers. There were 50 native speaker students and 50 non-native speaker students. Through repeated listening to the recordings, the parts with repairs for clarification were selected for analysis; these included the moments in the talk when students had problems in understanding or hearing the speaker and had to seek clarification. For example, ‘Sorry, I do not understand’ and ‘Can you repeat the question?’ were used as repairs to make clarification requests. In the data, there were 43 such cases from native speaker students and 88 cases from non-native speaker students; the non-native speaker students were more likely to use repair to seek clarification. Analysis of how the students make clarification requests during conversation was carried out by investigating how the students initiated problems and how the teachers repaired them. In CA terms, this is called other-initiated self-repair (OISR), which refers to student-initiated teacher-repair in this research. 
The findings show that, in initiating repair, native speaker students pay more attention to mutual understanding (inter-subjectivity), while non-native speaker students, due to their lack of language proficiency, pay more attention to their state of knowledge (epistemics). There are three major differences: (1) native Chinese students more often initiate closed-class OISR (seeking specific information in the request), such as repeating a word or phrase from the previous turn, while non-native students more frequently initiate open-class OISR (not specifying the clarification), such as ‘sorry, I don’t understand’; (2) native speakers’ clarification requests are treated by the teacher as concerning understanding of the content, while non-native learners’ clarification requests are treated by the teacher as a language proficiency problem; (3) native speakers do not treat repair as a knowledge issue, and there is no third position in their repair sequences to close the repair, while non-native learners take the repair sequence as a time to adjust their knowledge, with a clear closing third-position token such as ‘oh’ to close the repair sequence so that the talk can return to the topic. In conclusion, this paper uses a conversation analysis approach to compare the differences between native Chinese speakers and non-native Chinese learners in their ways of conducting repair when making clarification requests. The findings are useful for future Chinese language teaching and learning, especially in teaching pragmatics such as requests.
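For the raw counts reported above (43 clarification-request repairs from 50 native speakers vs. 88 from 50 non-native speakers), a simple goodness-of-fit test shows the asymmetry is unlikely to be chance; the choice of test is ours for illustration, since the abstract reports no inferential statistics:

```python
# Chi-square goodness-of-fit on the repair counts from the abstract
# (43 native vs. 88 non-native), against an even split. Illustrative
# only: the abstract itself reports no inferential test.
from scipy.stats import chisquare

native, non_native = 43, 88
stat, p = chisquare([native, non_native])  # H0: both groups produce equal counts

print(f"chi2 = {stat:.2f}, p = {p:.4g}")
```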

Keywords: conversation analysis (CA), clarification request, second language (L2), teaching implication

Procedia PDF Downloads 252
980 Optimization of the Administration of Intravenous Medication by Reduction of the Residual Volume, Taking User-Friendliness, Cost Efficiency, and Safety into Account

Authors: A. Poukens, I. Sluyts, A. Krings, J. Swartenbroekx, D. Geeroms, J. Poukens

Abstract:

Introduction and Objectives: It has been known for many years that, with the administration of intravenous medication, a rather significant part of the infusion solution intended to be administered, the residual volume (the volume that remains in the IV line and/or infusion bag), does not reach the patient and is wasted. This can result in underdosage and a diminished therapeutic effect. Despite its important impact on the patient, the reduction of residual volume lacks attention. An optimized and clearly stated protocol concerning the reduction of residual volume in an IV line is necessary for each hospital. As described in the author's Master's thesis for the degree of Master in Hospital Pharmacy, the administration of intravenous medication can be optimized by reduction of the residual volume; effectiveness, user-friendliness, cost efficiency and safety were taken into account. Material and Methods: Through a literature study and an online questionnaire sent to all Flemish hospitals and hospitals in the Netherlands (province of Limburg), current flush methods were mapped out. In laboratory research, possible flush methods aiming to reduce the residual volume were measured. Furthermore, a self-developed experimental method to reduce the residual volume was added to the study. The current flush methods and the self-developed experimental method were compared to each other based on cost efficiency, user-friendliness and safety. Results: There is a major difference between the Flemish hospitals and those in the Netherlands (province of Limburg) concerning the approach to and method of flushing IV lines after administration of intravenous medication. The residual volumes were measured, and laboratory research showed that if flushing was performed with at least one times the equivalent of the residual volume, 95 percent of the glucose was flushed through. 
Based on the comparison, it became clear that flushing by use of a pre-filled syringe would be the most cost-efficient, user-friendly and safest method. According to the laboratory research, the self-developed experimental method is feasible and has the advantage that the remaining fraction of the medication can be administered to the patient at unchanged concentration, without dilution. Furthermore, this technique can be applied regardless of the size of the residual volume. Conclusion and Recommendations: It is advisable to revise the current infusion systems and flushing methods in most hospitals. Aside from education of the hospital staff and alignment on a uniform, substantiated protocol, an optimized and clear policy on the reduction of residual volume is necessary for each hospital. It is recommended to flush all IV lines with a rinsing fluid volume at least equivalent to the residual volume. Further laboratory and clinical research on the self-developed experimental method is needed before it can be implemented clinically in a broader setting.
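For intuition, the flushing recommendation can be compared against a toy single-compartment (well-mixed) washout model; note that the authors' bench result (95 percent of glucose cleared after one residual volume) implies clearance closer to plug flow, so the exponential model below is only a conservative lower bound, not their finding:

```python
# Toy well-mixed washout model for an IV line. Under perfect mixing,
# flushing one residual volume clears ~63%, and three volumes ~95%;
# the paper's bench data suggest real lines clear more efficiently
# (closer to plug flow), so treat this as a conservative lower bound.
import math

def fraction_cleared(flush_volume_ml: float, residual_volume_ml: float) -> float:
    """Fraction of drug cleared from the line, assuming perfect mixing."""
    return 1.0 - math.exp(-flush_volume_ml / residual_volume_ml)

print(f"{fraction_cleared(10, 10):.2f}")  # one residual volume   -> 0.63
print(f"{fraction_cleared(30, 10):.2f}")  # three residual volumes -> 0.95
```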

Keywords: intravenous medication, infusion therapy, IV flushing, residual volume

Procedia PDF Downloads 128
979 Considering International/Local Peacebuilding Partnerships: The Stoplights Analysis System

Authors: Charles Davidson

Abstract:

This paper presents the Stoplight Analysis System of Partnering Organizations Readiness, offering a structured framework to evaluate the feasibility of conflict resolution collaborations, which is especially crucial in conflict areas; it employs a colour-coded approach and specific assessment points, with implications for more informed decision-making and improved outcomes in peacebuilding initiatives. Derived from a total of 40 years of practical peacebuilding experience between the project's two researchers, as well as interviews with various other peacebuilding actors, this paper introduces the Stoplight Analysis System of Partnering Organizations Readiness, a comprehensive framework designed to facilitate effective collaboration in international/local peacebuilding partnerships by evaluating the readiness of both the potential partner organisations and the location of the proposed project. The system employs a colour-coded approach, categorising potential partnerships into three distinct indicators: Red (no-go), Yellow (requires further research), and Green (promising, go ahead). Within each category, specific points are identified for assessment, guiding decision-makers in evaluating the feasibility and potential success of collaboration. The Red category signals significant barriers, prompting an immediate stop to consideration of the partnership. The Yellow category encourages deeper investigation to determine whether potential issues can be mitigated, while the Green category signifies organisations deemed ready for collaboration. This systematic and structured approach empowers decision-makers to make informed choices, enhancing the likelihood of successful and mutually beneficial partnerships. Methodologically, this paper draws on interviews with peacebuilders from around the globe, scholarly research on extant strategies, and a collaborative review of programming by the project's two authors from their own time in the field. 
This method has been employed as a formalised model for the past two years across a range of partnership considerations and has been adjusted in light of its field experimentation. This research holds significant importance in the field of conflict resolution, as it provides a systematic and structured approach to evaluating peacebuilding partnerships. In conflict-affected regions, where the dynamics are complex and challenging, the Stoplight Analysis System offers decision-makers a practical tool to assess the readiness of partnering organisations. This approach can enhance the efficiency of conflict resolution efforts by ensuring that resources are directed towards partnerships with a higher likelihood of success, ultimately contributing to more effective and sustainable peacebuilding outcomes.
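The red/yellow/green screening logic described above can be sketched as a small decision function; the two inputs used here (counts of hard barriers and of unresolved questions) are invented placeholders, since the abstract does not enumerate the actual assessment points:

```python
# Minimal sketch of the Stoplight screen's colour-coded logic. The
# inputs (red_flags, open_questions) are hypothetical placeholders;
# the paper's concrete assessment points are not listed in the abstract.
from enum import Enum

class Light(Enum):
    RED = "no-go"
    YELLOW = "requires further research"
    GREEN = "promising, go ahead"

def assess(red_flags: int, open_questions: int) -> Light:
    if red_flags > 0:        # any significant barrier halts consideration immediately
        return Light.RED
    if open_questions > 0:   # unresolved issues call for deeper investigation
        return Light.YELLOW
    return Light.GREEN       # ready for collaboration

print(assess(red_flags=0, open_questions=2).value)  # requires further research
```

The ordering of the checks mirrors the framework: a single Red criterion overrides everything else, regardless of how promising the partnership otherwise looks.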

Keywords: collaboration, conflict resolution, partnerships, peacebuilding

Procedia PDF Downloads 60
978 Distinguishing Substance from Spectacle in Violent Extremist Propaganda through Frame Analysis

Authors: John Hardy

Abstract:

Over the last decade, the world has witnessed an unprecedented rise in the quality and availability of violent extremist propaganda. This phenomenon has been fueled primarily by three interrelated trends: the rapid adoption of online content mediums by creators of violent extremist propaganda, the increasing sophistication of violent extremist content production, and greater coordination of content and action across violent extremist organizations. In particular, the self-styled ‘Islamic State’ attracted widespread attention from its supporters and detractors alike by mixing shocking video and imagery with substantive ideological and political content. Although this practice was widely condemned for its brutality, it proved effective at engaging a variety of international audiences and encouraging potential supporters to seek further information. The reasons for the noteworthy success of this kind of shock-value propaganda content remain unclear, despite many governments’ attempts to produce counterpropaganda. This study examines violent extremist propaganda distributed by five terrorist organizations between 2010 and 2016, using material released by the Al Hayat Media Center of the Islamic State, Boko Haram, Al Qaeda, Al Qaeda in the Arabian Peninsula, and Al Qaeda in the Islamic Maghreb. The time period covers all issues of the infamous publications Inspire and Dabiq, as well as the most shocking video content released by the Islamic State and its affiliates. The study uses frame analysis to distinguish thematic from symbolic content in violent extremist propaganda by contrasting the ways that substantive ideological issues were framed against the use of symbols and violence to garner attention and to stylize the propaganda. 
The results demonstrate that thematic content focuses significantly on diagnostic frames, which explain violent extremist groups’ causes, and prognostic frames, which propose solutions for addressing or rectifying the causes shared by groups and their sympathizers. Conversely, symbolic violence is primarily stylistic and rarely linked to thematic issues or motivational framing. Frame analysis provides a useful preliminary tool for disentangling substantive ideological and political content from stylistic brutality in violent extremist propaganda. This gives governments and researchers a method for better understanding the framing and content used to design the narratives and propaganda materials that promote violent extremism around the world. Increased capacity to process and understand violent extremist narratives will further enable governments and non-governmental organizations to develop effective counternarratives which promote non-violent solutions to extremists’ grievances.

Keywords: countering violent extremism, counternarratives, frame analysis, propaganda, terrorism, violent extremism

Procedia PDF Downloads 170
977 Effect of Fuel Type on Design Parameters and Atomization Process for Pressure Swirl Atomizer and Dual Orifice Atomizer for High Bypass Turbofan Engine

Authors: Mohamed K. Khalil, Mohamed S. Ragab

Abstract:

Atomizers are used in many engineering applications, including diesel engines, petrol engines and spray combustion in furnaces, as well as gas turbine engines. These atomizers are used to increase the specific surface area of the fuel, which achieves a high rate of fuel mixing and evaporation. In all combustion systems, reduction in mean drop size is a challenge; it has many advantages, since it leads to rapid and easier ignition, a higher volumetric heat release rate, a wider burning range and lower exhaust concentrations of pollutant emissions. Pressure atomizers come in several design configurations, such as the swirl atomizer (simplex), dual orifice, spill return, plain orifice, duplex and fan spray. Simplex pressure atomizers are the most common type of all. Among all types of atomizers, pressure swirl types form a special category, owing to their quality of atomization, reliability of operation, simplicity of construction and low expenditure of energy. However, the disadvantages of these atomizers are that they require very high injection pressure and have a low discharge coefficient, owing to the fact that the air core covers the majority of the atomizer orifice. To overcome these problems, the dual orifice atomizer was designed. This paper proposes a detailed mathematical design procedure for both the pressure swirl atomizer (simplex) and the dual orifice atomizer, examines the effects of varying fuel type, and makes a clear comparison between the two types. Using five types of fuel (JP-5, JA1, JP-4, Diesel and Bio-Diesel) as a case study, it reveals the effect of changing fuel type and fuel properties on atomizer design and spray characteristics, which in turn affect combustion process parameters: Sauter Mean Diameter (SMD), spray cone angle and sheet thickness, with the discharge coefficient varying from 0.27 to 0.35 during takeoff for high bypass turbofan engines. 
The spray performance of the pressure swirl fuel injector was compared to that of the dual orifice fuel injector at the same differential pressure and discharge coefficient using Excel. The results were analyzed and processed to form the final reliability results for fuel injectors in high bypass turbofan engines. The results show that the Sauter Mean Diameter (SMD) of the dual orifice atomizer is larger than that of the pressure swirl atomizer, the film thickness (h) of the dual orifice atomizer is less than that of the pressure swirl atomizer, and the spray cone angle (α) of the pressure swirl atomizer is larger than that of the dual orifice atomizer.
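As a flavour of the kind of calculation such a design procedure involves, one widely used empirical correlation for the SMD of a pressure-swirl atomizer is Lefebvre's; the abstract does not state which correlations the authors used, and the fluid properties below are hypothetical kerosene-like values, not the paper's inputs:

```python
# Lefebvre's empirical SMD correlation for a pressure-swirl atomizer:
#   SMD = 2.25 * sigma^0.25 * mu_l^0.25 * mdot_l^0.25 * dP^-0.5 * rho_a^-0.25
# Shown as an illustrative stand-in; the paper's own design model is not
# reproduced in the abstract. Property values below are hypothetical.

def smd_lefebvre(sigma: float, mu_l: float, mdot_l: float,
                 dp_l: float, rho_a: float) -> float:
    """Sauter Mean Diameter (m).

    sigma : liquid surface tension (N/m)
    mu_l  : liquid dynamic viscosity (Pa.s)
    mdot_l: liquid mass flow rate (kg/s)
    dp_l  : injection pressure drop (Pa)
    rho_a : ambient air density (kg/m^3)
    """
    return 2.25 * sigma**0.25 * mu_l**0.25 * mdot_l**0.25 * dp_l**-0.5 * rho_a**-0.25

# Kerosene-like placeholder values:
smd = smd_lefebvre(sigma=0.026, mu_l=0.0016, mdot_l=0.01, dp_l=1.0e6, rho_a=1.2)
print(f"SMD = {smd * 1e6:.1f} micrometers")
```

Note the dp_l^-0.5 term: raising the injection pressure drop is the main lever for finer atomization, consistent with the high-pressure requirement discussed above.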

Keywords: gas turbine engines, atomization process, Sauter mean diameter, JP-5

Procedia PDF Downloads 161
976 Combining Nitrocarburisation and Dry Lubrication for Improving Component Lifetime

Authors: Kaushik Vaideeswaran, Jean Gobet, Patrick Margraf, Olha Sereda

Abstract:

Nitrocarburisation is a surface hardening technique often applied to improve the wear resistance of steel surfaces. It is considered a promising solution in comparison with other processes, such as flame spraying, owing to the formation of a diffusion layer, which provides mechanical integrity, as well as to its cost-effectiveness. To improve other tribological properties of the surface, such as the coefficient of friction (COF), dry lubricants are utilized. Currently, the lifetime of steel components in many applications using either of these techniques individually is limited by their respective drawbacks: a high COF for nitrocarburized surfaces, and low wear resistance for dry lubricant coatings. To this end, the current study involves the creation of a hybrid surface by impregnating a nitrocarburized surface with a dry lubricant. The mechanical strength and hardness of Gerster SA's nitrocarburized surfaces, combined with impregnation of the porous outermost layer with a solid lubricant, create a hybrid surface possessing both outstanding wear resistance and a low friction coefficient, with high adherence to the substrate. Gerster SA has state-of-the-art technology for the surface hardening of various steels. Through their expertise in the field, the nitrocarburizing process parameters (atmosphere, temperature, dwelling time) were optimized to obtain samples that have a distinct porous structure (in terms of size, shape, and density), as observed by metallographic and microscopic analyses. The porosity thus obtained is suitable for the impregnation of a dry lubricant. A commercially available dry lubricant with a thermoplastic matrix was employed for the impregnation process, which was optimized to obtain a void-free interface with the surface of the nitrocarburized layer (henceforth called the hybrid surface). 
In parallel, metallic samples without nitrocarburisation were also impregnated with the same dry lubricant as a reference (henceforth called the reference surface). The reference and nitrocarburized surfaces, with and without the dry lubricant, were tested for their tribological behavior by sliding against a quenched steel ball using a nanotribometer. Without any lubricant, the nitrocarburized surface showed a wear rate 5x lower than the reference metal. In the presence of a thin film of dry lubricant (< 2 micrometers) and under the application of high loads (500 mN, or ~800 MPa), the COF for the reference surface increased from ~0.1 to > 0.3 within 120 m, while the hybrid surface retained a COF < 0.2 for over 400 m of sliding. In addition, while the steel ball sliding against the reference surface showed heavy wear, the corresponding ball sliding against the hybrid surface showed very limited wear. Observations of the sliding tracks on the hybrid surface using electron microscopy show the presence of the nitrocarburized nodules as well as the lubricant, whereas no traces of the lubricant were found in the sliding track on the reference surface. In this manner, the clear advantage of combining nitrocarburisation with the impregnation of a dry lubricant to form a hybrid surface has been demonstrated.
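The ball-on-flat results above are the kind of data from which a specific wear rate in the Archard sense is computed; the load and sliding distance below mirror the test conditions quoted in the abstract, but the wear volume is an invented placeholder, not a measured value:

```python
# Specific wear rate k = wear volume / (normal load x sliding distance),
# in the Archard sense. Load (500 mN) and distance (400 m) match the
# abstract's test conditions; the wear volume is a made-up placeholder.

def specific_wear_rate(wear_volume_mm3: float, load_n: float, distance_m: float) -> float:
    """Specific wear rate k in mm^3 / (N.m)."""
    return wear_volume_mm3 / (load_n * distance_m)

k = specific_wear_rate(wear_volume_mm3=0.02, load_n=0.5, distance_m=400.0)
print(f"k = {k:.2e} mm^3/(N.m)")
```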

Keywords: dry lubrication, hybrid surfaces, improved wear resistance, nitrocarburisation, steels

Procedia PDF Downloads 119
975 The Prediction of Reflection Noise and Its Reduction by Shaped Noise Barriers

Authors: I. L. Kim, J. Y. Lee, A. K. Tekile

Abstract:

As a consequence of Korea's very high urbanization rate, traffic noise damage in areas congested with population and facilities is steadily increasing. Current environmental noise level data for the country's major cities show that noise levels exceed the standards set for both day and night times. This research comparatively analysed soundproof panel shapes and design factors in search of those that minimize reflection noise. In addition to the normal flat-type panel shape, the reflection noise reduction of swelling-type, combined swelling-and-curved-type, and screen-type panels was evaluated. The noise source model Nord 2000, which often provides abundant information compared to models for similar purposes, was used in the study to determine the overall noise level. Based on vehicle categorization in Korea, the noise levels for varying frequencies from different heights of the sound source (the directivity heights of the Harmonoise model) were calculated for simulation. Each simulation was made using the ray-tracing method. The noise level was also calculated using the noise prediction program SoundPlan 7.2 for comparison. Noise level predictions were made at receiving points 15 m (R1) and 30 m (R2) away, and at the middle of the road at 2 m (R3). When the noise barriers of each shape were modelled and the prediction program was run with the noise source on the 2nd lane (of the 6 lanes considered) nearest the noise barrier, the reflection noise slightly decreased or increased for all noise barriers. At R1, especially in the case of the screen-type noise barriers, no reduction effect was predicted under any condition. However, the swelling-type showed a decrease of 0.7~1.2 dB at R1, the best reduction effect among the tested noise barriers. 
Compared to other forms of noise barriers, the swelling-type was thought to be the most suitable for reducing reflection noise; however, since a slight increase was predicted at R2, further research based on a more sophisticated categorization of the related design factors is necessary. Moreover, as the swellings are difficult to produce and the modules are smaller than other panels, swelling-type noise barriers are challenging to install. If these problems are solved, their range of application will not be more limited than that of other types of noise barriers. Hence, when a swelling-type noise barrier is installed in a downtown region where the amount of traffic increases every day, it will both secure visibility through the transparent walls and diminish noise pollution due to reflection. Moreover, when decorated with shapes and designs, such noise barriers will be more visually attractive than a flat-type one and will thus alleviate psychological hardships related to noise, in addition to the physical soundproofing function of the panels.
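Because the predicted differences above are fractions of a decibel, it helps to recall that sound pressure levels add energetically, not arithmetically; a standard acoustics helper (not part of the paper's models) is:

```python
# Energetic (incoherent) summation of sound pressure levels in dB:
#   L_total = 10 * log10( sum_i 10^(L_i / 10) )
# Standard acoustics arithmetic, not taken from the paper.
import math

def combine_db(levels_db):
    """Combine incoherent sound pressure levels (dB)."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

# Two equal 60 dB sources combine to ~63 dB (doubling the energy adds 3 dB):
print(f"{combine_db([60.0, 60.0]):.1f} dB")  # 63.0 dB
```

This is why a reflected path only ~1 dB below the direct path can still raise the total level noticeably at a receiver.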

Keywords: reflection noise, shaped noise barriers, sound proof panel, traffic noise

Procedia PDF Downloads 504
974 A Comparison and Discussion of Modern Anaesthetic Techniques in Elective Lower Limb Arthroplasties

Authors: P. T. Collett, M. Kershaw

Abstract:

Introduction: The discussion regarding which method of anesthesia provides better results for lower limb arthroplasty is a continuing debate. Multiple meta-analyses have been performed with no clear consensus. The current recommendation is to use neuraxial anesthesia for lower limb arthroplasty; however, the evidence to support this decision is weak. The Enhanced Recovery After Surgery (ERAS) society has recommended that either technique can be used as part of a multimodal anesthetic regimen. A local study was performed to see whether current anesthetic practice correlates with the current recommendations and to evaluate the efficacy of the different techniques utilized. Method: 90 patients who underwent total hip or total knee replacements at Nevill Hall Hospital between February 2019 and July 2019 were reviewed. Data collected included the anesthetic technique, day-one opiate use, pain score, and length of stay. The data were collected from anesthetic charts and the pain team's follow-up forms. Analysis: The average age of patients undergoing lower limb arthroplasty was 70. Of these, 83% (n=75) received a spinal anaesthetic and 17% (n=15) received a general anaesthetic. For patients undergoing knee replacement under general anesthetic, the average day-one pain score was 2.29, versus 1.94 if a spinal anesthetic was performed. For hip replacements, the scores were 1.87 and 1.8, respectively. There was no statistical significance between these scores. Day-one opiate usage was significantly higher in knee replacement patients who were given a general anesthetic (45.7 mg IV morphine equivalent) vs. those who were operated on under spinal anesthetic (19.7 mg). This difference was not noticeable in hip replacement patients. There was no significant difference in length of stay between the two anesthetic techniques. Discussion: There was no significant difference in the day-one pain score between patients who received a general or spinal anesthetic, for either knee or hip replacements. 
The higher pain scores in the knee replacement group overall are consistent with this being a more painful procedure. This is a small patient population, which means any difference between the two groups is unlikely to be representative of a larger population. The pain scale has 4 points, which makes it difficult to identify a significant difference between pain scores. Conclusion: There is currently little standardization between the different anesthetic approaches utilized in Nevill Hall Hospital. This is likely due to the lack of adherence to a standardized anesthetic regimen. A standard anesthetic protocol is a core component of the ERAS recommendations. The results of this study and the guidance from the ERAS society will support the implementation of a new health-board-wide ERAS protocol.
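The day-one opiate comparison above (45.7 mg vs. 19.7 mg morphine equivalent) is the kind of skewed, between-group outcome usually tested non-parametrically; the sketch below applies a Mann-Whitney U test to synthetic data shaped like the reported group sizes and means, since the abstract does not state which test was used:

```python
# Mann-Whitney U test sketch for day-one opiate use, general vs. spinal
# anaesthetic. Group sizes and means echo the abstract (15 general, 75
# spinal; 45.7 vs. 19.7 mg), but the individual values are synthetic.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
general = rng.normal(45.7, 15.0, 15)  # hypothetical mg IV morphine equivalents
spinal = rng.normal(19.7, 10.0, 75)

stat, p = mannwhitneyu(general, spinal, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.4g}")
```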

Keywords: anaesthesia, orthopaedics, intensive care, patient centered decision making, treatment escalation

Procedia PDF Downloads 122
973 Pulsed-Wave Doppler Ultrasonographic Assessment of the Maximum Blood Velocity in Common Carotid Artery in Horses after Administration of Ketamine and Acepromazine

Authors: Saman Ahani, Aboozar Dehghan, Roham Vali, Hamid Salehian, Amin Ebrahimi

Abstract:

Pulsed-wave (PW) Doppler ultrasonography is a non-invasive, relatively accurate imaging technique that can measure blood velocity. Imaging can be obtained at the common carotid artery, one of the main vessels supplying blood to vital organs. In horses, factors such as susceptibility to depression of the cardiovascular system and their large muscular mass render them vulnerable to changes in blood velocity. One of the most important factors causing blood velocity changes is the administration of anesthetic drugs, including ketamine and acepromazine. Thus, in this study, the pulsed-wave Doppler technique was used to assess the highest blood velocity in the common carotid artery following administration of ketamine and acepromazine. Six male and six female healthy Kurdish horses weighing 351 ± 46 kg (mean ± SD) and aged 9.2 ± 1.7 years (mean ± SD) were housed under animal welfare guidelines. After fasting for six hours, the normal blood flow velocity in the common carotid artery was measured using a pulsed-wave Doppler ultrasonography machine (BK Medical, Denmark) and a high-frequency linear transducer (12 MHz), without applying any sedative drugs, as a control. The same procedure was repeated after each individual received the following medications: 1.1 and 2.2 mg/kg ketamine (Pfizer, USA), and 0.5 and 1 mg/kg acepromazine (RACEHORSE MEDS, Ukraine), with an interval of 21 days between the administration of each dose and/or drug. The ultrasonographic study was done five (T5) and fifteen (T15) minutes after injecting each dose intravenously. Lastly, statistical analysis was performed using SPSS software version 22 for Windows, and a P value less than 0.05 was considered statistically significant. 
Five minutes after administration of ketamine (1.1 and 2.2 mg/kg), the blood velocity decreased to 38.44 and 34.53 cm/s in males and 39.06 and 34.10 cm/s in females, compared with the control values (39.59 and 40.39 cm/s in males and females, respectively), whereas administration of 0.5 mg/kg acepromazine led to a significant rise (73.15 and 55.80 cm/s in males and females, respectively) (p<0.05). This means that the most drastic change in blood velocity, regardless of sex, was produced by the latter dose/drug. For both drugs and both sexes, the higher dose led to a lower blood velocity than the lower dose of the same drug. In all experiments in this study, the blood velocity had returned close to its normal value by T15. In another study comparing blood velocity changes caused by ketamine and acepromazine in the femoral arteries, the most drastic changes were attributed to ketamine; in the present experiment, however, the maximum blood velocity was observed following administration of acepromazine, measured in the common carotid artery. Therefore, further experiments with the same medications are suggested, using pulsed-wave Doppler to measure blood velocity changes in the femoral and common carotid arteries simultaneously.
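The velocity shifts reported above can be expressed as percent changes from the control value. A minimal sketch, using only the group means quoted in the abstract (illustrative only; the study's per-animal analysis was done in SPSS):

```python
# Group-mean carotid blood velocities (cm/s) quoted in the abstract.
control = {"male": 39.59, "female": 40.39}          # no sedation
after_ace_05 = {"male": 73.15, "female": 55.80}     # 0.5 mg/kg acepromazine

def percent_change(v, baseline):
    """Percent change of a velocity v relative to a baseline velocity."""
    return 100.0 * (v - baseline) / baseline

changes = {sex: round(percent_change(after_ace_05[sex], control[sex]), 1)
           for sex in ("male", "female")}
# Acepromazine roughly +85% in males and +38% in females vs. control,
# while ketamine at 2.2 mg/kg gives a negative change (34.53 < 39.59 in males).
```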

Keywords: Acepromazine, common carotid artery, horse, ketamine, pulsed-wave doppler ultrasonography

Procedia PDF Downloads 122
972 Time to Retire Rubber Crumb: How Soft Fall Playgrounds are Threatening Australia’s Great Barrier Reef

Authors: Michelle Blewitt, Scott P. Wilson, Heidi Tait, Juniper Riordan

Abstract:

Rubber crumb is a physical and chemical pollutant of concern for the environment and human health, warranting immediate investigation into its pathways to the environment and potential impacts. This emerging microplastic is created by shredding end-of-life tyres into ‘rubber crumb’ particles of 1-5 mm, used on synthetic turf fields and soft-fall playgrounds as a solution to intensifying tyre waste worldwide. Despite the known toxic and carcinogenic properties of rubber crumb, studies into its transportation pathways and movement patterns from these surfaces remain in their infancy. To address this deficit, AUSMAP, the Australian Microplastic Assessment Project, in partnership with the Tangaroa Blue Foundation, conducted a study to quantify crumb loss from soft-fall surfaces. To the best of our knowledge, this is the first study of its kind; funding for the audits was provided by the Australian Government’s Reef Trust. Sampling occurred at 12 soft-fall playgrounds within the Great Barrier Reef Catchment Area on Australia’s north-east coast, in close proximity to the United Nations World Heritage Listed Reef. Samples were collected over a 12-month period using randomized sediment cores at 0, 2 and 4 metres from the playground edge along a 20-metre transect. This approach served two objectives pertaining to particle movement: to establish that crumb loss is occurring and that it decreases with distance from the soft-fall surface. Rubber crumb abundance was expressed as a total value and used to estimate an expected average rubber crumb loss per m². An analysis of variance (ANOVA) was used to compare differences in crumb abundance at each interval from the playground. Site characteristics, including surrounding sediment type, playground age, degree of ultraviolet exposure and amount of foot traffic, were additionally recorded for the comparison.
Preliminary findings indicate that crumb is being lost at considerable rates from soft-fall playgrounds in the region, emphasizing an urgent need to further examine it as a potential source of aquatic pollution, soil contamination and a threat to individuals who regularly use these surfaces. Additional implications for the future of rubber crumb as a fit-for-purpose recycling initiative will be discussed with regard to industry, governments and the economic burden of surface maintenance and/or replacement.
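The ANOVA described above compares between-group to within-group variance of crumb counts across the three distance intervals. A minimal sketch with hypothetical per-core counts (the abstract does not report the raw data):

```python
# One-way ANOVA F statistic, computed from scratch for transparency.
def one_way_anova_F(groups):
    """groups: list of lists of observations, one inner list per group."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical crumb counts per sediment core at each distance from the edge.
at_0m, at_2m, at_4m = [52, 61, 47, 58], [23, 31, 27, 19], [9, 14, 6, 11]
F = one_way_anova_F([at_0m, at_2m, at_4m])  # large F -> abundance differs by distance
```

A large F relative to the F distribution with (2, 9) degrees of freedom would support the expected decrease in abundance with distance.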

Keywords: microplastics, toxic rubber crumb, litter pathways, marine environment

Procedia PDF Downloads 88
971 Muscle and Cerebral Regional Oxygenation in Preterm Infants with Shock Using Near-Infrared Spectroscopy

Authors: Virany Diana, Martono Tri Utomo, Risa Etika

Abstract:

Background: Shock is a severe condition and a major cause of morbidity and mortality in the Neonatal Intensive Care Unit (NICU). Preterm infants are very susceptible to shock caused by complications such as asphyxia, patent ductus arteriosus, intraventricular haemorrhage, necrotizing enterocolitis, persistent pulmonary hypertension of the newborn, and septicaemia. Limited haemodynamic monitoring for early detection of shock delays intervention and compromises outcomes. Clinical parameters such as capillary refill time (CRT), heart rate, cold extremities, and urine production are still used to detect neonatal shock. Blood pressure is most frequently used to evaluate the preterm circulation, but hypotension indicates uncompensated shock. Near-infrared spectroscopy (NIRS) is a noninvasive tool for monitoring and detecting states of inadequate tissue perfusion. Muscle oxygen saturation shows decreased cardiac output earlier than systemic parameters of tissue oxygenation, while cerebral regional oxygen saturation is still stabilized by autoregulation. However, to the best of our knowledge, no study has yet analyzed the decrease of muscle regional oxygen saturation (mRSO₂) and the ratio of muscle to cerebral regional oxygen saturation (mRSO₂/cRSO₂) by NIRS in preterm infants with shock. Purpose: The purpose of this study is to analyze the decrease of mRSO₂ and the mRSO₂/cRSO₂ ratio by NIRS in preterm infants with shock. Patients and Methods: This cross-sectional study was conducted on preterm infants of 28-34 weeks gestational age admitted to the NICU of Dr. Soetomo Hospital from November to January 2022. Patients were classified into two groups: shock and non-shock. The diagnosis of shock was based on clinical criteria (tachycardia, prolonged CRT, cold extremities, decreased urine production, and mean arterial pressure (MAP) lower than the gestational age in weeks).
Measurement of mRSO₂ and cRSO₂ by NIRS was performed by the doctor in charge when the patient arrived in the NICU. Results: We enrolled 40 preterm infants. The initial conventional haemodynamic parameters used as the basis for diagnosing shock showed significant differences in all variables. Preterm infants with shock had a higher mean heart rate (186.45±1.5), lower MAP (29.8±2.1), and lower systolic blood pressure (45.1±4.28) than those without shock, and most had a prolonged CRT. Patient outcomes did not differ significantly between the shock and non-shock groups. The mean mRSO₂ in the shock and non-shock groups (33.65 ± 11.32 vs. 69.15 ± 3.96, p=0.001) and the mean mRSO₂/cRSO₂ ratio (0.45 ± 0.12 vs. 0.84 ± 0.43, p=0.001) were significantly different. The mean cRSO₂ in the shock and non-shock groups (71.60 ± 4.90 vs. 81.85 ± 7.85, p=0.082) was not significantly different. Conclusion: The decrease of mRSO₂ and of the mRSO₂/cRSO₂ ratio can differentiate shock from non-shock in preterm infants while cRSO₂ is still normal.
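The discriminating quantity is a simple ratio of two NIRS readings. A minimal sketch using the group means quoted above (note: the study reports the mean of per-patient ratios, 0.45, which differs slightly from the ratio of the group means computed here):

```python
def oxygenation_ratio(mrso2, crso2):
    """Muscle-to-cerebral regional oxygen saturation ratio (mRSO2/cRSO2)."""
    return mrso2 / crso2

# Group means from the abstract (percent saturation values).
shock_ratio = oxygenation_ratio(33.65, 71.60)       # low ratio: muscle desaturates first
non_shock_ratio = oxygenation_ratio(69.15, 81.85)   # ratio stays close to cerebral value
```

The intuition of the paper is visible here: cerebral saturation is similar in both groups (autoregulation), so the drop in the ratio is driven almost entirely by the muscle reading.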

Keywords: preterm infant, regional muscle oxygen saturation, regional cerebral oxygen saturation, NIRS, shock

Procedia PDF Downloads 88
970 The Validation of RadCalc for Clinical Use: An Independent Monitor Unit Verification Software

Authors: Junior Akunzi

Abstract:

In patient treatment planning quality assurance for 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT or RapidArc), independent monitor unit verification calculation (MUVC) is an indispensable part of the process. For 3D-CRT treatment planning, the MUVC can be performed manually by applying the standard ESTRO formalism. However, due to the complex shapes and the number of beams in advanced treatment planning techniques such as RapidArc, manual independent MUVC is inadequate. Therefore, commercially available software such as RadCalc can be used to perform the MUVC for complex treatment plans. RadCalc (version 6.3, LifeLine Inc.) uses a simplified Clarkson algorithm to compute the dose contribution of individual RapidArc fields to the isocenter. The purpose of this project is the validation of RadCalc for 3D-CRT and RapidArc treatment planning dosimetry quality assurance at the Centre Antoine Lacassagne (Nice, France). Firstly, the interfaces between RadCalc and our treatment planning systems (TPSs), Isogray (version 4.2) and Eclipse (version 13.6), were checked for data transfer accuracy. Secondly, we created test plans in both Isogray and Eclipse featuring open fields, wedged fields, and irregular MLC fields. These test plans were transferred from the TPSs to RadCalc, and to the linac via Mosaiq (version 2.5), according to the DICOM-RT protocol. Measurements were performed in a water phantom using a PTW cylindrical Semiflex ionisation chamber (0.3 cm³, type 31010) and compared with the TPS and RadCalc calculations. Finally, 30 3D-CRT plans and 40 RapidArc plans created on patient CT scans were recalculated using the CT scan of a solid PMMA water-equivalent phantom for 3D-CRT and the CT scan of the Octavius II phantom (PTW) for RapidArc.
Next, we measured the doses delivered to these phantoms for each plan with a 0.3 cm³ PTW 31010 cylindrical Semiflex ionisation chamber (3D-CRT) and a 0.015 cm³ PTW PinPoint ionisation chamber (RapidArc). For our test plans, good agreement was found between calculation (RadCalc and TPSs) and measurement (mean: 1.3%; standard deviation: ± 0.8%). Regarding the patient plans, the measured doses were compared to the calculations in RadCalc and in our TPSs, and the RadCalc calculations were compared to the Isogray and Eclipse ones. Agreement better than (2.8%; ± 1.2%) was found between RadCalc and the TPSs. As for the comparison between calculation and measurement, the agreement for all of our plans was better than (2.3%; ± 1.1%). The independent MU verification calculation software RadCalc has thus been validated for clinical use with both the 3D-CRT and RapidArc techniques. The perspectives of this project include the validation of RadCalc for the Tomotherapy machine installed at the Centre Antoine Lacassagne.
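The agreement figures above are percent deviations between an independent check dose and a reference (TPS or measured) dose. A minimal sketch of that comparison, assuming a 3% action level (a common clinical choice, but hypothetical here; the paper does not state its tolerance):

```python
def percent_deviation(dose_check, dose_reference):
    """Signed percent deviation of a verification dose from a reference dose."""
    return 100.0 * (dose_check - dose_reference) / dose_reference

def within_tolerance(dose_check, dose_reference, tolerance_pct=3.0):
    """True if the absolute deviation is within the chosen action level."""
    return abs(percent_deviation(dose_check, dose_reference)) <= tolerance_pct

# Hypothetical example: the check system reports 2.04 Gy against a TPS dose
# of 2.00 Gy at the isocenter -> +2.0% deviation, within a 3% action level.
dev = percent_deviation(2.04, 2.00)
ok = within_tolerance(2.04, 2.00)
```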

Keywords: 3D conformal radiotherapy, intensity modulated radiotherapy, monitor unit calculation, dosimetry quality assurance

Procedia PDF Downloads 210
969 Evaluation of Cooperative Hand Movement Capacity in Stroke Patients Using the Cooperative Activity Stroke Assessment

Authors: F. A. Thomas, M. Schrafl-Altermatt, R. Treier, S. Kaufmann

Abstract:

Stroke is the main cause of adult disability, and upper limb function in particular is affected in most patients. Recently, cooperative hand movements have been shown to be a promising type of upper limb training in stroke rehabilitation. In these movements, which are frequently found in activities of daily living (e.g., opening a bottle, winding up a blind), the force of one upper limb has to be equally counteracted by the other limb to successfully accomplish a task. The use of standardized and reliable clinical assessments is essential to evaluate the efficacy of therapy and the functional outcome of a patient. Many assessments for upper limb function or impairment are available; however, the evaluation of cooperative hand movement tasks is rarely included in them. Thus, the aim of this study was (i) to develop a novel clinical assessment (CASA - Cooperative Activity Stroke Assessment) for the evaluation of patients’ capacity to perform cooperative hand movements and (ii) to test its intra- and interrater reliability. Furthermore, CASA scores were compared to the current gold-standard assessments for the upper extremity in stroke patients (i.e., the Fugl-Meyer Assessment and the Box & Blocks Test). The CASA consists of five cooperative activities of daily living: (1) opening a jar, (2) opening a bottle, (3) opening and closing a zip, (4) unscrewing a nut and (5) opening a clipbox. The goal is to accomplish each task as fast as possible. In addition to the quantitative rating (i.e., time), which is converted to a 7-point scale, the quality of the movement is rated on a 4-point scale. To test the reliability of the CASA, fifteen stroke subjects were tested twice within a week by the same two raters. Intra- and interrater reliability was calculated using the intraclass correlation coefficient (ICC) for the total CASA score and for single items.
Furthermore, Pearson correlation was used to compare the CASA scores to the scores of the Fugl-Meyer upper limb assessment and the Box & Blocks test, which were assessed in every patient in addition to the CASA. ICCs indicated excellent reliability for the total CASA score and good to excellent intra- and interrater reliability for single items. Furthermore, the CASA score was significantly correlated with the Fugl-Meyer and Box & Blocks scores. The CASA thus provides a reliable assessment of cooperative hand movements, which are crucial for many activities of daily living. Due to its inexpensive setup and easy, fast administration, we suggest it is well suited for clinical application. In conclusion, the CASA is a useful tool for assessing the functional status and therapy-related recovery of cooperative hand movement capacity in stroke patients.
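The convergent-validity step above is a Pearson correlation between two score lists. A minimal sketch with hypothetical scores standing in for the real patient data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical CASA totals and Fugl-Meyer upper-extremity scores for six patients.
casa = [12, 18, 25, 30, 22, 15]
fugl_meyer = [28, 35, 50, 58, 44, 31]
r = pearson_r(casa, fugl_meyer)  # close to 1 when the two scales rank patients alike
```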

Keywords: activities of daily living, clinical assessment, cooperative hand movements, reliability, stroke

Procedia PDF Downloads 316
968 Newspaper Headlines as Tool for Political Propaganda in Nigeria: Trend Analysis of Implications on Four Presidential Elections

Authors: Muhammed Jamiu Mustapha, Jamiu Folarin, Stephen Obiri Agyei, Rasheed Ademola Adebiyi, Mutiu Iyanda Lasisi

Abstract:

The role of the media in political discourse cannot be overemphasized, as they form an important part of societal development. The media institution is considered the fourth estate of the realm because it serves as a check and balance on the arms of government (executive, legislature and judiciary), especially in a democratic setup, and makes public office holders accountable to the people. The media scrutinize political candidates and conduct holistic analyses of the achievements of government in order to make the people’s representatives accountable to the electorate. The media in Nigeria play a seminal role in shaping how people vote during elections. Newspaper headlines are catchy phrases that easily capture the attention of the audience and call them to action. Research conducted on newspaper headlines has examined their linguistic aspects and how the tenses used have a resultant effect on people’s attitudes and behaviour. Communication scholars have also conducted studies that interrogate whether newspaper headlines influence people’s voting patterns and decisions. Propaganda and negative stories about political opponents are staple features of electioneering campaigns. Nigerian newspaper readers characteristically scan newspaper headlines, and the question is whether politicians have effectively played on this tendency to brand opponents negatively, based on half-truths and inadequate information. This study illustrates major trends in the Nigerian political landscape by looking at the past four presidential elections, and frames the progress of the research in the extant body of political propaganda research in Africa. The study will use quantitative content analysis of newspaper headlines from 2007 to 2019 to ascertain whether newspaper headlines had any effect on the results of the presidential elections during these years.
This will be supplemented by key informant interviews with political scientists or other experts to draw further inferences from the quantitative data. Drawing on headlines with a political propaganda angle from selected Nigerian newspapers covering the presidential elections, the analysis will correspond to and complement extant descriptions of how the field of political propaganda has developed in Nigeria, providing evidence from four presidential elections that have shaped Nigerian politics. Understanding the development of behavioural change among the electorate provides useful context for trend analysis in political propaganda communication. The findings will contribute to understanding how newspaper headlines are used, partly or wholly, to decide the outcome of presidential elections in Nigeria.

Keywords: newspaper headlines, political propaganda, presidential elections, trend analysis

Procedia PDF Downloads 230
967 The Surgical Trainee Perception of the Operating Room Educational Environment

Authors: Neal Rupani

Abstract:

Background: A surgical trainee has limited learning opportunities in the operating room in which to gain an ever-increasing standard of surgical skill, competency, and proficiency. These opportunities continue to decline due to numerous factors, such as the European Working Time Directive and the increasing requirement for service provision. It is therefore imperative to obtain the highest educational value from each learning opportunity. The Operating Room Educational Environment Measure (OREEM), which has yet to be validated on surgical trainees in England, was developed to identify and evaluate each component of the educational environment with a view to steering future change in optimising educational events in theatre. Aims: The aims of the study are to assess the reliability of the OREEM within England and to evaluate surgical trainees’ objective perception of the current operating room educational environment within one region of England. Methods: Using a quantitative study approach, data were collected over one month from surgical trainees within Health Education Thames Valley (Oxford) using an online questionnaire consisting of demographic data, the OREEM, and a global satisfaction score. Results: 140 surgical trainees were invited to the study, with 54 responding online (response rate = 38.6%). The OREEM showed good internal consistency (α = 0.906, items = 40) and unidimensionality, as did all four of its subscales. The mean OREEM score was 79.16%. The areas highlighted for improvement predominantly focused on improving learning opportunities (average subscale score = 72.9%) and conducting pre- and post-operative teaching (average score = 70.4%). Trainee perception was most satisfactory for the level of supervision and workload (average subscale score = 82.87%).
No differences were found between genders (U = 191.5, p = 0.535) or types of hospital (U = 258.0, p = 0.099), but the learning environment was rated more favourably by senior trainees (U = 223.5, p = 0.017). There was a strong correlation between the OREEM and the global satisfaction score (r = 0.755, p<0.001). Conclusions: The OREEM was shown to be reliable in measuring the educational environment in the operating room. It can be used to identify potentially modifiable components for improvement and as an audit tool to ensure high standards are being met. The current perception of the educational environment in Health Education Thames Valley is satisfactory, and modifiable internal and external factors have been identified to improve learning in the future, such as reducing service provision requirements, empowering trainees to plan lists, creating a team-working ethic among all personnel, and using tools that maximise learning from each operation. There is a favourable attitude towards the use of such improvement tools, especially among those currently dissatisfied.
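The internal-consistency figure reported above (α = 0.906 over 40 items) is Cronbach's alpha. A minimal sketch with hypothetical 4-item, 5-respondent data (the real OREEM has 40 items):

```python
def cronbach_alpha(items):
    """items: list of per-item score lists, one inner list per questionnaire item,
    each inner list holding one score per respondent (sample variance, ddof=1)."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]  # per-respondent totals
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))

# Hypothetical Likert-style responses: 4 items, 5 respondents.
item_scores = [
    [4, 5, 3, 4, 4],
    [4, 4, 3, 5, 4],
    [3, 5, 3, 4, 4],
    [4, 4, 2, 5, 4],
]
alpha = cronbach_alpha(item_scores)  # values above ~0.7 are usually read as acceptable
```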

Keywords: education environment, surgery, post-graduate education, OREEM

Procedia PDF Downloads 179
966 Economic Impact and Benefits of Integrating Augmented Reality Technology in the Healthcare Industry: A Systematic Review

Authors: Brenda Thean I. Lim, Safurah Jaafar

Abstract:

Augmented reality (AR) in the healthcare industry has been gaining popularity in recent years, principally in medical education, patient care and digital health solutions. One of the drivers of the decision to invest in AR technology is the potential economic benefit it could bring to patients and healthcare providers, including the pharmaceutical and medical technology sectors. The literature shows that AR technologies have a track record of improving medical education and patient health outcomes. However, little has been published on the economic impact of AR in healthcare, a very resource-intensive industry. This systematic review appraised studies on the benefits and impact of AR in healthcare against established quality criteria, so as to identify relevant publications for an in-depth analysis of economic impact assessment. The literature search was conducted using multiple databases, including PubMed, Cochrane, Science Direct and Nature. Inclusion criteria covered research papers on AR implementation in healthcare, from education to diagnosis and treatment. Only papers written in English were selected, and studies on AR prototypes were excluded. Although many articles have addressed the benefits of AR in the healthcare industry in medical education, treatment and diagnosis, and dental medicine, very few publications identified the specific economic impact of the technology within the healthcare industry. Thirteen publications were included in the analysis based on the inclusion criteria. Of the 13 studies, none comprised a systematically comprehensive cost impact evaluation. An outline of a cost-effectiveness and cost-benefit framework was made, based on an AR article from another industry as a reference.
This systematic review found that while AR technology is advancing rapidly and industries are starting to adopt it in their respective sectors, the technology and its applications in healthcare are still in their early stages. There is still plenty of room for further advancement and integration of AR into different sectors within the healthcare industry. Future studies will require more comprehensive economic analyses and costing evaluations to enable economic decisions for or against implementing AR technology in healthcare. This systematic review concluded that the current literature lacks detailed economic impact and benefit analyses. A recommendation for future research is to include details of the initial investment and operational costs of the AR infrastructure in healthcare settings, while comparing the intervention to its conventional counterparts or alternatives, so as to provide a comprehensive comparison of the differences in impact, benefit and cost.

Keywords: augmented reality, benefit, economic impact, healthcare, patient care

Procedia PDF Downloads 201
965 Event Data Representation Based on Time Stamp for Pedestrian Detection

Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita

Abstract:

In association with the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution, equivalent to up to 1 Mframe/s, and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity; to be more specific, this sensor only captures pixels whose intensity changes. In other words, there is no signal in areas without any intensity change, which makes the sensor more energy efficient than conventional sensors such as RGB cameras because redundant data are never produced. On the other hand, the data are difficult to handle because the data format is completely different from an RGB image: acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1) and a timestamp; it does not include an intensity such as an RGB value. Therefore, as existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. To overcome the difficulties caused by this difference in data format, most of the prior art constructs frame data and feeds it to deep learning models such as convolutional neural networks (CNNs) for object detection and recognition. However, even with frame data, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of an RGB pixel value, polarity information is clearly not rich enough. In this context, we propose using the timestamp information as the data representation fed to deep learning.
Concretely, we first construct frame data by dividing the event stream into fixed time periods, then assign each pixel an intensity value according to the timestamps within the frame; for example, a high value is given to a recent signal. We expected this data representation to capture features, especially of moving objects, because the timestamps encode movement direction and speed. Using this proposed method, we made our own dataset with a DVS fixed on a parked car, to develop an application for a surveillance system that can detect persons around the car. We consider the DVS one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a mostly static scene. For comparison, we reproduced a state-of-the-art method as a benchmark, which constructs frames in the same way as ours but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
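The representation described above can be sketched as follows. Assumptions are mine where the abstract is silent: events are (x, y, polarity, t) tuples, the newest event at a pixel wins, and intensity is the timestamp normalized to [0, 1] within the frame window; the authors' exact scheme may differ.

```python
def events_to_timestamp_frame(events, width, height, t_start, t_end):
    """Accumulate asynchronous DVS events into one frame whose pixel values
    encode event recency: 0.0 = no event or oldest, approaching 1.0 = newest."""
    frame = [[0.0] * width for _ in range(height)]
    span = float(t_end - t_start)
    for x, y, polarity, t in events:       # polarity is ignored in this representation
        if t_start <= t < t_end:
            # Later events overwrite earlier ones at the same pixel, so the
            # gradient of values across a moving edge encodes direction and speed.
            frame[y][x] = (t - t_start) / span
    return frame

# Three events (x, y, polarity, timestamp in microseconds) in a 1 ms window.
events = [(3, 2, +1, 100), (3, 2, -1, 900), (7, 5, +1, 500)]
frame = events_to_timestamp_frame(events, width=10, height=8, t_start=0, t_end=1000)
```

Such a frame can then be fed to a CNN in place of the polarity frames used by the benchmark.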

Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption

Procedia PDF Downloads 90
964 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model

Authors: Mohammad Zamani, Ramin Mansouri

Abstract:

Spillways are among the most important hydraulic structures of dams, providing the stability of the dam and downstream areas during floods. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the finite volume method. The PISO scheme was applied for the velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. In this study, three computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. To simulate the flow, the k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used. Also, to find the best wall function, two types, the standard wall function and the non-equilibrium wall function, were investigated. The laminar model did not produce satisfactory flow depth and velocity along the morning-glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were identical, and the standard wall function produced better results than the non-equilibrium wall function. Thus, for the remaining simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study is the trajectory profile of the water jet.
The results show that the fine computational grid, a velocity condition at the flow inlet boundary, and a pressure condition at the boundaries in contact with air provide the best possible results. The standard wall function was chosen for the wall treatment, and the standard k-ε turbulence model gave results most consistent with the experiments. As the jet approaches the end of the basin, the difference between the computational and experimental results increases. The mesh with 10602 nodes, the standard k-ε turbulence model and the standard wall function provide the best results for modelling the flow in a vertical circular spillway. There was good agreement between numerical and experimental results for the upper and lower nappe profiles. In the study of water level over the crest and discharge, at low water levels the numerical modelling results agree well with the experimental ones, but as the water level increases, the difference between the numerical and experimental discharge grows. In the study of the flow coefficient, as the P/R ratio decreases, the difference between the numerical and experimental results increases.
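The flow-coefficient comparison above can be illustrated with the commonly used free-flow relation for a circular crest, Q = C0 · (2πRs) · H0^1.5 (the USBR form; whether the authors used exactly this formalism is an assumption, and all numbers below are invented):

```python
import math

def flow_coefficient(Q, Rs, H0):
    """Back-calculate the discharge (flow) coefficient C0 of a circular-crested
    (morning-glory) spillway from discharge Q (m^3/s), crest radius Rs (m) and
    head over the crest H0 (m), using Q = C0 * (2*pi*Rs) * H0**1.5."""
    return Q / (2.0 * math.pi * Rs * H0 ** 1.5)

# Hypothetical numerical vs. experimental discharges at the same head and radius:
c0_numerical = flow_coefficient(Q=0.052, Rs=0.25, H0=0.06)
c0_experimental = flow_coefficient(Q=0.049, Rs=0.25, H0=0.06)
relative_diff = abs(c0_numerical - c0_experimental) / c0_experimental  # ~6% here
```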

Keywords: circular vertical, spillway, numerical model, boundary conditions

Procedia PDF Downloads 80