Search results for: motion errors
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2243

1553 Adaptive Motion Compensated Spatial Temporal Filter of Colonoscopy Video

Authors: Nidhal Azawi

Abstract:

Colonoscopy is widely used around the world to detect abnormalities, and early diagnosis can help to heal many patients. Because of the unavoidable artifacts that exist in colon images, doctors cannot inspect the colon surface precisely. The purpose of this work is to improve the visual quality of colonoscopy videos and so provide better information for physicians by removing some of these artifacts. This work complements a series of three previously published papers. In this paper, optic flow is used for motion compensation, and consecutive images are then aligned/registered to integrate information into a new image that reveals more than the original one. Colon images were classified into informative and noninformative images using a deep neural network, and two different strategies were then used to treat the two classes. Informative images were treated using Lucas-Kanade (LK) with an adaptive temporal mean/median filter, whereas noninformative images were treated using Lucas-Kanade with a derivative of Gaussian (LKDOG) with adaptive temporal median images. A comparison showed that this work achieved better results than the state-of-the-art strategies on the same degraded colon image data set, which consists of 1000 images. The proposed algorithm reduced the alignment error by about a factor of 0.3 with a 100% successful image alignment ratio. In conclusion, this algorithm achieved better results than the state-of-the-art approaches in enhancing the informative images, as shown in the results section; it also succeeded in converting noninformative images, which have few or no details because of blurriness, defocus, or specular highlights dominating a significant amount of the image, into informative images.
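The adaptive temporal median step described above can be sketched as per-pixel filtering over already-registered frames. This is a minimal illustration of the idea, not the paper's implementation; the nested-list frame format and function name are hypothetical:

```python
from statistics import median

def temporal_median(frames):
    """Per-pixel temporal median over a stack of aligned grayscale frames.

    frames: a list of equally sized 2-D lists (rows of pixel intensities),
    assumed to be already motion-compensated/registered (e.g. via optic flow).
    Outlier values such as specular highlights in a single frame are
    suppressed because the median ignores extreme values.
    """
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[median(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]
```

A transient bright artifact in one frame (e.g. a specular highlight of 200 where neighbors read 10-12) is replaced by the temporal median of the stack.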

Keywords: optic flow, colonoscopy, artifacts, spatial temporal filter

Procedia PDF Downloads 114
1552 Pavement Management for a Metropolitan Area: A Case Study of Montreal

Authors: Luis Amador Jimenez, Md. Shohel Amin

Abstract:

Pavement performance models are based on projections of observed traffic loads, which makes it uncertain to study funding strategies in the long run if history does not repeat itself. Neural networks can be used to estimate deterioration rates, but the learning rate and momentum have not been properly investigated; in addition, economic developments could change traffic flows. This study addresses both issues through a case study for the roads of Montreal that simulates traffic for a period of 50 years and deals with the measurement error of the pavement deterioration model. Travel demand models are applied to simulate annual average daily traffic (AADT) every 5 years. Accumulated equivalent single axle loads (ESALs) are calculated from the predicted AADT and locally observed truck distributions combined with truck factors. A backpropagation neural network (BPN) with a generalized delta rule (GDR) learning algorithm is applied to estimate pavement deterioration models capable of overcoming measurement errors. Linear programming of lifecycle optimization is applied to identify M&R strategies that ensure good pavement condition while minimizing the budget. It was found that CAD 150 million is the minimum annual budget needed to keep the arterial and local roads of Montreal in good condition. Montreal drivers prefer public transportation for work and education purposes. Vehicle traffic is expected to double within 50 years, and the number of ESALs is expected to double every 15 years. The roads on the island of Montreal need to undergo a stabilization period of about 25 years, after which a steady state seems to be reached.
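The generalized delta rule with momentum that the abstract refers to can be illustrated for a single sigmoid unit. This is a textbook-style sketch with hypothetical learning-rate and momentum values, not the study's deterioration network:

```python
import math

def gdr_update(w, x, target, lr=0.5, momentum=0.9, prev_dw=None):
    """One generalized delta rule (GDR) step for a single sigmoid unit.

    The weight change combines the gradient term with a momentum term:
    dw = lr * delta * x + momentum * prev_dw.
    Returns (updated weights, weight change) so the caller can feed the
    change back in as prev_dw on the next iteration.
    """
    prev_dw = prev_dw or [0.0] * len(w)
    net = sum(wi * xi for wi, xi in zip(w, x))
    out = 1.0 / (1.0 + math.exp(-net))          # sigmoid activation
    delta = (target - out) * out * (1.0 - out)  # error * sigmoid derivative
    dw = [lr * delta * xi + momentum * pdw for xi, pdw in zip(x, prev_dw)]
    return [wi + d for wi, d in zip(w, dw)], dw
```

Iterating this update drives the unit's output toward the target; the momentum term smooths the trajectory across noisy (e.g. measurement-error-laden) samples.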

Keywords: pavement management system, traffic simulation, backpropagation neural network, performance modeling, measurement errors, linear programming, lifecycle optimization

Procedia PDF Downloads 461
1551 Biomechanical Evaluation for Minimally Invasive Lumbar Decompression: Unilateral Versus Bilateral Approaches

Authors: Yi-Hung Ho, Chih-Wei Wang, Chih-Hsien Chen, Chih-Han Chang

Abstract:

Numerous studies have reported both unilateral laminotomy and bilateral laminotomies to be successful decompression methods for managing spinal stenosis. However, unilateral laminotomy is rated as technically much more demanding than bilateral laminotomies, whereas bilateral laminotomies are associated with the benefit of reducing complications, including incidental durotomy, increased radicular deficit, and epidural hematoma. Nevertheless, no comparative biomechanical analysis has evaluated spinal instability after treatment with unilateral versus bilateral laminotomies. Therefore, the purpose of this study was to compare the outcomes of the different decompression methods by experiment and finite element analysis. Three porcine lumbar spines were biomechanically evaluated for their range of motion, and the results were compared following unilateral or bilateral laminotomies. The experimental protocol included flexion and extension in the following conditions: intact, unilateral, and bilateral laminotomies (L2–L5). The specimens were tested under pure moments in flexion (8 Nm) and extension (6 Nm). Spinal segment kinematic data were captured using a motion tracking system. A 3D finite element lumbar spine model (L1–S1) containing the vertebral bodies, discs, and ligaments was constructed. This model was used to simulate unilateral and bilateral laminotomies at L3–L4 and L4–L5. The bottom surface of the S1 vertebral body was fully geometrically constrained. A 10 Nm pure moment was applied on the top surface of the L1 vertebral body to drive the lumbar spine through the different motions, flexion and extension. The experimental results showed that, in flexion, the ROMs (±standard deviation) of L3–L4 were 1.35±0.23, 1.34±0.67, and 1.66±0.07 degrees for the intact, unilateral, and bilateral laminotomy conditions, respectively. The ROMs of L4–L5 were 4.35±0.29, 4.06±0.87, and 4.2±0.32 degrees, respectively.
No statistical significance was observed among these three groups (P>0.05). In extension, the ROMs of L3–L4 were 0.89±0.16, 1.69±0.08, and 1.73±0.13 degrees, respectively. At L4–L5, the ROMs were 1.4±0.12, 2.44±0.26, and 2.5±0.29 degrees, respectively. Significant differences were observed among all trials, except between the unilateral and bilateral laminotomy groups. The simulation results were similar to the experimental ones. No significant differences were found at L4–L5 in either flexion or extension for any group. Only 0.02 and 0.04 degrees of variation were observed during flexion and extension between the unilateral and bilateral laminotomy groups. In conclusion, the present finite element and experimental results reveal no significant differences between unilateral and bilateral laminotomies during flexion and extension in short-term follow-up. From a biomechanical point of view, bilateral laminotomies seem to exhibit similar stability to unilateral laminotomy. In clinical practice, bilateral laminotomies are likely to reduce technical difficulties and prevent perioperative complications; this study demonstrated this benefit through biomechanical analysis. The results may provide some recommendations for surgeons to make the final decision.

Keywords: unilateral laminotomy, bilateral laminotomies, spinal stenosis, finite element analysis

Procedia PDF Downloads 403
1550 Lexical-Semantic Deficits in Sinhala Speaking Persons with Post Stroke Aphasia: Evidence from Single Word Auditory Comprehension Task

Authors: D. W. M. S. Samarathunga, Isuru Dharmarathne

Abstract:

In aphasia, various levels of symbolic language processing (semantics) are affected. It has been shown that Persons with Aphasia (PWA) often experience more problems comprehending some categories of words than others. The study aimed to determine the lexical-semantic deficits seen in Auditory Comprehension (AC) and to describe lexical-semantic deficits across six selected word categories. Thirteen (n=13) persons diagnosed with post-stroke aphasia (PSA) were recruited to perform an AC task. Foods, objects, clothes, vehicles, body parts, and animals were selected as the six categories. As the test stimuli, black-and-white line drawings were adapted from a picture set developed for semantic studies by Snodgrass and Vanderwart. A pilot study was conducted with five (n=5) healthy, non-brain-damaged Sinhala-speaking adults to determine the familiarity and applicability of the test material. In the main study, participants were scored based on accuracy and the number of errors shown. The results indicate trends of lexical-semantic deficits similar to those identified in the literature, confirming 'animals' to be the easiest category to comprehend. A Mann-Whitney U test was performed to determine the association between the selected variables and the participants' performance on the AC task. No statistical significance was found between the errors and the type of aphasia, reflecting patterns described in the aphasia literature for other languages. The current study indicates the presence of selective lexical-semantic deficits in AC, and a hierarchy was developed based on the complexity of the categories for Sinhala-speaking PWA to comprehend, which might be clinically beneficial when improving the language skills of Sinhala-speaking persons with post-stroke aphasia. However, further studies on aphasia should be conducted with larger samples over a longer period to study deficits in Sinhala and other Sri Lankan languages (Tamil and Malay).
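The Mann-Whitney U statistic used in the analysis above can be computed directly from pairwise comparisons between the two samples. A minimal sketch with hypothetical data (ties are half-counted; no continuity or tie correction for the significance test is included):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples.

    Counts, over all pairs (xi, yi), how often xi > yi (ties count 0.5),
    then returns the smaller of U and its complement n*m - U, which is
    the statistic conventionally compared against critical values.
    """
    u = 0.0
    for xi in x:
        for yi in y:
            if xi > yi:
                u += 1.0
            elif xi == yi:
                u += 0.5
    return min(u, len(x) * len(y) - u)
```

For two completely separated samples the statistic is 0, the strongest possible evidence of a group difference; heavily overlapping samples give values near n*m/2.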

Keywords: aphasia, auditory comprehension, selective lexical-semantic deficits, semantic categories

Procedia PDF Downloads 255
1549 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution

Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone

Abstract:

The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies, stemming from diverse research hypotheses, have been proposed to safeguard DNNs against such attacks. Building upon prior work, our approach utilizes autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of the training data and reconstruct inputs from those representations, typically by minimizing a reconstruction error such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation; we considered various image sizes, constructing the models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both the dimensions and the neural network model, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder's extracted features were used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
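The detection principle described above, flagging inputs whose reconstruction error exceeds a threshold calibrated on benign data, can be sketched as follows. The `reconstruct` callable stands in for a trained autoencoder's forward pass, and the names and threshold are hypothetical:

```python
def mse(a, b):
    """Mean squared error between two flattened inputs of equal length."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def is_adversarial(x, reconstruct, threshold):
    """Flag x as adversarial when the reconstruction error is high.

    An autoencoder trained only on benign examples reconstructs benign
    inputs well (low MSE) but reconstructs out-of-distribution,
    adversarially perturbed inputs poorly (high MSE). The threshold is
    assumed to be calibrated on held-out benign data.
    """
    return mse(x, reconstruct(x)) > threshold
```

With a near-perfect reconstruction the error stays below the threshold; a reconstruction that misses the input badly trips the detector.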

Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder

Procedia PDF Downloads 114
1548 Advanced Techniques in Robotic Mitral Valve Repair

Authors: Abraham J. Rizkalla, Tristan D. Yan

Abstract:

Purpose: Durable mitral valve repair is preferred to replacement, avoiding the need for anticoagulation or re-intervention, with a reduced risk of endocarditis. Robotic mitral repair has been gaining favour globally as a safe, effective, and reproducible method of minimally invasive valve repair. In this work, we showcase the use of the da Vinci Xi robotic platform to perform several advanced techniques, working synergistically to achieve successful repair in advanced mitral disease. Techniques: We present a case of Barlow-type mitral valve disease with a tall and redundant posterior leaflet resulting in severe mitral regurgitation and systolic anterior motion. First, quadrangular resection of P2 is performed to remove the excess, redundant leaflet. Second, a sliding leaflet plasty of P1 and P3 is used to reconstruct the posterior leaflet. To anchor the newly formed posterior leaflet to the papillary muscle, CV-4 Gore-Tex neochordae are fashioned using the innovative string, ruler, and bulldog technique. Finally, mitral valve annuloplasty and closure of a patent foramen ovale complete the repair. Results: There was no significant residual mitral regurgitation and complete resolution of the systolic anterior motion of the mitral valve on postoperative transoesophageal echocardiography. Conclusion: This work highlights the robotic approach to complex repair techniques for advanced mitral valve disease. Familiarity with resection and sliding plasty, neochord implantation, and annuloplasty allows the modern cardiac surgeon to achieve a minimally invasive and durable mitral valve repair when faced with complex mitral valve pathology.

Keywords: robotic mitral valve repair, Barlow's valve, sliding plasty, neochord, annuloplasty, quadrangular resection

Procedia PDF Downloads 87
1547 Evaluation of Correct Usage, Comfort and Fit of Personal Protective Equipment in Construction Work

Authors: Anna-Lisa Osvalder, Jonas Borell

Abstract:

There are several reasons behind the use, non-use, or inadequate use of personal protective equipment (PPE) in the construction industry. Comfort and accurate sizing support proper use, while discomfort, misfit, and difficulty understanding how the PPE should be handled inhibit correct usage. The need to wear several pieces of protective equipment simultaneously can also create problems. The purpose of this study was to analyse the correct usage, comfort, and fit of different types of PPE used in construction work. Correct usage was analysed as guessability, i.e., human perceptions of how to don, adjust, use, and doff the equipment, and whether it was used as intended. The PPE tested, individually or in combination, comprised a helmet, ear protectors, goggles, respiratory masks, gloves, protective clothes, and safety harnesses. First, an analytical evaluation was performed with ECW (enhanced cognitive walkthrough) and PUEA (predictive use error analysis) to search for usability problems and use errors during handling and use. Usability tests were then conducted to evaluate guessability, comfort, and fit with 10 test subjects of different heights and body constitutions. The tests included observations during donning, five different outdoor work tasks, and doffing. The think-aloud method, short interviews, and subjective ratings were used. The analytical evaluation showed that some usability problems and use errors arise during donning and doffing, but mostly of minor severity, causing discomfort. A few use errors and usability problems arose for the safety harness, especially for novices, some of which could lead to a high risk of severe incidents. The usability tests showed that discomfort arose for all test subjects when using a combination of PPE, increasing over time. For instance, goggles together with the face mask caused pressure, chafing at the nose, and heat rash on the face. This combination also limited the field of vision.
The helmet, in combination with the goggles and ear protectors, did not fit well and caused uncomfortable pressure at the temples. No major problems were found with the individual fit of the PPE. The ear protectors, goggles, and face masks could be adjusted for different head sizes. The guessability of how to don and wear the combination of PPE was moderate, but it took some time to adjust the items for a good fit. The guessability was poor for the safety harness; few clues in the design showed how it should be donned, adjusted, or worn on the skeletal bones. Discomfort occurred when the straps were tightened too much. Not all straps could be adjusted to some body constitutions, leading to non-optimal safety. To conclude, if several types of PPE are used together, discomfort leading to pain is likely to occur over time, which can result in misuse, non-use, or reduced performance. If people who are not regular users are to wear a safety harness correctly, the design needs to be improved for easier interpretation, correct positioning of the straps, and increased possibilities for individual adjustment. The results of this study can serve as a basis for re-design ideas for PPE, especially when items are to be used in combination.

Keywords: construction work, PPE, personal protective equipment, misuse, guessability, usability

Procedia PDF Downloads 88
1546 An Approach to Solving Some Inverse Problems for Parabolic Equations

Authors: Bolatbek Rysbaiuly, Aliya S. Azhibekova

Abstract:

Problems concerning the interpretation of well testing results belong to the class of inverse problems of subsurface hydromechanics. The distinctive feature of such problems is that the additional information available depends on the capabilities of oilfield experiments. Another factor that should not be overlooked is the existence of errors in the test data. To determine reservoir properties, some inverse problems for parabolic equations were investigated. An approach to solving these inverse problems based on the method of regularization is proposed.
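The regularization idea can be illustrated in its simplest closed form: Tikhonov-regularized least squares for a single unknown parameter. This is a one-dimensional sketch of the general principle of stabilizing an estimate against noisy data, not the authors' method for parabolic equations:

```python
def tikhonov_1d(a, d, lam):
    """Closed-form Tikhonov-regularized least squares for one parameter m.

    Minimizes ||a*m - d||^2 + lam*m^2 for a scalar m, giving
    m = (a.d) / (a.a + lam). With lam = 0 this is ordinary least
    squares; lam > 0 damps the estimate, trading bias for stability
    when the data d contain measurement errors.
    """
    num = sum(ai * di for ai, di in zip(a, d))
    den = sum(ai * ai for ai in a) + lam
    return num / den
```

On noise-free data the unregularized estimate recovers the true parameter exactly; increasing lam shrinks the estimate, which is the price paid for robustness to data errors.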

Keywords: iterative approach, inverse problem, parabolic equation, reservoir properties

Procedia PDF Downloads 429
1545 Modification of Magneto-Transport Properties of Ferrimagnetic Mn₄N Thin Films by Ni Substitution and Their Magnetic Compensation

Authors: Taro Komori, Toshiki Gushi, Akihito Anzai, Taku Hirose, Kaoru Toko, Shinji Isogami, Takashi Suemasu

Abstract:

Ferrimagnetic antiperovskite Mn₄₋ₓNiₓN thin films exhibit both small saturation magnetization and rather large perpendicular magnetic anisotropy (PMA) when x is small. Both are suitable features for application to current-induced domain wall (DW) motion devices using spin transfer torque (STT). In this work, we successfully grew antiperovskite 30-nm-thick Mn₄₋ₓNiₓN epitaxial thin films on MgO(001) and STO(001) substrates by MBE in order to investigate their crystalline quality and their magnetic and magneto-transport properties. Crystalline quality was investigated by X-ray diffraction (XRD). The magnetic properties were measured by vibrating sample magnetometer (VSM), and the anomalous Hall effect was measured with a physical properties measurement system; both measurements were performed at room temperature. The temperature dependence of the magnetization was measured by VSM-SQUID (superconducting quantum interference device). The XRD patterns indicate epitaxial growth of Mn₄₋ₓNiₓN thin films on both substrates; those on STO(001) have higher c-axis orientation thanks to the better lattice matching. According to the VSM measurements, PMA was observed in Mn₄₋ₓNiₓN on MgO(001) when x ≤ 0.25 and on STO(001) when x ≤ 0.5, and MS decreased drastically with x. For example, the MS of Mn₃.₉Ni₀.₁N on STO(001) was 47.4 emu/cm³. From the anomalous Hall resistivity (ρAH) of Mn₄₋ₓNiₓN thin films on STO(001) with the magnetic field perpendicular to the plane, we found that Mr/MS was about 1 when x ≤ 0.25, which suggests large magnetic domains in the samples and features suitable for DW motion device applications. In contrast, such square curves were not observed for Mn₄₋ₓNiₓN on MgO(001), which we attribute to the difference in lattice matching. Furthermore, it is notable that although the sign of ρAH was negative when x = 0 and 0.1, it reversed to positive when x = 0.25 and 0.5. A similar reversal occurred in the temperature dependence of the magnetization.
The magnetization of Mn₄₋ₓNiₓN on STO(001) increases with decreasing temperature when x = 0 and 0.1, while it decreases when x = 0.25. We consider that these reversals are caused by a magnetic compensation which occurs in Mn₄₋ₓNiₓN between x = 0.1 and 0.25. We expect the Mn atoms of the Mn₄₋ₓNiₓN crystal to have larger magnetic moments than the Ni atoms. The temperature dependence stated above can be explained if we assume that the Ni atoms preferentially occupy the corner sites and that their magnetic moments have a different temperature dependence from the Mn atoms at the face-centered sites. At the compensation point, Mn₄₋ₓNiₓN is expected to show very efficient STT and ultrafast DW motion at small current densities. Moreover, if angular momentum compensation is found, the efficiency will be optimized. To prove the magnetic compensation, X-ray magnetic circular dichroism measurements will be performed. Energy-dispersive X-ray spectrometry is a candidate method for analyzing the accurate composition ratio of the samples.

Keywords: compensation, ferrimagnetism, Mn₄N, PMA

Procedia PDF Downloads 135
1544 Validity of Universe Structure Conception as Nested Vortexes

Authors: Khaled M. Nabil

Abstract:

This paper introduces the nested vortexes conception of the universe's structure and interprets the physical phenomena according to this conception. The paper first reviews recent physics theories, on both the microscopic and macroscopic scales, to collect evidence that space is not empty. However, these theories describe the properties of the space medium without determining its structure. Determining the structure of the space medium is essential to understanding the mechanism that leads to its properties. Without determining the space medium's structure, many phenomena, such as electric and magnetic fields, gravity, or wave-particle duality, remain uninterpreted. Thus, this paper introduces a conception of the structure of the universe. It assumes that the universe is a medium of ultra-tiny homogeneous particles which are still undiscovered. As in any medium with certain movements, possibly because of a great asymmetric explosion, vortexes have occurred. A vortex condenses the ultra-tiny particles in its center, forming a bigger particle; the bigger particles, in turn, can be trapped in a bigger vortex and condense in its center, forming a much bigger particle, and so on. This conception describes galaxies, stars, and protons as particles at different levels. The existence of the particles' vortexes implies that the constancy-of-the-speed-of-light postulate does not hold. This conception shows that vortex motion dynamics agree with the motion of the universe's particles at every level. An experiment has been carried out to detect the orbiting effect of the aggregated vortexes of the aligned atoms of a permanent magnet. Based on the described particle structure, the gravity force of a particle and the attraction between particles, as well as charge, electric and magnetic fields, and quantum mechanical characteristics, are interpreted. In this way, the previously unexplained physical phenomena are accounted for.

Keywords: astrophysics, cosmology, particles’ structure model, particles’ forces

Procedia PDF Downloads 120
1543 Effect of Therapeutic Exercises with or without Positional Release Technique in Treatment of Chronic Mechanical Low Back Pain Patients a Randomized Controlled Trial

Authors: Ghada M. R. Koura, Mohamed N. Mohamed, Ahmed M. F. El Shiwi

Abstract:

Chronic mechanical low back pain (CMLBP) is the most common problem of the working-age population in modern industrial society; it causes a substantial economic burden due to the wide use of medical services and absence from work. Aim of work: The aim of this study was to investigate the effect of the positional release technique on patients with chronic mechanical low back pain. Materials and Methods: Thirty-two patients of both sexes, aged 20 to 45 years, were diagnosed with CMLBP and divided randomly into two equal groups of sixteen patients each. Group A (control group) received therapeutic exercises comprising stretching and strengthening exercises for the back and abdominal muscles. Group B (experimental group) received the therapeutic exercises together with the positional release technique. Treatment was applied 3 days/week for 4 weeks. Pain was measured by the Visual Analogue Scale, lumbar range of motion was measured by inclinometer, and functional disability was measured by the Oswestry disability scale. Measurements were taken at two intervals, pre-treatment and post-treatment. Results: The data obtained were analyzed via paired and unpaired t-tests. There were statistical differences between the 2 groups, with the experimental group showing greater improvement than the control group. Conclusion: The positional release technique is considered an effective treatment for reducing pain and functional disability and increasing lumbar range of motion in individuals with chronic mechanical low back pain.
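The paired t-test mentioned in the results can be sketched as follows for pre-/post-treatment scores. The statistic and the data below are hypothetical illustrations, not the study's values:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic for matched pre-/post-treatment measurements.

    Computes the within-subject differences d_i = pre_i - post_i, then
    t = mean(d) / (sd(d) / sqrt(n)). The resulting t is compared against
    the t distribution with n-1 degrees of freedom for significance.
    """
    d = [a - b for a, b in zip(pre, post)]
    n = len(d)
    return mean(d) / (stdev(d) / sqrt(n))
```

For pre-treatment pain scores [5, 6, 8] dropping to [3, 4, 5] after treatment, the consistent improvement yields a large t value even with only three subjects.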

Keywords: chronic mechanical low back pain, traditional physical therapy program, positional release technique, randomized controlled trial

Procedia PDF Downloads 598
1542 Multi-Criteria Selection and Improvement of Effective Design for Generating Power from Sea Waves

Authors: Khaled M. Khader, Mamdouh I. Elimy, Omayma A. Nada

Abstract:

Sustainable development is the nominal goal of most countries at present. In general, fossil fuels are the mainstay of development in most countries. Regrettably, the fossil fuel consumption rate is very high, and the world will soon face the problem of conventional fuel depletion. In addition, there are many environmental pollution problems resulting from the emission of harmful gases and vapors during fuel burning. Thus, clean, renewable energy has become the main concern of most countries for filling the gap between available energy resources and their growing needs. There are many renewable energy sources, such as wind, solar, and wave energy. Energy can be obtained from the motion of sea waves almost all the time, whereas power generation from solar or wind energy is highly restricted to sunny periods or the availability of suitable wind speeds. Moreover, energy produced from sea wave motion is one of the cheapest types of clean energy, and harvesting it avoids harmful emissions. Cheap electricity can be generated from wave energy using different systems such as the oscillating-bodies system, the pendulum gate system, the ocean Wave Dragon system, and the oscillating water column device. In this paper, a multi-criteria model has been developed using the Analytic Hierarchy Process (AHP) to support the decision of selecting the most effective system for generating power from sea waves. The paper provides a broad overview of the different design alternatives for sea wave energy converter systems. The considered design alternatives have been evaluated using the developed AHP model. The multi-criteria assessment reveals that the off-shore oscillating water column (OWC) system is the most appropriate system for generating power from sea waves. The OWC system consists of a suitable hollow chamber at the shore which is completely closed except at its base, which has an open area for gathering the moving sea waves.
The sea waves' motion pushes the air up and down through a Wells turbine, generating power. Improving the power generation capability of the OWC system is one of the main objectives of this research. After investigating the effect of some design modifications, it has been concluded that selecting appropriate settings of some effective design parameters, such as the number of layers of Wells turbine fans and the intermediate distance between the fans, can result in significant improvements. Moreover, a simple dynamic analysis of the Wells turbine is introduced. Furthermore, the paper compares the theoretical results with experimental results from the built prototype.
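The AHP ranking step described above can be sketched with the common normalized-column approximation of the priority eigenvector. The pairwise comparison values below are hypothetical, not the paper's judgments:

```python
def ahp_priorities(M):
    """Approximate AHP priority vector from a pairwise comparison matrix.

    M[i][j] states how strongly alternative i is preferred over j
    (with M[j][i] = 1/M[i][j]). Each column is normalized to sum to 1,
    and row averages of the normalized matrix approximate the principal
    eigenvector, i.e., the relative weights of the alternatives.
    """
    n = len(M)
    col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
    norm = [[M[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(row) / n for row in norm]
```

If one converter design is judged 3 times preferable to another, the method assigns it weight 0.75 versus 0.25; with more alternatives and criteria, the per-criterion weights are combined into an overall ranking.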

Keywords: renewable energy, oscillating water column, multi-criteria selection, Wells turbine

Procedia PDF Downloads 164
1541 Advanced Mouse Cursor Control and Speech Recognition Module

Authors: Prasad Kalagura, B. Veeresh kumar

Abstract:

We constructed an interface system that allows a severely paralyzed user to interact with a computer with almost full functional capability. A real-time tracking algorithm is implemented based on adaptive skin detection and motion analysis. Mouse clicking is activated by the user's eye blinks, detected through a sensor. The keyboard function is implemented by a voice recognition kit.
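The skin-detection step can be illustrated with a classic explicit RGB skin rule (often attributed to Peer and Kovač). This is a common baseline assumed here for illustration; the abstract does not state which rule the authors used:

```python
def is_skin(r, g, b):
    """Classify an RGB pixel as skin using the explicit Peer/Kovač rule.

    Heuristic thresholds for daylight illumination: red dominant,
    sufficient color spread, and red clearly above green and blue.
    An adaptive detector would tune such thresholds per user/lighting.
    """
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)
```

Pixels passing the rule form a binary mask whose centroid can be tracked frame to frame to drive the cursor.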

Keywords: embedded ARM7 processor, mouse pointer control, voice recognition

Procedia PDF Downloads 579
1540 Improving Pneumatic Artificial Muscle Performance Using Surrogate Model: Roles of Operating Pressure and Tube Diameter

Authors: Van-Thanh Ho, Jaiyoung Ryu

Abstract:

In soft robotics, the optimization of fluid dynamics through pneumatic methods plays a pivotal role in enhancing operational efficiency and reducing energy loss. This is particularly crucial when replacing conventional techniques such as cable-driven electromechanical systems. The pneumatic model employed in this study is a framework designed to efficiently channel pressure from a high-pressure reservoir to various muscle locations on the robot's body through a branching network of tubes. The study introduces a comprehensive pneumatic model encompassing a reservoir, tubes, and pneumatic artificial muscles (PAMs). The development of this model is rooted in the principles of shock tube theory. Notably, the study leverages experimental data to enhance the understanding of the interplay between the PAM structure and the surrounding fluid. This improved interactive approach involves the use of a morphing motion guided by a contraction function. The study's findings demonstrate a high degree of accuracy in predicting the pressure distribution within the PAM: the model's error relative to the experimental data remains below a threshold of 10%. Additionally, the research employs a machine learning model, specifically a surrogate model based on the Kriging method, to assess and quantify the uncertainty related to the initial reservoir pressure and the tube diameter. This comprehensive approach enhances our understanding of pneumatic soft robotics and its potential for improved operational efficiency.
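The Kriging surrogate idea can be sketched as a zero-mean Gaussian-process mean predictor with an RBF kernel. This toy one-dimensional version (hypothetical length scale and nugget, tiny dense linear solve) only illustrates the interpolation principle, not the study's surrogate over reservoir pressure and tube diameter:

```python
import math

def rbf(a, b, ls=1.0):
    """Squared-exponential (RBF) correlation between two scalar inputs."""
    return math.exp(-((a - b) ** 2) / (2 * ls * ls))

def solve(A, y):
    """Gauss-Jordan elimination (no pivoting; toy-sized systems only)."""
    n = len(A)
    M = [row[:] + [yv] for row, yv in zip(A, y)]
    for i in range(n):
        p = M[i][i]
        M[i] = [v / p for v in M[i]]
        for j in range(n):
            if j != i:
                f = M[j][i]
                M[j] = [vj - f * vi for vj, vi in zip(M[j], M[i])]
    return [M[i][n] for i in range(n)]

def krige(xs, ys, xq, ls=1.0, nug=1e-9):
    """Simple (zero-mean) Kriging mean prediction at query point xq.

    Builds the covariance matrix of the training inputs (with a small
    nugget on the diagonal for numerical stability), solves K a = y,
    and predicts k(xq)^T a, which interpolates the training data.
    """
    K = [[rbf(a, b, ls) + (nug if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(ai * rbf(x, xq, ls) for ai, x in zip(alpha, xs))
```

A key property, and the reason Kriging suits expensive simulations, is that the predictor reproduces the training observations exactly (up to the nugget) while smoothly interpolating between them.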

Keywords: pneumatic artificial muscles, pressure drop, morphing motion, branched network, surrogate model

Procedia PDF Downloads 100
1539 Reliability of 2D Motion Analysis System for Sagittal Plane Lower Limb Kinematics during Running

Authors: Seyed Hamed Mousavi, Juha M. Hijmans, Reza Rajabi, Ron Diercks, Johannes Zwerver, Henk van der Worp

Abstract:

Introduction: Running is one of the most popular sports activities. Improper sagittal-plane ankle, knee, and hip kinematics are considered to be associated with an increased injury risk in runners. Motion-assessing smartphone applications are increasingly used to measure kinematics both in the field and in the laboratory, as they are cheaper, more portable, more accessible, and easier to use than a 3D motion analysis system. The aims of this study are 1) to compare the results of a 3D gait analysis system and the Coach's Eye (CE) app, and 2) to evaluate the test-retest and intra-rater reliability of the CE app for the sagittal-plane hip, knee, and ankle angles at touchdown and toe-off while running. Method: Twenty subjects participated in this study. Sixteen reflective markers and cluster markers were attached to each subject's body. Subjects were asked to run at a self-selected speed on a treadmill. Twenty-five seconds of running were collected for analyzing the kinematics of interest. To measure the sagittal-plane hip, knee, and ankle joint angles at touchdown (TD) and toe-off (TO), the mean of the first ten acceptable consecutive strides was calculated for each angle. A smartphone (Samsung Note 5, Android) was placed on the right side of the subject so that the whole body was filmed simultaneously with the 3D gait system during running. All subjects repeated the task at the same running speed after a short interval of 5 minutes. The CE app, installed on the smartphone, was used to measure the sagittal-plane hip, knee, and ankle joint angles at touchdown and toe-off of the stance phase. Results: The intraclass correlation coefficient (ICC) was used to assess test-retest and intra-rater reliability. To analyze the agreement between the 3D and 2D outcomes, Bland-Altman plots were used.
The ICC values were: ankle at TD (TRR=0.8, IRR=0.94), ankle at TO (TRR=0.9, IRR=0.97), knee at TD (TRR=0.78, IRR=0.98), knee at TO (TRR=0.9, IRR=0.96), hip at TD (TRR=0.75, IRR=0.97), and hip at TO (TRR=0.87, IRR=0.98). The Bland-Altman plots, displaying the mean difference (MD) and ±2 standard deviations of the MD (2SDMD) between 3D and 2D outcomes, gave: ankle at TD (MD=3.71, +2SDMD=8.19, -2SDMD=-0.77), ankle at TO (MD=-1.27, +2SDMD=6.22, -2SDMD=-8.76), knee at TD (MD=1.48, +2SDMD=8.21, -2SDMD=-5.25), knee at TO (MD=-6.63, +2SDMD=3.94, -2SDMD=-17.19), hip at TD (MD=1.51, +2SDMD=9.05, -2SDMD=-6.03), and hip at TO (MD=-0.18, +2SDMD=12.22, -2SDMD=-12.59). Discussion: The ability to reproduce measurements accurately is valuable in performance and clinical assessment of joint-angle outcomes. The results of this study showed that the intra-rater and test-retest reliability of the CE app for all measured kinematics is excellent (ICC ≥ 0.75). The Bland-Altman plots show large differences for the ankle at TD and the knee at TO. Measuring the ankle at TD by 2D gait analysis depends on the plane of movement: since ankle motion at TD mostly occurs outside the sagittal plane, the measurements can differ as the foot progression angle at TD increases during running. The difference in values for the knee at TO can depend on how the 3D system and the rater detect TO during the stance phase of running.
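The agreement analysis described above (mean difference with ±2 SD limits of agreement) can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code; the function name and sample values are our own.

```python
from statistics import mean, stdev

def bland_altman_limits(method_a, method_b):
    """Mean difference (MD) and MD ± 2 SD limits of agreement
    between paired measurements from two methods (e.g. 3D system vs. 2D app)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    md = mean(diffs)
    sd = stdev(diffs)
    return md, md + 2 * sd, md - 2 * sd

# Hypothetical paired joint-angle readings (degrees) from the two methods
md, upper, lower = bland_altman_limits([10, 12, 11, 13], [9, 10, 12, 11])
```

A pair of measurements agrees acceptably when roughly 95% of differences fall inside [lower, upper]; a wide interval (as reported here for the knee at TO) signals poor 2D/3D agreement even when the ICC is high.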

Keywords: reliability, running, sagittal plane, two dimensional

Procedia PDF Downloads 202
1538 Effects of Machining Parameters on the Surface Roughness and Vibration of the Milling Tool

Authors: Yung C. Lin, Kung D. Wu, Wei C. Shih, Jui P. Hung

Abstract:

High-speed, high-precision machining has become the most important technology in the manufacturing industry. The surface roughness of high-precision components is regarded as an important characteristic of product quality. However, machining chatter can damage the machined surface and restricts process efficiency; selection of appropriate cutting conditions is therefore important to prevent the occurrence of chatter. In addition, vibration of the spindle tool also affects surface quality, which implies that surface precision can be controlled by monitoring the vibration of the spindle tool. Based on this concept, this study investigated the influence of machining conditions on the surface roughness and the vibration of the spindle tool. To this end, a series of machining tests was conducted on aluminum alloy. In the tests, the vibration of the spindle tool was measured using acceleration sensors, and the surface roughness of the machined parts was examined using a white-light interferometer. Response surface methodology (RSM) was employed to establish mathematical models for predicting surface finish and tool vibration, respectively. The correlation between surface roughness and spindle tool vibration was also analyzed by ANOVA. According to the machining tests, machined surfaces with and without chatter were marked on the stability lobes diagram as verification of the machining conditions. Using multivariable regression analysis, the mathematical models for predicting surface roughness and tool vibration were developed from the machining parameters: cutting depth (a), feed rate (f), and spindle speed (s). The predicted roughness agrees well with the measured roughness, with an average percentage error of 10%; the average percentage error of the tool vibrations between measurements and model predictions is about 7.39%. In addition, the tool vibration under various machining conditions was found to correlate positively with the surface roughness (r=0.78). In conclusion, mathematical models were successfully developed for predicting the surface roughness and vibration level of the spindle tool under different cutting conditions, which can help to select appropriate cutting parameters and to monitor machining conditions to achieve high surface quality in milling operations.
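The multivariable regression step can be sketched as a least-squares fit of roughness against the three machining parameters. This is a minimal first-order illustration with made-up numbers, not the authors' RSM model (which may include interaction and quadratic terms); the function name and data are assumptions.

```python
import numpy as np

def fit_roughness_model(depth, feed, speed, roughness):
    """Least-squares fit of a first-order model
    Ra = b0 + b1*a + b2*f + b3*s.
    Returns coefficients, predictions, and mean absolute percentage error."""
    X = np.column_stack([np.ones_like(depth), depth, feed, speed])
    coef, *_ = np.linalg.lstsq(X, roughness, rcond=None)
    pred = X @ coef
    mape = float(np.mean(np.abs(pred - roughness) / roughness) * 100)
    return coef, pred, mape

# Hypothetical cutting conditions: depth a (mm), feed f (mm/tooth), speed s (rpm)
depth = np.array([0.5, 1.0, 1.5, 2.0, 0.5, 1.5])
feed = np.array([0.05, 0.10, 0.05, 0.10, 0.10, 0.05])
speed = np.array([6000.0, 8000.0, 10000.0, 6000.0, 10000.0, 8000.0])
ra = 0.2 + 0.5 * depth + 3.0 * feed + 0.0001 * speed  # synthetic roughness
coef, pred, mape = fit_roughness_model(depth, feed, speed, ra)
```

The percentage-error figure computed here corresponds to the 10% / 7.39% agreement metrics quoted in the abstract.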

Keywords: machining parameters, machining stability, regression analysis, surface roughness

Procedia PDF Downloads 232
1537 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads

Authors: Raja Umer Sajjad, Chang Hee Lee

Abstract:

Stormwater runoff is a leading contributor to pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify and eventually reduce stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates due to sampling errors, since the success of a monitoring program depends mainly on the accuracy of the estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimal stormwater monitoring design considering both cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012-2014) from a mixed land-use site within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid, and precipitation is usually well distributed through the year. The investigation of a large number of water quality parameters is time-consuming and resource-intensive; in order to identify a suite of easy-to-measure parameters to act as surrogates, principal component analysis (PCA) was applied. Means, standard deviations, coefficients of variation (CV), and other simple statistics were computed using the multivariate statistical analysis software SPSS 22.0. The implications of sampling time for monitoring results, the number of samples required during a storm event, and the impact of the seasonal first flush were also examined. Based on the observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus, and heavy metals such as lead, chromium, and copper, whereas chemical oxygen demand (COD) was identified as a surrogate for organic matter. The CV among the monitored water quality parameters was found to be high (ranging from 3.8 to 15.5), suggesting that a grab-sampling design for estimating mass emission rates in the study area can lead to errors due to large variability. The TSS discharge load calculation error was found to be only 2% between two sample-size approaches: 17 samples per storm event versus 6 equally distributed samples per storm event. Both seasonal first flush and event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate the mass emissions; however, it was found that a grab sample collected after the initial hour of the storm event more closely approximates the mean concentration of the event. It was concluded that site- and climate-specific interventions can be made to optimize the stormwater monitoring program to make it more effective and economical.
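The surrogate-selection idea (a cheap parameter standing in for expensive ones it strongly correlates with) can be sketched from a correlation matrix. This is an illustrative reimplementation with synthetic numbers, not the study's SPSS/PCA workflow; the function name, threshold, and data are assumptions.

```python
import numpy as np

def correlation_surrogates(data, names, threshold=0.8):
    """For each water-quality parameter, list the other parameters it could
    stand in for, i.e. those with |Pearson r| >= threshold.
    data: rows are storm samples, columns are parameters (ordered as names)."""
    r = np.corrcoef(data, rowvar=False)
    return {
        names[i]: [names[j] for j in range(len(names))
                   if j != i and abs(r[i, j]) >= threshold]
        for i in range(len(names))
    }

# Synthetic samples: turbidity tracks TSS almost linearly, COD does not
data = np.array([
    [1.0, 2.1, 5.0],
    [2.0, 3.9, 1.0],
    [3.0, 6.2, 4.0],
    [4.0, 8.0, 2.0],
    [5.0, 10.1, 3.0],
])
surrogates = correlation_surrogates(data, ["TSS", "turbidity", "COD"])
```

In the paper the same screening is refined with a PCA biplot, so parameters loading on the same principal component are grouped before a surrogate is picked.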

Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters

Procedia PDF Downloads 241
1536 Discrimination and Classification of Vestibular Neuritis Using Combined Fisher and Support Vector Machine Model

Authors: Amine Ben Slama, Aymen Mouelhi, Sondes Manoubi, Chiraz Mbarek, Hedi Trabelsi, Mounir Sayadi, Farhat Fnaiech

Abstract:

Vertigo is a sensation of feeling off balance; the cause of this symptom is very difficult to interpret and needs a complementary exam. Generally, vertigo is caused by an ear problem; some of the most common causes include benign paroxysmal positional vertigo (BPPV), Meniere's disease, and vestibular neuritis (VN). In clinical practice, different tests of the videonystagmographic (VNG) technique are used to detect the presence of VN. The topographical diagnosis of this disease presents a large diversity in its characteristics, which poses problems for the usual etiological analysis methods. In this study, a vestibular neuritis analysis method is proposed for VNG applications using an estimation of pupil movements in the case of uncontrolled motion, in order to obtain efficient and reliable diagnosis results. First, the pupil displacement vectors are estimated using the Hough transform (HT) to approximate the location of the pupil region. Then, temporal and frequency features are computed from the rotation angle variation of the pupil motion. Finally, optimized features are selected using the Fisher criterion for discrimination and classification of VN. Experimental results are analyzed using two categories: normal and pathologic. By classifying the reduced features using a support vector machine (SVM), a classification accuracy of 94% is achieved. Compared to recent studies, the proposed expert system is extremely helpful and highly effective in resolving the problem of VNG analysis and provides an accurate diagnosis for medical devices.
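The Fisher-criterion feature-selection step can be sketched as a per-feature score that ranks features before the SVM stage. This is an illustrative sketch with synthetic data, not the authors' pipeline; the function name and example arrays are assumptions.

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher criterion per feature for a two-class problem (normal vs.
    pathologic): (difference of class means)^2 / (sum of class variances).
    Higher scores mark more discriminative features to keep for the SVM."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12  # guard against zero variance
    return num / den

# Synthetic example: feature 0 separates the classes, feature 1 is noise
X = np.array([[0.0, 5.0], [0.1, 1.0], [0.2, 4.0],
              [5.0, 2.0], [5.1, 5.0], [5.2, 3.0]])
y = np.array([0, 0, 0, 1, 1, 1])
scores = fisher_scores(X, y)
```

Keeping only the top-scoring features reduces the dimensionality fed to the SVM, which is the role the Fisher criterion plays in the proposed system.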

Keywords: nystagmus, vestibular neuritis, videonystagmographic system, VNG, Fisher criterion, support vector machine, SVM

Procedia PDF Downloads 139
1535 Comparison of Allowable Stress Method and Time History Response Analysis for Seismic Design of Buildings

Authors: Sayuri Inoue, Naohiro Nakamura, Tsubasa Hamada

Abstract:

Seismic design methods for buildings fall into two types: static design and dynamic design. Static design is a method that applies a static force as the seismic force; it is a relatively simple method created from the experience of seismic motion over the past 100 years. At present, static design is used for most Japanese buildings. Dynamic design mainly refers to time history response analysis: a comparatively difficult method that inputs an assumed earthquake motion into the building model and examines the response. Currently, it is used only for skyscrapers and certain specific buildings. Under the present design standard in Japan, either static or dynamic design may be used for medium- and high-rise buildings. However, when such buildings are actually designed by both methods, the relatively simple static design satisfies the criteria, while the more difficult dynamic design often does not. This is because the dynamic design method was built with the intention of designing super high-rise buildings; higher safety is required than for general buildings, so the criteria are stricter. The authors consider applying the dynamic design method to general buildings that have so far been designed by the static method, because dynamic design is reasonable for buildings that depart from conventional structural forms, for example those emphasizing architectural design. For this purpose, it is important to compare the design results when the criteria of both methods are placed side by side. In this study, we performed time history response analysis on medium-rise buildings that were actually designed with the allowable stress method. A quantitative comparison between static and dynamic design was conducted, and the characteristics of both design methods were examined.

Keywords: buildings, seismic design, allowable stress design, time history response analysis, Japanese seismic code

Procedia PDF Downloads 157
1534 The Effects of Impact Forces and Kinematics of Two Different Stance Position at Straight Punch Techniques in Boxing

Authors: Bergun Meric Bingul, Cigdem Bulgan, Ozlem Tore, Mensure Aydin, Erdal Bal

Abstract:

The aim of the study was to compare the impact forces and selected kinematic parameters of the straight punch between two stance positions in boxing. Nine elite boxers from the Turkish National Team (mean age ± SD 19.33 ± 2.11 years, mean height 174.22 ± 3.79 cm, mean weight 66.0 ± 6.62 kg) voluntarily participated in this study. Each boxer performed one trial of the straight punch technique on a sandbag from each of the two stance positions (orthodox and southpaw). The trials were recorded at a frequency of 120 Hz using eight synchronized high-speed cameras (Oqus 7+), placed approximately at right angles to one another. Three-dimensional motion analysis was performed with a motion capture system (Qualisys, Sweden), and data were transferred to the Windows-based acquisition software QTM (Qualisys Track Manager). An 11-segment model (calf, leg, punch, upper arm, lower arm, trunk) was used to determine the kinematic variables, and markers were also attached to the sandbag for calculation of the impact forces. The wand calibration method (with a T-stick) was used for field calibration. The mean velocity and acceleration of the punch, the mean acceleration of the sandbag, and the angles of the trunk, shoulder, hip, and knee were calculated. Differences between stances were compared with the Wilcoxon test using SPSS 20.0. According to the results, a statistically significant difference was found in trunk angle in the sagittal plane (yz) (p<0.05). Significant differences were also found in sandbag acceleration and impact force between stance positions (p<0.05): the boxers achieved greater impact forces and accelerations from the orthodox stance. It is therefore recommended to use the orthodox stance rather than the southpaw stance in the straight punch technique, especially for generating greater impact forces.
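The paired, non-parametric comparison used here (Wilcoxon on nine athletes) can be illustrated with an exact sign-flip permutation test, which is feasible for such small samples. This is a conceptual stand-in, not the SPSS Wilcoxon procedure; the function name and values are assumptions.

```python
from itertools import product

def sign_flip_pvalue(cond_a, cond_b):
    """Exact two-sided paired permutation (sign-flip) test on the sum of
    paired differences; enumerates all 2^n sign patterns, so only
    practical for small n (here n = 9 athletes)."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    observed = abs(sum(diffs))
    patterns = list(product([1, -1], repeat=len(diffs)))
    extreme = sum(
        1 for signs in patterns
        if abs(sum(s * d for s, d in zip(signs, diffs))) >= observed
    )
    return extreme / len(patterns)

# Hypothetical paired impact-force scores: orthodox vs. southpaw per athlete
orthodox = [10, 12, 11, 13, 12, 14, 11, 12, 13]
southpaw = [o - 1 for o in orthodox]
p = sign_flip_pvalue(orthodox, southpaw)
```

When every athlete scores higher in one stance, only the two all-same sign patterns are as extreme as the data, giving p = 2/512 ≈ 0.004, the kind of p < 0.05 result reported in the abstract.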

Keywords: boxing, impact force, kinematics, straight punch, orthodox, southpaw

Procedia PDF Downloads 326
1533 Analytical Model of Locomotion of a Thin-Film Piezoelectric 2D Soft Robot Including Gravity Effects

Authors: Zhiwu Zheng, Prakhar Kumar, Sigurd Wagner, Naveen Verma, James C. Sturm

Abstract:

Soft robots have drawn great interest recently due to the rich range of possible shapes and motions they can take on to address new applications, compared to traditional rigid robots. Large-area electronics (LAE) provides a unique platform for creating soft robots by leveraging thin-film technology to enable the integration of a large number of actuators, sensors, and control circuits on flexible sheets. However, the rich shapes and motions possible, especially when interacting with complex environments, pose significant challenges to forming well-generalized and robust models necessary for robot design and control. In this work, we describe an analytical model for predicting the shape and locomotion of a flexible (steel-foil-based) piezoelectric-actuated 2D robot based on Euler-Bernoulli beam theory. Nominally (unpowered), it lies flat on the ground; when powered, its shape is controlled by an array of piezoelectric thin-film actuators. Key features of the model are its ability to incorporate the significant effects of gravity on the shape and to precisely predict the spatial distribution of friction against the contacting surfaces, necessary for determining inchworm-type motion. We verified the model by developing a distributed discrete-element representation of a continuous piezoelectric actuator and by comparing its analytical predictions to discrete-element robot simulations using PyBullet. Without gravity, predicting the shape of a sheet with a linear array of piezoelectric actuators at arbitrary voltages is straightforward. However, gravity significantly distorts the shape of the sheet, causing some segments to flatten against the ground. Our work includes the following contributions: (i) A self-consistent approach was developed to exactly determine which parts of the soft robot are lifted off the ground, and the exact shape of these sections, for an arbitrary array of piezoelectric voltages and configurations.
(ii) Inchworm-type motion relies on controlling the relative friction with the ground surface in different sections of the robot. By adding torque balance to our model and analyzing shear forces, the model can determine the exact spatial distribution of the vertical force that the ground exerts on the soft robot; from this, the spatial distribution of friction forces between ground and robot can be determined. (iii) By combining this spatial friction distribution with the shape of the soft robot as a function of time as the piezoelectric actuator voltages are changed, the inchworm-type locomotion of the robot can be determined. As a practical example, we calculated the performance of a 5-actuator system on a 50-µm-thick steel foil, assuming the piezoelectric properties of commercially available thin-film piezoelectric actuators. The model predicted inchworm motion of up to 200 µm per step. For independent verification, we also modelled the system using PyBullet, a discrete-element robot simulator. To model a continuous thin-film piezoelectric actuator, we broke each actuator into multiple segments, each consisting of two rigid arms with appropriate mass connected by a 'motor' whose torque was set by the applied actuator voltage. Excellent agreement between our analytical model and the discrete-element simulator was shown both for the full deformation shape and for the motion of the robot.
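The gravity-free shape prediction described as "straightforward" amounts to integrating a curvature profile along the sheet. The sketch below assumes a piecewise-constant curvature per actuator segment (one value per applied voltage); it is our illustration of the kinematics, not the authors' self-consistent gravity model, and all names are ours.

```python
import math

def beam_shape(curvatures, seg_len):
    """Integrate piecewise-constant curvature (one value per actuator
    segment, in 1/length units) along the sheet to get the 2D centerline,
    starting flat at the origin. Returns segment endpoints (x, y)."""
    x, y, theta = 0.0, 0.0, 0.0
    pts = [(x, y)]
    for kappa in curvatures:
        # midpoint rule: advance along the mid-segment tangent, then bend
        theta_mid = theta + kappa * seg_len / 2
        x += seg_len * math.cos(theta_mid)
        y += seg_len * math.sin(theta_mid)
        theta += kappa * seg_len
        pts.append((x, y))
    return pts

flat = beam_shape([0.0] * 5, 1.0)   # unpowered: sheet stays on the ground
bent = beam_shape([0.1] * 5, 1.0)   # uniform voltage: sheet arcs upward
```

The paper's contribution (i) is precisely what this sketch omits: deciding self-consistently which of these segments gravity presses flat against the ground.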

Keywords: analytical modeling, piezoelectric actuators, soft robot locomotion, thin-film technology

Procedia PDF Downloads 181
1532 The Effect of Low Power Laser on CK and Some of Markers Delayed Onset Muscle Soreness (DOMS)

Authors: Bahareh Yazdanparast Chaharmahali

Abstract:

The study examined the effect of low-power laser therapy on knee range of motion (flexion and extension), resting knee joint angle, knee circumference, and rating of delayed-onset muscle soreness (DOMS) induced pain, 24 and 48 hours after eccentric training of the knee flexor muscles (hamstrings). We investigated effects on swelling, relaxed knee flexion and extension angles, and pain. Twenty female college students voluntarily participated in this research. On day 1, in order to induce delayed-onset muscle soreness, subjects eccentrically trained their knee flexor muscles. On day 2, subjects were randomly divided into two groups: control and low-power laser therapy. At 24 and 48 hours after eccentric training, the variables (knee flexion and extension range of motion, resting knee joint angle, and knee circumference) were measured and analyzed. Data are reported as means ± standard error (SE), and repeated measures were used to assess differences within groups. The treatment (low-power laser therapy) had significant effects on the delayed-onset muscle soreness markers: at 24 and 48 hours after training, a significant difference in mean pain was observed between the low-power laser therapy and control groups, with a significant Bonferroni post hoc test. Low-power laser therapy as used in this study significantly diminished the effects of delayed-onset muscle soreness on swelling and on relaxed knee extension and flexion angles.

Keywords: creatine kinase, DOMS, eccentric training, low power laser

Procedia PDF Downloads 246
1531 Botulinum Toxin type A for Lower Limb Lengthening and Deformity Correction: A Systematic Review and Meta-analysis

Authors: Jawaher F. Alsharef, Abdullah A. Ghaddaf, Mohammed S. Alomari, Abdullah A. Al Qurashi, Ahmed S. Abdulhamid, Mohammed S. Alshehri, Majed Alosaimi

Abstract:

Botulinum toxin type A (BTX-A) is the most popular therapeutic agent for muscle relaxation and pain control. Lately, BTX-A injection has received great interest as part of multimodal pain management for lower limb lengthening and deformity correction. This systematic review aimed to determine the role of BTX-A injection in pain management during lower limb lengthening and/or deformity correction. We searched Medline, Embase, and CENTRAL for randomized controlled trials (RCTs) that compared BTX-A injection to placebo in individuals undergoing lower limb lengthening and/or deformity correction. We sought to evaluate the following outcomes: pain on a visual analogue scale (VAS), range-of-motion parameters, average opioid consumption, and adverse events. The standardized mean difference (SMD) was used for continuous outcomes, and the risk ratio (RR) for dichotomous outcomes. A total of 4 RCTs enrolling 257 participants (337 limbs) were deemed eligible. Adjuvant BTX-A injection showed a significant reduction in post-operative pain compared to placebo (SMD=-0.28, 95% CI -0.53 to -0.04). No difference was found between BTX-A injection and placebo in range-of-motion parameters, average opioid consumption, or adverse events after surgical limb lengthening and/or deformity correction (RR=0.77, 95% CI 0.58 to 1.03). Conclusions: Adjuvant BTX-A injection conferred a discernible reduction in post-operative pain during surgical limb lengthening and/or deformity correction without increasing the risk of adverse events.
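The SMD pooling used for the continuous outcomes can be sketched with Cohen's d and fixed-effect inverse-variance weighting. This is an illustrative sketch (the review may have used Hedges' g and a random-effects model); function names and the example numbers are assumptions.

```python
import math

def cohen_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d (pooled-SD standardized mean difference) and its
    approximate sampling variance for one two-arm study."""
    sp = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    var = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
    return d, var

def pooled_smd(studies):
    """Fixed-effect inverse-variance pooling of per-study (smd, variance) pairs."""
    weights = [1.0 / var for _, var in studies]
    total = sum(w * d for w, (d, _) in zip(weights, studies))
    return total / sum(weights)

# Hypothetical study: mean VAS 10 (SD 2, n=20) with BTX-A vs. 12 (SD 2, n=20) placebo
d, var = cohen_d(10.0, 2.0, 20, 12.0, 2.0, 20)
overall = pooled_smd([(d, var), (d, var)])  # two identical studies pool to d
```

A pooled SMD whose 95% CI excludes zero, like the -0.28 (-0.53 to -0.04) reported here, is what marks the pain reduction as significant.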

Keywords: botulinum toxin type A, limb lengthening, distraction osteogenesis, deformity correction, pain management

Procedia PDF Downloads 142
1530 Study on Novel Reburning Process for NOx Reduction by Oscillating Injection of Reburn Fuel

Authors: Changyeop Lee, Sewon Kim, Jongho Lee

Abstract:

Reburning technology has been developed for adoption in various commercial combustion systems. Fuel-lean reburning is an advanced reburning method that reduces NOx economically without using burnout air; however, it is not easy to achieve high NOx reduction efficiency. In a fuel-lean reburning system, localized fuel-rich eddies are used to establish partial fuel-rich regions so that NOx can react selectively with hydrocarbon radicals. In this paper, a new advanced reburning method that supplies the reburn fuel with an oscillatory motion is introduced to increase the NOx reduction rate effectively. To clarify whether forced oscillating injection of reburn fuel can effectively reduce NOx emission, experimental tests were conducted in a vertical combustion furnace. Experiments were performed in flames stabilized by a gas burner mounted at the bottom of the furnace. Natural gas is used as both the main and reburn fuel, and the total thermal input is about 40 kW. The forced oscillating injection of reburn fuel is realized by an electronic solenoid valve, so that fuel-rich and fuel-lean regions are established alternately. In the fuel-rich region, NOx is converted to N2 by the reburning reaction, while unburned hydrocarbons and CO are oxidized in the fuel-lean zone and in the mixing zone downstream, where a slightly fuel-lean region is formed by the mixing of the two regions. This paper reports flue gas emissions and temperature distributions in the furnace for a wide range of experimental conditions; all data were measured at steady state. The NOx reduction rate increases by up to 41% with forced oscillating reburn injection, while CO emissions were kept at a very low level. The results make clear that, when an oscillating reburn fuel injection system is adopted, control of factors such as frequency and duty ratio is very important for decreasing the NOx concentration in the exhaust.

Keywords: NOx, CO, reburning, pollutant

Procedia PDF Downloads 288
1529 On the Quantum Behavior of Nanoparticles: Quantum Theory and Nano-Pharmacology

Authors: Kurudzirayi Robson Musikavanhu

Abstract:

Nanophase particles exhibit quantum behavior by virtue of their small size, being particles of gamma- to x-ray wavelength [atomic range]. Such particles exhibit high frequencies, high energy per photon, high penetration power, and high ionization power [atomic behavior], and are stable at low energy levels, as opposed to bulk-phase matter [macro particles], which exhibits longer-wavelength [radio-wave end] properties, hence lower frequency, lower energy per photon, lower penetration power, lower ionizing power, and less stability at low temperatures. The 'unique' behavioral motion of nano systems will remain a mystery as long as quantum theory remains a mystery, and for pharmacology, pharmacovigilance profiling of nano systems becomes virtually impossible. Quantum theory is the 4-3-5 electromagnetic law of life and life-motion systems on planet Earth. Electromagnetic [wave-particle] properties of all particulate matter change as mass [bulkiness] changes from one phase to the next [nano-phase to micro-phase to milli-phase to meter-phase to kilometer-phase, etc.], and the electromagnetic effect of a particle of one phase on bulk matter [a different phase] changes from one phase to another. All matter exhibits electromagnetic [wave-particle duality] behavior: the shorter the wavelength [and the lesser the bulkiness], the more gamma-ray-end properties are exhibited, and the longer the wavelength [and the greater the bulkiness], the more radio-wave-end properties are exhibited. Quantum theory is the 4 [moon] - 3 [sun] - [earth] 5 law of the electromagnetic spectrum [solar system]: 4 + 3 = 7; 4 + 3 + 5 = 12; 4 × 3 × 5 = 60; 4² + 3² = 5²; 4³ + 3³ + 5³ = 6³. The quantum age is overdue.

Keywords: electromagnetic solar system, nano-material, nano pharmacology, pharmacovigilance, quantum theory

Procedia PDF Downloads 452
1528 Mixing Enhancement with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure Micromixer Using Different Mixing Fluids

Authors: Ayalew Yimam Ali

Abstract:

The T-shaped microchannel is used to mix miscible or immiscible fluids with different viscosities. However, mixing at the entrance of the T-junction can be difficult because of micro-scale laminar flow, especially for two miscible high-viscosity water-glycerol fluids. One of the most promising methods to improve mixing performance and diffusive mass transfer under laminar flow is acoustic streaming (AS): a time-averaged, second-order steady streaming that can produce rolling motion in the microchannel by oscillating a low-frequency acoustic transducer and inducing an acoustic wave in the flow field. The newly developed 3D trapezoidal triangular structure used in this study was fabricated with CNC machine cutting tools, which were used to create a microchannel mold with a 3D trapezoidal triangular spine along the longitudinal mixing region of the T-junction. The molds for the 3D trapezoidal structure, with 30° sharp-edge tip angles and a 0.3 mm sharp-edge tip depth, were machined from PMMA glass (polymethyl methacrylate) on an advanced CNC machine, and the channel was fabricated in PDMS (polydimethylsiloxane) grown longitudinally on the top surface of the junction microchannel using soft-lithography nanofabrication strategies. Micro-particle image velocimetry (μPIV) was used to visualize the 3D rolling steady acoustic streaming and to study mixing enhancement with the high-viscosity miscible fluids for different structure lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes. The streaming velocity and vorticity fields show vorticity up to 16 times higher than in the absence of acoustic streaming, and mixing performance was evaluated at various amplitudes, flow rates, and frequencies using the grayscale pixel intensity in MATLAB. Mixing experiments were performed with a fluorescent green dye solution in de-ionized water on one inlet side of the channel and a de-ionized water-glycerol mixture on the other inlet side of the T-channel; the degree of mixing improved greatly, from 67.42% without acoustic streaming to 96.83% with acoustic streaming. The results show that mixing of the two miscible high-viscosity fluids around the entrance junction was enhanced by the formation of a new, three-dimensional, intense steady streaming rolling motion at high volume flow rates.
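The grayscale-based "degree of mixing" can be illustrated with a standard-deviation mixing index on pixel intensities. This is a common formulation offered as a sketch, not necessarily the exact MATLAB metric used by the author; the function name and pixel values are assumptions.

```python
from statistics import pstdev

def mixing_degree(intensities, unmixed_intensities):
    """Grayscale mixing index 1 - sigma/sigma0: sigma is the pixel-intensity
    standard deviation of the imaged mixing zone, sigma0 that of a fully
    segregated (unmixed) reference frame. 0 = unmixed, 1 = perfectly mixed."""
    return 1.0 - pstdev(intensities) / pstdev(unmixed_intensities)

# Reference frame: dye on one half, clear fluid on the other (0 vs. 255)
unmixed = [0] * 50 + [255] * 50
perfect = mixing_degree([128] * 100, unmixed)   # uniform intensity -> 1.0
baseline = mixing_degree(unmixed, unmixed)      # segregated -> 0.0
```

Reported values such as 67.42% and 96.83% correspond to this index (as a percentage) computed over the mixing-zone pixels with and without acoustic streaming.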

Keywords: microfabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement

Procedia PDF Downloads 22
1527 Comparison of Equivalent Linear and Non-Linear Site Response Model Performance in Kathmandu Valley

Authors: Sajana Suwal, Ganesh R. Nhemafuki

Abstract:

Evaluation of the ground response under earthquake shaking is crucial in geotechnical earthquake engineering. Damage due to seismic excitation is mainly correlated with local geological and geotechnical conditions. It is evident from past earthquakes (e.g., 1906 San Francisco, USA; 1923 Kanto, Japan) that the local geology strongly influences the amplitude and duration of ground motions. Since then, significant studies have been conducted on ground motion amplification, revealing the importance of the influence of local geology. Observations from damaging earthquakes (e.g., Niigata and San Francisco, 1964; Irpinia, 1980; Mexico, 1985; Kobe, 1995; L'Aquila, 2009) showed that non-uniform damage patterns, particularly in soft fluvio-lacustrine deposits, are due to local amplification of seismic ground motion. Non-uniform damage patterns were also observed in the Kathmandu Valley during the 1934 Bihar-Nepal earthquake and the recent 2015 Gorkha earthquake, seemingly due to modification of the earthquake ground motion parameters. In this study, site effects resulting from amplification in the soft soils of Kathmandu are presented. A large amount of subsoil data was collected and used to define an appropriate subsoil model for the Kathmandu Valley. A comparative study of one-dimensional total-stress equivalent linear and non-linear site response was performed using four strong ground motions for six sites in the valley. In general, one-dimensional (1D) site-response analysis involves exciting a soil profile with the horizontal component of motion and calculating the response at individual soil layers. In the present study, both equivalent linear and non-linear site response analyses were conducted using the computer program DEEPSOIL. The results show no significant deviation between the equivalent linear and non-linear site response models until the maximum strain reaches 0.06-0.1%. Overall, it is clearly observed from the results that the non-linear site response model performs better than the equivalent linear model. However, the deviation between the two models is also affected by other factors, such as the assumptions made in 1D site response analysis and the lack of accurate shear wave velocities and nonlinear properties of the soil deposit. The results are also presented in terms of amplification factors, which are predicted to be around four times higher for the non-linear analysis than for the equivalent linear analysis. Hence, the nonlinear behavior of the soil underscores the urgent need to study the dynamic characteristics of the soft soil deposits so that site-specific design spectra can be developed for the Kathmandu Valley, enabling structures resilient to future damaging earthquakes.

Keywords: deep soil, equivalent linear analysis, non-linear analysis, site response

Procedia PDF Downloads 292
1526 Operative versus Non-Operative Treatment of Scaphoid Non-Union in Children: A Case Presentation and Review of the Literature

Authors: Ilja Käch, Abdul R. Jandali, Nadja Zechmann-Müller

Abstract:

Introduction: We discuss the treatment of two young male patients suffering from scaphoid non-union after a traumatic scaphoid fracture. The currently advocated techniques for treating a scaphoid non-union in children are either operative reconstruction of the scaphoid or conservative treatment with immobilisation in a scaphoid cast. Cases: In the first case, we operated on a 13-year-old male patient with a posttraumatic scaphoid non-union in the middle third with a humpback deformity. We resected the middle third of the scaphoid, grafted the defect with iliac crest bone, and reduced the DISI deformity. Fixation was performed with K-wires and immobilisation in a scaphoid cast. In the second case, a 13-year-old male patient, also with a posttraumatic scaphoid non-union in the middle third, a humpback deformity, and a DISI deformity, was treated conservatively with immobilisation in a scaphoid cast for four months. Results: Operative: One year postoperatively, the patient achieved a painless free arc of motion: flexion/extension 70/0/60°, radial/ulnar deviation 30/0/30°, and pro-/supination 90/0/90°. Computed tomography showed complete consolidation and bony fusion of the iliac crest graft. Conservative: Six to eight months after conservative treatment, the patient demonstrated painless motion, with an active range of motion of flexion/extension 80/0/80° and radial/ulnar deviation and pro-/supination in maximum range. Computed tomography showed complete consolidation with persistent humpback and DISI deformity. Conclusion: In the literature, both techniques are described: operative scaphoid reconstruction and conservative treatment with splinting. In our cases, both the operative and conservative treatments showed comparably good results. However, the humpback and DISI deformity can only be addressed with a surgical approach.

Keywords: scaphoid, non-union, trauma, operative vs. non operative

Procedia PDF Downloads 77
1525 Studies on Space-Based Laser Targeting System for the Removal of Orbital Space Debris

Authors: Krima M. Rohela, Raja Sabarinath Sundaralingam

Abstract:

Humans have been launching rockets since the beginning of the space age in the late 1950s. We have come a long way since then, and the success rate of rocket launches has increased considerably. With every successful launch, a large amount of junk or debris is released into the upper layers of the atmosphere. Space debris has been a huge concern for a very long time; it includes the rocket shells released during launch and the parts of defunct satellites. Some of this junk falls towards the Earth and burns up in the atmosphere, but most of it goes into orbit around the Earth and remains there for at least 100 years. This can cause many problems for functioning satellites and may affect future crewed missions to space. The main concern is that increasing space activity raises the risk of collisions if the debris is not dealt with soon; such collisions may result in what is known as the Kessler syndrome. This debris can be removed by a space-based laser targeting system, and this approach is investigated and discussed here. The first step involves launching a satellite carrying a high-power laser device into space, above the debris belt. The target material is then ablated with a focused laser beam; this step is highly dependent on the attitude and orientation of the debris with respect to the Earth and the device. The laser beam causes a jet of vapour and plasma to be expelled from the material, so that, in accordance with Newton’s third law of motion, a recoil force acts in the opposite direction. This slows the debris, causing it to move towards the Earth, where it is pulled down by gravity and disintegrates in the upper layers of the atmosphere. The larger pieces of debris can be directed towards the oceans. This method of removing orbital debris will enable safer passage for future human-crewed missions into space.
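The recoil step above can be sketched with the standard momentum-coupling relation used in laser ablation propulsion, delta-v per pulse = Cm * E / m. The coupling coefficient, delivered pulse energy, and debris mass below are illustrative assumptions, not values from the paper.

```python
import math

C_M = 50e-6      # momentum coupling coefficient, N*s per J (assumed; typical ablation range)
E_PULSE = 10e3   # laser pulse energy delivered to the target, J (assumed)
MASS = 1.0       # debris fragment mass, kg (assumed)

dv_per_pulse = C_M * E_PULSE / MASS   # recoil delta-v imparted per pulse, m/s
DV_DEORBIT = 150.0                    # rough retrograde delta-v to lower a LEO perigee, m/s (assumed)
pulses_needed = math.ceil(DV_DEORBIT / dv_per_pulse)
```

Under these assumptions, a fragment needs on the order of a few hundred well-aimed pulses, which is why the attitude and orientation of the debris relative to the laser matter so much: each pulse must push against the direction of orbital motion.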

Keywords: attitude, Kessler syndrome, laser ablation, Newton’s third law of motion, satellites, space debris

Procedia PDF Downloads 149
1524 The Three-Dimensional Kinematics of the Sprint Start in Young Elite Sprinters

Authors: Saeed Ilbeigi, Bart Van Gheluwe

Abstract:

The purpose of this study was to identify the three-dimensional kinematics of the sprint start during the start phase of the sprint. Moreover, the effect of anthropometric factors such as skeletal muscle mass, thigh girth, and calf girth on the kinematics of the sprint start was also considered. Among all young sprinters competing in the national Belgian league, sixty sprinters (boys: 14.7 ± 1.8 years; girls: 14.8 ± 1.5 years) were randomly selected. The kinematic data of the sprint start were collected with a Vicon® 620 motion analysis system equipped with 12 infrared cameras sampling at 250 Hz and running the Vicon Data Station software. For statistical analysis, t-tests and ANOVAs with Scheffé post hoc tests were used, and the significance level was set at p ≤ 0.05. The results showed that the angular positions of the lower-limb joints of the young sprinters in the set position were comparable with adult figures from the literature, but with a greater range of joint extension. The most significant difference between boys and girls was found in the set position, where the boys presented a more dorsiflexed ankle. No further gender effect was observed during block clearance and the contact phase. Sprinters with a higher age, skeletal muscle mass, thigh girth, and calf girth displayed a better angular position of the lower-limb joints (e.g. ankle, knee, hip) in the set position, a more optimal angular position of the foot and knee for absorbing impact forces at foot contact, and a higher range of flexion/extension motion for producing force and power when leaving the blocks.
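As a minimal sketch of how a joint angle is recovered from the kind of three-dimensional marker trajectories a motion analysis system outputs (the marker names and coordinates below are hypothetical, not the marker set used in the study):

```python
import math

def included_angle(p_prox, p_joint, p_dist):
    """Included 3-D angle at a joint, in degrees, computed from three marker
    positions: a proximal-segment marker, the joint centre, and a
    distal-segment marker."""
    u = [a - b for a, b in zip(p_prox, p_joint)]
    v = [a - b for a, b in zip(p_dist, p_joint)]
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(a * a for a in v))
    return math.degrees(math.acos(dot / norm))

# Hypothetical hip, knee, and ankle marker positions (metres) in one frame:
knee_angle = included_angle((0.0, 0.0, 0.9), (0.0, 0.0, 0.5), (0.0, 0.3, 0.1))
```

Evaluating this per frame at 250 Hz yields the angular position curves from which set-position angles and flexion/extension ranges of motion are read off.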

Keywords: anthropometry, kinematics, sprint start, young elite sprinters

Procedia PDF Downloads 229