Search results for: conventional computing
936 A Hybrid-Evolutionary Optimizer for Modeling the Process of Obtaining Bricks
Authors: Marius Gavrilescu, Sabina-Adriana Floria, Florin Leon, Silvia Curteanu, Costel Anton
Abstract:
Natural sciences provide a wide range of experimental data whose related problems require study and modeling beyond the capabilities of conventional methodologies. Such problems have solution spaces whose complexity and high dimensionality require correspondingly complex regression methods for proper characterization. In this context, we propose an optimization method which consists of a hybrid dual-optimizer setup: a global optimizer based on a modified variant of the popular Imperialist Competitive Algorithm (ICA), and a local optimizer based on a gradient descent approach. The ICA is modified such that intermediate solution populations are more quickly and efficiently pruned of low-fitness individuals by appropriately altering the assimilation, revolution and competition phases, which, combined with an initialization strategy based on low-discrepancy sampling, allows for a more effective exploration of the corresponding solution space. Subsequently, gradient-based optimization is used locally to seek the optimal solution in the neighborhoods of the solutions found through the modified ICA. We use this combined approach to find the optimal configuration and weights of a fully-connected neural network, resulting in regression models used to characterize the process of obtaining bricks using silicon-based materials. Installations in the raw ceramics industry, i.e., brick making, are characterized by significant energy consumption and large quantities of emissions. Thus, the purpose of our approach is to determine by simulation the working conditions, including the manufacturing mix recipe with the addition of different materials, to minimize the emissions represented by CO and CH4. Our approach determines regression models which perform significantly better than those found using the traditional ICA for the aforementioned problem, resulting in better convergence and a substantially lower error.
Keywords: optimization, biologically inspired algorithm, regression models, bricks, emissions
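Illustrative note (not part of the original abstract): the hybrid structure described above can be sketched as a population-based global phase followed by gradient-based local refinement. The sketch below is a minimal, generic stand-in, not the authors' modified ICA; the population moves, fitness function and parameters are placeholder assumptions.

```python
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

def hybrid_optimize(f, bounds, n_pop=64, n_global_iters=50, seed=0):
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    rng = np.random.default_rng(seed)
    # Low-discrepancy (Sobol) initialization for better coverage of the search space.
    pop = qmc.scale(qmc.Sobol(d=dim, seed=seed).random(n_pop), lo, hi)
    fit = np.array([f(x) for x in pop])
    for _ in range(n_global_iters):
        best = pop[np.argmin(fit)]
        # Assimilation-like drift toward the current best, plus a revolution-like perturbation.
        cand = pop + rng.uniform(0.0, 1.0, (n_pop, 1)) * (best - pop)
        cand += 0.05 * (hi - lo) * rng.standard_normal(cand.shape)
        cand = np.clip(cand, lo, hi)
        cand_fit = np.array([f(x) for x in cand])
        # Competition-like pruning: a candidate replaces its parent only if it is fitter.
        better = cand_fit < fit
        pop[better], fit[better] = cand[better], cand_fit[better]
    # Local, gradient-based refinement of the best few global candidates.
    refined = [minimize(f, x0, method="L-BFGS-B", bounds=bounds)
               for x0 in pop[np.argsort(fit)[:5]]]
    return min(refined, key=lambda r: r.fun)

# Toy usage: a quadratic loss stands in for fitting the network weights.
res = hybrid_optimize(lambda x: float(np.sum((x - 0.3) ** 2)), bounds=[(-1.0, 1.0)] * 4)
print(res.x, res.fun)
```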
Procedia PDF Downloads 82
935 Precise Spatially Selective Photothermolysis Skin Treatment by Multiphoton Absorption
Authors: Yimei Huang, Harvey Lui, Jianhua Zhao, Zhenguo Wu, Haishan Zeng
Abstract:
Conventional laser treatment of skin diseases and cosmetic surgery is based on the principle of one-photon absorption selective photothermolysis, which relies strongly on the difference in light absorption between the therapeutic target and its surrounding tissue. However, when the difference in one-photon absorption is not sufficient, collateral damage will occur due to indiscriminate and nonspecific tissue heating. To overcome this problem, we developed a spatially selective photothermolysis method based on multiphoton absorption in which the heat generation is restricted to the focal point of a tightly focused near-infrared femtosecond laser beam aligned with the target of interest. A multimodal optical microscope with co-registered reflectance confocal imaging (RCM), two-photon fluorescence imaging (TPF), and second harmonic generation imaging (SHG) capabilities was used to perform and monitor the spatially selective photothermolysis. Skin samples excised from the shaved backs of euthanized NOD-SCID mice were used in this study. Treatments were performed by focusing and scanning the laser beam in the dermis within a 50 µm × 50 µm target area. Treatment power levels of 200 mW to 400 mW and modulated pulse trains of different duration and period were tested. Different treatment parameters achieved different degrees of spatial confinement of tissue alterations as visualized by 3-D RCM/TPF/SHG imaging. At the 200 mW power level, 0.1 s pulse train duration and 4.1 s pulse train period, the tissue damage was found to be restricted precisely to the 50 µm × 50 µm × 10 µm volume where the laser focus spot had scanned through. The overlying epidermis/dermis tissue and the underlying dermis tissue were intact, although light passed through these regions.
Keywords: multiphoton absorption photothermolysis, reflectance confocal microscopy, second harmonic generation microscopy, spatially selective photothermolysis, two-photon fluorescence microscopy
Procedia PDF Downloads 515
934 A Comparative Study of Linearly Graded and without Graded Photonic Crystal Structure
Authors: Rajeev Kumar, Angad Singh Kushwaha, Amritanshu Pandey, S. K. Srivastava
Abstract:
Photonic crystals (PCs) have attracted much attention due to their electromagnetic properties and potential applications. In PCs, there is a certain range of wavelengths where electromagnetic waves are not allowed to pass, called the photonic band gap (PBG). A localized defect mode will appear within the PBG, due to a change in the interference behavior of light, when we create a defect in the periodic structure. We can also create different types of defect structures by inserting or removing a layer from the periodic layered structure in two- and three-dimensional PCs. We can design microcavities, waveguides, and perfect mirrors by creating a point defect, line defect, and planar defect in two- and three-dimensional PC structures. One-dimensional and two-dimensional PCs with defects were reported theoretically and experimentally by Smith et al. in conventional photonic band gap structures. In the present paper, we have presented the defect mode tunability in tilted non-graded photonic crystal (NGPC) and linearly graded photonic crystal (LGPC) structures using lead sulphide (PbS) and titanium dioxide (TiO2) in the infrared region. A birefringent defect layer is created in the NGPC and LGPC using potassium titanyl phosphate (KTP). With the help of the transfer matrix method, the transmission properties of the proposed structure are investigated for transverse electric (TE) and transverse magnetic (TM) polarization. NGPC and LGPC without a defect layer are also investigated. We have found that a photonic band gap (PBG) arises in the infrared region. An additional defect layer of KTP is created in the NGPC and LGPC structures, and we have seen that an additional transmission mode appears in the PBG region due to the addition of this defect layer. We have also seen the effect of linear gradation in thickness, angle of incidence, tilt angle, and thickness of the defect layer on the PBG and the additional transmission mode. We have observed that the additional transmission mode and PBG can be tuned by changing the above parameters. The proposed structure may be used as a channeled filter, optical switch, monochromator, and broadband optical reflector.
Keywords: defect modes, graded photonic crystal, photonic crystal, tilt angle
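Illustrative note (not part of the original abstract): a minimal 1-D transfer-matrix calculation of the transmittance of a quarter-wave stack with a half-wave defect layer at normal incidence, in the spirit of the method named above. The refractive indices and wavelengths are rough, dispersionless placeholders, not the values or geometry used in the paper.

```python
import numpy as np

def layer_matrix(n, d, lam):
    delta = 2 * np.pi * n * d / lam          # phase thickness of one layer
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmittance(stack, lam, n_in=1.0, n_out=1.0):
    M = np.eye(2, dtype=complex)
    for n, d in stack:                        # characteristic matrices multiplied in order of traversal
        M = M @ layer_matrix(n, d, lam)
    B, C = M @ np.array([1.0, n_out])
    return 4 * n_in * n_out / abs(n_in * B + C) ** 2

lam0 = 2000.0                                 # design wavelength in nm (placeholder)
nA, nB, nD = 4.0, 2.3, 1.8                    # approximate indices standing in for PbS, TiO2, KTP
period = [(nA, lam0 / (4 * nA)), (nB, lam0 / (4 * nB))]
stack = period * 5 + [(nD, lam0 / (2 * nD))] + period * 5   # defect layer in the middle
for lam in np.linspace(1500, 2500, 6):
    print(f"{lam:7.1f} nm  T = {transmittance(stack, lam):.3f}")
```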
Procedia PDF Downloads 376
933 Attenuation of Endotoxin Induced Hepatotoxicity by Dexamethasone, Melatonin and Pentoxifylline in White Albino Mice: A Comparative Study
Authors: Ammara Khan
Abstract:
Sepsis is characterized by an overwhelming surge of cytokines and oxidative stress due to one of many factors, with gram-negative bacteria commonly implicated. Despite major expansion and elaboration of sepsis pathophysiology and therapeutic approaches, the death rate remains very high in septic patients due to multiple organ damage, including hepatotoxicity. The present study aimed to ascertain the adequacy of three different drugs delivered separately and collectively: a low-dose steroid, dexamethasone (3 mg/kg i.p.), an antioxidant, melatonin (10 mg/kg i.p.), and a phosphodiesterase inhibitor, pentoxifylline (75 mg/kg i.p.), in endotoxin-induced hepatotoxicity in mice. Endotoxin/lipopolysaccharide-induced hepatotoxicity was reproduced in mice by giving lipopolysaccharide from E. coli intraperitoneally. The preventive role was assessed by giving the experimental agent half an hour prior to the LPS injection, whereas the therapeutic potential of the experimental agent was assessed by delivering it after LPS. The extent of liver damage was adjudged via serum alanine aminotransferase (ALT) and aspartate aminotransferase (AST) estimation along with a histopathological examination of liver tissue. Dexamethasone given before (Group 3) and after LPS (Group 4) significantly attenuated LPS-generated liver injury. Pentoxifylline generated similar results, and serum ALT, AST and histological alterations abated considerably (p ≤ 0.05) both in animals subjected to pentoxifylline pre- (Group 5) and post-treatment (Group 6). Melatonin was also successful in prevention (Group 7) and treatment (Group 8) of LPS-invoked hepatotoxicity, as evident from the lessening of augmented ALT (p ≤ 0.01) and AST (p ≤ 0.01) along with restoration of pathological changes in liver sections (p ≤ 0.05). Combination therapies with dexamethasone in conjunction with melatonin (Group 9), dexamethasone together with pentoxifylline (Group 10), and pentoxifylline along with melatonin (Group 11) after LPS administration attenuated LPS-evoked hepatic dysfunction to a statistically significant extent. In conclusion, both melatonin and pentoxifylline produced promising results in endotoxin-induced hepatotoxicity and can be used as therapeutic adjuncts to conventional treatment strategies in sepsis-induced liver failure.
Keywords: endotoxin/lipopolysaccharide, dexamethasone, hepatotoxicity, melatonin, pentoxifylline
Procedia PDF Downloads 280
932 Numerical Modeling and Prediction of Nanoscale Transport Phenomena in Vertically Aligned Carbon Nanotube Catalyst Layers by the Lattice Boltzmann Simulation
Authors: Seungho Shin, Keunwoo Choi, Ali Akbar, Sukkee Um
Abstract:
In this study, the nanoscale transport properties and catalyst utilization of vertically aligned carbon nanotube (VACNT) catalyst layers are computationally predicted by the three-dimensional lattice Boltzmann simulation based on a quasi-random nanostructural model, in pursuit of fuel cell catalyst performance improvement. A series of catalyst layers are randomly generated with statistical significance at the 95% confidence level to reflect the heterogeneity of the catalyst layer nanostructures. The nanoscale gas transport phenomena inside the catalyst layers are simulated by the D3Q19 (i.e., three-dimensional, 19 velocities) lattice Boltzmann method, and the corresponding mass transport characteristics are mathematically modeled in terms of structural properties. Considering the nanoscale reactant transport phenomena, a transport-based effective catalyst utilization factor is defined and statistically analyzed to determine the structure-transport influence on catalyst utilization. The tortuosity of the reactant mass transport path of VACNT catalyst layers is directly calculated from the streaklines. Subsequently, the corresponding effective mass diffusion coefficient is statistically predicted by applying the pre-estimated tortuosity factors to the Knudsen diffusion coefficient in the VACNT catalyst layers. The statistical estimation results clearly indicate that the morphological structures of VACNT catalyst layers reduce the tortuosity of the reactant mass transport path when compared to conventional catalyst layers and significantly improve the resulting effective mass diffusion coefficient of the VACNT catalyst layer. Furthermore, catalyst utilization of the VACNT catalyst layer is substantially improved by enhanced mass diffusion and electric current paths despite the relatively poor interconnections of the ion transport paths.
Keywords: lattice Boltzmann method, nano transport phenomena, polymer electrolyte fuel cells, vertically aligned carbon nanotube
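Illustrative note (not part of the original abstract): the final step described above, applying a tortuosity factor to a Knudsen diffusion coefficient, follows a standard structure-transport correction. The sketch below uses the common form D_eff = (porosity / tortuosity) * D_Kn with placeholder pore size, porosity and tortuosity values; it is not the authors' code.

```python
import math

def knudsen_diffusivity(pore_diameter_m, temperature_K, molar_mass_kg_per_mol):
    R = 8.314  # J/(mol K)
    mean_speed = math.sqrt(8 * R * temperature_K / (math.pi * molar_mass_kg_per_mol))
    return pore_diameter_m / 3.0 * mean_speed          # Knudsen diffusivity, m^2/s

def effective_diffusivity(d_kn, porosity, tortuosity):
    # Structure-transport correction: D_eff = (epsilon / tau) * D_Kn
    return porosity / tortuosity * d_kn

d_kn = knudsen_diffusivity(pore_diameter_m=50e-9, temperature_K=353.0,
                           molar_mass_kg_per_mol=0.032)   # O2 in ~50 nm pores (placeholders)
print(effective_diffusivity(d_kn, porosity=0.5, tortuosity=1.3))
```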
Procedia PDF Downloads 201
931 Identification of Blood Biomarkers Unveiling Early Alzheimer's Disease Diagnosis Through Single-Cell RNA Sequencing Data and Autoencoders
Authors: Hediyeh Talebi, Shokoofeh Ghiam, Changiz Eslahchi
Abstract:
Traditionally, Alzheimer's disease research has focused on genes with significant fold changes, potentially neglecting subtle but biologically important alterations. Our study introduces an integrative approach that highlights genes crucial to underlying biological processes, regardless of their fold change magnitude. Alzheimer's single-cell RNA-seq data related to peripheral blood mononuclear cells (PBMC) were extracted from the Gene Expression Omnibus (GEO). After quality control, normalization, scaling, batch effect correction, and clustering, differentially expressed genes (DEGs) were identified with adjusted p-values less than 0.05. These DEGs were categorized based on cell type, resulting in four datasets, each corresponding to a distinct cell type. To distinguish between cells from healthy individuals and those with Alzheimer's, an adversarial autoencoder with a classifier was employed. This allowed for the separation of healthy and diseased samples. To identify the most influential genes in this classification, the weight matrices in the network, which include the encoder and classifier components, were multiplied, and the top 20 genes were examined. The analysis revealed that while some of these genes exhibit a high fold change, others do not. These genes, which may be overlooked by previous methods due to their low fold change, were shown to be significant in our study. The findings highlight the critical role of genes with subtle alterations in diagnosing Alzheimer's disease, a facet frequently overlooked by conventional methods. These genes demonstrate remarkable discriminatory power, underscoring the need to integrate biological relevance with statistical measures in gene prioritization. This integrative approach enhances our understanding of the molecular mechanisms in Alzheimer's disease and provides a promising direction for identifying potential therapeutic targets.
Keywords: Alzheimer's disease, single-cell RNA-seq, neural networks, blood biomarkers
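Illustrative note (not part of the original abstract): the gene-ranking step, multiplying the encoder and classifier weight matrices and keeping the 20 genes with the largest combined weight, can be sketched as below. The shapes, variable names and random weights are assumptions for illustration only, not the trained network from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_latent = 2000, 32
W_encoder = rng.standard_normal((n_genes, n_latent))   # genes -> latent space
W_classifier = rng.standard_normal((n_latent, 1))      # latent -> healthy/AD logit

combined = W_encoder @ W_classifier                     # (n_genes, 1) effective weight per gene
importance = np.abs(combined).ravel()
top20 = np.argsort(importance)[::-1][:20]               # indices of the 20 most influential genes
print(top20, importance[top20])
```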
Procedia PDF Downloads 66
930 In-vitro Metabolic Fingerprinting Using Plasmonic Chips by Laser Desorption/Ionization Mass Spectrometry
Authors: Vadanasundari Vedarethinam, Kun Qian
Abstract:
Metabolic analysis is more distal than proteomics and genomics for clinical applications and requires distinct techniques, designed materials, and devices for clinical diagnosis. Conventional techniques such as spectroscopic techniques, biochemical analyzers, and electrochemical methods have been used for metabolic diagnosis. Currently, there are four major challenges, including (I) long sample pretreatment processes; (II) difficulties in direct metabolic analysis of biosamples due to their complexity; (III) accurate detection of low-molecular-weight metabolites; and (IV) construction of diagnostic tools by material- and device-based platforms for real-case use in biomedical applications. The development of chips with nanomaterials is promising to address these critical issues. Mass spectrometry (MS) has displayed high sensitivity and accuracy, throughput, reproducibility, and resolution for molecular analysis. In particular, laser desorption/ionization mass spectrometry (LDI MS) combined with devices affords desirable speed for mass measurement in seconds and high sensitivity with low cost towards large-scale use. We developed a plasmonic chip for clinical metabolic fingerprinting as a hot carrier in LDI MS through a series of chips with gold nanoshells on the surface, obtained by controlled particle synthesis, dip-coating, and gold sputtering for mass production. We integrated the optimized chip with microarrays for laboratory automation and nanoscaled experiments, which afforded direct high-performance metabolic fingerprinting by LDI MS using 500 nL of serum, urine, cerebrospinal fluid (CSF) and exosomes. Further, we demonstrated on-chip direct in-vitro metabolic diagnosis of early-stage lung cancer patients using serum and exosomes without any pretreatment or purification. To the best of our knowledge, this work initiates a bionanotechnology-based platform for advanced metabolic analysis toward large-scale diagnostic use.
Keywords: plasmonic chip, metabolic fingerprinting, LDI MS, in-vitro diagnostics
Procedia PDF Downloads 162
929 Reproducibility of Shear Strength Parameters Determined from CU Triaxial Tests: Evaluation of Results from Regression of Different Failure Stress Combinations
Authors: Henok Marie Shiferaw, Barbara Schneider-Muntau
Abstract:
Test repeatability and data reproducibility are a concern in many geotechnical laboratory tests due to inherent soil variability, inhomogeneous sample preparation and measurement inaccuracy. Test results on comparable test specimens vary to a considerable extent; thus, the shear strength parameters derived from triaxial tests are also affected. In this contribution, we present the reproducibility of effective shear strength parameters from consolidated undrained triaxial tests on plain soil and cement-treated soil specimens. Six remolded test specimens were prepared for the plain soil and for the cement-treated soil. Conventional testing at three levels of consolidation pressure was considered, with effective consolidation pressures of 100 kPa, 200 kPa and 300 kPa, respectively. At each effective consolidation pressure, two tests were done on comparable test specimens. Care was taken to achieve the same mean dry density and the same water content during sample preparation for the two specimens. The cement-treated specimens were tested after 28 days of curing. Shearing of the test specimens was carried out at a deformation rate of 0.4 mm/min after sample saturation at a back pressure of 900 kPa, followed by consolidation. The effective peak and residual shear strength parameters were then estimated from regression analysis of 21 different combinations of the failure stresses from the six tests conducted for both the plain soil and cement-treated soil samples. The 21 different stress combinations were constructed by picking three, four, five and six failure stresses at a time in different combinations. Results indicate that the effective shear strength parameters estimated from the regression of different combinations of the failure stresses vary. The effective critical friction angle was found to be more consistent than the effective peak friction angle, with a smaller standard deviation. The reproducibility of the shear strength parameters for the cement-treated specimens was even lower than that of the untreated specimens.
Keywords: shear strength parameters, test repeatability, data reproducibility, triaxial soil testing, cement improvement of soils
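Illustrative note (not part of the original abstract): effective strength parameters are commonly obtained from a linear regression of failure stresses in p'-q space, with sin(phi') equal to the slope and c' recovered from the intercept. The sketch below repeats such a regression over subsets of the failure points; the stress values are placeholders, and for simplicity it enumerates all subsets of size 3-6 rather than the 21 specific combinations used in the study.

```python
import itertools
import numpy as np

def strength_parameters(p, q):
    b, a = np.polyfit(p, q, 1)                   # q = a + b * p'
    phi = np.degrees(np.arcsin(b))               # sin(phi') = slope
    c = a / np.cos(np.radians(phi))              # c' = intercept / cos(phi')
    return phi, c

# Effective principal stresses at failure (kPa) for six tests -- placeholder values.
sig1 = np.array([310.0, 325.0, 610.0, 590.0, 905.0, 880.0])
sig3 = np.array([100.0, 100.0, 200.0, 200.0, 300.0, 300.0])
p, q = (sig1 + sig3) / 2.0, (sig1 - sig3) / 2.0

results = [strength_parameters(p[list(idx)], q[list(idx)])
           for k in range(3, 7)
           for idx in itertools.combinations(range(6), k)]
phis = np.array([r[0] for r in results])
print(len(results), "combinations; phi' mean/std:", phis.mean(), phis.std())
```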
Procedia PDF Downloads 33
928 Transverse Momentum Dependent Factorization and Evolution for Spin Physics
Authors: Bipin Popat Sonawane
Abstract:
Since the 1988 European Muon Collaboration (EMC) announcement of the measurement of the spin-dependent structure function, it has become necessary to understand the spin structure of the hadron. In the study of the three-dimensional spin structure of the proton, we need to understand the foundations of quantum field theory in terms of the electroweak and strong theories using rigorous mathematical theories and models. In the process of understanding the inner dynamical structure of the proton, we need to understand the mathematical formalism of perturbative quantum chromodynamics (pQCD). In QCD processes like proton-proton collisions at high energy, we calculate cross sections using conventional collinear factorization schemes. In these calculations, parton distribution functions (PDFs) and fragmentation functions (FFs) are used, which provide information about the probability density of finding quarks and gluons (partons) inside the proton and the probability density of finding the final hadronic state from the initial partons. Transverse momentum dependent (TMD) PDFs and FFs, collectively called TMDs, take into account the intrinsic transverse motion of partons. TMD factorization in the calculation of cross sections provides a scheme for the hadronic and partonic states in a given QCD process. In this study, we review the TMD factorization scheme using the Collins-Soper-Sterman (CSS) formalism. The CSS formalism considers the transverse momentum dependence of the partons; in this formalism, the cross section is written as a Fourier transform over a transverse position variable which has a physical interpretation as the impact parameter. Along with this, we compare this formalism with the improved CSS formalism. In this work, we study the TMD evolution schemes and compare them with other schemes. This would provide a description of the measurement of the transverse single spin asymmetry (TSSA) in hadro-production and electro-production of the J/psi meson at RHIC, LHC and ILC energy scales, and would surely help us to understand the J/psi production mechanism, which is an appropriate test of QCD.
Procedia PDF Downloads 69
927 Magnetic Navigation in Underwater Networks
Authors: Kumar Divyendra
Abstract:
Underwater Sensor Networks (UWSNs) have wide applications in areas such as water quality monitoring, marine wildlife management, etc. A typical UWSN system consists of a set of sensors deployed randomly underwater which communicate with each other using acoustic links. RF communication does not work underwater, and GPS is not available underwater either. Additionally, Autonomous Underwater Vehicles (AUVs) are deployed to collect data from special nodes called Cluster Heads (CHs). These CHs aggregate data from their neighboring nodes and forward them to the AUVs using optical links when an AUV is in range. This helps reduce the number of hops covered by data packets and helps conserve energy. We consider a three-dimensional model of the UWSN. Nodes are initially deployed randomly underwater. They attach themselves to the surface using a rod and can only move upwards or downwards using a pump and bladder mechanism. We use graph theory concepts to maximize the coverage volume while every node maintains connectivity with at least one surface node. We treat the surface nodes as landmarks, and each node finds out its hop distance from every surface node. We treat these hop distances as coordinates and use them for AUV navigation. An AUV intending to move closer to a node with given coordinates moves hop by hop through nodes that are closest to it in terms of these coordinates. In the absence of GPS, multiple different approaches like Inertial Navigation Systems (INS), Doppler Velocity Log (DVL), computer vision-based navigation, etc., have been proposed. These systems have their own drawbacks: INS accumulates error with time, and vision techniques require prior information about the environment. We propose a method that makes use of the earth's magnetic field values for navigation and combines it with other methods that simultaneously increase the coverage volume of the UWSN. The AUVs are fitted with magnetometers that measure the magnetic intensity (I), horizontal inclination (H), and declination (D). The International Geomagnetic Reference Field (IGRF) is a mathematical model of the earth's magnetic field, which provides the field values for geographical coordinates on earth. Researchers have developed an inverse deep learning model that takes the magnetic field values and predicts the location coordinates. We make use of this model within our work. We combine this with the hop-by-hop movement described earlier so that the AUVs move in such a sequence that the deep learning predictor gets trained as quickly and precisely as possible. We run simulations in MATLAB to prove the effectiveness of our model with respect to other methods described in the literature.
Keywords: clustering, deep learning, network backbone, parallel computing
Procedia PDF Downloads 98
926 Using of the Fractal Dimensions for the Analysis of Hyperkinetic Movements in the Parkinson's Disease
Authors: Sadegh Marzban, Mohamad Sobhan Sheikh Andalibi, Farnaz Ghassemi, Farzad Towhidkhah
Abstract:
Parkinson's disease (PD), which is characterized by tremor at rest, rigidity, akinesia or bradykinesia, and postural instability, affects the quality of life of affected individuals. The concept of a fractal is most often associated with irregular geometric objects that display self-similarity. The fractal dimension (FD) can be used to quantify the complexity and self-similarity of an object such as tremor. In this work, we aim to propose a new method for evaluating hyperkinetic movements such as tremor by using the FD and other correlated parameters in patients suffering from PD. In this study, we used the tremor data of PhysioNet. The database consists of fourteen participants diagnosed with PD, including six patients with high-amplitude tremor and eight patients with low amplitude. We tried to extract features from the data which can distinguish between patients before and after medication. We selected fractal dimensions, including the correlation dimension, box dimension, and information dimension. The Lilliefors test was used for normality testing. A paired t-test or Wilcoxon signed rank test was also done to find differences between patients before and after medication, depending on whether normality was detected or not. In addition, two-way ANOVA was used to investigate the possible association between the therapeutic effects and the features extracted from the tremor. Just one of the extracted features showed significant differences between patients before and after medication. According to the results, the correlation dimension was significantly different before and after the patients' medication (p=0.009). Also, two-way ANOVA demonstrated significant differences only in the medication effect (p=0.033), and no significant differences were found between subjects (p=0.34) or for the interaction (p=0.97). The most striking result emerging from the data is that the correlation dimension could quantify medication treatment based on tremor. This study has provided a technique to evaluate a non-linear measure for quantifying medication, namely the correlation dimension. Furthermore, this study supports the idea that fractal dimension analysis yields additional information compared with conventional spectral measures in the detection of poor-prognosis patients.
Keywords: correlation dimension, non-linear measure, Parkinson's disease, tremor
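Illustrative note (not part of the original abstract): a basic Grassberger-Procaccia style estimate of the correlation dimension of a 1-D tremor-like signal after time-delay embedding, as a minimal sketch of the measure discussed above. The embedding parameters, radii and synthetic signal are placeholder assumptions, not the study's processing pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist

def embed(signal, dim=5, delay=4):
    n = len(signal) - (dim - 1) * delay
    return np.column_stack([signal[i * delay:i * delay + n] for i in range(dim)])

def correlation_dimension(signal, radii):
    dists = pdist(embed(signal))                 # pairwise distances between embedded points
    # Correlation sum C(r) = fraction of pairs closer than r; D2 = slope of log C vs log r.
    C = np.array([np.mean(dists < r) for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope

t = np.linspace(0.0, 10.0, 1000)
tremor = np.sin(2 * np.pi * 5 * t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
radii = np.logspace(-0.5, 0.3, 10)               # radii spanning roughly 0.3 to 2.0
print("correlation dimension estimate:", correlation_dimension(tremor, radii))
```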
Procedia PDF Downloads 244
925 A Study on Improvement of the Torque Ripple and Demagnetization Characteristics of a PMSM
Authors: Yong Min You
Abstract:
The study of the torque ripple of Permanent Magnet Synchronous Motors (PMSMs) has progressed rapidly, since torque ripple affects the noise and vibration of the electric vehicle. There are several ways to reduce torque ripple: increasing the number of slots and poles, notching the rotor and stator teeth, and skewing the rotor and stator. However, the conventional methods have disadvantages in terms of material cost and productivity. The demagnetization characteristic of PMSMs must also be attained for electric vehicle application. Due to the rare earth supply issue, the demand for Dy-free permanent magnets has been increasing, and such magnets can be applied to PMSMs for the electric vehicle. Dy-free permanent magnets have lower coercivity, so the demagnetization characteristic becomes more significant. To improve the torque ripple as well as the demagnetization characteristics, which are significant parameters for electric vehicle application, an unequal air-gap model is proposed for a PMSM. A shape optimization is performed to optimize the design variables of the unequal air-gap model. The optimal design variables are the shape of the unequal air-gap and the angle between the V-shape magnets. The optimization process is performed using Latin Hypercube Sampling (LHS), the Kriging method, and a Genetic Algorithm (GA). Finite element analysis (FEA) is also utilized to analyze the torque and demagnetization characteristics. The torque ripple and the demagnetization temperature of the initial model of the 45 kW PMSM with unequal air-gap are 10% and 146.8 degrees, respectively, which are reaching a critical level for electric vehicle application. Therefore, the unequal air-gap model is proposed, and an optimization process is then conducted. Compared to the initial model, the torque ripple of the optimized unequal air-gap model was reduced by 7.7%. In addition, the demagnetization temperature of the optimized model was also increased by 1.8% while maintaining the efficiency. From these results, a shape-optimized unequal air-gap PMSM has shown the usefulness of an improvement in the torque ripple and demagnetization temperature for the electric vehicle.
Keywords: permanent magnet synchronous motor, optimal design, finite element method, torque ripple
Procedia PDF Downloads 274
924 Pareto Optimal Material Allocation Mechanism
Authors: Peter Egri, Tamas Kis
Abstract:
Scheduling problems have been studied in algorithmic mechanism design research from the beginning. This paper focuses on a practically important but theoretically rather neglected field: the project scheduling problem where jobs connected by precedence constraints compete for various nonrenewable resources, such as materials. Although the centralized problem can be solved in polynomial time by applying the algorithm of Carlier and Rinnooy Kan from the eighties, obtaining materials in a decentralized environment is usually far from optimal. It can be observed in practical production scheduling situations that project managers tend to cache the required materials as soon as possible in order to avoid later delays due to material shortages. This greedy practice usually leads both to excess stocks for some projects and materials and, simultaneously, to shortages for others. The aim of this study is to develop a model for the material allocation problem of a production plant, where a central decision maker, the inventory, should assign the resources arriving at different points in time to the jobs. Since the actual due dates are not known by the inventory, the mechanism design approach is applied, with the projects as the self-interested agents. The goal of the mechanism is to elicit the required information and allocate the available materials such that the maximal tardiness among the projects is minimized. It is assumed that, except for the due dates, the inventory is familiar with every other parameter of the problem. A further requirement is that, due to practical considerations, monetary transfer is not allowed. Therefore a mechanism without money is sought, which excludes some widely applied solutions such as the Vickrey-Clarke-Groves scheme. In this work, a type of Serial Dictatorship Mechanism (SDM) is presented for the studied problem, including a polynomial-time algorithm for computing the material allocation. The resulting mechanism is both truthful and Pareto optimal. Thus the randomization over the possible priority orderings of the projects results in a universally truthful and Pareto optimal randomized mechanism. However, it is shown that, in contrast to problems like the many-to-many matching market, not every Pareto optimal solution can be generated with an SDM. In addition, no performance guarantee can be given compared to the optimal solution; therefore this approximation characteristic is investigated with an experimental study. All in all, the current work studies a practically relevant scheduling problem and presents a novel truthful material allocation mechanism which eliminates the potential benefit of the greedy behavior that negatively influences the outcome. The resulting allocation is also shown to be Pareto optimal, which is the most widely used criterion describing a necessary condition for a reasonable solution.
Keywords: material allocation, mechanism without money, polynomial-time mechanism, project scheduling
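Illustrative note (not part of the original abstract): a serial dictatorship allocation processes agents in a (possibly random) priority order and lets each take its preferred remaining resources. The sketch below is a heavily simplified stand-in, not the authors' polynomial-time algorithm: projects, demands and arrival times are placeholders, precedence constraints are ignored, and each dictator simply takes the earliest still-available material units, which is best for it regardless of its due date.

```python
import random

arrivals = [1, 3, 4, 7, 9, 12]              # arrival times of identical material units
projects = {"P1": 2, "P2": 1, "P3": 3}      # units required by each project (placeholder demands)

def serial_dictatorship(arrivals, demand, rng):
    order = list(demand)
    rng.shuffle(order)                       # random priority ordering of the projects
    free = sorted(arrivals)
    allocation = {}
    for project in order:
        take, free = free[:demand[project]], free[demand[project]:]
        allocation[project] = take           # the earliest available units go to the current dictator
    return order, allocation

order, allocation = serial_dictatorship(arrivals, projects, random.Random(42))
print("priority order:", order)
print("allocation (unit arrival times):", allocation)
```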
Procedia PDF Downloads 332
923 Prediction of Seismic Damage Using Scalar Intensity Measures Based on Integration of Spectral Values
Authors: Konstantinos G. Kostinakis, Asimina M. Athanatopoulou
Abstract:
A key issue in seismic risk analysis within the context of Performance-Based Earthquake Engineering is the evaluation of the expected seismic damage of structures under a specific earthquake ground motion. The assessment of the seismic performance strongly depends on the choice of the seismic Intensity Measure (IM), which quantifies the characteristics of a ground motion that are important to the nonlinear structural response. Several conventional IMs of ground motion have been used to estimate their damage potential to structures, yet none of them has been proved able to predict the seismic damage adequately. Therefore, alternative scalar intensity measures, which take into account not only ground motion characteristics but also structural information, have been proposed. Some of these IMs are based on the integration of spectral values over a range of periods, in an attempt to account for the information that the shape of the acceleration, velocity or displacement spectrum provides. The adequacy of a number of these IMs in predicting the structural damage of 3D R/C buildings is investigated in the present paper. The investigated IMs, some of which are structure-specific and some non-structure-specific, are defined via integration of spectral values. To achieve this purpose, three R/C buildings symmetric in plan are studied. The buildings are subjected to 59 bidirectional earthquake ground motions. The two horizontal accelerograms of each ground motion are applied along the structural axes. The response is determined by nonlinear time history analysis. The structural damage is expressed in terms of the maximum interstory drift as well as the overall structural damage index. The values of the aforementioned seismic damage measures are correlated with seven scalar ground motion IMs. The comparative assessment of the results revealed that the structure-specific IMs present higher correlation with the seismic damage of the three buildings. However, the adequacy of the IMs for estimation of the structural damage depends on the response parameter adopted. Furthermore, it was confirmed that the widely used spectral acceleration at the fundamental period of the structure is a good indicator of the expected earthquake damage level.
Keywords: damage measures, bidirectional excitation, spectral based IMs, R/C buildings
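Illustrative note (not part of the original abstract): two common spectrum-integration IMs of the kind described above, a Housner-type integral of the pseudo-velocity spectrum and a structure-specific average spectral acceleration around the fundamental period T1, computed here from a synthetic placeholder spectrum rather than the IMs and spectra actually used in the paper.

```python
import numpy as np

T = np.linspace(0.05, 4.0, 400)                      # periods (s)
Sa = 9.81 * 0.8 * np.exp(-((np.log(T) - np.log(0.5)) ** 2) / 0.8)   # placeholder Sa(T), m/s^2
PSv = Sa * T / (2 * np.pi)                           # pseudo-velocity spectrum, m/s

def housner_intensity(T, PSv, t_lo=0.1, t_hi=2.5):
    m = (T >= t_lo) & (T <= t_hi)
    return np.trapz(PSv[m], T[m])                    # integral of PSv over 0.1-2.5 s

def average_sa(T, Sa, T1, upper_factor=1.5):
    m = (T >= T1) & (T <= upper_factor * T1)
    return np.trapz(Sa[m], T[m]) / (upper_factor * T1 - T1)   # mean Sa over [T1, 1.5*T1]

print("Housner-type SI   :", housner_intensity(T, PSv))
print("Average Sa near T1:", average_sa(T, Sa, T1=0.8))
```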
Procedia PDF Downloads 328
922 Effect of Foot Reflexology Treatment on Arterial Blood Gases among Mechanically Ventilated Patients
Authors: Maha Salah Abdullah Ismail, Manal S. Ismail, Amir M. Saleh
Abstract:
Reflexology treatment is a method for enhancing body relaxation. It is widely recognized as an alternative therapy, effective for many health conditions. This study aimed to evaluate the effect of reflexology treatment on arterial blood gases among mechanically ventilated patients. A quasi-experimental (pre- and post-test) research design was used. The research hypothesis was that mechanically ventilated patients who receive the reflexology treatment will show greater improvement in their arterial blood gases than those who do not. The current study was carried out in different Intensive Care Units at the Cairo University Hospitals. A purposeful sample of 100 adult mechanically ventilated patients was recruited over a period of three months of data collection. The participants were divided into two equally matched groups: (1) the study group, which received routine care in addition to two reflexology sessions on the feet, and (2) the control group, which received only routine care. One tool was utilized to collect data pertinent to the study: a mechanically ventilated patients' data sheet consisting of demographic and medical data. Result: The majority (58% of the study group and 82% of the control group) were males, with a mean age of 50.9 years in both groups. Patients who received the reflexology treatment showed a significant increase in oxygen saturation before the second session (t=5.15, p=.000), immediately after the sessions (t=4.4, p=.000) and two hours after (t=4.7, p=.000). The study group was more likely to have lower PaO2 (F=5.025, p=.015) and PaCO2 (F=4.952, p=.025) and higher HCO3 (F=15.211, p=.000) than the control group. Conclusion: These study results support the positive effect of reflexology treatment, alongside conventional therapy, in improving some arterial blood gases among mechanically ventilated patients, as the study group showed an increase in oxygen saturation. Regarding differences between groups, PaO2 and PaCO2 decreased and HCO3 increased in the study group. Recommendation: Nurses should be trained in how to perform foot reflexology for mechanically ventilated patients.
Keywords: arterial blood gases, foot, mechanical ventilated patient, reflexology
Procedia PDF Downloads 208
921 Advantages of Utilizing Post-Tensioned Stress Ribbon Systems in Long Span Roofs
Authors: Samih Ahmed, Guayente Minchot, Fritz King, Mikael Hallgren
Abstract:
The stress ribbon system has numerous advantages that include, but are not limited to, increasing overall stiffness, controlling deflections, and reducing material consumption, which in turn reduces the load and the cost. Nevertheless, its use is usually limited to bridges, in particular pedestrian bridges; this can be attributed to the insufficient space that buildings usually have for end supports and/or back-stayed cables that can accommodate the expected high pull-out forces occurring at the cables' ends. In this work, the roof of Västerås Travel Center, which will become one of the longest cable suspended roofs in the world, was chosen as a case study. The aim was to investigate the optimal technique to model the post-tensioned stress ribbon system for the roof structure using the FEM software SAP2000 and to assess any possible reduction in the pull-out forces, deflections, and concrete stresses. Subsequently, a conventional cable suspended roof was simulated using SAP2000 and compared to the post-tensioned stress ribbon system in order to examine the potential of the latter. Moreover, the effects of temperature loads and support movements on the final design loads were examined. Based on the study, a few practical recommendations concerning the construction method and the iterative design process, required to meet the architectural geometrical demands, are stated by the authors. The results showed that the post-tensioned stress ribbon system reduces the concrete stresses and overall deflections and, more importantly, reduces the pull-out forces and the vertical reactions at both ends by up to 16% and 11%, respectively, which substantially reduces the design forces for the support structures. The magnitude of these reductions was found to be highly correlated with the applied prestressing force, making the size of the prestressing force a key factor in the design.
Keywords: cable suspended, post-tension, roof structure, SAP2000, stress ribbon
Procedia PDF Downloads 159
920 Postoperative Budesonide Nasal Irrigation vs Normal Saline Irrigation for Chronic Rhinosinusitis: A Systematic Review and Meta-Analysis
Authors: Rakan Hassan M. Alzahrani, Ziyad Alzahrani, Bader Bashrahil, Abdulrahman Elyasi, Abdullah a Ghaddaf, Rayan Alzahrani, Mohammed Alkathlan, Nawaf Alghamdi, Dakheelallah Almutairi
Abstract:
Background: Corticosteroid irrigations, which regularly involve the off-label use of budesonide mixed with normal saline in high-volume sinonasal irrigations, have been more commonly used in the management of post-operative chronic rhinosinusitis (CRS). Objective: This article attempted to measure the efficacy of post-operative budesonide nasal irrigation compared to normal saline-alone nasal irrigation in the management of chronic rhinosinusitis (CRS) through a systematic review and meta-analysis of randomized controlled trials (RCTs). Methods: The databases PubMed, Embase, and the Cochrane Central Register of Controlled Trials were searched by two independent authors. Only RCTs comparing budesonide irrigation to normal saline-alone irrigation for CRS with or without polyposis after functional endoscopic sinus surgery (FESS) were eligible. A random-effects analysis model of the reported CRS-related quality of life (QOL) measures and the objective endoscopic assessment scales of the disease was used. Results: Only 6 RCTs met the eligibility criteria, with a total of 356 participants. Compared to normal saline irrigation, budesonide nasal irrigation showed statistically significant improvements in both the CRS-related quality of life (QOL) (MD = -4.22, confidence interval [CI]: -5.63, -2.82 [P < 0.00001]) and the endoscopic findings (SMD = -0.50, confidence interval [CI]: -0.93, -0.06 [P < 0.03]). Conclusion: Both intervention arms showed improvements in CRS-related QOL and endoscopic findings in post-FESS chronic rhinosinusitis with or without polyposis. However, budesonide irrigation seems to have a slight edge over conventional normal saline irrigation, with no reported serious side effects, including hypothalamic-pituitary-adrenal (HPA) axis suppression.
Keywords: budesonide, chronic rhinosinusitis, corticosteroids, nasal irrigation, normal saline
Procedia PDF Downloads 78
919 A Microsurgery-Specific End-Effector Equipped with a Bipolar Surgical Tool and Haptic Feedback
Authors: Hamidreza Hoshyarmanesh, Sanju Lama, Garnette R. Sutherland
Abstract:
In tele-operative robotic surgery, an ideal haptic device should be equipped with an intuitive and smooth end-effector to cover the surgeon's hand/wrist degrees of freedom (DOF) and translate the hand joint motions to the end-effector of the remote manipulator with low effort and a high level of comfort. This research introduces the design and development of a microsurgery-specific end-effector, a gimbal mechanism possessing 4 passive and 1 active DOFs, equipped with bipolar forceps and haptic feedback. The robust gimbal structure comprises three lightweight links/joints, pitch, yaw, and roll, each consisting of low-friction supports and a 2-channel accurate optical position sensor. The third link, which provides the tool roll, was specifically designed to grip the tool prongs and accommodate a low-mass geared actuator together with a miniaturized capstan-rope mechanism. The actuator is able to generate delicate torques, using a threaded cylindrical capstan, to emulate the sense of pinch/coagulation during conventional microsurgery. While the left prong of the tool is fixed to the rolling link, the right prong bears a miniaturized drum sector with a large diameter to expand the force scale and resolution. The drum transmits the actuator output torque to the right prong and generates haptic force feedback at the tool level. The tool is also equipped with a hall-effect sensor and magnet bar installed face to face on the inner sides of the two prongs to measure the tooltip distance and provide an analogue signal to the control system. We believe that such a haptic end-effector could significantly increase the accuracy of telerobotic surgery and help avoid high forces that are known to cause bleeding/injury.
Keywords: end-effector, force generation, haptic interface, robotic surgery, surgical tool, tele-operation
Procedia PDF Downloads 118
918 Estimating the Traffic Impacts of Green Light Optimal Speed Advisory Systems Using Microsimulation
Authors: C. B. Masera, M. Imprialou, L. Budd, C. Morton
Abstract:
Even though signalised intersections are necessary for urban road traffic management, they can act as bottlenecks and disrupt traffic operations. Interrupted traffic flow causes congestion, delays, stop-and-go conditions (i.e. excessive acceleration/deceleration) and longer journey times. Vehicle and infrastructure connectivity offers the potential to provide improved new services with additional functions of assisting drivers. This paper focuses on one of the applications of vehicle-to-infrastructure communication, namely Green Light Optimal Speed Advisory (GLOSA). To assess the effectiveness of GLOSA in the urban road network, an integrated microscopic traffic simulation framework is built in the VISSIM software. Vehicle movements and vehicle-infrastructure communications are simulated through the External Driver Model interface. A control algorithm is developed for recommending an optimal speed that is continuously updated at every time step for all vehicles approaching a signal-controlled point. This algorithm allows vehicles to pass a traffic signal without stopping or to minimise stopping times at a red phase. This study is performed with all connected vehicles at a 100% penetration rate. Conventional vehicles are also simulated in the same network as a reference. A straight road segment composed of two opposite directions with two traffic lights per lane is studied. The simulation is implemented under traffic volume conditions of 150 vehicles per hour and 200 vehicles per hour to identify how different traffic densities influence the benefits of GLOSA. The results indicate that traffic flow is improved by the application of GLOSA. According to this study, vehicles passed through the traffic lights more smoothly, and waiting times were reduced by up to 28 seconds. Average delays decreased for the entire network by 86.46% and 83.84% under traffic densities of 150 vehicles per hour per lane and 200 vehicles per hour per lane, respectively.
Keywords: connected vehicles, GLOSA, intelligent transport systems, vehicle-to-infrastructure communication
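Illustrative note (not part of the original abstract): the core of a GLOSA advisory is choosing a speed within the allowed range that lets the vehicle reach the stop line during a green window. The sketch below is a minimal stand-in for such a control rule, not the authors' VISSIM External Driver Model implementation; all signal timing values are placeholders.

```python
def glosa_advice(distance_m, t_now, cycle_s, green_start_s, green_end_s,
                 v_min=5.0, v_max=13.9):
    """Return an advisory speed in m/s, or None if no feasible speed exists."""
    for k in range(3):                                   # consider the next few signal cycles
        g_start = green_start_s + k * cycle_s
        g_end = green_end_s + k * cycle_s
        # Arrive no earlier than the start and no later than the end of the green window.
        earliest = max(g_start - t_now, 1e-6)
        latest = max(g_end - t_now, 1e-6)
        v_lo, v_hi = distance_m / latest, distance_m / earliest
        lo, hi = max(v_lo, v_min), min(v_hi, v_max)
        if lo <= hi:
            return hi                                    # fastest feasible speed within the window
    return None                                          # no feasible speed: prepare to stop

# 250 m from the signal, 20 s into a 60 s cycle whose green runs from 30 s to 55 s.
print(glosa_advice(distance_m=250.0, t_now=20.0, cycle_s=60.0,
                   green_start_s=30.0, green_end_s=55.0))
```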
Procedia PDF Downloads 171
917 Structural Design Optimization of Reinforced Thin-Walled Vessels under External Pressure Using Simulation and Machine Learning Classification Algorithm
Authors: Lydia Novozhilova, Vladimir Urazhdin
Abstract:
An optimization problem for reinforced thin-walled vessels under uniform external pressure is considered. The conventional approaches to optimization generally start with pre-defined geometric parameters of the vessels and then employ analytic or numeric calculations and/or experimental testing to verify functionality, such as stability under the projected conditions. The proposed approach consists of two steps. First, the feasibility domain is identified in the multidimensional parameter space. Every point in the feasibility domain defines a design satisfying both geometric and functional constraints. Second, an objective function defined in this domain is formulated and optimized. The broader applicability of the suggested methodology is maximized by implementing the Support Vector Machine (SVM) classification algorithm of machine learning for identification of the feasible design region. Training data for the SVM classifier are obtained using the Simulation package of SOLIDWORKS®. Based on the data, the SVM algorithm produces a curvilinear boundary separating admissible and non-admissible sets of design parameters with maximal margins. Then optimization of the vessel parameters in the feasibility domain is performed using standard algorithms for constrained optimization. As an example, optimization of a ring-stiffened closed cylindrical thin-walled vessel with semi-spherical caps under high external pressure is implemented. As a functional constraint, the von Mises stress criterion is used, but any other stability constraint admitting mathematical formulation can be incorporated into the proposed approach. The suggested methodology has good potential for reducing the design time needed to find optimal parameters of thin-walled vessels under uniform external pressure.
Keywords: design parameters, feasibility domain, von Mises stress criterion, Support Vector Machine (SVM) classifier
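Illustrative note (not part of the original abstract): the two-step workflow, train an SVM on feasibility labels and then optimize an objective subject to staying on the feasible side of the learned boundary, can be sketched as below. The feasibility rule and objective are synthetic stand-ins for the SOLIDWORKS Simulation stress check and the real design objective; parameter names and numbers are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from scipy.optimize import minimize

rng = np.random.default_rng(1)
# Design points: [wall thickness (mm), stiffener spacing (mm)] -- placeholder ranges.
X = rng.uniform([2.0, 10.0], [10.0, 60.0], size=(300, 2))
# Synthetic stand-in for "von Mises stress below allowable" from simulation results.
feasible = (X[:, 0] - 0.08 * X[:, 1] > 1.5).astype(int)

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, feasible)   # learned feasibility boundary

def objective(x):                                   # toy objective: lighter design is better
    return x[0] * (1.0 + 50.0 / x[1])

constraint = {"type": "ineq", "fun": lambda x: clf.decision_function([x])[0]}  # stay feasible
res = minimize(objective, x0=[6.0, 30.0], bounds=[(2, 10), (10, 60)],
               constraints=[constraint], method="SLSQP")
print("optimal design:", res.x, "objective:", res.fun)
```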
Procedia PDF Downloads 327
916 Biochemical Characterization and Structure Elucidation of a New Cytochrome P450 Decarboxylase
Authors: Leticia Leandro Rade, Amanda Silva de Sousa, Suman Das, Wesley Generoso, Mayara Chagas Ávila, Plinio Salmazo Vieira, Antonio Bonomi, Gabriela Persinoti, Mario Tyago Murakami, Thomas Michael Makris, Leticia Maria Zanphorlin
Abstract:
Alkenes have an economic appeal, especially in the biofuels field, since they are precursors for the production of drop-in biofuels, which have chemical and physical properties similar to conventional fossil fuels, with no oxygen in their composition. After the discovery of the first P450 CYP152, OleTJE, in 2011, reported with its unique property of decarboxylating fatty acids (FA) using hydrogen peroxide as a cofactor and producing 1-alkenes as the main product, the scientific and technological interest in this family of enzymes vastly increased. In this context, the present work presents a new decarboxylase (OleTRN) with low similarity to OleTJE (32%), its biochemical characterization, and structure elucidation. As main results, OleTRN presented a high yield of expression and purity, optimum reaction conditions at 35 °C and pH from 6.5 to 8.0, and higher specificity for oleic acid. Besides that, structure-guided mutations were performed and, according to the functional characterizations, it was observed that some mutations presented different specificity and chemoselectivity when varying the chain length of FA substrates from 12 to 20 carbons. These results are extremely interesting from a biotechnological perspective, as those characteristics could diversify the applications and contribute to designing better cytochrome P450 decarboxylases. Considering that peroxygenases have the potential to decarboxylate and hydroxylate fatty acids, and that the elucidation of the intriguing mechanism underlying the decarboxylation preference of OleTJE is still a challenge, the elucidation of the OleTRN structure and the functional characterizations of OleTRN and its mutants contribute new information about CYP152. Besides that, the work also contributed to the discovery of a new decarboxylase with a selectivity profile different from OleTJE, which allows a wide range of applications.
Keywords: P450, decarboxylases, alkenes, biofuels
Procedia PDF Downloads 202
915 Submarine Topography and Beach Survey of Gang-Neung Port in South Korea, Using Multi-Beam Echo Sounder and Shipborne Mobile Light Detection and Ranging System
Authors: Won Hyuck Kim, Chang Hwan Kim, Hyun Wook Kim, Myoung Hoon Lee, Chan Hong Park, Hyeon Yeong Park
Abstract:
We conducted a submarine topography and beach survey from December 2015 to January 2016 using a multi-beam echo sounder, EM3001 (Kongsberg corporation), and a Shipborne Mobile LiDAR System. Our survey area was the Anmok beach in Gangneung, South Korea. We built the Shipborne Mobile LiDAR System for these surveys. The Shipborne Mobile LiDAR System includes a LiDAR (RIEGL LMS-420i), an IMU (Inertial Measurement Unit, MAGUS Inertial+) and RTK GNSS (Real Time Kinematic Global Navigation Satellite System, LEIAC GS 15 GS25) for beach measurement, LiDAR motion compensation, and precise positioning. The Shipborne Mobile LiDAR System scans the beach from the moving vessel using the laser. We mounted the Shipborne Mobile LiDAR System on the top of the vessel. Before the beach survey, we conducted an eight-circle IMU calibration survey to stabilize the heading of the IMU. This survey should be carried out as close as possible to the beach, but our vessel could not come closer to the beach because of objects in the water. At the same time, we conducted a submarine topography survey using the multi-beam echo sounder EM3001. A multi-beam echo sounder is a device that observes and records the submarine topography using sound waves. We mounted the multi-beam echo sounder on the left side of the vessel. We were equipped with a motion sensor, DGNSS (Differential Global Navigation Satellite System), and an SV (sound velocity) sensor for the vessel's motion compensation, the vessel's position, and the velocity of sound in seawater. The Shipborne Mobile LiDAR System was able to reduce the time consumed by the beach survey compared with previous conventional methods of beach survey.
Keywords: Anmok, beach survey, Shipborne Mobile LiDAR System, submarine topography
Procedia PDF Downloads 429
914 Effect of Lithium Bromide Concentration on the Structure and Performance of Polyvinylidene Fluoride (PVDF) Membrane for Wastewater Treatment
Authors: Poojan Kothari, Yash Madhani, Chayan Jani, Bharti Saini
Abstract:
The requirements for quality drinking and industrial water are increasing, while water resources are depleting. Moreover, a large amount of wastewater is being generated and dumped into water bodies without treatment. These factors have made improving water treatment efficiency and water reuse an important agenda. Membrane technology for wastewater treatment is an advanced process and has become increasingly popular in the past few decades. There are many traditional methods for tertiary treatment, such as chemical coagulation, adsorption, etc. However, recent developments in the membrane technology field have led to the manufacturing of better quality membranes at reduced costs. This, along with the high costs of conventional treatment processes, the high separation efficiency, and the relative simplicity of the membrane treatment process, has made it an economically viable option for municipal and industrial purposes. Ultrafiltration polymeric membranes can be used for wastewater treatment and drinking water applications. The proposed work focuses on the preparation of one such UF membrane, polyvinylidene fluoride (PVDF) doped with LiBr, for wastewater treatment. Most polymeric membranes are hydrophobic in nature. This property leads to repulsion of water, and hence solute particles occupy the pores, decreasing the lifetime of the membrane. Thus, modification of the membrane through the addition of a small amount of salt such as LiBr helped us attain desirable membrane characteristics, which can then be used for wastewater treatment. The membrane characteristics are investigated by measuring various properties such as porosity, contact angle and wettability, to determine the hydrophilic nature of the membrane, and morphology (surface as well as structure). Pure water flux, solute rejection and permeability of the membrane are determined by permeation experiments. A study of the membrane characteristics at various concentrations of LiBr helped us to compare their effectiveness.
Keywords: lithium bromide (LiBr), morphology, permeability, polyvinylidene fluoride (PVDF), solute rejection, wastewater treatment
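Illustrative note (not part of the original abstract): the routine calculations behind the characterisation mentioned above, gravimetric porosity, pure water flux and solute rejection, can be sketched as below. All input numbers are placeholders, not measured values from the study.

```python
def porosity(wet_g, dry_g, area_cm2, thickness_cm, water_density=1.0):
    pore_volume = (wet_g - dry_g) / water_density            # cm^3 of water held in the pores
    return pore_volume / (area_cm2 * thickness_cm)           # fraction of total membrane volume

def water_flux(permeate_L, area_m2, time_h):
    return permeate_L / (area_m2 * time_h)                   # L m^-2 h^-1 (LMH)

def rejection(feed_conc, permeate_conc):
    return 1.0 - permeate_conc / feed_conc                    # fractional solute rejection

print("porosity :", porosity(wet_g=0.75, dry_g=0.61, area_cm2=12.5, thickness_cm=0.015))
print("flux     :", water_flux(permeate_L=0.18, area_m2=0.0012, time_h=0.5), "LMH")
print("rejection:", rejection(feed_conc=100.0, permeate_conc=8.0))
```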
Procedia PDF Downloads 147
913 Satellite Derived Evapotranspiration and Turbulent Heat Fluxes Using Surface Energy Balance System (SEBS)
Authors: Muhammad Tayyab Afzal, Muhammad Arslan, Mirza Muhammad Waqar
Abstract:
One of the key components of the water cycle is evapotranspiration (ET), which represents water consumption by vegetated and non-vegetated surfaces. Conventional techniques for measuring ET are point based and representative of the local scale only. Satellite remote sensing data, with large area coverage and high temporal frequency, provide representative measurements of several relevant biophysical parameters required for the estimation of ET at regional scales. The objective of this research is to exploit satellite data in order to estimate evapotranspiration. This study uses the Surface Energy Balance System (SEBS) model to calculate daily actual evapotranspiration (ETa) in Larkana District, Sindh, Pakistan, using Landsat TM data for cloud-free days. As there is no flux tower in the study area for direct measurement of latent heat flux or evapotranspiration and sensible heat flux, the model-estimated values of ET were compared with the reference evapotranspiration (ETo) computed by the FAO-56 Penman-Monteith method using meteorological data. For a country like Pakistan, irrigated agriculture in the river basins is the largest user of fresh water. For better assessment and management of irrigation water requirements, the estimation of the consumptive use of water for agriculture is very important because it is the main consumer of water. ET is also an essential component of the water balance, since it accounts for a major loss of irrigation water and precipitation on cropland. As a large amount of irrigation water is lost through ET, its accurate estimation can be helpful for efficient management of irrigation water. The results of this study can be used to analyse surface conditions, i.e. temperature, energy budgets and relevant characteristics. Through this information we can monitor vegetation health and suitable agricultural conditions and can take controlling steps to increase agricultural production.
Keywords: SEBS, remote sensing, evapotranspiration, ETa
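Illustrative note (not part of the original abstract): the FAO-56 Penman-Monteith reference evapotranspiration used above as the comparison benchmark can be computed as in the sketch below. The meteorological inputs are placeholder values, not data from Larkana District.

```python
import math

def fao56_eto(t_mean_c, rh_percent, u2_ms, rn_mj, g_mj=0.0, elevation_m=50.0):
    es = 0.6108 * math.exp(17.27 * t_mean_c / (t_mean_c + 237.3))   # saturation vapour pressure, kPa
    ea = es * rh_percent / 100.0                                     # actual vapour pressure, kPa
    delta = 4098.0 * es / (t_mean_c + 237.3) ** 2                    # slope of vapour pressure curve
    pressure = 101.3 * ((293.0 - 0.0065 * elevation_m) / 293.0) ** 5.26
    gamma = 0.000665 * pressure                                      # psychrometric constant, kPa/degC
    num = 0.408 * delta * (rn_mj - g_mj) + gamma * 900.0 / (t_mean_c + 273.0) * u2_ms * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2_ms))              # ETo in mm/day

# Example day: 32 degC, 45% RH, 2 m/s wind at 2 m height, net radiation 18 MJ m-2 day-1.
print("ETo =", round(fao56_eto(32.0, 45.0, 2.0, 18.0), 2), "mm/day")
```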
Procedia PDF Downloads 333
912 Gas While Drilling (GWD) Classification in Betara Complex; An Effective Approachment to Optimize Future Candidate of Gumai Reservoir
Authors: I. Gusti Agung Aditya Surya Wibawa, Andri Syafriya, Beiruny Syam
Abstract:
The Gumai Formation, which acts as a regional seal for the Talang Akar Formation, has become one of the most prolific reservoirs in the South Sumatra Basin and the primary exploration target in this area. Marine conditions were eventually established during the continuation of the transgressive sequence, leading to open marine facies deposition in the Early Miocene. The lithology is dominated by marine clastic deposits in which calcareous shales, claystones and siltstones are interbedded with fine-grained calcareous and glauconitic sandstones, which are targeted as the hydrocarbon reservoir. Until now, the main objective of PetroChina's exploration and production in the Betara area has been the Lower Talang Akar Formation only. Successful testing in some exploration wells, which flowed gas and condensate from the Gumai Formation, opened the opportunity to pursue a new reservoir objective in the Betara area. The limitations of conventional wireline log data in the Gumai interval generate a technical challenge in terms of the geological approach. The Gas While Drilling indicator was therefore utilized with the objective of determining the next Gumai reservoir candidate capable of increasing Jabung hydrocarbon discoveries. This paper describes how the Gas While Drilling indicator is processed to delineate potential and non-potential zones by cut-off analysis. Validation, performed by correlation and comparison with well logs, Drill Stem Test (DST), and Reservoir Performance Monitor (RPM) data, succeeded in identifying the Gumai reservoir in the Betara Complex. After integrating all the data, we were able to generate a Betara Complex potential map overlaid with the reservoir characterization distribution as part of a risk assessment of potential zone presence. Mud log utilization and geophysical data information successfully addressed the geological challenges in this study.
Keywords: Gumai, gas while drilling, classification, reservoir, potential
Procedia PDF Downloads 355
911 The Effects of Damping Devices on Displacements, Velocities and Accelerations of Structures
Authors: Radhwane Boudjelthia
Abstract:
The most recent earthquakes that have occurred in the world, and particularly in Algeria, have killed thousands of people and caused severe damage. The example etched in our memory is the earthquake in the regions of Boumerdes and Algiers (the Boumerdes earthquake of May 21, 2003). For all the actors involved in the building process, an earthquake is the litmus test for construction. The goal we set ourselves is to contribute to the implementation of a thoughtful approach to the seismic protection of structures. For many engineers, the most conventional approach to protecting works (buildings and bridges) from the effects of earthquakes is to increase rigidity. This approach is not always effective, especially in contexts that favour resonance and the amplification of seismic forces. The field of earthquake engineering has therefore made significant advances, catalysed among other things by the development of computational techniques and the use of powerful test facilities. This has led to the emergence of several innovative technologies, such as the introduction of special isolation devices between the infrastructure and the superstructure. This approach, commonly known as "seismic isolation", absorbs significant seismic forces without damage to the structure, thus ensuring the protection of lives and property. In addition, the forces transmitted to the structure by ground shaking are concentrated mainly at the supports. With this approach, the natural period of the structure increases and the seismic loads are reduced; the seismic motion is thus attenuated. Likewise, base isolation may be used in combination with dampers in order to control the deformation of the isolation system and the absolute displacement of the superstructure above the isolation interface. Alternatively, dampers alone can be used to reduce oscillation amplitudes and thus seismic loads. The use of damping devices represents an effective solution for the rehabilitation of existing structures. Beyond these passive means of reducing accelerations, much research has been conducted over several years to develop active control systems for the response of buildings to earthquakes. Keywords: earthquake, building, seismic forces, displacement, resonance, response
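To make the link between base isolation, natural period, and seismic demand concrete, the sketch below compares a fixed-base and an isolated single-degree-of-freedom idealisation. The mass, the stiffness values, and the simplified design spectrum (a constant plateau that decays as 1/T beyond a corner period) are illustrative assumptions, not results from the paper.

```python
import math

def natural_period(mass_kg, stiffness_n_per_m):
    """Natural period T = 2*pi*sqrt(m/k) of an SDOF idealisation (seconds)."""
    return 2.0 * math.pi * math.sqrt(mass_kg / stiffness_n_per_m)

def spectral_acceleration(period_s, plateau_g=1.0, corner_period_s=0.5):
    """Very simplified design spectrum: constant plateau, then decaying as 1/T."""
    if period_s <= corner_period_s:
        return plateau_g
    return plateau_g * corner_period_s / period_s

mass = 500e3            # 500 t, illustrative
k_fixed = 2.0e8         # stiff fixed-base structure (N/m), illustrative
k_isolated = 8.0e6      # flexible isolation layer governs (N/m), illustrative

# Isolation lengthens the period, which lowers the spectral demand
for label, k in (("fixed base", k_fixed), ("isolated", k_isolated)):
    T = natural_period(mass, k)
    print(f"{label:10s}  T = {T:5.2f} s  Sa ~ {spectral_acceleration(T):4.2f} g")
```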
Procedia PDF Downloads 127
910 A Method for Precise Vertical Position of the Implant When Using Computerized Surgical Guides and Bone Reduction
Authors: Abraham Finkelman
Abstract:
Computerized surgical guides (CSGs) have been proven to be a predictable way to place dental implants, with relatively high accuracy with respect to the treatment plan. A bone-supported CSG allows the necessary changes to the hard tissue to be made before and after implant placement. The CSG gives an accurate position for drilling, and during implant placement it allows the vertical position of the implant to be altered, which changes the final position of the abutment while avoiding damage to adjacent anatomical structures. Any changes required to the bone level can be made prior to fixation of the CSG using a reduction guide, which incurs extra surgical fees and requires a second surgical guide. Any changes to the bone level after implant placement risk damaging the implant neck surface. The technique consists of a universal system that allows the excess bone around the implant sockets to be removed prior to implant placement, enabling the implant to be placed accurately in the vertical position planned with the CSG. The system consists of hollow pins of different lengths and diameters, depending on the implant system used: lengths range from 6 mm to 16 mm and diameters from 2.6 mm to 4.8 mm. Upon completion of the drilling, a pin is inserted into the implant socket using the insertion tool. Once the insertion tool has been unscrewed from the pin, the bone reduction can proceed. The bone reduction can be done using conventional methods until all the excess bone around the pin has been removed. The insertion tool is then screwed into the pin and the pin is removed. The new bone level at the crest of the implant socket now serves as the reference mark for the vertical position of the implant. In some cases, when the implant is located very close to anatomical structures, any deviation in the vertical position of the implant during surgery can damage those structures, creating irreversible harm such as paresthesia or dysesthesia of the mandibular nerve. If immediate loading is planned and the temporary restoration has been made on the basis of the computerized plan, a deviation in the vertical position of the implant will affect the position of the abutment, compromising the accuracy of the temporary prosthesis and extending the working time needed to adapt the prosthesis to the new position. Keywords: bone reduction, computer aided navigation, dental implant placement, surgical guides
Procedia PDF Downloads 331
909 Enhancing Health Information Management with Smart Rings
Authors: Bhavishya Ramchandani
Abstract:
A smart ring is a small electronic device worn on the finger. It incorporates mobile technology and features that make it simple to use. These devices, which resemble conventional rings and are usually made to fit on the finger, offer features including access management, gesture control, mobile payment processing, and activity tracking. Poor sleep patterns, irregular schedules, and bad eating habits are among the health problems many people face today. Diets lacking fruits, vegetables, legumes, nuts, and whole grains are common, and individuals in India also experience metabolic issues. In the medical field, smart rings can help patients with problems such as stomach illnesses and the inability to consume meals tailored to their bodies' needs. The smart ring tracks bodily metrics, including blood glucose levels, and presents the information instantly. Based on these data, the ring generates insights and a workable plan suited to the body's needs. As part of our primary research, we conducted focus groups and individual interviews, discussing the difficulties participants have in maintaining a proper diet and whether a smart ring would be beneficial to them. Participants were enthusiastic about and supportive of the concept of using smart rings in healthcare, and they believed that such rings could help them maintain their health and follow a well-balanced diet plan. This response came from the primary data; in addition, the Emerging Technology Canvas Analysis of smart rings in healthcare significantly improved our understanding of the technology's application in the medical field. Demand for smart healthcare is expected to grow as people become more conscious of their health; within three to four years, as demand increases, the majority of individuals may adopt such rings, with a significant impact on their daily lives. Keywords: smart ring, healthcare, electronic wearable, emerging technology
Procedia PDF Downloads 64
908 The Attitudinal Effects of Dental Hygiene Students When Changing Conventional Practices of Preventive Therapy in the Dental Hygiene Curriculum
Authors: Shawna Staud, Mary Kaye Scaramucci
Abstract:
Objective: Rubber cup polishing has been a traditional method of preventive therapy in dental hygiene treatment. Newer methods such as air polishing have changed the way dental hygiene care is provided, yet this technique has not been embraced by students in the program or by practitioners in the workforce. Students entering the workforce tend to follow office protocol and lack the confidence to introduce technologies learned in the curriculum. This project was designed to help students gain confidence in newer skills and to encourage private practice settings to adopt newer technologies for patient care. Our program recently introduced air polishing earlier in the curriculum, before the rubber cup technique, to determine whether students would embrace the technology and become leading-edge professionals when they enter the marketplace. Methods: The class of 2022 was taught the traditional method of polishing in the first-year curriculum and air polishing in the second-year curriculum. The class of 2023 will be taught the air polishing method in the first-year curriculum and the traditional method of polishing in the second-year curriculum. Pre- and post-graduation survey data will be collected from both cohorts. Descriptive statistics and paired t-tests, with alpha set at .05, will be used to compare pre- and post-survey results. Results: This study is currently in progress, with a completion date of October 2023. The class of 2022 completed the pre-graduation survey in the spring of 2022; the post-graduation survey will be sent out in October 2022. The class of 2023 cohort will be surveyed in the spring of 2023 and in October 2023. Conclusion: Our hypothesis is that students who are taught air polishing first will be more inclined to adopt that skill in private practice, thereby embracing newer technology and improving oral health care. Keywords: dental hygiene, air polishing, rubber cup polishing, preventive therapy
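Since the analysis plan calls for paired t-tests with alpha set at .05 on pre- and post-survey scores, a minimal sketch of that comparison is shown below. The survey scores are invented placeholders, not data from either cohort.

```python
from scipy import stats

ALPHA = 0.05

# Invented placeholder scores; real cohort data are still being collected.
pre_scores  = [3.1, 2.8, 3.5, 2.9, 3.0, 3.4, 2.7, 3.2]
post_scores = [3.6, 3.1, 3.9, 3.4, 3.3, 3.8, 3.0, 3.7]

# Paired (dependent-samples) t-test on pre vs. post responses
t_stat, p_value = stats.ttest_rel(pre_scores, post_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("significant at alpha = .05" if p_value < ALPHA else "not significant")
```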
Procedia PDF Downloads 103
907 The Feasibility of a Protected Launch Site near Melkbosstrand for a Public Transport Ferry across Table Bay, Cape Town
Authors: Mardi Falck, André Theron
Abstract:
Traffic congestion on the northern side of Table Bay is a major problem. In Gauteng, the implementation of the Gautrain between Pretoria and Johannesburg relieved traffic congestion. In 2002, two entrepreneurs endeavoured to implement a hovercraft ferry service across the bay from Table View to the Port of Cape Town; however, the EIA process showed that residents of the area did not agree with the proposed location for a launch site. Seventeen years later, the traffic problem has not gone away; instead, the congestion has increased. As property prices in the City Bowl of Cape Town keep rising, people tend to live on the outskirts of the CBD and commute to work. This means more vehicles on the road every day, and the public transport services cannot keep up with the demand. For this reason, the study area of the previous hovercraft plans is being extended further north. The study's aim is thus to determine the feasibility of a launch site north of Bloubergstrand from which to launch and receive a public transport ferry across Table Bay. The feasibility is established by researching ferry services around the world and what makes them successful. Different types of ferries and their operational capabilities in terms of weather and waves are investigated, and by establishing the offshore and nearshore wind and wave climate for the area, an appropriate protected launch site is determined. It was concluded that travel time could potentially be halved. A hovercraft proved to be the most feasible ferry type because it does not require a conventional harbour; other types of vessels require a protected launch site because of the wave climate, which means large breakwaters that increase the cost substantially. The Melkbos Cultural Centre proved to be the most viable option for the location of the launch site because it already has buildings and infrastructure. It is recommended that, if a harbour is chosen for the proposed ferry service, it also be used for other activities such as fishing, eco-tourism, and leisure. Further studies are recommended to optimise the feasibility of such a harbour. Keywords: Cape Town, ferry, public, Table Bay
Procedia PDF Downloads 152