Search results for: cloud point
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5532

4272 Using Geospatial Analysis to Reconstruct the Thunderstorm Climatology for the Washington DC Metropolitan Region

Authors: Mace Bentley, Zhuojun Duan, Tobias Gerken, Dudley Bonsal, Henry Way, Endre Szakal, Mia Pham, Hunter Donaldson, Chelsea Lang, Hayden Abbott, Leah Wilcynzski

Abstract:

Air pollution has the potential to modify the lifespan and intensity of thunderstorms and the properties of lightning. Using data mining and geovisualization, we investigate how background climate and weather conditions shape variability in urban air pollution and how this, in turn, shapes thunderstorms as measured by the intensity, distribution, and frequency of cloud-to-ground lightning. A spatiotemporal analysis was conducted to identify thunderstorms using high-resolution lightning detection network data. Over seven million lightning flashes were used to identify more than 196,000 thunderstorms that occurred between 2006 and 2020 in the Washington, DC Metropolitan Region. Each lightning flash in the dataset was grouped into thunderstorm events by means of a temporal and spatial clustering algorithm. Once the thunderstorm event database was constructed, hourly wind direction, wind speed, and atmospheric thermodynamic data were added to the initiation and dissipation times and locations for the 196,000 identified thunderstorms. Hourly aerosol and air quality data for the thunderstorm initiation times and locations were also incorporated into the dataset. Developing thunderstorm climatologies using a lightning tracking algorithm and lightning detection network data was found to be useful for visualizing the spatial and temporal distribution of urban-augmented thunderstorms in the region.
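A minimal sketch of how flashes could be grouped into storm events with a density-based spatiotemporal clustering step is shown below. The distance/time thresholds, the degree-to-kilometre scaling, and the use of scikit-learn's DBSCAN are illustrative assumptions, not the authors' algorithm.

```python
# Illustrative sketch only: cluster lightning flashes into storm events by
# treating time as a third, scaled coordinate. Thresholds are hypothetical.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_flashes(lon, lat, t_seconds, space_km=15.0, time_s=900.0, min_flashes=10):
    """Group flashes whose separation is within roughly space_km and time_s."""
    km_per_deg = 111.0  # rough conversion near mid-latitudes (ignores cos(lat))
    X = np.column_stack([
        lon * km_per_deg / space_km,   # scale so eps=1 corresponds to ~space_km
        lat * km_per_deg / space_km,
        t_seconds / time_s,            # scale so eps=1 corresponds to ~time_s
    ])
    labels = DBSCAN(eps=1.0, min_samples=min_flashes).fit_predict(X)
    return labels  # -1 marks flashes not assigned to any storm event
```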

Keywords: lightning, urbanization, thunderstorms, climatology

Procedia PDF Downloads 75
4271 Amblyopia and Eccentric Fixation

Authors: Kristine Kalnica-Dorosenko, Aiga Svede

Abstract:

Amblyopia, or 'lazy eye', is impaired or dim vision without an obvious defect or change in the eye. It is often associated with abnormal visual experience, most commonly strabismus, anisometropia or both, and form deprivation. The main task of amblyopia treatment is to ameliorate etiological factors to create a clear retinal image and to ensure the participation of the amblyopic eye in the visual process. The treatment of amblyopia and eccentric fixation is usually associated with problems in therapy. Eccentric fixation is present in around 44% of all patients with amblyopia and in 30% of patients with strabismic amblyopia. In Latvia, amblyopia is carefully treated in various clinics, but the diagnosis of eccentric fixation is relatively rare. The conflict that has developed concerning the relationship between the visual disorder and the degree of eccentric fixation in amblyopia should be rethought, because it has an important bearing on the cause and treatment of amblyopia and on the role of eccentric fixation in this case. Visuoscopy is the most frequently used method for determination of eccentric fixation. With traditional visuoscopy, a fixation target is projected onto the patient's retina, and the examiner asks the patient to look directly at the center of the target. An optometrist then observes the point on the macula used for fixation. This objective test provides clinicians with direct observation of the fixation point of the eye. It requires patients to voluntarily fixate the target and assumes the foveal reflex accurately demarcates the center of the foveal pit. In the end, by having a very simple method to evaluate fixation, it is possible to indirectly evaluate treatment improvement, as eccentric fixation is always associated with reduced visual acuity. So, one may expect that if eccentric fixation in an amblyopic eye is found with visuoscopy, then visual acuity should be less than 1.0 (in decimal units). With occlusion or another amblyopia therapy, one would expect both visual acuity and fixation to improve simultaneously, that is, fixation would become more central. Consequently, improvement in fixation pattern by treatment is an indirect measure of improvement in visual acuity. Evaluation of eccentric fixation in the child may be helpful in identifying amblyopia in children prior to measurement of visual acuity. This is very important because the earlier amblyopia is diagnosed, the better the chance of improving visual acuity.

Keywords: amblyopia, eccentric fixation, visual acuity, visuoscopy

Procedia PDF Downloads 158
4270 Sensitivity Assessment of Spectral Salinity Indices over Desert Sabkha of Western UAE

Authors: Rubab Ammad, Abdelgadir Abuelgasim

Abstract:

The UAE lies in one of the most arid regions of the world and is thus home to geologic features common to such climatic conditions, including vast open deserts, sand dunes, saline soils, inland Sabkha and coastal Sabkha. Sabkha are characteristic salt flats formed in arid environments by the deposition and precipitation of salt and silt over the sand surface, owing to a shallow water table and rates of evaporation that exceed rates of precipitation. The study area, which comprises western UAE, is heavily concentrated with inland Sabkha. Remote sensing is conventionally used to study the soil salinity of agriculturally degraded lands but not so broadly for Sabkha. The focus of this study was to identify these highly saline Sabkha areas in remotely sensed data using salinity indices. The existing salinity indices in the literature were designed for agricultural soils and rarely exploit the spectral response in the short-wave infrared (SWIR1 and SWIR2) parts of the electromagnetic spectrum. Using Landsat 8 OLI data and field ground truthing, this study formulated indices utilizing the NIR and SWIR parts of the spectrum and compared the results with existing salinity indices. Most indices show a reasonably good relationship between salinity and the spectral index up to a certain salinity value, after which the reflectance reaches a saturation point. This saturation point varies with the index. However, the study findings suggest that incorporating near-infrared and short-wave infrared bands in a salinity index has the potential to maintain a positive relationship between salinity and reflectance up to a higher salinity value than the other indices.
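The band-ratio idea behind such indices can be illustrated with a generic normalized-difference form; the exact NIR-SWIR index formulated in the study is not given here, and the Landsat 8 band assignments in the comment (band 5 = NIR, band 6 = SWIR1) are stated as assumptions for illustration.

```python
import numpy as np

def normalized_difference(b1, b2):
    """Generic normalized-difference index, e.g. (NIR - SWIR1) / (NIR + SWIR1)."""
    b1 = b1.astype(float)
    b2 = b2.astype(float)
    return (b1 - b2) / np.clip(b1 + b2, 1e-6, None)  # avoid division by zero

# Hypothetical usage with Landsat 8 OLI surface reflectance rasters as arrays:
# nir, swir1 = band-5 and band-6 reflectance grids
# salinity_proxy = normalized_difference(nir, swir1)
```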

Keywords: Sabkha, salinity index, saline soils, Landsat 8, SWIR1, SWIR2, UAE desert

Procedia PDF Downloads 214
4269 The Psychology of Cross-Cultural Communication: A Socio-Linguistics Perspective

Authors: Tangyie Evani, Edmond Biloa, Emmanuel Nforbi, Lem Lilian Atanga, Kom Beatrice

Abstract:

The dynamics of languages in contact necessitates a close study of how their users negotiate meanings from shared values in the process of cross-cultural communication. A transverse analysis of the situation demonstrates complex efforts to connect cultural knowledge to cross-linguistic competencies within a widening range of communicative exchanges. This paper sets out to examine the psychology of cross-cultural communication in a multi-linguistic setting like Cameroon, where many local and international languages are in close contact. The paper equally analyses the pertinence of existing macro-sociological concepts as fundamental knowledge traits in literal and idiomatic cross-semantic mapping. From this point, the article presents a path model connecting sociolinguistics to the increasing adoption of a widening range of communicative genres driven by ongoing globalisation trends and their high-speed information technology machinery. By applying a cross-cultural analysis frame, the paper contributes to a better understanding of the fundamental changes in the nature and goals of cross-cultural knowledge in the pragmatics of communication and cultural acceptability. It emphasises that, in an era of increasing global interchange, a comprehensive and inclusive global culture built by bridging gaps in cross-cultural communication would have significant potential to contribute to global social development goals, provided that inadequacies in language constructs are adjusted to create avenues that intertwine with sociocultural beliefs, ensuring that meaningful and context-bound sociolinguistic values are observed within the global arena of communication.

Keywords: cross-cultural communication, customary language, literalisms, primary meaning, subclasses, transubstantiation

Procedia PDF Downloads 285
4268 Dynamic and Thermal Characteristics of Three-Dimensional Turbulent Offset Jet

Authors: Ali Assoudi, Sabra Habli, Nejla Mahjoub Saïd, Philippe Bournot, Georges Le Palec

Abstract:

Studying the flow characteristics of a turbulent offset jet is an important topic among researchers across the world because of its various engineering applications. Common examples include injection and carburetor systems, entrainment and mixing processes in gas turbine and boiler combustion chambers, thrust-augmenting ejectors for V/STOL aircraft, HVAC systems, environmental discharges, film cooling, and many others. An offset jet is formed when a jet discharges into a medium above a horizontal solid wall parallel to the axis of the jet exit but offset from it by a certain distance. The structure of a turbulent offset jet can be described by three main regions. Close to the nozzle exit, an offset jet possesses characteristic features similar to those of free jets. Then, the entrainment of fluid between the jet, the offset wall and the bottom wall creates a low-pressure zone, forcing the jet to deflect towards the wall and eventually attach to it at the impingement point. This is referred to as the Coanda effect. Further downstream, after the reattachment point, the offset jet has the characteristics of a wall jet flow. Therefore, the offset jet has characteristics of free, impingement and wall jets, and it is relatively more complex compared to these types of flows. The present study examines the dynamic and thermal evolution of a 3D turbulent offset jet with different offset height ratios (the ratio of the distance from the jet exit to the bottom wall to the jet nozzle diameter). To this end, a numerical study was conducted to investigate a three-dimensional offset jet flow through the resolution of the governing Navier–Stokes equations by means of the finite volume method and the RSM second-order turbulence closure model. A detailed discussion has been provided on the flow and thermal characteristics in the form of streamlines, mean velocity vectors, pressure field and Reynolds stresses.
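For orientation, a generic incompressible Reynolds-averaged form of the governing equations solved by such finite-volume/RSM approaches can be written as follows (standard notation, not reproduced from the paper):

```latex
\frac{\partial \bar{u}_i}{\partial x_i} = 0, \qquad
\frac{\partial \bar{u}_i}{\partial t} + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}
= -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
+ \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j}
- \frac{\partial \overline{u_i' u_j'}}{\partial x_j},
```

where the Reynolds stresses \(\overline{u_i' u_j'}\) are obtained from their own transport equations in the second-order (RSM) closure rather than from an eddy-viscosity assumption.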

Keywords: offset jet, offset ratio, numerical simulation, RSM

Procedia PDF Downloads 304
4267 Comparison of Cervical Length Using Transvaginal Ultrasonography and Bishop Score to Predict Successful Induction

Authors: Lubena Achmad, Herman Kristanto, Julian Dewantiningrum

Abstract:

Background: The Bishop score is a standard method used to predict the success of induction. This examination tends to be subjective, with high inter- and intraobserver variability, so it was presumed to have a low predictive value in terms of the outcome of labor induction. Cervical length measurement using transvaginal ultrasound is considered a more objective assessment of the cervix. Moreover, this examination is not a complicated procedure and is less invasive than digital vaginal examination. Objective: To compare transvaginal ultrasound and Bishop score in predicting successful induction. Methods: This was a prospective cohort study. One hundred and twenty women with singleton pregnancies undergoing induction of labor at 37–42 weeks who met the inclusion and exclusion criteria were enrolled in this study. Cervical assessment by both transvaginal ultrasound and Bishop score was conducted prior to induction. The success of labor induction was defined as reaching the active phase ≤ 12 hours after induction. To determine the best cut-off points of cervical length and Bishop score, receiver operating characteristic (ROC) curves were plotted. Logistic regression analysis was used to determine which factors best predicted induction success. Results: Age, premature rupture of the membranes, the Bishop score, cervical length and funneling were significant predictors of successful induction. ROC curves showed that the best cut-off point for prediction of successful induction was 25.45 mm for cervical length and 3 for the Bishop score. Logistic regression showed that only premature rupture of the membranes and cervical length ≤ 25.45 mm significantly predicted the success of labor induction. After excluding premature rupture of the membranes as the indication for induction, cervical length less than 25.3 mm was a better predictor of successful induction. Conclusion: Compared to the Bishop score, cervical length measured by transvaginal ultrasound was a better predictor of successful induction.
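A minimal sketch of how an ROC-based cut-off such as 25.45 mm can be derived from a continuous predictor is shown below; variable names, the Youden-index criterion, and the use of scikit-learn are assumptions for illustration, not the authors' exact analysis.

```python
# Sketch: derive an ROC-based cut-off (Youden's J) for a continuous predictor.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def youden_cutoff(y_success, predictor, higher_predicts_success=False):
    """Return the cut-off maximizing sensitivity + specificity - 1, and the AUC.
    For cervical length, shorter values predict success, so the sign is flipped."""
    scores = predictor if higher_predicts_success else -predictor
    fpr, tpr, thresholds = roc_curve(y_success, scores)
    j = tpr - fpr                               # Youden's J at each threshold
    best = thresholds[np.argmax(j)]
    cutoff = best if higher_predicts_success else -best
    return cutoff, roc_auc_score(y_success, scores)
```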

Keywords: Bishop Score, cervical length, induction, successful induction, transvaginal sonography

Procedia PDF Downloads 325
4266 Bioinformatics Approach to Support Genetic Research in Autism in Mali

Authors: M. Kouyate, M. Sangare, S. Samake, S. Keita, H. G. Kim, D. H. Geschwind

Abstract:

Background & Objectives: Human genetic studies can be expensive, even unaffordable, in developing countries, partly due to sequencing costs. Our aim is to pilot the use of bioinformatics tools to guide scientifically valid, locally relevant, and economically sound autism genetic research in Mali. Methods: The following databases, NCBI, HGMD, and LSDB, were used to identify hotspot mutations. Phenotype, transmission pattern, theoretical protein expression in the brain, and the impact of the mutation on the 3D structure of the protein were used to prioritize selected autism genes. We used the protein database, Modeller, and Clustal W. Results: We found Mef2c (Gly27Ala/Leu38Gln), Pten (Thr131Ile), Prodh (Leu289Met), Nme1 (Ser120Gly), and Dhcr7 (Pro227Thr/Glu224Lys). These mutations were associated with endonucleases BseRI, NspI, PfrJS2IV, BspGI, BsaBI, and SpoDI, respectively. The Gly27Ala/Leu38Gln mutations impacted the 3D structure of the Mef2c protein. Mef2c protein sequences across species showed a high percentage of similarity, with a highly conserved MADS domain. Discussion: Mef2c, Pten, Prodh, Nme1, and Dhcr7 gene mutation frequencies in the Malian population will be very informative. PCR coupled with restriction enzyme digestion can be used to screen the targeted gene mutations. Sanger sequencing will be used for confirmation only. This will considerably reduce the sequencing cost of gene-by-gene mutation screening. The knowledge of the 3D structure and the potential impact of the mutations on the Mef2c protein informed the protein family and altered function (e.g., Leu38Gln). Conclusion & Future Work: Bioinformatics will positively impact autism research in Mali. Our approach can be applied to other neuropsychiatric disorders.
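The PCR-RFLP screening logic mentioned above can be sketched as a simple recognition-site check: a mutation is screenable with a given endonuclease if it creates or destroys that enzyme's recognition site in the amplicon. The recognition sequence in the usage comment is a placeholder, not a verified enzyme motif.

```python
# Sketch of PCR-RFLP screening logic (placeholder recognition sequence).
def creates_or_destroys_site(wild_amplicon: str, mutant_amplicon: str, recognition_seq: str) -> bool:
    """True if the digestion pattern is expected to differ between alleles."""
    wt = recognition_seq.upper() in wild_amplicon.upper()
    mut = recognition_seq.upper() in mutant_amplicon.upper()
    return wt != mut  # site present in one allele but not the other

# Hypothetical usage (sequences and "CATG" motif are placeholders):
# screenable = creates_or_destroys_site(wt_seq, mut_seq, "CATG")
```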

Keywords: bioinformatics, endonucleases, autism, Sanger sequencing, point mutations

Procedia PDF Downloads 83
4265 Study on Correlation of Prostate Specific Antigen with Metastatic Bone Disease in Prostate Cancer on Skeletal Scintigraphy

Authors: Muhammad Waleed Asfandyar, Akhtar Ahmed, Syed Adib-ul-Hasan Rizvi

Abstract:

Objective: To evaluate the ability of serum prostate specific antigen (PSA) concentration, at two cut-off points, to predict skeletal metastasis on bone scintigraphy in men with prostate cancer. Settings: This study was carried out in the department of Nuclear Medicine at the Sindh Institute of Urology and Transplantation (SIUT), Karachi, Pakistan. Materials and Method: From August 2013 to November 2013, forty-two (42) consecutive patients with prostate cancer who underwent technetium-99m methylene diphosphonate (Tc-99m MDP) whole body bone scintigraphy were prospectively analyzed. The information was collected from the scintigraphic database of the Nuclear Medicine department of the Sindh Institute of Urology and Transplantation, Karachi, Pakistan. Patients who did not have a serum PSA concentration available within 1 month before or after the Tc-99m MDP whole body bone scintigraphy were excluded from this study. A whole body bone scintigraphy scan (from the toes to the top of the head) was performed using a moving gamma camera whole-body technique (anterior and posterior) 2–4 hours after intravenous injection of 20 mCi of Tc-99m MDP. In addition, all patients necessarily had a pathological report available. Bony metastases were determined from the bone scan studies, and no further correlation with histopathology or other imaging modalities was performed. To preserve patient confidentiality, direct patient identifiers were not collected. In all patients, PSA values and skeletal scintigraphy were evaluated. Results: The mean age, mean PSA, and incidence of bone metastasis on bone scintigraphy were 68.35 years, 370.51 ng/mL, and 19/42 (45.23%), respectively. According to PSA level, patients were divided into six groups: < 10 ng/mL (10/42), 10-20 ng/mL (5/42), 20-50 ng/mL (2/42), 50-100 ng/mL (3/42), 100-500 ng/mL (3/42), and more than 500 ng/mL (0/42) presenting a negative bone scan. The incidence of a positive bone scan for bone metastasis in each group was one patient (5.26%), none (0%), three patients (15.78%), one patient (5.26%), four patients (21.05%), and ten patients (52.63%), respectively. Of the 42 patients, 19 (45.23%) presented a scintigraphic examination positive for bone metastasis. One patient with bone metastasis on bone scintigraphy had a PSA level less than 10 ng/mL, and in only 1 patient (5.26%) with bone metastasis was the PSA concentration less than 20 ng/mL. Therefore, when the cut-off point adopted for serum PSA concentration was 10 ng/mL, the negative predictive value for bone metastasis was 95%, with a sensitivity of 94.74%, while the positive predictive value and specificity of the method were 56.53% and 43.48%, respectively. When the cut-off point of serum PSA concentration was 20 ng/mL, the positive predictive value and specificity were 78.27% and 65.22%, respectively, whereas the negative predictive value and sensitivity were 100% and 95%, respectively. Conclusion: The results of our study allow us to conclude that a serum PSA concentration higher than 20 ng/mL was a more accurate cut-off point than a serum PSA concentration higher than 10 ng/mL for predicting metastasis on radionuclide bone scintigraphy. In this way, unnecessary cost can be avoided, since a considerable proportion of prostate adenocarcinomas present serum PSA levels less than 20 ng/mL, and for these cases radionuclide bone scintigraphy could be unnecessary.
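The diagnostic indices quoted above follow directly from the 2x2 contingency counts at a given cut-off; a minimal sketch of the arithmetic (the counts passed in would come from the study data and are not reproduced here):

```python
# Sensitivity, specificity, PPV and NPV from 2x2 counts at a given PSA cut-off.
# tp: metastasis present and PSA above cut-off; fp: no metastasis, PSA above cut-off;
# fn: metastasis present, PSA below cut-off; tn: no metastasis, PSA below cut-off.
def diagnostic_indices(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)   # positive predictive value
    npv = tn / (tn + fn)   # negative predictive value
    return sensitivity, specificity, ppv, npv
```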

Keywords: bone scan, cut off value, prostate specific antigen value, scintigraphy

Procedia PDF Downloads 319
4264 An Exponential Field Path Planning Method for Mobile Robots Integrated with Visual Perception

Authors: Magdy Roman, Mostafa Shoeib, Mostafa Rostom

Abstract:

Global vision, whether provided by overhead fixed cameras, on-board aerial vehicle cameras, or satellite images, can provide detailed information on the environment around mobile robots. In this paper, an intelligent vision-based method of path planning and obstacle avoidance for mobile robots is presented. The method integrates visual perception with a newly proposed field-based path-planning method to overcome common path-planning problems such as local minima, unreachable destinations and unnecessarily lengthy paths around obstacles. The method proposes an exponential angle deviation field around each obstacle that affects the orientation of a nearby robot. As the robot heads toward the goal point, obstacles are classified into right and left groups, and a deviation angle is exponentially added to or subtracted from the orientation of the robot. The exponential field parameters are chosen based on a Lyapunov stability criterion to guarantee robot convergence to the destination. The proposed method uses obstacles' shape and location, extracted from the global vision system, through a collision prediction mechanism to decide whether to activate or deactivate each obstacle's field. In addition, a search mechanism is developed, in case the robot or goal point is trapped among obstacles, to find a suitable exit or entrance. The proposed algorithm is validated both in simulation and through experiments. The algorithm shows effectiveness in obstacle avoidance and destination convergence, overcoming common path planning problems found in classical methods.
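A minimal sketch of the exponential deviation idea follows: the closer an obstacle, the larger the angle added to (left group) or subtracted from (right group) the goal heading. The gain and decay constants are hypothetical placeholders; in the paper they would be chosen to satisfy the Lyapunov stability condition, which is not reproduced here.

```python
import numpy as np

def steering_angle(robot_xy, goal_xy, obstacles, k=1.0, decay=0.5):
    """Goal heading plus exponential deviations contributed by nearby obstacles.
    obstacles: iterable of (x, y, side) with side = +1 (left group) or -1 (right group).
    k and decay are illustrative tuning parameters."""
    goal_heading = np.arctan2(goal_xy[1] - robot_xy[1], goal_xy[0] - robot_xy[0])
    deviation = 0.0
    for ox, oy, side in obstacles:
        d = np.hypot(ox - robot_xy[0], oy - robot_xy[1])
        deviation += side * k * np.exp(-decay * d)   # nearer obstacles deviate more
    return goal_heading + deviation
```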

Keywords: path planning, collision avoidance, convergence, computer vision, mobile robots

Procedia PDF Downloads 194
4263 Learning from Dendrites: Improving the Point Neuron Model

Authors: Alexander Vandesompele, Joni Dambre

Abstract:

The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky-integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma, and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality as observed in another study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is not only determined by the weight of the synapse, but also by the activity of other synapses. This is a form of short-term plasticity where synapses are potentiated or depressed by the preceding activity of neighbouring synapses. This is a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable, representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. We use Spike-Time-Dependent-Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same pattern through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network. This causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron, the other output neuron is a LIF neuron with dendritic relationships. Then, the five input neurons are allowed to fire in a particular order. The membrane potentials are reset and subsequently the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, the membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response is different for both sequences. Hence, the dendritic mechanism improves the neuron's capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally however, a learning rule is used to improve the dendritic relationships based on input data. It is possible to learn synaptic strength with STDP, to make a neuron more sensitive to its input. Similarly, it is possible to learn dendritic relationships with STDP, to make the neuron more sensitive to spatiotemporal input sequences.
Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
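A minimal NumPy sketch of the core idea is shown below: the post-synaptic impact of a spike is scaled by the recent activity of the other synapses through a pairwise relation matrix. The dynamics, parameter values, and variable names are simplified assumptions for illustration, not the Bindsnet implementation used in the study.

```python
# Sketch: one update step of a LIF neuron whose synaptic input is modulated by
# a pairwise synapse-relation matrix R, so input order changes the response.
import numpy as np

def dendritic_lif_step(v, trace, spikes, w, R, dt=1.0, tau_m=20.0, tau_tr=10.0,
                       v_rest=0.0, v_thresh=1.0):
    """v: membrane potential, trace: per-synapse activity trace (decaying),
    spikes: 0/1 input vector, w: synaptic weights, R: pairwise relation matrix."""
    trace = trace + (-trace / tau_tr) * dt + spikes     # decaying activity trace
    modulation = 1.0 + R @ trace                        # neighbours potentiate/depress
    current = np.sum(w * spikes * modulation)           # order-dependent input
    v = v + ((v_rest - v) / tau_m) * dt + current       # leaky integration
    fired = v >= v_thresh
    if fired:
        v = v_rest                                      # reset after a spike
    return v, trace, fired
```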

Keywords: dendritic computation, spiking neural networks, point neuron model

Procedia PDF Downloads 133
4262 Predicting Match Outcomes in Team Sport via Machine Learning: Evidence from National Basketball Association

Authors: Jacky Liu

Abstract:

This paper develops a team sports outcome prediction system with potential for wide-ranging applications across various disciplines. Despite significant advancements in predictive analytics, existing studies in sports outcome prediction have considerable limitations, including insufficient feature engineering and underutilization of advanced machine learning techniques, among others. To address these issues, we extend the Sports Cross Industry Standard Process for Data Mining (SRP-CRISP-DM) framework and propose a unique, comprehensive predictive system, using National Basketball Association (NBA) data as an example to test this extended framework. Our approach follows a holistic methodology in feature engineering, employing both Time Series and Non-Time Series Data, as well as conducting Explanatory Data Analysis and Feature Selection. Furthermore, we contribute to the discourse on target variable choice in team sports outcome prediction, asserting that point spread prediction yields higher profits than game-winner prediction. Using machine learning algorithms, particularly XGBoost, results in a significant improvement in the predictive accuracy of team sports outcomes. Applied to point spread betting strategies, it offers an astounding annual return of approximately 900% on an initial investment of $100. Our findings not only contribute to the academic literature but also have critical practical implications for sports betting. Our study advances the understanding of team sports outcome prediction, a burgeoning area in complex system prediction, and paves the way for potential profitability and more informed decision making in sports betting markets.
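A bare-bones sketch of a point-spread regressor with XGBoost's scikit-learn wrapper is given below; the feature columns, target name, and hyperparameters are illustrative placeholders, not the paper's engineered feature set or tuned model.

```python
# Sketch of a point-spread model with XGBoost (placeholder columns/parameters).
import pandas as pd
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

def fit_point_spread_model(games: pd.DataFrame):
    """games: one row per game, with engineered features and a 'point_spread' target."""
    features = [c for c in games.columns if c not in ("point_spread", "date")]
    X_train, X_test, y_train, y_test = train_test_split(
        games[features], games["point_spread"], shuffle=False)  # preserve time order
    model = XGBRegressor(n_estimators=500, max_depth=4, learning_rate=0.05)
    model.fit(X_train, y_train)
    print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
    return model
```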

Keywords: machine learning, team sports, game outcome prediction, sports betting, profits simulation

Procedia PDF Downloads 102
4261 Implementation of Fuzzy Version of Block Backward Differentiation Formulas for Solving Fuzzy Differential Equations

Authors: Z. B. Ibrahim, N. Ismail, K. I. Othman

Abstract:

Fuzzy Differential Equations (FDEs) play an important role in modelling many real-life phenomena. FDEs are used to model the behaviour of problems that are subject to uncertainty and to vague or imprecise information, which constantly arise in mathematical models in various branches of science and engineering. These uncertainties have to be taken into account to obtain a more realistic model, and for many of these models it is difficult, and sometimes impossible, to obtain analytic solutions. Thus, many authors have attempted to extend or modify existing numerical methods developed for solving Ordinary Differential Equations (ODEs) into fuzzy versions suited to solving FDEs. Therefore, in this paper, we propose a fuzzy version of the three-point block method based on Block Backward Differentiation Formulas (FBBDF) for the numerical solution of first-order FDEs. The three-point block FBBDF method is implemented with a uniform step size and produces three new approximations simultaneously at each integration step using the same back values. The Newton iteration of the FBBDF is formulated, and the implementation is based on predictor and corrector formulas in the PECE mode. For greater efficiency of the block method, the coefficients of the FBBDF are stored at the start of the program. The proposed FBBDF is validated through numerical results on some standard problems found in the literature, and comparisons are made with the existing fuzzy versions of the Modified Simpson and Euler methods in terms of the accuracy of the approximate solutions. The numerical results show that the FBBDF method performs better in terms of accuracy than the Euler method when solving FDEs.
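For readers unfamiliar with the PECE pattern referred to above, the sketch below shows the generic predict-evaluate-correct-evaluate cycle for a scalar crisp ODE, using explicit Euler as predictor and one backward-Euler-style correction. This is an illustration of the mode only, not the three-point FBBDF formulas; in the fuzzy setting the same structure is applied to the lower and upper bounds of each alpha-level.

```python
# Generic PECE (predict-evaluate-correct-evaluate) step for y' = f(t, y).
def pece_step(f, t, y, h):
    yp = y + h * f(t, y)        # Predict (explicit Euler)
    fp = f(t + h, yp)           # Evaluate at the predicted value
    yc = y + h * fp             # Correct (one backward-Euler-style iteration)
    fc = f(t + h, yc)           # Evaluate (reusable at the next step)
    return yc, fc
```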

Keywords: block, backward differentiation formulas, first order, fuzzy differential equations

Procedia PDF Downloads 319
4260 A Novel Method for Face Detection

Authors: H. Abas Nejad, A. R. Teymoori

Abstract:

Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised learning-based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across faces with respect to race, pose, lighting, facial biases, etc., given the limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, and thereby excluding those frames from emotion classification, would save computational power. In this work, we propose a lightweight neutral vs. emotion classification engine, which acts as a preprocessor to the traditional supervised emotion classification approaches. It dynamically learns neutral appearance at Key Emotion (KE) points using a textural statistical model, constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motions by accounting for affine distortions based on a textural statistical model. Robustness to dynamic shifts of KE points is achieved by evaluating the similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of the specific facial action units acting on the respective KE point. The proposed method, as a result, improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
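The textural comparison step around a key point can be sketched with uniform LBP histograms (from the keywords' Local Binary Pattern Histogram) compared against a neutral reference. The patch handling, chi-square distance, and threshold below are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: compare a patch around a KE point against a per-user neutral reference
# using uniform LBP histograms and a chi-square distance.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_patch, P=8, R=1):
    lbp = local_binary_pattern(gray_patch, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def chi_square_distance(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def is_neutral(gray_patch, neutral_reference_hist, threshold=0.25):
    # threshold is a hypothetical value; in practice it would be learned per user
    return chi_square_distance(lbp_histogram(gray_patch), neutral_reference_hist) < threshold
```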

Keywords: neutral vs. emotion classification, Constrained Local Model, procrustes analysis, Local Binary Pattern Histogram, statistical model

Procedia PDF Downloads 338
4259 A Framework for Teaching the Intracranial Pressure Measurement through an Experimental Model

Authors: Christina Klippel, Lucia Pezzi, Silvio Neto, Rafael Bertani, Priscila Mendes, Flavio Machado, Aline Szeliga, Maria Cosendey, Adilson Mariz, Raquel Santos, Lys Bendett, Pedro Velasco, Thalita Rolleigh, Bruna Bellote, Daria Coelho, Bruna Martins, Julia Almeida, Juliana Cerqueira

Abstract:

This project presents a framework for teaching intracranial pressure monitoring (ICP) concepts using a low-cost experimental model in a neurointensive care education program. Data concerning ICP monitoring contribute to the patient's clinical assessment, may dictate the course of action of a health team (nursing, medical staff), and influence decisions to determine the appropriate intervention. This study aims to present a safe method for teaching ICP monitoring to medical students in a Simulation Center. Methodology: Medical school teachers, along with students from the 4th year, built an experimental model for teaching ICP measurement. The model consists of a mannequin's head with a plastic bag inside simulating the cerebral ventricle and an inserted ventricular catheter connected to the ICP monitoring system. The bag simulating the ventricle can also be exchanged for others containing simulated bloody or infected cerebrospinal fluid. On the mannequin's ear, there is a blue point indicating the right place to set the "zero point" for accurate pressure reading. The educational program includes four steps: 1st - Students receive a script on ICP measurement for reading before training; 2nd - Students watch a video about the subject, created in the Simulation Center, demonstrating each step of ICP monitoring and the proper care, such as correct positioning of the patient, the anatomical structures used to establish the zero point for ICP measurement, and the safe range of ICP; 3rd - Students train the procedure on the model. Teachers help students during training; 4th - Student assessment based on a checklist form, with feedback and correction of wrong actions. Results: Students expressed interest in learning ICP monitoring. Tests concerning the hit rate are still being performed. The final ICP results and video will be shown at the event. Conclusion: The study of intracranial pressure measurement based on an experimental model constitutes an effective and controlled method of learning and research, well suited to teaching neurointensive care practices. Assessment based on a checklist form helps teachers keep track of student learning progress. This project offers medical students a safe method to develop intensive neurological monitoring skills for the clinical assessment of patients with neurological disorders.

Keywords: neurology, intracranial pressure, medical education, simulation

Procedia PDF Downloads 172
4258 Multi-Temporal Mapping of Built-up Areas Using Daytime and Nighttime Satellite Images Based on Google Earth Engine Platform

Authors: S. Hutasavi, D. Chen

Abstract:

The built-up area is a significant proxy to measure regional economic growth and reflects the Gross Provincial Product (GPP). However, an up-to-date and reliable database of built-up areas is not always available, especially in developing countries. Cloud-based geospatial analysis platforms such as Google Earth Engine (GEE) provide the accessibility and computational power for those countries to generate built-up data. Therefore, this study aims to extract the built-up areas in the Eastern Economic Corridor (EEC), Thailand, using daytime and nighttime satellite imagery on the GEE platform. Normalized indices were generated from the Landsat 8 surface reflectance dataset, including the Normalized Difference Built-up Index (NDBI), Built-up Index (BUI), and Modified Built-up Index (MBUI). These indices were applied to identify built-up areas in the EEC. The result shows that MBUI performs better than BUI and NDBI, with the highest accuracy of 0.85 and Kappa of 0.82. Moreover, the overall accuracy of classification improved from 79% to 90%, and the error in total built-up area decreased from 29% to 0.7%, after adding nighttime light data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB). The results suggest that MBUI with nighttime light imagery is appropriate for built-up area extraction and can be utilized for further study of the socioeconomic impacts of regional development policy over the EEC region.
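For reference, the standard NDBI is a normalized difference of the SWIR1 and NIR bands; a minimal band-math sketch follows. The Landsat 8 band assignments in the comments and the threshold in the usage note are stated as assumptions; the MBUI formulation and the adaptive thresholding used in the study are not reproduced here.

```python
import numpy as np

def ndbi(swir1, nir):
    """Normalized Difference Built-up Index: (SWIR1 - NIR) / (SWIR1 + NIR).
    For Landsat 8 OLI surface reflectance, SWIR1 is band 6 and NIR is band 5."""
    swir1 = swir1.astype(float)
    nir = nir.astype(float)
    return (swir1 - nir) / np.clip(swir1 + nir, 1e-6, None)

# Hypothetical built-up mask by thresholding (threshold value is a placeholder;
# the study tunes it, e.g. adaptively):
# built_up = ndbi(swir1_band, nir_band) > 0.1
```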

Keywords: built-up area extraction, google earth engine, adaptive thresholding method, rapid mapping

Procedia PDF Downloads 126
4257 Towards a Robust Patch Based Multi-View Stereo Technique for Textureless and Occluded 3D Reconstruction

Authors: Ben Haines, Li Bai

Abstract:

Patch-based reconstruction methods have been, and still are, among the top-performing approaches to 3D reconstruction to date. Their local approach to refining the position and orientation of a patch, free of global minimisation and independent of surface smoothness, makes patch-based methods extremely powerful in recovering fine-grained detail of an object's surface. However, patch-based approaches still fail to faithfully reconstruct textureless or highly occluded surface regions; thus, though performing well under lab conditions, they deteriorate in industrial or real-world situations. They are also computationally expensive. Current patch-based methods generate point clouds with holes in textureless or occluded regions, which require expensive energy minimisation techniques to fill and interpolate into a high-fidelity reconstruction. Such shortcomings hinder the adoption of the methods for industrial applications where object surfaces are often highly textureless and the speed of reconstruction is an important factor. This paper presents ongoing work towards a multi-resolution approach to address these problems, utilizing particle swarm optimisation to reconstruct high-fidelity geometry and increasing robustness to textureless features through an adapted approach to normalised cross correlation. The work also aims to speed up the reconstruction using advances in GPU technologies and remove the need for costly initialization and expansion. Through the combination of these enhancements, it is the intention of this work to create denser patch clouds, even in textureless regions, within a reasonable time. Initial results show the potential of such an approach to construct denser point clouds with an accuracy comparable to that of the current top-performing algorithms.
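The photo-consistency score that the adapted approach builds on is the classical normalised cross correlation between two image patches; a minimal sketch is shown below. The adapted variant described in the paper is not reproduced.

```python
# Classical normalised cross correlation between two sampled patches.
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Returns a score in [-1, 1]; 1.0 means identical up to gain and offset."""
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + eps
    return float(np.dot(a, b) / denom)
```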

Keywords: 3D reconstruction, multiview stereo, particle swarm optimisation, photo consistency

Procedia PDF Downloads 203
4256 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark

Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos

Abstract:

This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: The algorithm's performance depends on multiple factors, and knowing beforehand the effects of each factor becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: To explain the functional relationship between factors and performance and to develop linear predictor models for time and cost. Methods: The solid statistical principles of Design of Experiments (DoE), particularly the randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measurement of each factor's impact. Results: Our findings include prediction models and show some non-intuitive results about the small influence of cores, the neutrality of memory and disks on total execution time, and the non-significant impact of input data scale on costs, although it notably impacts execution time.
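A minimal sketch of fitting a first-order linear predictor for execution time from coded (-1/+1) factor levels with ordinary least squares is shown below; the design matrix layout is generic and the paper's actual model (which may include replications and interaction terms) is not reproduced.

```python
# Fit a first-order linear model  response ~ coded factors  by least squares.
import numpy as np

def fit_linear_predictor(levels, response):
    """levels: (n_runs, n_factors) array of -1/+1 codes; response: (n_runs,) times or costs."""
    X = np.column_stack([np.ones(len(response)), levels])   # intercept + main effects
    coef, *_ = np.linalg.lstsq(X, response, rcond=None)
    return coef  # coef[0] = grand mean; coef[1:] = half-effects of each coded factor
```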

Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark

Procedia PDF Downloads 120
4255 Lie Symmetry of a Nonlinear System Characterizing Endemic Malaria

Authors: Maba Boniface Matadi

Abstract:

This paper analyses a model of endemic malaria from the point of view of the group-theoretic approach. The study identified new independent variables that lead to the transformation of the nonlinear model. Furthermore, the corresponding determining equations were constructed, and new symmetries were found. As a result, the findings of the study demonstrate the integrability of the model and present an invariant solution for the malaria model.
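For orientation, the generic form of a Lie point symmetry generator and its invariance condition for a system of first-order ODEs \(\dot{x}_i = f_i(t, x)\) can be written as follows (standard notation, not reproduced from the paper):

```latex
X = \xi(t,x)\,\frac{\partial}{\partial t} + \sum_i \eta_i(t,x)\,\frac{\partial}{\partial x_i},
\qquad
X^{(1)}\bigl(\dot{x}_i - f_i(t,x)\bigr)\Big|_{\dot{x} = f} = 0,
```

where \(X^{(1)}\) denotes the first prolongation of \(X\); expanding this condition and separating by powers of the derivatives yields the determining equations for \(\xi\) and \(\eta_i\).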

Keywords: group theory, lie symmetry, invariant solutions, malaria

Procedia PDF Downloads 109
4254 Achieving Appropriate Use of Antibiotics through Pharmacists’ Intervention at Practice Point: An Indian Study Report

Authors: Parimalakrishnan Sundararjan, Madheswaran Murugan, Dhanya Dharman, Yatindra Kumar, Sudhir Singh Gangwar, Guru Prasad Mohanta

Abstract:

Antibiotic resistance (AR) is a global issue. India started to redress the issue of antibiotic resistance late, and it plans to conduct active surveillance of microbial resistance and to promote appropriate use of antibiotics. The present study attempted to achieve appropriate use of antibiotics through pharmacists' intervention at the practice point. In a quasi-experimental prospective cohort study, cases with bacteremia from four hospitals were identified during 2015 and 2016 for intervention. The pharmacist-centered intervention consisted of active screening of each prescription and comparison of the selected antibiotics with the susceptibility of the bacteria. Wherever irrationality was noticed, it was brought to the notice of the treating physician for making changes. There were two groups: an intervention group and a control group without intervention. The active screening and intervention in 915 patients reduced the therapeutic regimen time in patients with bacteremia. The intervention group showed a decrease in the duration of hospital stay from 5.1 to 3.4 days. Further, multivariate modeling against the control group showed that patients in the intervention group had a significant decrease in both the duration of hospital stay and infection-related mortality. Unlike in developed countries, pharmacists are not active partners in patient care in India. This unique attempt at pharmacists' intervention was planned in consultation with hospital authorities and proved beneficial in terms of reducing the duration of treatment, hospital stay, and infection-related mortality. This establishes the need for collaborative decision making among the health workforce in patient care, at least for promoting rational use of antibiotics, in an attempt to combat resistance.

Keywords: antibiotics resistance, intervention, bacteremia, multivariate modeling

Procedia PDF Downloads 182
4253 Aerosol Characterization in a Coastal Urban Area in Rimini, Italy

Authors: Dimitri Bacco, Arianna Trentini, Fabiana Scotto, Flavio Rovere, Daniele Foscoli, Cinzia Para, Paolo Veronesi, Silvia Sandrini, Claudia Zigola, Michela Comandini, Marilena Montalti, Marco Zamagni, Vanes Poluzzi

Abstract:

The Po Valley, in the north of Italy, is one of the most polluted areas in Europe. The air quality of the area is linked not only to anthropic activities but also to its geographical characteristics and stagnant weather conditions with frequent inversions, especially in the cold season. Even the coastal areas present high values of particulate matter (PM10 and PM2.5) because the area enclosed between the Adriatic Sea and the Apennines does not favor the dispersion of air pollutants. The aim of the present work was to identify the main sources of particulate matter in Rimini, a tourist city in northern Italy. Two sampling campaigns were carried out in 2018, one in winter (60 days) and one in summer (30 days), at 4 sites: an urban background, a city hotspot, a suburban background, and a rural background. The samples were characterized in terms of the ionic composition of the particulate and of the main anhydrosugars, in particular levoglucosan, a marker of biomass burning, because one of the most important anthropogenic sources in the area, both in winter and, surprisingly, even in summer, is biomass burning. Furthermore, three sampling points were chosen in order to maximize the contribution of a specific biomass source: a point in a residential area (domestic cooking and domestic heating), a point in the agricultural area (weed fires), and a point in the tourist area (restaurant cooking). At these sites, the analyses were enriched with the quantification of the carbonaceous component (organic and elemental carbon) and with measurements of the particle number concentration and aerosol size distribution (6–600 nm). The results showed a very significant impact of biomass combustion due to domestic heating in the winter period, even though many intense peaks attributable to episodic wood fires were also found. In the summer season, however, an appreciable signal linked to biomass combustion was measured, although much less intense than in winter, attributable to domestic cooking activities. A further interesting result was the total absence of a sea salt contribution in the finer particulate (PM2.5), while in PM10 the contribution becomes appreciable only under particular wind conditions (strong wind from the north or north-east). Finally, it is interesting to note that in a small town like Rimini, the traffic source in summer seems to be even more relevant than that measured in a much larger city (Bologna), due to tourism.

Keywords: aerosol, biomass burning, seacoast, urban area

Procedia PDF Downloads 128
4252 The Biomechanical Analysis of Pelvic Osteotomies Applied for Developmental Dysplasia of the Hip Treatment in Pediatric Patients

Authors: Suvorov Vasyl, Filipchuk Viktor

Abstract:

Developmental Dysplasia of the Hip (DDH) is a frequent pathology in pediatric orthopedic practice. Neglected or residual cases of DDH in walking patients are usually treated using pelvic osteotomies. Plastic changes take place at hinge points due to acetabular reorientation during surgery. The classically described hinge points and the traditional division of pelvic osteotomies into reshaping and reorientation types are currently debated. The purpose of this article was to evaluate biomechanical changes during the most commonly used pelvic osteotomies (Salter, Dega, Pemberton) for DDH treatment in pediatric patients. Methods: virtual pelvic models of 2- and 6-year-old patients were created, material properties were assigned, pelvic osteotomies were simulated, and biomechanical changes were evaluated using finite element analysis (FEA). Results: it was revealed that the patient's age has an impact on pelvic bone and cartilage density (in younger patients, the pelvic elements are more pliable, p<0.05). Stress distribution after each of the abovementioned pelvic osteotomies was assessed in the 2- and 6-year-old patients' pelvic models; hinge points were evaluated. The new term "restriction point" was introduced, which means a place where restriction of acetabular deformity correction occurs. The attachment points of the pelvic ligaments were mainly these restriction points. Conclusions: it was found that there are no purely reshaping or purely reorientation pelvic osteotomies, as previously believed; the pelvic ring acts as a unit in carrying the applied load. Biomechanical overload of the triradiate cartilage was revealed during Salter osteotomy in the 2-year-old patient and during Pemberton osteotomy in the 2- and 6-year-old patients; overload of the posterior cortical layer in the greater sciatic notch was revealed during Dega osteotomy in the 2-year-old patient. Level of Evidence – Level IV, prognostic.

Keywords: developmental dysplasia of the hip, pelvic osteotomy, finite element analysis, hinge point, biomechanics

Procedia PDF Downloads 98
4251 Three-Dimensional Measurement and Analysis of Facial Nerve Recess

Authors: Kang Shuo-Shuo, Li Jian-Nan, Yang Shiming

Abstract:

Purpose: The three-dimensional anatomical structure of the facial nerve recess and its relationships were measured on high-resolution temporal bone CT to provide an imaging reference for cochlear implantation. Materials and Methods: By analyzing high-resolution temporal bone CT of 160 cases (320 ears), the following parameters were measured at the axial level of the round window niche: 1. the distance between the facial nerve and the chorda tympani nerve, d1; 2. the distance between the facial nerve and the round window niche, d2; 3. the relative angle between the facial nerve and the round window niche, a; 4. the distance between the midpoint of the facial recess and the round window niche, d3; 5. the relative angle between the midpoint of the facial recess and the round window niche, b. Factors that might influence the anatomy of the facial recess were recorded, including the patient's sex, age, and anatomical variations (e.g., vestibular aqueduct dilation, mastoid pneumatization type, anteriorly positioned sigmoid sinus, jugular bulb elevation, etc.), and the correlation between these factors and the measured facial recess parameters was analyzed. Results: The mean facial nerve–chorda tympani distance d1 was 3.92 ± 0.26 mm, the mean facial nerve–niche distance d2 was 5.95 ± 0.62 mm, the mean facial nerve–niche angle a was 94.61 ± 9.04°, the mean recess–niche distance d3 was 6.46 ± 0.63 mm, and the mean recess–niche angle b was 113.47 ± 7.83°. Sex, age, and an anteriorly positioned sigmoid sinus were the three factors affecting the width of the facial recess d1, the angle of the facial nerve relative to the round window niche a, and the angle of the facial recess relative to the round window niche b. Conclusion: High-resolution temporal bone CT before cochlear implantation can show the important anatomical relationships of the facial nerve recess, and the measurements have clinical reference value for cochlear implantation surgery.

Keywords: cochlear implantation, recess of facial nerve, temporal bone CT, three-dimensional measurement

Procedia PDF Downloads 16
4250 High-Resolution Spatiotemporal Retrievals of Aerosol Optical Depth from Geostationary Satellite Using SARA Algorithm

Authors: Muhammad Bilal, Zhongfeng Qiu

Abstract:

Aerosols, suspended particles in the atmosphere, play an important role in the Earth's energy budget, climate change, degradation of atmospheric visibility, urban air quality, and human health. To fully understand aerosol effects, retrieval of aerosol optical properties such as aerosol optical depth (AOD) at high spatiotemporal resolution is required. Therefore, in the present study, hourly AOD observations at 500 m resolution were retrieved from the Geostationary Ocean Color Imager (GOCI) using the Simplified Aerosol Retrieval Algorithm (SARA) over the urban area of Beijing for the year 2016. The SARA requires top-of-the-atmosphere (TOA) reflectance, solar and sensor geometry information, and surface reflectance observations to retrieve an accurate AOD. For validation of the GOCI-retrieved AOD, AOD measurements were obtained from the Aerosol Robotic Network (AERONET) version 3 level 2.0 (cloud-screened and quality-assured) data. The errors and uncertainties were reported using the root mean square error (RMSE), relative percent mean error (RPME), and the expected error (EE = ±(0.05 + 0.15·AOD)). Results showed that the high spatiotemporal GOCI AOD observations were well correlated with the AERONET AOD measurements, with a correlation coefficient (R) of 0.92, RMSE of 0.07, and RPME of 5%, and 90% of the observations were within the EE. The results suggest that the SARA is robust and has the ability to retrieve high-resolution spatiotemporal AOD observations over the urban area using the geostationary satellite.
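The validation metrics quoted above are straightforward to compute from matched satellite and AERONET pairs; a minimal sketch follows. The exact RPME definition used in the study is assumed to be the mean absolute relative error in percent, which is an assumption for illustration.

```python
# Sketch: RMSE, relative percent mean error, correlation, and the fraction of
# retrievals within the expected error envelope EE = +/-(0.05 + 0.15 * AOD_AERONET).
import numpy as np

def validate_aod(aod_satellite, aod_aeronet):
    diff = aod_satellite - aod_aeronet
    rmse = np.sqrt(np.mean(diff ** 2))
    rpme = 100.0 * np.mean(np.abs(diff) / aod_aeronet)        # assumed definition
    ee = 0.05 + 0.15 * aod_aeronet
    within_ee = 100.0 * np.mean(np.abs(diff) <= ee)           # % within envelope
    r = np.corrcoef(aod_satellite, aod_aeronet)[0, 1]
    return {"R": r, "RMSE": rmse, "RPME_%": rpme, "within_EE_%": within_ee}
```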

Keywords: AERONET, AOD, SARA, GOCI, Beijing

Procedia PDF Downloads 171
4249 Social Responsibility and Environmental Issues Addressed by Businesses in Romania

Authors: Daniela Gradinaru, Iuliana Georgescu, Loredana Hutanu (Toma), Mihai-Bogdan Afrasinei

Abstract:

This article aims to analyze the situation of Romanian companies from an environmental point of view. Environmental issues are addressed very often nowadays, and they reach and affect every domain, including the economic one. Implementing an environmental management system will not only help companies comply with laws and regulations but, above all, will offer them an important competitive advantage.

Keywords: environmental management system, environmental reporting, environmental expenses, sustainable development

Procedia PDF Downloads 415
4248 Security Design of Root of Trust Based on RISC-V

Authors: Kang Huang, Wanting Zhou, Shiwei Yuan, Lei Li

Abstract:

As information technology develops rapidly, security has become an increasingly critical issue for computer systems. In particular, as cloud computing and the Internet of Things (IoT) continue to gain widespread adoption, computer systems need to address new security threats and attacks. The Root of Trust (RoT) is the foundation for providing basic trusted computing, which is used to verify the security and trustworthiness of other components. Designing a reliable Root of Trust and guaranteeing its own security are essential for improving the overall security and credibility of computer systems. In this paper, we discuss the implementation of self-security technology based on the RISC-V Root of Trust at the hardware level. To effectively safeguard the security of the Root of Trust, security protection techniques for the Root of Trust are studied. First, a lightweight and secure boot framework is proposed as a security mechanism. Second, two kinds of memory protection mechanisms are built to defend against memory attacks. Moreover, the hardware implementation of the proposed method has also been investigated. A series of experiments and tests has been carried out to verify the effectiveness of the proposed method. The experimental results demonstrated that the proposed approach is effective in verifying the integrity of the Root of Trust's own boot ROM, user instructions, and data, ensuring authenticity and enabling the secure boot of the Root of Trust's own system. Additionally, our approach provides memory protection against certain types of memory attacks, such as cache leaks and tampering, and ensures the security of root-of-trust sensitive information, including keys.
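The integrity-verification idea behind such a secure boot flow can be sketched at a high level as follows. This is a Python illustration of the concept only; the design described in the paper is implemented in RISC-V hardware and boot ROM, and a production flow would typically verify a signature over the digest rather than compare a bare hash.

```python
# High-level sketch: hash the next-stage image and compare against a reference
# digest held in immutable storage before transferring control to it.
import hashlib
import hmac

def verify_and_boot(image: bytes, reference_digest: bytes) -> bool:
    measured = hashlib.sha256(image).digest()
    if not hmac.compare_digest(measured, reference_digest):   # constant-time compare
        return False              # refuse to boot a tampered image
    # In a real root of trust, control would now jump to the verified image.
    return True
```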

Keywords: root of trust, secure boot, memory protection, hardware security

Procedia PDF Downloads 216
4247 Biodiesel Production from Yellow Oleander Seed Oil

Authors: S. Rashmi, Devashish Das, N. Spoorthi, H. V. Manasa

Abstract:

Energy is essential and plays an important role in the overall development of a nation. The global economy literally runs on energy. The use of fossil fuels as an energy source is now widely accepted as unsustainable due to depleting resources and the accumulation of greenhouse gases in the environment; renewable and carbon-neutral biodiesel is therefore necessary for environmental and economic sustainability. Unfortunately, biodiesel produced from oil crops, waste cooking oil, and animal fats is not able to replace fossil fuel. Fossil fuels remain the dominant source of primary energy, accounting for 84% of the overall increase in demand. Today, biodiesel has come to mean a very specific chemical modification of natural oils. Objectives: To produce biodiesel from yellow oleander seed oil and to test the yield of biodiesel using different catalysts (KOH and NaOH). Methodology: Oil is extracted from dried yellow oleander seeds using a Soxhlet extractor and an oil expeller (bulk). The free fatty acid (FFA) content of the oil is checked, and depending on the FFA value, either a two-step or a single-step process is followed to produce biodiesel. The two-step process includes esterification and transesterification; the single-step process includes only transesterification. The properties of the biodiesel are checked, and an engine test is performed on the biodiesel produced. Result: The biodiesel quality parameters were yield (85% and 90%), flash point (171 °C and 176 °C), fire point (195 °C and 198 °C), and viscosity (4.9991 and 5.21 mm²/s) for the biodiesel produced from the seed oil of Thevetia peruviana using KOH and NaOH, respectively. Thus, the seed oil of Thevetia peruviana is a viable feedstock for good-quality fuel. The outcomes of our project are a substitute for conventional fuel, a reduced petro-diesel requirement, and improved performance in terms of emissions. Future prospects: Optimization of biodiesel production using the response surface method.

Keywords: yellow oleander seeds, biodiesel, quality parameters, renewable sources

Procedia PDF Downloads 446
4246 Fabrication of a Potential Point-of-Care Device for Hemoglobin A1c: A Lateral Flow Immunosensor

Authors: Shu Hwang Ang, Choo Yee Yu, Geik Yong Ang, Yean Yean Chan, Yatimah Binti Alias, and Sook Mei Khor

Abstract:

With the high prevalence of Type 2 diabetes mellitus across the world, the morbidities and mortalities associated with Type 2 diabetes have a significant impact on the productivity of a nation. With routine scheduled clinical visits to manage Type 2 diabetes, diabetic patients with hectic lifestyles can have low clinical compliance. This often decreases the effectiveness of diabetes management personalized for each patient. Here, we report a point-of-care (POC) device that detects glycated hemoglobin (HbA1c, a biomarker for long-term Type 2 diabetes management). In fact, the established POC devices certified for use in clinical settings are not only expensive ($8 to $10 per test), they also require skillful practitioners to perform sampling and interpretation. As a paper-based biosensor, the developed HbA1c biosensor utilizes the lateral flow principle to offer a cost-effective (approximately $2 per test) and end-user-friendly alternative for household testing. Requiring as little as 2 µL of finger-pricked blood, the test can be performed at home with just simple dilution and washing steps. With visual interpretation of the number of test lines shown on the developed biosensor, it can be read as easily as a urine pregnancy test, aided by a provided intensity scale. In summary, the developed HbA1c immunosensor has been shown to have high selectivity towards HbA1c and is stable, with reasonably good performance in clinical testing. Therefore, our developed HbA1c immunosensor has high potential to be an effective diabetes management tool to increase patient compliance and thus contain the progression of diabetes.

Keywords: blood, glycated hemoglobin (HbA1c), lateral flow, type 2 diabetes mellitus

Procedia PDF Downloads 528
4245 Uterine Cervical Cancer: Early Treatment Assessment with T2- and Diffusion-Weighted MRI

Authors: Susanne Fridsten, Kristina Hellman, Anders Sundin, Lennart Blomqvist

Abstract:

Background: Patients diagnosed with locally advanced cervical carcinoma are treated with definitive concomitant chemoradiotherapy. Treatment failure occurs in 30-50% of patients and carries a very poor prognosis. The treatment is standardized, with a risk of both over- and undertreatment. Consequently, there is a great need for biomarkers able to predict therapy outcome and allow for individualized treatment. Aim: To explore the role of T2- and diffusion-weighted magnetic resonance imaging (MRI) for early prediction of therapy outcome and to identify the optimal time point for assessment. Methods: A pilot study including 15 patients with cervical carcinoma stage IIB-IIIB (FIGO 2009) undergoing definitive chemoradiotherapy. All patients underwent MRI four times: at baseline and at 3, 5, and 12 weeks after treatment started. Tumor size, size change (∆size), visibility on diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC), and change of ADC (∆ADC) at the different time points were recorded. Results: 7/15 patients relapsed during the study period and are referred to as "poor prognosis" (PP); the remaining eight patients are referred to as "good prognosis" (GP). The tumor size was larger at all time points for PP than for GP. The ∆size between any of the four time points was the same for PP and GP patients. The sensitivity and specificity for predicting prognostic group from remaining tumor on DWI were highest at 5 weeks, at 83% (5/6) and 63% (5/8), respectively. The combination of tumor size at baseline and remaining tumor on DWI at 5 weeks reached an area under the curve (AUC) of 0.83 in ROC analysis. After 12 weeks, no remaining tumor was seen on DWI among patients with GP, as opposed to 2/7 PP patients. Adding ADC to the tumor size measurements did not improve the predictive value at any time point. Conclusion: A large tumor at baseline MRI combined with a remaining tumor on DWI at 5 weeks predicted a poor prognosis.
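
A minimal sketch of how the reported 5-week performance figures can be reproduced from the counts given above (5/6 and 5/8); the per-patient scores fed to the ROC computation are placeholders for illustration and are not the study data.

```python
# Minimal sketch (not the study's analysis code): sensitivity, specificity,
# and an ROC AUC for a binary predictor of prognostic group.
# The counts mirror those reported in the abstract (5/6 and 5/8 at 5 weeks);
# the per-patient scores passed to roc_auc_score are illustrative placeholders.

from sklearn.metrics import roc_auc_score

def sensitivity(true_positives: int, false_negatives: int) -> float:
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives: int, false_positives: int) -> float:
    return true_negatives / (true_negatives + false_positives)

# Remaining tumor on DWI at 5 weeks as a predictor of poor prognosis (PP).
print(round(sensitivity(5, 1), 2))   # 0.83 -> 83% (5/6)
print(round(specificity(5, 3), 2))   # 0.63 -> 63% (5/8)

# Combining baseline tumor size with the 5-week DWI finding: a hypothetical
# combined score per patient (label 1 = poor prognosis) could be evaluated
# with a standard ROC AUC, as below (the value printed here is for the
# placeholder scores, not the study's reported AUC of 0.83).
labels = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
combined_score = [0.9, 0.8, 0.85, 0.7, 0.75, 0.4,
                  0.6, 0.3, 0.2, 0.35, 0.25, 0.5, 0.1, 0.45]
print(round(roc_auc_score(labels, combined_score), 2))
```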

Keywords: chemoradiotherapy, diffusion-weighted imaging, magnetic resonance imaging, uterine cervical carcinoma

Procedia PDF Downloads 143
4244 Vulnerability Assessment of Reinforced Concrete Frames Based on Inelastic Spectral Displacement

Authors: Chao Xu

Abstract:

Selecting ground motion intensity measures reasonably is one of the most important issues affecting both the selection of input ground motions and the reliability of vulnerability analysis results. In this paper, inelastic spectral displacement is used as an alternative intensity measure to characterize ground motion damage potential. The inelastic spectral displacement is calculated based on modal pushover analysis, and an incremental dynamic analysis based on inelastic spectral displacement is developed. Probabilistic seismic demand analyses of a six-story and an eleven-story RC frame are carried out through cloud analysis and advanced incremental dynamic analysis. The sufficiency and efficiency of inelastic spectral displacement are investigated by means of regression and residual analysis and compared with elastic spectral displacement. Vulnerability curves are developed based on inelastic spectral displacement. The study shows that inelastic spectral displacement reflects the impact on the inelastic structural response of frequency components with periods longer than the fundamental period. Inelastic spectral displacement can also capture the damage potential of ground motions for structures whose fundamental period lengthens because of structural softening. Compared with elastic spectral displacement, inelastic spectral displacement is a more sufficient and efficient intensity measure, reducing both the uncertainty of the vulnerability analysis and the impact of input ground motion selection on its results.
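
The efficiency and sufficiency checks mentioned above are commonly performed with a cloud-analysis regression of log demand on log intensity measure followed by a residual analysis; the sketch below illustrates that general idea on synthetic data and is not the author's code.

```python
# Illustrative cloud-analysis sketch (synthetic data, not the study's code):
# efficiency  ~ dispersion of residuals of the ln(demand) vs ln(IM) regression,
# sufficiency ~ whether those residuals still correlate with, e.g., magnitude.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 60
ln_im = rng.normal(loc=np.log(0.2), scale=0.5, size=n)       # ln of intensity measure (e.g. inelastic Sd)
magnitude = rng.uniform(5.5, 7.5, size=n)                     # record magnitudes
ln_demand = 1.0 + 0.9 * ln_im + rng.normal(0.0, 0.3, size=n)  # ln of drift demand (synthetic)

# Step 1: power-law demand model ln(EDP) = a + b * ln(IM)
slope, intercept, r_value, p_value, std_err = stats.linregress(ln_im, ln_demand)
residuals = ln_demand - (intercept + slope * ln_im)

# Efficiency: smaller residual dispersion -> more efficient intensity measure.
dispersion = residuals.std(ddof=2)
print(f"b = {slope:.2f}, dispersion (beta) = {dispersion:.3f}")

# Sufficiency: regress residuals on magnitude; a high p-value suggests the
# residuals do not depend on magnitude, i.e. the IM is sufficient w.r.t. M.
_, _, _, p_mag, _ = stats.linregress(magnitude, residuals)
print(f"p-value of residuals vs magnitude: {p_mag:.2f}")
```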

Keywords: vulnerability, probabilistic seismic demand analysis, ground motion intensity measure, sufficiency, efficiency, inelastic time history analysis

Procedia PDF Downloads 354
4243 Improving Cheon-Kim-Kim-Song (CKKS) Performance with Vector Computation and GPU Acceleration

Authors: Smaran Manchala

Abstract:

Homomorphic Encryption (HE) enables computations on encrypted data without requiring decryption, mitigating data vulnerability during processing. Usable Fully Homomorphic Encryption (FHE) could revolutionize secure data operations across cloud computing, AI training, and healthcare, providing both privacy and functionality; however, the computational inefficiency of schemes like Cheon-Kim-Kim-Song (CKKS) hinders their widespread practical use. This study focuses on optimizing CKKS for faster matrix operations through vector computation parallelization and GPU acceleration. The variable effects of vector parallelization on GPUs were explored, recognizing that while parallelization typically accelerates operations, it can introduce overhead that results in slower runtimes, especially for smaller, less computationally demanding operations. To assess performance, two neural network models, MLPN and CNN, were tested on the MNIST dataset using both ARM and x86-64 architectures, with the CNN chosen for its higher computational demands. Each test was repeated 1,000 times, and outliers were removed via Z-score analysis to measure the effect of vector parallelization on CKKS performance. Model accuracy was also evaluated under CKKS encryption to ensure the optimizations did not compromise results. According to the trial runs, applying vector parallelization yielded a 2.63x overall efficiency increase, with a 1.83x performance advantage for the x86-64 architecture over ARM. Overall, these results suggest that vector parallelization, in tandem with GPU acceleration, significantly improves the efficiency of CKKS even after accounting for parallelization overhead, with potential impact on future zero-trust operations.
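
A minimal sketch of the benchmarking post-processing described above (repeated timing runs, Z-score outlier removal, and speedup computation), using synthetic timings and an assumed |z| < 3 cutoff; it is not the study's code.

```python
# Illustrative benchmarking post-processing (not the study's code): remove
# outliers from repeated timing runs via Z-scores, then compute the speedup
# of an optimized run over a baseline run. The |z| < 3 cutoff is assumed.

import numpy as np

def remove_outliers(times_ms: np.ndarray, z_cutoff: float = 3.0) -> np.ndarray:
    """Keep only timings whose Z-score magnitude is below the cutoff."""
    z = (times_ms - times_ms.mean()) / times_ms.std(ddof=1)
    return times_ms[np.abs(z) < z_cutoff]

def speedup(baseline_ms: np.ndarray, optimized_ms: np.ndarray) -> float:
    """Mean baseline time divided by mean optimized time."""
    return baseline_ms.mean() / optimized_ms.mean()

rng = np.random.default_rng(42)
baseline = rng.normal(263.0, 10.0, size=1000)   # synthetic scalar-CKKS timings (ms)
optimized = rng.normal(100.0, 8.0, size=1000)   # synthetic vectorized/GPU timings (ms)
optimized[::200] += 300.0                        # inject a few outlier runs

clean_base = remove_outliers(baseline)
clean_opt = remove_outliers(optimized)
print(f"speedup = {speedup(clean_base, clean_opt):.2f}x")  # ~2.63x for these synthetic values
```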

Keywords: CKKS scheme, runtime efficiency, fully homomorphic encryption (FHE), GPU acceleration, vector parallelization

Procedia PDF Downloads 23