975 Prediction of Changes in Optical Quality by Tissue Redness after Pterygium Surgery
Authors: Mohd Radzi Hilmi, Mohd Zulfaezal Che Azemin, Khairidzan Mohd Kamal, Azrin Esmady Ariffin, Mohd Izzuddin Mohd Tamrin, Norfazrina Abdul Gaffur, Tengku Mohd Tengku Sembok
Abstract:
Purpose: The purpose of this study is to predict optical quality changes after pterygium surgery using tissue redness grading. Methods: Sixty-eight primary pterygium participants were selected from patients who visited an ophthalmology clinic. We developed a semi-automated computer program to measure pterygium fibrovascular redness from digital pterygium images. The software outputs a continuous grading scale from 1 (minimum redness) to 3 (maximum redness). The region of interest (ROI) was selected manually using the software. Reliability was determined by repeat grading of all 68 images, and the grading's association with contrast sensitivity function (CSF) and visual acuity (VA) was examined. Results: The mean and standard deviation of redness of the pterygium fibrovascular images was 1.88 ± 0.55. Intra- and inter-grader reliability estimates were high, with intraclass correlations ranging from 0.97 to 0.98. The new grading was positively associated with CSF (p<0.01) and VA (p<0.01). The redness grading was able to predict 25% and 23% of the variance in the CSF and the VA, respectively. Conclusions: The new grading of pterygium fibrovascular redness can be reliably measured from digital images and shows a good correlation with CSF and VA. The redness grading can be used in addition to the existing pterygium grading.
Keywords: contrast sensitivity, pterygium, redness, visual acuity
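As a rough illustration of an ROI-based continuous redness grading of the kind described above, here is a minimal sketch. The channel weighting and the calibration bounds `lo`/`hi` are illustrative assumptions, not the authors' published formula:

```python
import numpy as np

def redness_grade(roi_rgb, lo=0.15, hi=0.60):
    """Map the mean relative redness of an ROI onto a continuous 1-3 scale.

    roi_rgb: HxWx3 float array in [0, 1]; lo/hi are assumed
    calibration bounds, not the values used in the study.
    """
    r, g, b = roi_rgb[..., 0], roi_rgb[..., 1], roi_rgb[..., 2]
    # Relative redness: red channel against total intensity.
    redness = np.mean(r / (r + g + b + 1e-9))
    # Linear rescale of [lo, hi] onto the continuous 1-3 grading.
    return float(np.clip(1 + 2 * (redness - lo) / (hi - lo), 1.0, 3.0))
```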
974 Proposing Sky Exposure Plane Concept for Urban Open Public Spaces in Gulseren Street
Authors: Pooya Lotfabadi
Abstract:
In today's world, sustainability is a critical concern, particularly in the building industry, which is a significant contributor to energy consumption. Buildings must be considered in relation to their urban surroundings, highlighting the importance of collaboration between architecture and urban design. Natural light plays a vital role in enhancing a building's thermal and visual comfort and promoting the well-being of outdoor residents. Therefore, architects and urban designers are responsible for maximizing sunlight exposure in urban settings. Key factors such as building height and orientation are essential for optimizing natural light. Without proper attention, standalone projects can negatively affect their urban environment. Regulations like the Sky Exposure Plane, a virtual sloping plane that determines minimum building heights and spacing, serve as effective tools for guiding urban development. This study aims to define the Sky Exposure Plane in public open spaces, proposing an optimal angle for buildings on Gulseren Street in Famagusta, North Cyprus. Utilizing computer simulations, the research examines the role of sunlight in public streets and offers guidelines to improve natural lighting in urban planning.
Keywords: public open space, sky exposure plane, street natural lighting, sustainable urban design
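The geometry behind a sky exposure plane reduces to a tangent relation between setback and allowed height. A minimal sketch of that rule follows; the 60-degree angle is a hypothetical parameter, not the value proposed for Gulseren Street:

```python
import math

def max_height(setback_m, plane_angle_deg):
    """Maximum building height permitted at a given setback from the
    street line under a sky exposure plane of the given slope."""
    return setback_m * math.tan(math.radians(plane_angle_deg))

# e.g. a 60-degree plane allows roughly 17.3 m of height at a 10 m setback
print(round(max_height(10, 60), 1))
```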
973 Developing the P1-P7 Management and Analysis Software for Thai Child Evaluation (TCE) of Food and Nutrition Status
Authors: S. Damapong, C. Kingkeow, W. Kongnoo, P. Pattapokin, S. Pruenglamphu
Abstract:
Given the double burden of malnutrition among Thai children, we conducted a project to promote holistic age-appropriate nutrition for Thai children. Researchers developed the P1-P7 computer software for managing and analyzing diverse types of collected data. The study objectives were: i) to use the software to manage and analyze the collected data, and ii) to evaluate the children's nutritional status and their caretakers' nutrition practice in order to create regulations for improving nutrition. Data were collected by means of questionnaires, called P1-P7. P1, P2 and P5 were for children and caretakers, and the others were for institutions. The children's nutritional status (height-for-age, weight-for-age, and weight-for-height) was calculated using Thai child z-score references. Institution evaluations covered various standard regulations, including the use of our software. The results showed that the software was used in 44 out of 118 communities (37.3%), 57 out of 240 child development centers and nurseries (23.8%), and 105 out of 152 schools (69.1%). No major problems have been reported with the software, although user efficiency can be increased further through additional training. As a result, the P1-P7 software was used to manage and analyze nutritional status, nutrition behavior, and environmental conditions, in order to conduct the Thai Child Evaluation (TCE). The software was most widely used in schools. Some aspects of the P1-P7 questionnaires could be modified to increase ease of use and efficiency.
Keywords: P1-P7 software, Thai child evaluation, nutritional status, malnutrition
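The z-score evaluation mentioned above follows the standard anthropometric convention: the child's measurement is compared against a reference median and standard deviation for the same age and sex. A minimal sketch, with placeholder reference values rather than the Thai reference tables:

```python
def z_score(measured, ref_median, ref_sd):
    """Anthropometric z-score, e.g. weight-for-age."""
    return (measured - ref_median) / ref_sd

def classify_wfa(z):
    # Assumed cut-offs following the usual +/-2 SD convention.
    if z < -2:
        return "underweight"
    if z > 2:
        return "overweight"
    return "normal"

print(classify_wfa(z_score(11.0, 12.5, 1.3)))  # z = -1.15 -> "normal"
```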
972 Data Mining of Students' Performance Using Artificial Neural Network: Turkish Students as a Case Study
Authors: Samuel Nii Tackie, Oyebade K. Oyedotun, Ebenezer O. Olaniyi, Adnan Khashman
Abstract:
Artificial neural networks have been used in different fields of artificial intelligence, and more specifically in machine learning. Although other machine learning options are feasible in most situations, the ease with which neural networks lend themselves to different problems (pattern recognition, image compression, classification, computer vision, regression, etc.) has earned them a remarkable place in the machine learning field. This research exploits neural networks as a data mining tool for predicting the number of times a student repeats a course, considering attributes relating to the course itself, the teacher, and the particular student. Neural networks were used in this work to map the relationship between attributes of students' course assessment and the number of times a student will possibly repeat a course before passing. It is hoped that the ability to predict students' performance from such complex relationships can help facilitate the fine-tuning of academic systems and policies implemented in learning environments. To validate the power of neural networks in data mining, a database of Turkish students' performance was used; feedforward and radial basis function networks were trained for this task, and the performance of these networks was evaluated in terms of achieved recognition rates and training time.
Keywords: artificial neural network, data mining, classification, students' evaluation
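A minimal sketch of the feedforward-network side of such a study, using scikit-learn rather than the authors' tooling; the feature set and the random data are hypothetical stand-ins for the Turkish performance database:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Hypothetical attributes: prior GPA, attendance rate, teacher rating,
# course difficulty; target: number of course repeats (0, 1 or 2).
rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = rng.integers(0, 3, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0)  # a small feedforward network
net.fit(X_tr, y_tr)
print("recognition rate:", net.score(X_te, y_te))
```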
971 Numerical Study of Natural Convection in a Nanofluid-Filled Vertical Cylinder under an External Magnetic Field
Authors: M. Maache, R. Bessaih
Abstract:
In this study, the effect of the magnetic field direction on free convection heat transfer in a vertical cylinder filled with an Al₂O₃ nanofluid is investigated numerically. The external magnetic field is applied in either the axial or the radial direction on a cylinder having an aspect ratio H/R₀ = 5, bounded by the top and bottom disks at temperatures Tc and Th and by an adiabatic side wall. The continuity, Navier–Stokes and energy equations are non-dimensionalized and then discretized by the finite volume method. A computer program based on the SIMPLER algorithm is developed and compared with numerical results found in the literature. The numerical investigation is carried out for different governing parameters, namely the Hartmann number (Ha = 0, 5, 10, …, 40), nanoparticle volume fraction (ϕ = 0, 0.025, …, 0.1) and Rayleigh number (Ra = 10³, 10⁴ and 10⁵). The behavior of the average Nusselt number, streamlines and temperature contours is illustrated. The results reveal that the average Nusselt number increases with an increase in the Rayleigh number but decreases with an increase in the Hartmann number. Depending on the magnetic field direction and on the values of the Hartmann and Rayleigh numbers, an increase in the solid volume fraction may result in enhancement or deterioration of the heat transfer performance of the nanofluid.
Keywords: natural convection, nanofluid, magnetic field, vertical cylinder
970 The Use of Network Tool for Brain Signal Data Analysis: A Case Study with Blind and Sighted Individuals
Authors: Cleiton Pons Ferreira, Diana Francisca Adamatti
Abstract:
Advancements in computer technology have made it possible to obtain information for research in biology and neuroscience. To make sense of the data from these studies, networks have long been used to represent important biological processes; their use has shifted from purely illustrative and didactic to more analytic, even including interaction analysis and hypothesis formulation. Many studies have involved this application, but not directly for the interpretation of data obtained from brain functions, calling for new development perspectives in neuroinformatics based on existing tools already disseminated in bioinformatics. This study includes an analysis of neurological data through electroencephalogram (EEG) signals, using Cytoscape, an open-source software tool for visualizing complex networks in biological databases. The data were obtained from a comparative case study developed in research at the University of Rio Grande (FURG), using EEG signals from a Brain-Computer Interface (BCI) with 32 electrodes placed on the scalp of a blind and a sighted individual during the execution of an activity that stimulated spatial ability. This study intends to present results that lead to better ways to use and adapt techniques that support the treatment of brain signal data, to elevate understanding and learning in neuroscience.
Keywords: neuroinformatics, bioinformatics, network tools, brain mapping
969 Deep Learning Approach to Trademark Design Code Identification
Authors: Girish J. Showkatramani, Arthi M. Krishna, Sashi Nareddi, Naresh Nula, Aaron Pepe, Glen Brown, Greg Gabel, Chris Doninger
Abstract:
Trademark examination and approval is a complex process that involves analysis and review of the design components of the marks, such as the visual representation, as well as the textual data associated with marks, such as the marks' description. Currently, the process of identifying marks with similar visual representation is done manually at the United States Patent and Trademark Office (USPTO) and takes a considerable amount of time. Moreover, the accuracy of these searches depends heavily on the experts who assign the trademark design codes used to catalog the visual elements of the mark. In this study, we explore several methods to automate trademark design code classification. Based on the recent success of convolutional neural networks in image classification, we have used several different convolutional neural networks, such as Google's Inception v3, Inception-ResNet-v2, and Xception. The study also looks into other techniques to augment the results from CNNs, such as using the Open Source Computer Vision Library (OpenCV) to pre-process the images. This paper reports the results of the various models trained on a year of annotated trademark images.
Keywords: trademark design code, convolutional neural networks, trademark image classification, trademark image search, Inception-ResNet-v2
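A minimal transfer-learning sketch in the spirit of the approach above, reusing Inception v3 features and retraining only a classification head. The class count, the multi-label sigmoid head, and the data pipeline are assumptions, not USPTO specifics:

```python
import tensorflow as tf

N_CODES = 100  # hypothetical number of design-code classes

# Frozen ImageNet-pretrained Inception v3 backbone with a new head.
base = tf.keras.applications.InceptionV3(include_top=False,
                                         weights="imagenet",
                                         pooling="avg")
base.trainable = False
model = tf.keras.Sequential([
    base,
    # Sigmoid head: a mark can carry several design codes at once.
    tf.keras.layers.Dense(N_CODES, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```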
968 Enhancing Precision Agriculture through Object Detection Algorithms: A Study of YOLOv5 and YOLOv8 in Detecting Armillaria spp.
Authors: Christos Chaschatzis, Chrysoula Karaiskou, Pantelis Angelidis, Sotirios K. Goudos, Igor Kotsiuba, Panagiotis Sarigiannidis
Abstract:
Over the past few decades, the rapid growth of the global population has led to the need to increase agricultural production and improve the quality of agricultural goods. There is a growing focus in contemporary society on environmentally friendly solutions, sustainable production, and minimally fertilized products. Precision agriculture has the potential to incorporate a wide range of innovative solutions through the development of machine learning algorithms. YOLOv5 and YOLOv8 are two of the most advanced object detection algorithms capable of accurately recognizing objects in real time. Detecting tree diseases is crucial for improving the food production rate and ensuring sustainability. This research aims to evaluate the efficacy of YOLOv5 and YOLOv8 in detecting the symptoms of Armillaria spp. in sweet cherry trees and determining their health status, with the goal of enhancing the robustness of precision agriculture. Additionally, this study will explore Computer Vision (CV) techniques with machine learning algorithms to improve the efficiency of the detection process.
Keywords: Armillaria spp., machine learning, precision agriculture, smart farming, sweet cherry trees, YOLOv5, YOLOv8
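For readers unfamiliar with the YOLO workflow, a minimal sketch using the ultralytics package follows; the dataset config `armillaria.yaml`, the image file, and the training settings are hypothetical placeholders, not the authors' setup:

```python
from ultralytics import YOLO

# Fine-tune a pretrained YOLOv8 model on an annotated dataset of
# Armillaria symptoms, then run detection on a new orchard image.
model = YOLO("yolov8n.pt")
model.train(data="armillaria.yaml", epochs=100, imgsz=640)
results = model.predict("orchard_image.jpg", conf=0.25)
```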
967 Examining Influence of the Ultrasonic Power and Frequency on Microbubbles Dynamics Using Real-Time Visualization of Synchrotron X-Ray Imaging: Application to Membrane Fouling Control
Authors: Masoume Ehsani, Ning Zhu, Huu Doan, Ali Lohi, Amira Abdelrasoul
Abstract:
Membrane fouling poses severe challenges in membrane-based wastewater treatment applications. Ultrasound (US) has been considered an effective fouling remediation technique in filtration processes. Bubble cavitation in the liquid medium results from the alternating rarefaction and compression cycles during US irradiation at sufficiently high acoustic pressure. Cavitation microbubbles generated under US irradiation can cause eddy currents and turbulent flow within the medium, either by oscillating or by discharging energy into the system through microbubble explosion. The turbulent flow regime and shear forces created close to the membrane surface disturb the cake layer and dislodge the foulants, which in turn improves the cleaning efficiency and filtration performance. Therefore, the number, size, velocity, and oscillation pattern of the microbubbles created in the liquid medium play a crucial role in foulant detachment and permeate flux recovery. The goal of the current study is to gain an in-depth understanding of the influence of US power intensity and frequency on the dynamics and characteristics of the microbubbles generated under US irradiation. In comparison with other imaging techniques, the synchrotron in-line Phase Contrast Imaging technique at the Canadian Light Source (CLS) allows in-situ observation and real-time visualization of microbubble dynamics. At the CLS biomedical imaging and therapy (BMIT) polychromatic beamline, the effective parameters were optimized to enhance the contrast of the gas/liquid interface for accurate qualitative and quantitative analysis of bubble cavitation within the system. With the high photon flux and the high-speed camera, a typically high projection speed was achieved, and each projection of microbubbles in water was captured in 0.5 ms. ImageJ software was used for post-processing the raw images for the detailed quantitative analyses of microbubbles. The imaging was performed at US power intensity levels of 50 W, 60 W, and 100 W, and at US frequency levels of 20 kHz, 28 kHz, and 40 kHz. For a duration of 2 seconds of imaging, the effects of US power and frequency on the average number, size, and fraction of the area occupied by bubbles were analyzed. Microbubble dynamics, in terms of velocity in water, were also investigated. For a US power increase from 50 W to 100 W, the average bubble number increased from 746 to 880 and the average bubble diameter from 36.7 µm to 48.4 µm. In terms of the influence of US frequency, fewer bubbles were created at 20 kHz (an average of 176 bubbles rather than 808 bubbles at 40 kHz), while the average bubble size was significantly larger than at 40 kHz (almost seven times). The majority of bubbles were captured close to the membrane surface in the filtration unit. According to the study observations, membrane cleaning efficiency is expected to improve at higher US power and lower US frequency, due to the higher energy release to the system through an increased number of bubbles or larger bubble size during oscillation (the optimum condition is expected to be 20 kHz and 100 W).
Keywords: bubble dynamics, cavitational bubbles, membrane fouling, ultrasonic cleaning
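The per-projection bubble statistics described above (count, mean diameter, occupied area fraction) can be reproduced in outline with scikit-image as a stand-in for the ImageJ workflow; the Otsu thresholding choice is an assumption:

```python
import numpy as np
from skimage import filters, measure

def bubble_stats(frame):
    """Count bubbles and estimate diameters in one 2-D grayscale
    projection; returns (count, mean diameter in pixels, area fraction)."""
    mask = frame < filters.threshold_otsu(frame)   # bubbles appear dark
    labels = measure.label(mask)
    props = measure.regionprops(labels)
    diameters = [p.equivalent_diameter for p in props]
    area_fraction = mask.mean()
    count = len(props)
    return count, (np.mean(diameters) if diameters else 0.0), area_fraction
```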
966 Scoring System for the Prognosis of Sepsis Patients in Intensive Care Units
Authors: Javier E. García-Gallo, Nelson J. Fonseca-Ruiz, John F. Duitama-Munoz
Abstract:
Sepsis is a syndrome that occurs with physiological and biochemical abnormalities induced by severe infection and carries high mortality and morbidity; therefore, the severity of the patient's condition must be assessed quickly. After patient admission to an intensive care unit (ICU), it is necessary to synthesize the large volume of information collected from patients into a value that represents the severity of their condition. Traditional severity-of-illness scores seek to be applicable to all patient populations and usually assess in-hospital mortality. However, the use of machine learning techniques on data from a population that shares a common characteristic could lead to customized mortality prediction scores with better performance. This study presents the development of a score for one-year mortality prediction of patients admitted to an ICU with a sepsis diagnosis. 5650 ICU admissions extracted from the MIMIC-III database were evaluated, divided into two groups: 70% to develop the score and 30% to validate it. Comorbidities, demographics and clinical information from the first 24 hours after ICU admission were used to develop the mortality prediction score. LASSO (least absolute shrinkage and selection operator) and SGB (stochastic gradient boosting) variable-importance methodologies were used to select the set of variables that make up the developed score. Each of these variables was dichotomized at a cut-off point that divides the population into two groups with different mean mortalities; if the patient is in the group with higher mortality, a one is assigned to the particular variable, otherwise a zero is assigned. These binary variables are used in a logistic regression (LR) model, and its coefficients are rounded to the nearest integer. The resulting integers are the point values that make up the score when multiplied by each binary variable and summed. The one-year mortality probability was estimated using the score as the only variable in a LR model. The predictive power of the score was evaluated using the 1695 admissions of the validation subset, obtaining an area under the receiver operating characteristic curve of 0.7528, which outperforms the results obtained with the Sequential Organ Failure Assessment (SOFA), Oxford Acute Severity of Illness Score (OASIS) and Simplified Acute Physiology Score II (SAPS II) scores on the same validation subset. Observed and predicted mortality rates within deciles of estimated probability were compared graphically and found to be similar, indicating that the risk estimate obtained with the score is close to the observed mortality; the number of events (deaths) indeed increases from the decile with the lowest probabilities to the decile with the highest probabilities. Sepsis is a syndrome that carries a high mortality, 43.3% for the patients included in this study; therefore, tools that help clinicians quickly and accurately predict a worse prognosis are needed. This work demonstrates the importance of customizing mortality prediction scores, since the developed score provides better performance than traditional scoring systems.
Keywords: intensive care, logistic regression model, mortality prediction, sepsis, severity of illness, stochastic gradient boosting
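The point-assignment step described above (dichotomized variables, logistic regression, coefficients rounded to integers) can be sketched as follows; this is a minimal illustration of the scheme, not the authors' code, and variable selection via LASSO/SGB is assumed done upstream:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def build_score(X_bin, y):
    """X_bin: admissions x dichotomized variables (0/1 after cut-off);
    y: one-year death indicator. Returns integer point values per
    variable, following the rounding scheme in the abstract."""
    lr = LogisticRegression().fit(X_bin, y)
    return np.rint(lr.coef_[0]).astype(int)

def score_patient(x_bin, points):
    # Sum of points over the variables flagged as high-mortality.
    return int(np.dot(x_bin, points))
```

A second logistic regression, with the summed score as its only covariate, would then map scores to the estimated one-year mortality probability.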
965 A Multimodal Dialogue Management System for Achieving Natural Interaction with Embodied Conversational Agents
Authors: Ozge Nilay Yalcin
Abstract:
Dialogue has been proposed as the natural basis for human-computer interaction; it is behaviorally rich and includes different modalities such as gestures, posture changes, gaze, para-linguistic parameters and linguistic context. However, equipping a system with these capabilities can have consequences for its usability. One issue is finding a good balance between rich behavior and fluent behavior, as planning and generating these behaviors is computationally expensive. In this work, we propose a multimodal dialogue management system that automates the conversational flow from text-based dialogue examples and uses synchronized verbal and non-verbal conversational cues to achieve a fluent interaction. Our system is integrated with the Smartbody behavior realizer to provide real-time interaction with an embodied agent. The non-verbal behaviors are selected according to turn-taking behavior, the emotions and personality of the user, and linguistic analysis of the dialogue. The verbal behaviors are responsive to the emotional value of the utterance and feedback from the user. Our system aims at online planning of these affective multimodal components, in order to achieve an enhanced user experience with richer and more natural interaction.
Keywords: affect, embodied conversational agents, human-agent interaction, multimodal interaction, natural interfaces
964 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture
Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán
Abstract:
Time-sensitive services are the base of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling; however, reactive auto-scaling has few in-depth studies. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks some parameters related to the transition between models. Our model uses queuing-theory parameters to relate the transitions between models. It associates the MAPE-K-related times, the sampling frequency, the cooldown period, the number of requests that an instance can handle per unit of time, the number of incoming requests at a time instant, and a function that describes the acceleration in the service's ability to handle more requests. This model is later used as a solution to horizontally auto-scale time-sensitive services composed of microservices, reevaluating the model's parameters periodically to allocate resources. The solution requires limiting the acceleration of the growth in the number of incoming requests to keep a constrained response time; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The exposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request relation ratio. A common request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds. Both microservices contain a load balancer that assigns requests to the less loaded instance and preemptively discards requests if they cannot be finished in time, to prevent resource saturation. When load decreases, instances with lower load are kept in a backlog where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state, but if it finishes the computation of all its requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenarios for reactive systems; the following scenarios test response times, resource consumption and business costs. The first scenario is a burst-load scenario: all methodologies will discard requests if the rapidness of the burst is high enough, so this scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add a different number of instances can handle the load at less business cost. The exposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing
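A stripped-down sketch of the reactive control loop implied above, relating per-instance capacity, incoming rate and a cooldown period. The headroom factor and timing values are illustrative assumptions; the paper's full model additionally folds in MAPE-K timing and a growth-acceleration limit:

```python
import math

def required_instances(incoming_rps, per_instance_rps, headroom=0.8):
    """Instances needed to keep saturation below the headroom fraction."""
    return max(1, math.ceil(incoming_rps / (per_instance_rps * headroom)))

class ReactiveScaler:
    def __init__(self, per_instance_rps, cooldown_s=60):
        self.capacity = per_instance_rps
        self.cooldown = cooldown_s
        self.last_change = float("-inf")

    def step(self, now_s, incoming_rps, current_instances):
        """One monitor-analyze-plan-execute pass: rescale only if the
        target changed and the cooldown has elapsed."""
        target = required_instances(incoming_rps, self.capacity)
        if target != current_instances and now_s - self.last_change >= self.cooldown:
            self.last_change = now_s
            return target
        return current_instances
```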
963 Exploring Safety Culture in Interventional Radiology: A Cross-Sectional Survey on Team Members' Attitudes
Authors: Anna Bjällmark, Victoria Persson, Bodil Karlsson, May Bazzi
Abstract:
Introduction: Interventional radiology (IR) is a continuously growing discipline that allows minimally invasive treatments of various medical conditions. The IR environment is, in several ways, comparable to the complex and accident-prone operating room (OR) environment. This implies that the IR environment may also be associated with various types of risks related to the work process and communication within the team. Patient safety is a central aspect of healthcare and involves the prevention and reduction of adverse events related to patient care. To maintain patient safety, it is crucial to build a safety culture where the staff are encouraged to report events and incidents that may have affected patient safety. It is also important to continuously evaluate the staff's attitudes to patient safety. Despite the increasing number of IR procedures, research on the staff's views regarding patient safety is lacking. Therefore, the main aim of the study was to describe and compare the IR team members' attitudes to patient safety. The secondary aim was to evaluate whether the WHO safety checklist was routinely used for IR procedures. Methods: An electronic survey was distributed to 25 interventional units in Sweden. The target population was the staff working in the IR team, i.e., physicians, radiographers, nurses, and assistant nurses. A modified version of the Safety Attitudes Questionnaire (SAQ) was used. Responses from 19 of the 25 IR units (44 radiographers, 18 physicians, 5 assistant nurses, and 1 nurse) were received. The respondents rated their level of agreement with 27 items related to safety culture on a five-point Likert scale ranging from "Disagree strongly" to "Agree strongly." Data were analyzed statistically using SPSS. The percentage of positive responses (PPR) was calculated as the percentage of respondents with a scale score of 75 or higher, which corresponded to the response options "Agree slightly" or "Agree strongly." Thus, average scores ≥ 75% were classified as "positive" and average scores < 75% were classified as "non-positive." Findings: The results indicated that the IR team had the highest factor scores and percentages of positive responses for job satisfaction (90/94%), followed by teamwork climate (85/92%). In contrast, stress recognition received the lowest ratings (54/25%). Attitudes related to these factors were relatively consistent between the professions, with only a few significant differences noted (factor score: p=0.039 for job satisfaction, p=0.050 for working conditions; percentage of positive responses: p=0.027 for perception of management). Radiographers tended to report slightly lower values than the other professions for these factors (p<0.05). The respondents reported that the WHO safety checklist was not routinely used at their IR unit but acknowledged its importance for patient safety. Conclusion: This study reported high scores for job satisfaction and teamwork climate but lower scores for perception of management and stress recognition, indicating that the latter are areas of improvement. Attitudes remained relatively consistent among the professions, but the radiographers reported slightly lower values for job satisfaction and perception of management. The WHO safety checklist was considered important for patient safety.
Keywords: interventional radiology, patient safety, safety attitudes questionnaire, WHO safety checklist
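The PPR computation described in the Methods is a simple thresholding over the 0-100 SAQ scale; a minimal sketch with illustrative scores:

```python
import numpy as np

def percent_positive(scores):
    """Percentage of respondents with an SAQ scale score >= 75,
    matching the thresholding described above."""
    scores = np.asarray(scores)
    return 100 * np.mean(scores >= 75)

print(percent_positive([80, 90, 70, 75, 60]))  # 3 of 5 -> 60.0
```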
962 Optimization and Automation of Functional Testing with White-Box Testing Method
Authors: Reyhaneh Soltanshah, Hamid R. Zarandi
Abstract:
Software testing is necessary in industries that rely on computer systems, despite the time and money it consumes. In embedded system software testing, complete knowledge of the embedded system architecture is necessary to avoid significant costs and damages; software tests increase the price of the final product. The aim of this article is to provide a method to reduce the time and cost of tests based on program structure. First, a complete review of eleven white-box test methods based on the 2015 and 2021 versions of ISO/IEC/IEEE 29119 was done. The proposed algorithm is designed using these two versions of the 29119 standard, and some white-box testing methods that are expensive or have little coverage have been removed. White-box test methods were applied to each of the functions according to the 29119 standard, and then the proposed algorithm was implemented on the functions. To speed up the implementation of the proposed method, the Unity framework was used with some changes. The Unity framework can be used in embedded software testing because it is open source and able to implement white-box test methods. The test items obtained from these two approaches were evaluated using a mathematical ratio, which in the various software examined reduced the test cost by between 50% and 80% and reached the desired result with the minimum number of test items.
Keywords: embedded software, reduce costs, software testing, white-box testing
961 CFD Simulation Research on a Double Diffuser for Wind Turbines
Authors: Krzysztof Skiba, Zdzislaw Kaminski
Abstract:
Wind power generation relies on a variety of construction solutions to convert wind energy into electrical energy. These constructions are constrained by the correlation between their energy conversion efficiency and the area they occupy. Their energy conversion efficiency can be improved by wind tunnel tests of a rotor within a diffuser, to optimize the shapes of aerodynamic elements, adapt these elements to changing conditions and increase airflow intensity. This paper discusses the results of computer simulations and aerodynamic analyses of this innovative diffuser design. The research aims at determining the aerodynamic phenomena triggered by the airflow inside the construction and at developing a design that improves the efficiency of the wind turbine. The results enable us to design a diffuser with a double Venturi nozzle and specially shaped blades. A design of this type uses Bernoulli's principle for the behavior of a medium flowing through a tunnel of decreasing diameter: the air flowing along the tunnel changes its velocity, so a rotor inside the decreased tunnel diameter rotates faster in this airflow than it would in the free wind outside the tunnel, which makes the turbine more efficient. Additionally, airflow velocity is improved by applying aerodynamic rings with extended trailing edges to achieve controlled turbulent vortices.
Keywords: wind turbine, renewable energy, CFD, numerical analysis
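The velocity gain in a contracting duct follows directly from mass conservation (continuity) for incompressible flow: A1·v1 = A2·v2, so velocity scales with the inverse area (diameter squared) ratio. A minimal sketch with illustrative values, not the studied geometry:

```python
def throat_velocity(v_inlet, d_inlet, d_throat):
    """Continuity for incompressible flow through a contraction:
    v2 = v1 * (A1/A2) = v1 * (d1/d2)**2."""
    return v_inlet * (d_inlet / d_throat) ** 2

# An 8 m/s wind through a 1.0 m inlet narrowing to 0.7 m -> ~16.3 m/s
print(round(throat_velocity(8.0, 1.0, 0.7), 1))
```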
960 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types
Authors: Qianxi Lv, Junying Liang
Abstract:
Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a 'complex' and 'extreme condition' among cognitive tasks, whereas consecutive interpreters (CI) do not have to share processing capacity between tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demands and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study examines potential lexical simplification, syntactic complexity and sequential organization mechanisms with a self-made inter-modal corpus of transcribed simultaneous and consecutive interpretation, translated speech and original speech texts, with a total running word count of 321,960. The lexical features are extracted in terms of lexical density, list head coverage, hapax legomena, and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is employed. The frequency motif, a sequential unit not bound by grammar, is also used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking with a high cognitive load, our findings show that CI may impose a heavy, taxing cognitive demand in a different way and hence yields more lexically and syntactically simplified output. In addition, the sequential features show that SI and CI organize the sequences from the source text into the output in different ways, each minimizing the cognitive load in its own fashion. We interpret the results within a framework in which cognitive demand is exerted on both the maintenance and coordination components of working memory. On the one hand, the information maintained in CI is inherently larger in volume than in SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure from the input in SI makes interpreters keep only a small chunk of information in the focus of attention; thus, SI interpreters usually produce the output by largely retaining the source structure, so as to release the information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation, when they are more self-paced. CI interpreters may thus tend to retain and generate the information in a way that lessens the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures and more frequently used language sequences. We consequently propose a revised effort model based on these results for a better illustration of cognitive demand during both interpreting types.
Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity
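Two of the lexical measures named above, type-token ratio and the share of hapax legomena, are straightforward to compute once the corpus is tokenized. A minimal sketch (tokenization is assumed done upstream):

```python
from collections import Counter

def lexical_profile(tokens):
    """Type-token ratio and hapax-legomena share for a token list."""
    counts = Counter(tokens)
    ttr = len(counts) / len(tokens)             # types / tokens
    hapax_share = sum(1 for c in counts.values() if c == 1) / len(counts)
    return ttr, hapax_share

print(lexical_profile("the interpreter kept the structure of the source".split()))
```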
959 Magnetic Solid-Phase Separation of Uranium from Aqueous Solution Using High Capacity Diethylenetriamine Tethered Magnetic Adsorbents
Authors: Amesh P, Suneesh A S, Venkatesan K A
Abstract:
Magnetic solid-phase extraction is a relatively new method among the solid-phase extraction techniques for separating metal ions from aqueous solutions, such as mine water and groundwater, contaminated wastes, etc. However, bare magnetic particles (Fe₃O₄) exhibit poor selectivity due to the absence of target-specific functional groups for sequestering metal ions. The selectivity of these magnetic particles can be remarkably improved by covalently tethering task-specific ligands to the magnetic surfaces. The magnetic particles offer a number of advantages, such as quick phase separation aided by an external magnetic field. As a result, the solid adsorbent can be prepared with particle sizes ranging from a few micrometers down to nanometers, which again offers advantages such as enhanced extraction kinetics, higher extraction capacity, etc. Conventionally, magnetite (Fe₃O₄) particles are prepared by hydrolysis and co-precipitation of ferrous and ferric salts in aqueous ammonia solution. Since covalent linking of task-specific functionalities on Fe₃O₄ is difficult, and the material is also susceptible to redox reactions in the presence of acid or alkali, it is necessary to modify the surface of Fe₃O₄ with a silica coating. This silica coating is usually carried out by hydrolysis and condensation of tetraethyl orthosilicate over the surface of the magnetite to yield a thin layer of silica-coated magnetite particles. Since the silica-coated magnetite particles are amenable to further surface modification, they can be reacted with task-specific functional groups to obtain functionalized magnetic particles. The surface area exhibited by such magnetic particles usually falls in the range of 50 to 150 m²·g⁻¹, which offers advantages such as quick phase separation compared to other solid-phase extraction systems. In addition, magnetic (Fe₃O₄) particles covalently linked to a mesoporous silica matrix (MCM-41) and task-specific ligands offer further advantages in terms of extraction kinetics, high stability, longer reusable cycles, and metal extraction capacity, owing to the large surface area, ample porosity and greater number of functional groups per unit area of these adsorbents. In view of this, the present paper deals with the synthesis of a uranium-specific diethylenetriamine (DETA) ligand anchored on silica-coated magnetite (Fe-DETA) as well as on magnetic mesoporous silica (MCM-Fe-DETA), and with studies on the extraction of uranium from aqueous solution spiked with uranium to mimic mine water or groundwater contaminated with uranium. The synthesized solid-phase adsorbents were characterized by FT-IR, Raman, TG-DTA, XRD, and SEM. The extraction behavior of uranium on the solid phase was studied under several conditions, such as the effect of pH, the initial concentration of uranium, the rate of extraction and its variation with pH and initial uranium concentration, and the effect of interfering ions like CO₃²⁻, Na⁺, Fe²⁺, Ni²⁺, and Cr³⁺. A maximum extraction capacity of 233 mg·g⁻¹ was obtained for Fe-DETA, and a huge capacity of 1047 mg·g⁻¹ was obtained for MCM-Fe-DETA. The mechanism of extraction, the speciation of uranium, the extraction studies, reusability, and the other results obtained in the present study suggest that Fe-DETA and MCM-Fe-DETA are potential candidates for the extraction of uranium from mine water and groundwater.
Keywords: diethylenetriamine, magnetic mesoporous silica, magnetic solid-phase extraction, uranium extraction, wastewater treatment
958 Groundwater Flow Assessment Based on Numerical Simulation at Omdurman Area, Khartoum State, Sudan
Authors: Adil Balla Elkrail
Abstract:
Visual MODFLOW computer codes were selected to simulate the head distribution, calculate the groundwater budgets of the area, evaluate the effect of external stresses on the groundwater head, and demonstrate how a groundwater model can be used as a comparative technique to optimize utilization of the groundwater resource. A conceptual model of the study area, aquifer parameters, and boundary and initial conditions were used to simulate the flow model. The trial-and-error technique was used to calibrate the model. The most important criteria used to check the calibrated model were the root mean square error (RMS), mean absolute error (AM), normalized root mean square error (NRMS) and mass balance. The maps of the simulated heads showed acceptable model calibration compared to the observed heads map. A time length of eight years and the observed heads of the year 2004 were used for model prediction. The predictive simulation showed that continued pumping will cause relatively large changes in head distribution and in the components of the groundwater budget, whereas the low computed deficit (7122 m³/d) between inflows and outflows cannot create a significant drawdown of the potentiometric level. Hence, the area under consideration may represent a high-permeability, productive zone and is strongly recommended for further groundwater development.
Keywords: aquifers, model simulation, groundwater, calibrations, trial-and-error, prediction
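The calibration criteria named above are standard error statistics between observed and simulated heads; a minimal sketch of how they are computed (NRMS here expressed as a percentage of the observed head range, the usual MODFLOW convention):

```python
import numpy as np

def calibration_errors(observed, simulated):
    """RMS, mean-absolute (AM) and normalized-RMS errors between
    observed and simulated heads at the calibration targets."""
    obs, sim = np.asarray(observed), np.asarray(simulated)
    resid = sim - obs
    rms = np.sqrt(np.mean(resid ** 2))
    am = np.mean(np.abs(resid))
    nrms = 100 * rms / (obs.max() - obs.min())   # percent of head range
    return rms, am, nrms
```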
957 In vitro Effects of Amygdalin on the Functional Competence of Rabbit Spermatozoa
Authors: Marek Halenár, Eva Tvrdá, Tomáš Slanina, Ľubomír Ondruška, Eduard Kolesár, Peter Massányi, Adriana Kolesárová
Abstract:
The present in vitro study was designed to reveal whether amygdalin (AMG) is able to cause changes to the motility, viability and mitochondrial activity of rabbit spermatozoa. New Zealand White rabbits (n = 10) aged four months were used in the study. Semen samples were collected from each animal and used for the in vitro incubation. The samples were divided into five equal parts and diluted with saline supplemented with 0, 0.5, 1, 2.5 and 5 mg/mL AMG. At times 0 h, 3 h and 5 h, spermatozoa motion parameters were assessed using the SpermVision™ computer-aided sperm analysis (CASA) system, cell viability was examined with the metabolic activity (MTT) assay, and the eosin-nigrosin staining technique was used to evaluate the viability of rabbit spermatozoa. All AMG concentrations exhibited stimulating effects on spermatozoa activity, as shown by a significant preservation of motility (P<0.05 with respect to 0.5 mg/mL and 1 mg/mL AMG; time 5 h) and mitochondrial activity (P<0.05 in the case of 0.5 mg/mL AMG; P<0.01 in the case of 1 mg/mL AMG; P<0.001 with respect to 2.5 mg/mL and 5 mg/mL AMG; time 5 h). None of the supplemented AMG doses had any significant impact on spermatozoa viability. In conclusion, the data revealed that short-term co-incubation of spermatozoa with AMG may result in higher preservation of sperm structural integrity and functional activity.
Keywords: amygdalin, CASA, mitochondrial activity, motility, rabbits, spermatozoa, viability
956 The Algorithm to Solve the Extended General Malfatti's Problem in a Convex Circular Triangle
Authors: Ching-Shoei Chiang
Abstract:
The Malfatti’s Problem solves the problem of fitting 3 circles into a right triangle such that these 3 circles are tangent to each other, and each circle is also tangent to a pair of the triangle’s sides. This problem has been extended to any triangle (called general Malfatti’s Problem). Furthermore, the problem has been extended to have 1+2+…+n circles inside the triangle with special tangency properties among circles and triangle sides; we call it extended general Malfatti’s problem. In the extended general Malfatti’s problem, call it Tri(Tn), where Tn is the triangle number, there are closed-form solutions for Tri(T₁) (inscribed circle) problem and Tri(T₂) (3 Malfatti’s circles) problem. These problems become more complex when n is greater than 2. In solving Tri(Tn) problem, n>2, algorithms have been proposed to solve these problems numerically. With a similar idea, this paper proposed an algorithm to find the radii of circles with the same tangency properties. Instead of the boundary of the triangle being a straight line, we use a convex circular arc as the boundary and try to find Tn circles inside this convex circular triangle with the same tangency properties among circles and boundary Carc. We call these problems the Carc(Tn) problems. The CPU time it takes for Carc(T16) problem, which finds 136 circles inside a convex circular triangle with specified tangency properties, is less than one second.Keywords: circle packing, computer-aided geometric design, geometric constraint solver, Malfatti’s problem
955 Semiautomatic Calculation of Ejection Fraction Using Echocardiographic Image Processing
Authors: Diana Pombo, Maria Loaiza, Mauricio Quijano, Alberto Cadena, Juan Pablo Tello
Abstract:
In this paper, we present a semi-automatic tool for calculating the ejection fraction from an echocardiographic video signal derived from a DICOM-format database of Clinica de la Costa, Barranquilla. Described in this paper are each of the steps and methods used to arrive at the calculation, including the acquisition and formation of the test samples, processing, and finally the calculation of the parameters to obtain the ejection fraction. Two image segmentation methods were compared following a methodological framework that is similar only in the initial stages of processing (filtering and image enhancement) and differs at the end, where the algorithms are implemented (active contour and region growing). The results were compared with the measurements obtained by two different medical specialists in cardiology, who calculated the ejection fraction of the study samples using the traditional method, which consists of drawing the region of interest directly on the computer using the echocardiography equipment and a simple equation to calculate the desired value. The results showed that if the quality of the video samples is good (i.e., after pre-processing there is evidence of an improvement in contrast), the values provided by the tool are substantially close to those reported by the physicians; the correlation between physicians also does not vary significantly.
Keywords: echocardiography, DICOM, processing, segmentation, EDV, ESV, ejection fraction
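The "simple equation" behind the final step is the standard ejection fraction formula over the end-diastolic and end-systolic volumes (the EDV and ESV of the keywords), which are themselves derived from the segmented contours. A minimal sketch with illustrative volumes:

```python
def ejection_fraction(edv_ml, esv_ml):
    """EF (%) from end-diastolic and end-systolic volumes; the volumes
    come from the segmented contours (e.g. area-length or Simpson's
    method, not shown here)."""
    return 100 * (edv_ml - esv_ml) / edv_ml

print(round(ejection_fraction(120, 50), 1))  # 58.3 %
```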
954 An Integrated Modular Approach Based Simulation of Cold Heavy Oil Production
Authors: Hamidreza Sahaleh
Abstract:
In this paper, the authors present an integrated modular approach to quantitatively predict volumetric sand production and enhanced oil recovery. The model is based on mixture theory with erosion mechanics, in which multiphase hydrodynamics and geomechanics are coupled in a consistent way via fundamental unknowns such as saturation, pressure, porosity, and formation displacements. Foamy oil is modeled as a dispersion of gas bubbles trapped in the oil, where these gas bubbles maintain a higher reservoir pressure. A modular methodology is then adopted to effectively exploit existing advanced standard reservoir and stress-strain codes. The model is implemented in three coordinated computational modules: an erosion module, a reservoir module, and a geomechanics module. The stress, flow and erosion equations are solved separately for each time increment, and the coupling terms (porosity, permeability, plastic shear strain, etc.) are passed among them and iterated until convergence is achieved on a time-step basis. The framework is powerful in terms of its capabilities, yet practical in terms of computer requirements and maintenance. Numerical results of field studies are presented to demonstrate the capabilities of the model. The effects of foamy oil flow and sand production are also examined to demonstrate their impact on enhanced hydrocarbon recovery.
Keywords: oil recovery, erosion mechanics, foamy oil, erosion module
953 An In-silico Pharmacophore-Based Anti-Viral Drug Development for Hepatitis C Virus
Authors: Romasa Qasim, G. M. Sayedur Rahman, Nahid Hasan, M. Shazzad Hosain
Abstract:
Millions of people worldwide suffer from hepatitis C, a potentially fatal disease. Interferon (IFN) and ribavirin are the available treatments for patients with hepatitis C, but these treatments have their own side effects. Our research focused on the development of an orally administered small-molecule drug targeting the proteins of the Hepatitis C Virus (HCV), with fewer side effects. Our current study aims at the pharmacophore-based development of a specific small-molecule anti-viral drug for HCV. Drug design using lab experimentation is not only costly but also time-consuming. Instead, in this in silico study, we have used computer-aided techniques to propose a pharmacophore-based anti-viral drug specific to the protein domains of the polyprotein present in the Hepatitis C Virus. This study used homology modeling and ab initio modeling for protein 3D structure generation, followed by pocket identification in the proteins. Drug-like ligands for the pockets were designed using the de novo drug design method, taking pocket geometry into account. Out of several generated ligands, a new pharmacophore is proposed, specific for each of the protein domains of HCV.
Keywords: pharmacophore-based drug design, anti-viral drug, in-silico drug design, Hepatitis C virus (HCV)
952 Development of Filling Material in 3D Printer with the Aid of Computer Software, Supported with Natural Zeolite, for the Removal of Nitrogen and Phosphorus
Authors: Luís Fernando Cusioli, Leticia Nishi, Lucas Bairros, Gabriel Xavier Jorge, Sandro Rogério Lautenschalager, Celso Varutu Nakamura, Rosângela Bergamasco
Abstract:
Focusing on the elimination of nitrogen and phosphorus from sewage, this study addresses the challenges of eutrophication and seeks to optimize the effectiveness of sewage treatment through biofilms and filter filling produced by a 3D printer, aiming to identify the more effective material between polylactic acid (PLA) and acrylonitrile butadiene styrene (ABS). The study also proposes to evaluate the nitrification process in a Submerged Aerated Biological Filter (FBAS) at pilot-plant scale, quantifying the removal of nitrogen and phosphorus. The experiment will consist of two distinct phases: a bench stage and the implementation of a pilot plant. During the bench stage, samples will be collected at five points to characterize the microbiota. The microbiota will be investigated using Fluorescence In Situ Hybridization (FISH), deepening the understanding of the performance of the biofilms in the face of multiple variables. In this context, the study contributes to the search for effective solutions to mitigate eutrophication and thus strengthen initiatives to improve effluent treatment.
Keywords: eutrophication, sewage treatment, biofilms, nitrogen and phosphorus removal, 3D printer, environmental efficiency
951 Design and Implementation of a Hardened Cryptographic Coprocessor with 128-bit RISC-V Core
Authors: Yashas Bedre Raghavendra, Pim Vullers
Abstract:
This study presents the design and implementation of an abstract cryptographic coprocessor, leveraging AMBA (Advanced Microcontroller Bus Architecture) protocols, APB (Advanced Peripheral Bus) and AHB (Advanced High-performance Bus), to enable seamless integration with the main CPU (central processing unit) and enhance the coprocessor's algorithm flexibility. The primary objective is to create a versatile coprocessor that can execute various cryptographic algorithms, including ECC (elliptic-curve cryptography), RSA (Rivest–Shamir–Adleman), and AES (Advanced Encryption Standard), while providing a robust and secure solution for modern secure embedded systems. To achieve this goal, the coprocessor is equipped with a tightly coupled memory (TCM) for rapid data access during cryptographic operations. The TCM is placed within the coprocessor, ensuring quick retrieval of critical data and optimizing overall performance. Additionally, the program memory is positioned outside the coprocessor, allowing for easy updates and reconfiguration, which enhances adaptability to future algorithm implementations. Direct links are employed instead of DMA (direct memory access) for data transfer, ensuring faster communication and reducing complexity. The AMBA-based communication architecture facilitates seamless interaction between the coprocessor and the main CPU, streamlining data flow and ensuring efficient utilization of system resources. The abstract nature of the coprocessor allows for easy integration of new cryptographic algorithms in the future. As the security landscape continues to evolve, the coprocessor can adapt and incorporate emerging algorithms, making it a future-proof solution for cryptographic processing. Furthermore, this study explores the addition of custom instructions into the RISC-V ISE (Instruction Set Extension) to enhance cryptographic operations. By incorporating custom instructions specifically tailored for cryptographic algorithms, the coprocessor achieves higher efficiency and reduced cycles per instruction (CPI) compared to traditional instruction sets. The adoption of the RISC-V 128-bit architecture significantly reduces the total number of instructions required for complex cryptographic tasks, leading to faster execution times and improved overall performance. Comparisons are made with 32-bit and 64-bit architectures, highlighting the advantages of the 128-bit architecture in terms of reduced instruction count and CPI. In conclusion, the abstract cryptographic coprocessor presented in this study offers significant advantages in terms of algorithm flexibility, security, and integration with the main CPU. By leveraging AMBA protocols and employing direct links for data transfer, the coprocessor achieves high-performance cryptographic operations without compromising system efficiency. With its TCM and external program memory, the coprocessor is capable of securely executing a wide range of cryptographic algorithms. This versatility and adaptability, coupled with the benefits of custom instructions and the 128-bit architecture, make it an invaluable asset for secure embedded systems, meeting the demands of modern cryptographic applications.
Keywords: abstract cryptographic coprocessor, AMBA protocols, ECC, RSA, AES, tightly coupled memory, secure embedded systems, RISC-V ISE, custom instructions, instruction count, cycles per instruction
950 Lamb Waves Propagation in Elastic-Viscoelastic Three-Layer Adhesive Joints
Authors: Pezhman Taghipour Birgani, Mehdi Shekarzadeh
Abstract:
In this paper, the propagation of Lamb waves in three-layer joints is investigated using the global matrix method. The theoretical boundary value problem in three-layer adhesive joints, with perfect bond and traction-free boundary conditions on their outer surfaces, is solved to find a combination of frequencies and modes with the lowest attenuation. The characteristic equation is derived by applying continuity and boundary conditions in three-layer joints using the global matrix method. Attenuation and phase velocity dispersion curves are obtained by numerical solution of this equation with a computer code for a three-layer joint consisting of an aluminum repair patch bonded to aluminum aircraft skin by a layer of viscoelastic epoxy adhesive. To validate the numerical solution of the characteristic equation, wave structure curves are plotted for a particular mode at two different frequencies in the adhesive joint. The purpose of the present paper is to find a combination of frequencies and modes with minimum attenuation at high and low frequencies. These frequencies and modes are recognizable by transducers in inspections with Lamb waves because of their low attenuation level.
Keywords: three-layer adhesive joints, viscoelastic, lamb waves, global matrix method
949 Electrochemical Activity of NiCo-GDC Cermet Anode for Solid Oxide Fuel Cells Operated in Methane
Authors: Kamolvara Sirisuksakulchai, Soamwadee Chaianansutcharit, Kazunori Sato
Abstract:
Solid oxide fuel cells (SOFCs) have been considered among the most efficient large-unit power generators for household and industrial applications. The efficiency of a cell depends mainly on the electrochemical reactions at the anode. The development of anode materials has been intensely studied to achieve higher kinetic rates of redox reactions and lower internal resistance. Recent studies have introduced an efficient cermet (ceramic-metallic) material for its ability in fuel oxidation and oxide conduction; this could expand the reactive sites, also known as the triple-phase boundary (TPB), thus increasing the overall performance. In this study, a bimetallic catalyst Ni₀.₇₅Co₀.₂₅Oₓ was combined with Gd₀.₁Ce₀.₉O₁.₉₅ (GDC) to be used as a cermet anode (NiCo-GDC) for an anode-supported SOFC. The synthesis of Ni₀.₇₅Co₀.₂₅Oₓ was carried out by ball milling NiO and Co₃O₄ powders in ethanol, followed by calcination at 1000 °C. The Gd₀.₁Ce₀.₉O₁.₉₅ was prepared by a urea co-precipitation method: precursors of Gd(NO₃)₃·6H₂O and Ce(NO₃)₃·6H₂O were dissolved in distilled water with the addition of urea and subsequently heated. The heated mixture product was filtered, rinsed thoroughly, then dried and calcined at 800 °C and 1500 °C, respectively. The two powders were combined, followed by pelletization and sintering at 1100 °C, to form an anode support layer. The electrolyte and cathode layers were then fabricated. The electrochemical performance in H₂ was measured from 800 °C to 600 °C, while that in CH₄ was measured from 750 °C to 600 °C. The maximum power density at 750 °C in H₂ was 13% higher than in CH₄. The difference in performance was due to higher polarization resistances, as confirmed by the impedance spectra. According to the standard enthalpies, the dissociation energy of the C-H bonds in CH₄ is slightly higher than that of the H-H bond in H₂; the dissociation of CH₄ could therefore be the cause of resistance within the anode material. The results at lower temperatures showed a descending trend of power density, consistent with the increased polarization resistance as conductivity decreases with temperature. Long-term stability was measured at 750 °C in CH₄, with monitoring at 12-hour intervals. The maximum power density tended to increase gradually with time while the resistances were maintained. This suggests enhanced stability from charge-transfer activity in the doped ceria, due to the Ce⁴⁺ ↔ Ce³⁺ transition in a low-oxygen-partial-pressure, high-temperature atmosphere. However, the power density started to drop after 60 h, and the cell potential also dropped from 0.3249 V to 0.2850 V. These phenomena were confirmed by shifted impedance spectra indicating a higher ohmic resistance. Observation by FESEM and EDX mapping suggests degradation due to mass transport of ions in the electrolyte, while the anode microstructure was still maintained. In summary, electrochemical testing and a 60 h stability test were achieved with the NiCo-GDC cermet anode. Coke deposition was not detected after operation in CH₄, confirming the superior properties of the bimetallic cermet anode over typical Ni-GDC.
Keywords: bimetallic catalyst, ceria-based SOFCs, methane oxidation, solid oxide fuel cell
Procedia PDF Downloads 154948 Design and Implementation of a Counting and Differentiation System for Vehicles through Video Processing
Authors: Derlis Gregor, Kevin Cikel, Mario Arzamendia, Raúl Gregor
Abstract:
This paper presents a self-sustaining mobile system for counting and classifying vehicles through video processing. It proposes a counting and classification algorithm divided into four steps that can be executed multiple times in parallel on an SBC (Single Board Computer), such as the Raspberry Pi 2, so that it can run in real time. The first step of the proposed algorithm limits the zone of the image that will be processed. The second step detects the moving objects using a BGS (Background Subtraction) algorithm based on the GMM (Gaussian Mixture Model), together with a shadow removal algorithm using physically based features, followed by morphological operations. In the third step, vehicle detection is performed using edge detection algorithms, with vehicle tracking through Kalman filters. The last step of the proposed algorithm registers each vehicle passing and classifies it according to its area. A self-sustaining setup is proposed, powered by batteries and photovoltaic solar panels, with data transmission over GPRS (General Packet Radio Service), eliminating the need for external cabling and facilitating deployment and relocation to any location where the system could operate. The self-sustaining trailer will allow the counting and classification of vehicles in specific zones with difficult access.Keywords: intelligent transportation system, object detection, vehicle counting, vehicle classification, video processing
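Steps 1, 2, and 4 of this pipeline map directly onto standard OpenCV primitives. The following sketch is an assumed reconstruction, not the authors' implementation: the input file name, ROI bounds, MOG2 parameters, and area thresholds are placeholder values, and step 3 (edge-based detection with Kalman-filter tracking) is omitted for brevity.

```python
# Sketch: GMM background subtraction with shadow suppression, morphological
# clean-up, and area-based vehicle classification (steps 1, 2 and 4 of the
# abstract's pipeline). All thresholds and the ROI are assumed values.
import cv2

cap = cv2.VideoCapture("traffic.mp4")        # hypothetical input video
bgs = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                          detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[200:480, 0:640]              # step 1: restrict the processed zone
    mask = bgs.apply(roi)                    # step 2: GMM foreground mask
    mask[mask == 127] = 0                    # MOG2 labels shadow pixels as 127
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:                       # step 4: classify blobs by area
        area = cv2.contourArea(c)
        if area > 5000:
            label = "truck/bus"
        elif area > 1500:
            label = "car"
        else:
            continue                         # too small: treat as noise
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(roi, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(roi, label, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX,
                    0.5, (0, 255, 0), 1)
cap.release()
```

OpenCV's MOG2 subtractor marks shadow pixels with the value 127 when detectShadows=True, which is why the sketch zeroes them out before the morphological clean-up.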
Procedia PDF Downloads 322947 MARTI and MRSD: Newly Developed Isolation-Damping Devices with Adaptive Hardening for Seismic Protection of Structures
Authors: Murat Dicleli, Ali Salem Milani
Abstract:
In this paper, a summary of analytical and experimental studies into the behavior of a new hysteretic damper designed for the seismic protection of structures is presented. The Multi-directional Torsional Hysteretic Damper (MRSD) is a patented invention in which a symmetrical arrangement of identical cylindrical steel cores is configured to yield in torsion while the structure experiences planar movements due to earthquake shaking. The new device has certain desirable properties; notably, it is characterized by a variable and controllable-via-design post-elastic stiffness. This property results from the MRSD's kinematic configuration, which produces the hardening geometrically rather than as a secondary large-displacement effect. Additionally, the new system is capable of reaching high force and displacement capacities, exhibits high levels of damping, and shows a very stable cyclic response. The device has gone through many stages of design refinement, multiple prototype verification tests, and the development of design guidelines and computer codes to facilitate its implementation in practice. The practicality of the new device, as an offspring of academia, is assured through extensive collaboration with industry in its final design stages, prototyping, and verification test programs.Keywords: seismic, isolation, damper, adaptive stiffness
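A controllable post-elastic stiffness is easiest to picture with the simplest hysteretic model that has one: a bilinear element with kinematic hardening. The sketch below is a generic textbook model with assumed parameters, not the MRSD's actual force-displacement law; in the device itself the post-yield slope arises geometrically from the kinematic configuration rather than from a material hardening rule.

```python
# Sketch: bilinear kinematic-hardening hysteresis with post-yield stiffness k2.
# Parameters (stiffnesses, yield force) are assumed illustrative values.
import numpy as np

k1, k2, fy = 50.0, 5.0, 10.0   # elastic stiffness, post-yield stiffness, yield force
uy = fy / k1                   # yield displacement

def hysteresis(disp_history):
    """Force history of a bilinear kinematic-hardening element."""
    band = fy * (1.0 - k2 / k1)        # half-width of the yield band
    forces, f, u_prev = [], 0.0, 0.0
    for u in disp_history:
        f_trial = f + k1 * (u - u_prev)    # elastic predictor
        f_back = k2 * u                    # hardening backbone
        f = min(max(f_trial, f_back - band), f_back + band)  # clamp to band
        forces.append(f)
        u_prev = u
    return np.array(forces)

t = np.linspace(0.0, 4.0 * np.pi, 800)
u = 3.0 * uy * np.sin(t)       # cyclic displacement at 3x the yield amplitude
F = hysteresis(u)
```

Plotting F against u traces a parallelogram-shaped loop whose post-yield branches have slope k2; in the MRSD, the analogous slope is set by the geometric configuration at design time.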
Procedia PDF Downloads 456946 A Comparative Study on the Performance of Viscous and Friction Dampers under Seismic Excitation
Authors: Apetsi K. Ampiah, Zhao Xin
Abstract:
Earthquakes over the years have been known to cause devastating damage to buildings and to induce huge losses of human life and property. It is for this reason that engineers have devised means of protecting buildings and, thereby, human life. Since the invention of devices such as viscous and friction dampers, researchers have been able to incorporate these devices into buildings and other engineering structures. The viscous damper is a hydraulic device that dissipates seismic energy by pushing fluid through an orifice, producing a damping pressure that creates a force. In the friction damper, the seismic energy is mainly dissipated by converting kinetic energy into heat through friction. Such devices are able to absorb almost all of the earthquake energy, allowing the structure to remain undamaged (or only lightly damaged) and ready for immediate reuse (possibly after some repair work). Comparing these two devices provides the engineer with adequate information on their merits and demerits and on the circumstances in which each is most favorable. This paper examines the performance of both viscous and friction dampers under different ground motions. A two-storey frame fitted with the devices under investigation is modeled in commercial computer software and analyzed under different ground motions, and the results of the structural performance are tabulated and compared. Also included in this study is the ease of installation and maintenance of these devices.Keywords: friction damper, seismic, slip load, viscous damper
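The contrast between the two devices comes down to their idealized force laws: a viscous damper produces a velocity-dependent force, commonly modeled as F = C·sign(v)·|v|^α, while a friction damper produces a constant-magnitude force at the slip load, F = F_slip·sign(v). The sketch below compares the energy each dissipates over a harmonic displacement history; all parameter values are assumed for illustration, not taken from the study.

```python
# Sketch: hysteresis loop areas (dissipated energy) for idealized viscous and
# Coulomb friction damper force laws under harmonic motion. Assumed parameters.
import numpy as np

C, alpha = 120.0, 0.5          # viscous coefficient and velocity exponent
f_slip = 80.0                  # friction damper slip load, kN

t = np.linspace(0.0, 2.0, 2001)            # two cycles of a 1 Hz motion, s
u = 0.05 * np.sin(2.0 * np.pi * t)         # displacement history, m
v = np.gradient(u, t)                      # velocity history, m/s

f_viscous = C * np.sign(v) * np.abs(v) ** alpha   # kN
f_friction = f_slip * np.sign(v)                  # kN

def loop_area(force, disp):
    """Trapezoidal integral of F du = dissipated energy (kN*m = kJ)."""
    return float(np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(disp)))

print(f"viscous:  {loop_area(f_viscous, u):6.1f} kJ over two cycles")
print(f"friction: {loop_area(f_friction, u):6.1f} kJ over two cycles")
```

The friction loop is rectangular, so its area grows linearly with displacement amplitude and is independent of frequency, whereas the viscous loop area grows with both amplitude and excitation rate; this is one reason the two devices perform differently across different ground motions.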
Procedia PDF Downloads 168