Search results for: records
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 208

58 Effect of Fill Material Density under Structures on Ground Motion Characteristics Due to Earthquake

Authors: Ahmed T. Farid, Khaled Z. Soliman

Abstract:

Because land for projects is limited and expensive, backfilling has become a necessary practice; it is also used to overcome uneven depths or to raise site levels, especially near coastal regions. Backfill soil placed under the foundations of structures should therefore be investigated for its effect on ground motion characteristics, particularly in regions subjected to earthquakes. In this research, a 60-meter thickness of sandy fill material above a fixed 240-meter stratum of natural clayey soil underlain by rock formation was modeled to predict the modified ground motion characteristics at the foundation level. The effects of three different states of fill compaction on the recorded earthquake motion are compared in terms of peak ground acceleration, time history, and spectral acceleration values. The three densities of compacted fill used in the study were very loose, medium dense, and very dense sand deposits. The SHAKE computer program was used to perform the analyses, with strong earthquake records of 0.35 g Peak Ground Acceleration (PGA). It was found that higher compaction of the fill material thickness has a significant effect on reducing the earthquake ground motion at the surface layer of the fill, near the foundation level. It is recommended that fill material characteristics be considered in the design of foundations subjected to seismic motions, and future studies should analyze different fill and natural soil deposits under different seismic conditions.

Keywords: Fill material, density, compaction, earthquake, PGA.
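
The abstract's headline quantities, PGA and spectral acceleration, can be illustrated with a short sketch. This is not the paper's equivalent-linear SHAKE analysis; it only computes, for a synthetic stand-in record scaled to the 0.35 g PGA quoted above, a 5%-damped pseudo-acceleration response spectrum by the Newmark average-acceleration method.

```python
import numpy as np

def pseudo_accel_spectrum(ag, dt, periods, zeta=0.05):
    """5%-damped pseudo-acceleration spectrum via Newmark average acceleration."""
    beta, gamma = 0.25, 0.5
    Sa = np.empty(len(periods))
    for j, T in enumerate(periods):
        wn = 2 * np.pi / T
        k, c = wn ** 2, 2 * zeta * wn                  # unit-mass SDOF
        khat = k + gamma * c / (beta * dt) + 1 / (beta * dt ** 2)
        u = v = umax = 0.0
        a = -ag[0]                                     # initial acceleration
        for p in -ag[1:]:                              # effective force = -ag
            phat = (p + (u / (beta * dt ** 2) + v / (beta * dt)
                         + (0.5 / beta - 1) * a)
                    + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                           + dt * (0.5 * gamma / beta - 1) * a))
            unew = phat / khat
            vnew = (gamma / (beta * dt) * (unew - u)
                    + (1 - gamma / beta) * v
                    + dt * (1 - 0.5 * gamma / beta) * a)
            a = (unew - u) / (beta * dt ** 2) - v / (beta * dt) - (0.5 / beta - 1) * a
            u, v = unew, vnew
            umax = max(umax, abs(u))
        Sa[j] = wn ** 2 * umax                         # pseudo-acceleration
    return Sa

dt = 0.01
t = np.arange(0, 20, dt)
ag = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t)    # synthetic record
ag *= 0.35 * 9.81 / np.abs(ag).max()                   # scale to PGA = 0.35 g
print("PGA (g):", np.abs(ag).max() / 9.81)
Sa = pseudo_accel_spectrum(ag, dt, np.linspace(0.05, 3.0, 60))
print("peak Sa (m/s^2):", Sa.max())
```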

57 Challenging the Stereotypes: A Critical Study of Chotti Munda and His Arrow and Sula

Authors: Khushboo Gokani, Renu Josan

Abstract:

Mahasweta Devi and Toni Morrison are two stalwarts of Indian English and Afro-American literature, respectively. The writings of these two novelists are authentic and powerful records of the lives of the people, because much of their personal experience has gone into the making of their works. Devi, a representative force of Indian English literature, is also a social activist working with the tribals of Bihar, Jharkhand, Orissa and West Bengal. Most of her works echo the lives and struggles of the subalterns, as is evident in her “best beloved book” Chotti Munda and His Arrow. The novelist focuses on the struggle of the tribals against the colonial and the feudal powers to create their own identity, thereby embarking on the ideological project of ‘setting the record straight’. The Nobel Laureate Toni Morrison, on the other hand, brings to the fore the crucial issues of gender, race and class in many of her significant works. In one of her representative works, Sula, the protagonist emerges as a non-conformist and directly confronts the notion of a ‘good woman’ nurtured by the community of the Blacks. In addition to this, the struggle of the Blacks against White domination also becomes an important theme of the text. The thrust of the paper lies in making a critical analysis of the portrayal of the heroic attempts of the subaltern protagonists and the artistic endeavor of the novelists in challenging the stereotypes.

Keywords: Subaltern, the centre and the periphery, struggle of the muted groups.

56 Web Content Mining: A Solution to Consumer's Product Hunt

Authors: Syed Salman Ahmed, Zahid Halim, Rauf Baig, Shariq Bashir

Abstract:

With the rapid growth in business size, today's businesses orient towards electronic technologies; Amazon.com and eBay.com are some of the major stakeholders in this regard. Unfortunately, the enormous size and hugely unstructured nature of data on the web, even for a single commodity, has become a cause of ambiguity for consumers. Extracting valuable information from such ever-increasing data is an extremely tedious task and is fast becoming critical to the success of businesses. Web content mining can play a major role in solving these issues. It involves using efficient algorithmic techniques to search and retrieve desired information from seemingly unsearchable, unstructured data on the Internet. Applications of web content mining can be very encouraging in the areas of customer relations modeling, billing records, logistics investigations, product cataloguing and quality management. In this paper we present a review of some interesting, efficient yet implementable techniques from the field of web content mining and study their impact on business user needs, focusing on both the customer and the producer. The techniques reviewed include mining by developing a knowledge-base repository of the domain, iterative refinement of user queries for personalized search, a graph-based approach for the development of a web crawler, and filtering information for personalized search using website captions. These techniques are analyzed and compared on the basis of their execution time and the relevance of the results they produce for a particular search.

Keywords: Data mining, web mining, search engines, knowledge discovery.
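
Of the techniques listed, caption-based filtering for personalized search is the simplest to sketch. The snippet below is an illustrative assumption (toy captions, TF-IDF cosine ranking), not the authors' implementation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical website captions for a single commodity search.
captions = [
    "Canon EOS digital camera 24 MP, great low-light performance",
    "Laptop backpack, waterproof, fits 15 inch notebooks",
    "Mirrorless camera body with kit lens, beginner friendly",
]
query = ["digital camera for beginners"]

vec = TfidfVectorizer(stop_words="english")
doc_matrix = vec.fit_transform(captions)          # caption term weights
scores = cosine_similarity(vec.transform(query), doc_matrix).ravel()

# Return captions ranked by relevance to the user's query.
for score, caption in sorted(zip(scores, captions), reverse=True):
    print(f"{score:.2f}  {caption}")
```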

55 Necrotising Anterior Scleritis and Scleroderma: A Rare Association

Authors: A. Vassila, D. Kalogeropoulos, R. Rawashdeh, N. Hall, N. Rahman, M. Fabian, S. Thulasidharan, H. Parwez

Abstract:

Necrotising scleritis is a severe form of scleritis and poses a significant threat to vision. It can manifest in various systemic autoimmune disorders, systemic vasculitis, or as a consequence of microbial infections. The objective of this study is to present a case of necrotising scleritis associated with scleroderma, which was further complicated by a secondary Staphylococcus epidermidis infection. This retrospective analysis examines the medical records of a patient who was hospitalised in the Eye Unit at University Hospital Southampton. A 78-year-old woman presented at the eye casualty department of our unit with a two-week history of progressively worsening pain in her left eye. She received a diagnosis of necrotising scleritis and was admitted to the hospital for further treatment. A three-day course of intravenous methylprednisolone was commenced, followed by a tapering regimen of oral steroids. Additionally, a conjunctival swab was taken, and two days later it revealed the presence of S. epidermidis, indicating a potential secondary infection. Given this finding, she was also prescribed topical (Ofloxacin 0.3%, four times daily) and oral (Ciprofloxacin 750 mg, twice daily) antibiotics. The inflammation and symptoms gradually improved, and the patient was scheduled for a scleral graft with application of an amniotic membrane to cover the area of scleral thinning. Rheumatoid arthritis and granulomatosis with polyangiitis are the most commonly identifiable systemic diseases associated with necrotising scleritis. Although the association with scleroderma is extremely rare, early identification and treatment are necessary to prevent scleritis-related complications.

Keywords: Scleritis, necrotizing scleritis, scleroderma, autoimmune disease.

54 Developing Electronic Medical Record System to Enhance the Satisfaction of Patients and Service Providers

Authors: Siham Jemal Kedir

Abstract:

Information communication technology is dramatically transforming the health sector, especially in developing countries with few resources and burgeoning access to internet connections. Processes such as record keeping, administration, and human resources can be vastly simplified, allowing hospitals to focus on delivering urgent medical care. This paper explores the impact of IT through a study of the electronic medical record system in the Mekelle City Health Center in Tigray Region, Ethiopia. The paper has four specific objectives: 1. developing artifacts for the Electronic Medical Record system, 2. preparing a diagram for step-by-step development of Electronic Medical Records, 3. creating a draft website for the proposed Electronic Medical Record system, and 4. testing and evaluating the performance and user acceptance of the system. The research was conducted qualitatively, employing interviews and in-person observation. It found the following major results: first, the medical record system has been difficult to implement; second, the Mekelle Health Center uses a manual recording system which is time-consuming and inefficient, and which leads to the dissatisfaction of patients as well as the service provider staff. As a result, to transform the manual recording system into a digital one, an electronic medical recording system was developed. The developed system has been tested for implementation and has been successful. Consequently, the administrator of the health center is ready to implement and use the developed software to introduce a medical recording system in Mekelle Health Center.

Keywords: Electronic Health Record Implementation, EMR System Development, Medical Record.

53 Bayes Net Classifiers for Prediction of Renal Graft Status and Survival Period

Authors: Jiakai Li, Gursel Serpen, Steven Selman, Matt Franchetti, Mike Riesen, Cynthia Schneider

Abstract:

This paper presents the development of a Bayesian belief network classifier for prediction of graft status and survival period in renal transplantation using patient profile information prior to the transplantation. The objective was to explore the feasibility of developing a decision-making tool for identifying the most suitable recipient among the candidate pool members. The dataset was compiled from University of Toledo Medical Center Hospital patients as reported to the United Network for Organ Sharing (UNOS), and comprised 1228 patient records for the period 1987 through 2009. The Bayes net classifiers were developed using the Weka machine learning software workbench. Two separate classifiers were induced from the dataset: one to predict the status of the graft as either failed or living, and a second to predict the graft survival period. The classifier for graft status prediction performed very well, with a prediction accuracy of 97.8% and true positive rates of 0.967 and 0.988 for the living and failed classes, respectively. The second classifier, for the graft survival period, yielded a prediction accuracy of 68.2% and a true positive rate of 0.85 for the class representing kidneys failing during the first year following transplantation. Simulation results indicated that it is feasible to develop a successful Bayesian belief network classifier for prediction of graft status, but not of the graft survival period, using the information in the UNOS database.

Keywords: Bayesian network classifier, renal transplantation, graft survival period, United Network for Organ Sharing
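
The paper induced its classifiers in Weka; as a loose Python analogue (an assumption, not the authors' code), the sketch below trains a naive Bayes classifier on synthetic pre-transplant profiles and reports accuracy plus the per-class true positive rates quoted in the abstract:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
n = 1228                                   # record count matching the abstract
X = rng.normal(size=(n, 8))                # stand-in patient profile features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("accuracy:", accuracy_score(y_te, pred))
# Recall per class = true positive rate for 'living' (1) and 'failed' (0).
print("TPR living:", recall_score(y_te, pred, pos_label=1))
print("TPR failed:", recall_score(y_te, pred, pos_label=0))
```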

52 Hacking the Spatial Limitations in Bridging Virtual and Traditional Teaching Methodologies in Sri Lanka

Authors: Manuela Nayantara Jeyaraj

Abstract:

Having moved into the 21st century, it is no longer arguable that innovative technology needs to be incorporated into conventional classroom teaching. Though the Western world has found considerable success in achieving this, it is still a contested concept in developing countries such as Sri Lanka, where implementing interactive virtual learning within classrooms remains a distant ideal. To overcome this problem, this study sets out to reveal the factors that limit the implementation of virtual, interactive learning within school classrooms and to provide workarounds ('hacks') that could promote greater use of the virtual world to enhance teaching and learning experiences. As each classroom moves towards using technology to fulfill its functions, a few targeted hacks will shift administrative burdens onto a virtual system. These hacks may expose barriers rooted in social conventions, financial constraints, digital literacy, and the intellectual capacity of the staff; highlight the impediments to introducing students to an interactive virtual learning environment; and thereby indicate the actions or changes needed to succeed in building an intellectual society founded on virtual learning and lifestyle. This digital learning environment would comprise multimedia presentations, trivia and pop quizzes conducted through a GUI, assessments conducted via a virtual system, records maintained in a database, and so on. Ultimately, this study could enhance every child's basic learning environment and thereby diminish the digital divide that exists in certain communities.

Keywords: Digital divide, digital learning, digitization, Sri Lanka, teaching methodologies.

51 An Exploratory Case Study of the Interference of Erotic Transference in the Longevity of Psychoanalytic Treatment

Authors: M. Javid, R. Hassan, J. DeSilva

Abstract:

In this exploratory case study, a 37-year-old male patient who had previously terminated treatment after four months of therapy with a different therapist begins anew with a 38-year-old female therapist and undergoes a similar cycle of premature termination, with added complications arising from erotic transference. Process notes and records of the therapy indicate that during the short course of treatment, the patient explored his difficulties navigating personal relationships, both current and past, and his difficulties coping with hypochondriasis. The therapist is tasked not only with navigating the patient's inner conflict but also with how she relates to the patient in the countertransference process while maintaining professional boundaries. This includes empathizing with the patient while also experiencing discomfort in the erotic transference from a professional standpoint. When the patient terminates once more, the therapist reflects on the possible reasons for termination. These include the patient's difficulty tolerating interpretations, which caused him to blame himself for past events; the interpretations were also very frequent, adding to the emotional burden the patient experienced. The therapist reflected on the use of interpretation versus exploration of the patient's feelings, and on how exploring his feelings, including his feelings towards her, would have opened an opportunity to examine more deeply the emotions that troubled him, among them the anger and fear stemming from unresolved conflicts of his childhood. Moreover, the erotic transference served as an enactment of previous experiences in which the patient feared losing what he loved, leading him to opt for premature termination rather than lose his ability to control the relationship and experience loss.

Keywords: Countertransference, erotic transference, premature termination, therapist-client boundaries, transference.

50 A Three-Element Vector-Valued Structure's Ultimate Strength-Strong Motion-Intensity Measure

Authors: A. Nicknam, N. Eftekhari, A. Mazarei, M. Ganjvar

Abstract:

This article presents an alternative collapse capacity intensity measure in three-element form, which is influenced by the spectral ordinates at periods longer than the first-mode period at near- and far-source sites. A parameter, denoted by β, is defined by which the effects of spectral ordinates up to the effective period (2T1) on the intensity measure are taken into account. The methodology permits meeting the hazard-level target extreme event in both probabilistic and deterministic forms. A MATLAB code involving OpenSees was developed to calculate the collapse capacities of 8 archetype RC structures of 2 to 20 stories for the regression process. The incremental dynamic analysis (IDA) method is used to calculate the structures' collapse values, accounting for element stiffness and strength deterioration. The general near-field set presented by FEMA is used in a series of nonlinear analyses. Eight linear relationships are developed for the 8 structures, leading to correlation coefficients up to 0.93. A collapse capacity near-field prediction equation is developed taking into account the results of the regression processes obtained from the 8 structures. The proposed prediction equation is validated against a set of actual near-field records, showing good agreement. Application of the proposed equation to the four archetype RC structures demonstrated collapse capacities at near-field sites different from those of FEMA. The differences are believed to be due to accounting for the spectral shape effects.

Keywords: Collapse capacity, fragility analysis, spectral shape effects, IDA method.
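
The averaging idea behind such an intensity measure, spectral ordinates taken out to the effective period 2T1, can be sketched as below; the paper's exact three-element vector form and its β parameter are not reproduced, and the spectrum here is hypothetical:

```python
import numpy as np

def avg_sa(periods, sa, T1, factor=2.0):
    """Geometric mean of Sa(T) over the period band [T1, factor * T1]."""
    mask = (periods >= T1) & (periods <= factor * T1)
    return float(np.exp(np.mean(np.log(sa[mask]))))

periods = np.linspace(0.05, 4.0, 200)           # spectrum abscissa (s)
sa = 1.2 * np.exp(-0.8 * periods) + 0.05        # hypothetical Sa in g
print("IM for T1 = 1.0 s:", avg_sa(periods, sa, T1=1.0))
```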

49 Development of a Wall Climbing Robotic Ground Penetrating Radar System for Inspection of Vertical Concrete Structures

Authors: Md Omar Faruq Howlader, Tariq Pervez Sattar, Sandra Dudley

Abstract:

This paper describes the design process of a 200 MHz Ground Penetrating Radar (GPR) and a battery-powered, vertical concrete surface climbing mobile robot. The key design feature is a miniaturized 200 MHz dipole antenna using additional radiating arms, a procedure that achieves a 40% reduction in length compared to a conventional antenna. The antenna set is mounted in front of the robot using a servo mechanism for folding and unfolding purposes. The robot's adhesion mechanism for climbing the reinforced concrete wall is based on neodymium permanent magnets arranged in a unique combination to concentrate and maximize the magnetic flux, providing sufficient adhesion force for GPR installation. The experiments demonstrated the robot's capability of climbing a reinforced concrete wall while carrying the attached prototype GPR system and performing floor-to-wall transitions and vice versa. The developed GPR's performance is validated by its capability of detecting and localizing an aluminium sheet and a reinforcement bar (rebar) of 12 mm diameter buried under a test rig built of wood to mimic the concrete structure environment. The present robotic GPR system proves the feasibility of undertaking inspection procedures on large concrete structures in hazardous environments that may not be accessible to human inspectors.

Keywords: Climbing robot, dipole antenna, Ground Penetrating Radar (GPR), mobile robots, robotic GPR.

48 Degradation of Heating, Ventilation, and Air Conditioning Components across Locations

Authors: Timothy E. Frank, Josh R. Aldred, Sophie B. Boulware, Michelle K. Cabonce, Justin H. White

Abstract:

Materials degrade at different rates in different environments depending on factors such as temperature, aridity, salinity, and solar radiation. Therefore, predicting asset longevity depends, in part, on the environmental conditions to which the asset is exposed. Heating, ventilation, and air conditioning (HVAC) systems are critical to building operations yet are responsible for a significant proportion of their energy consumption. HVAC energy use increases substantially with slight operational inefficiencies. Understanding the environmental influences on HVAC degradation in detail will inform maintenance schedules and capital investment, reduce energy use, and increase lifecycle management efficiency. HVAC inspection records spanning 14 years from 21 locations across the United States were compiled and associated with the climate conditions to which they were exposed. Three environmental features were explored in this study: average high temperature, average low temperature, and annual precipitation, as well as four non-environmental features. Initial insights showed no correlations between individual features and the rate of HVAC component degradation. Using neighborhood component analysis, however, the most critical features related to degradation were identified. Two models were considered, and results varied between them. However, longitude and latitude emerged as potentially the best predictors of average HVAC component degradation. Further research is needed to evaluate additional environmental features, increase the resolution of the environmental data, and develop more robust models to achieve more conclusive results.

Keywords: Climate, infrastructure degradation, HVAC, neighborhood component analysis.
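
Neighborhood component analysis is available in scikit-learn; a minimal sketch on synthetic data (hypothetical feature names, degradation binned into classes because NCA is a supervised metric learner) shows how feature relevance can be read off the learned transform:

```python
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
features = ["avg_high_temp", "avg_low_temp", "precip", "latitude", "longitude"]
X = rng.normal(size=(400, len(features)))
# Synthetic degradation driven mostly by the last two columns (lat/long),
# mirroring the abstract's tentative finding.
rate = 0.2 * X[:, 3] + 0.3 * X[:, 4] + rng.normal(scale=0.3, size=400)
y = np.digitize(rate, np.quantile(rate, [0.33, 0.66]))   # 3 degradation classes

nca = NeighborhoodComponentsAnalysis(n_components=2, random_state=0)
nca.fit(StandardScaler().fit_transform(X), y)

# Column norms of the learned transform give a rough feature-relevance score.
relevance = np.linalg.norm(nca.components_, axis=0)
for name, r in sorted(zip(features, relevance), key=lambda t: -t[1]):
    print(f"{name:>14}: {r:.2f}")
```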

47 Detecting Fake News: A Natural Language Processing, Reinforcement Learning, and Blockchain Approach

Authors: Ashly Joseph, Jithu Paulose

Abstract:

In an era where misleading information can quickly circulate on digital news channels, it is crucial to have efficient and trustworthy methods to detect and reduce the impact of misinformation. This research proposes an innovative framework that combines Natural Language Processing (NLP), Reinforcement Learning (RL), and blockchain technologies to precisely detect and minimize the spread of false information in news articles on social media. The framework starts by gathering a variety of news items from different social media sites and preprocessing the data to ensure its quality and uniformity. NLP methods are utilized to extract comprehensive linguistic and semantic characteristics, effectively capturing the subtleties and contextual aspects of the language used. These features are utilized as input to an RL model, which learns the most effective tactics for detecting and mitigating the impact of false material by modeling the intricate dynamics of user engagements and incentives on social media platforms. The integration of blockchain technology establishes a decentralized and transparent method for storing and verifying the accuracy of information. The blockchain component guarantees the immutability and security of verified news records, while encouraging user engagement in detecting and fighting false information through a token-based incentive system. The proposed framework seeks to provide a thorough and resilient solution to the problems presented by misinformation in social media articles.

Keywords: Natural Language Processing, Reinforcement Learning, Blockchain, fake news mitigation, misinformation detection.
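
Only the NLP stage lends itself to a compact sketch; the RL policy and blockchain ledger are not shown. The toy texts and labels below are hypothetical:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Celebrity secretly replaced by clone, insiders claim",
    "Central bank raises interest rate by 25 basis points",
    "Miracle fruit cures all known diseases overnight",
    "City council approves new public transport budget",
]
labels = [1, 0, 1, 0]                      # 1 = fake, 0 = legitimate

# TF-IDF features feeding a linear classifier: a baseline stand-in for the
# richer linguistic/semantic feature extraction the framework describes.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["Scientists discover vaccine turns people magnetic"]))
```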

46 The Results of the Fetal Weight Estimation of the Infants Delivered in the Delivery Room at Dan Khunthot Hospital by Johnson's Method

Authors: Nareelux Suwannobol, JintanaTapin, Khuanchanok Narachan

Abstract:

The objective of this study was to determine the accuracy of fetal weight estimation by Johnson's method and to compare it with actual birth weight. The sample group comprised 126 infants delivered in Dan Khunthot hospital from January to March 2012. Fetal weight was estimated by measuring fundal height according to Johnson's method. The information was collected from historical delivery records and analyzed using the statistics of frequency, percentage, mean, and standard deviation; the difference was analyzed by a paired t-test. The results showed an average actual birth weight of 3093.57 ± 391.03 g (mean ± SD), while the average fetal weight estimated by Johnson's method was 3,455 ± 454.55 g, exceeding the average actual birth weight by 384.09 g. When the infants were classified according to birth weight, actual birth weight was less than the estimated fetal weight for the low birth weight (< 2500 g) and appropriate birth weight (2500-3999 g) groups, but greater than the estimated fetal weight for the high birth weight (> 4000 g) group. The smallest differences between actual and estimated weight were found in the high birth weight (> 4000 g), appropriate birth weight (2500-3999 g) and low birth weight (< 2500 g) groups, respectively. The estimated fetal weight fell within 10% of the actual birth weight in 35.7% of cases, and the difference between actual and estimated weight was statistically significant (p < .001). Johnson's method can provide an initial fetal weight estimate before proceeding to special examinations, which may involve excessive cost. A variety of methods should be employed to estimate fetal weight more precisely, which will help plan care for mothers' and infants' safety.

Keywords: Johnson's method, fetal weight estimation, delivery room, student nurse.
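
Johnson's formula is commonly stated as estimated fetal weight (g) = (fundal height in cm - n) x 155, with n depending on the station of the fetal head; the paper's exact convention is not given in the abstract, so the sketch below is hedged accordingly:

```python
# Common convention (an assumption, not necessarily the study's): n is about
# 11 when the head is below the ischial spines, 12 at the spines, and 13 when
# floating above them.
def johnson_estimate(fundal_height_cm: float, station_offset: int = 12) -> float:
    """Estimated fetal weight in grams by Johnson's method."""
    return (fundal_height_cm - station_offset) * 155.0

# Example: fundal height 34 cm with the head at the level of the spines.
print(johnson_estimate(34))                     # -> 3410.0 g
# Percent-error check of the kind the study audited (within 10% of actual):
actual = 3200.0
est = johnson_estimate(34)
print(abs(est - actual) / actual * 100 <= 10)   # -> True
```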

45 Data Compression in Ultrasonic Network Communication via Sparse Signal Processing

Authors: Beata Zima, Octavio A. Márquez Reyes, Masoud Mohammadgholiha, Jochen Moll, Luca De Marchi

Abstract:

This paper presents an approach to signal encoding and information transfer within a guided wave sensor network, comprised of specially designed frequency steerable acoustic transducers (FSATs), using compressed sensing. Wave propagation in a damaged plate was simulated using the commercial FEM-based software COMSOL. Guided waves were excited by means of FSATs, characterized by the special shape of their electrodes, and modeled using PIC255 piezoelectric material. The special shape of the FSAT allows wave energy to be focused in a certain direction according to the frequency components of its actuation signal, which makes a larger monitored area available. The process begins when an FSAT detects and records a reflection from damage in the structure; this signal is then encoded and prepared for transmission using a combined approach based on compressed sensing matching pursuit and Quadrature Amplitude Modulation (QAM). Once the signal is encoded in binary form, the information is transmitted between the nodes in the network until the message reaches the last node, where it is finally decoded and processed for damage detection and localization purposes. The main aim of the investigation is to determine the location of detected damage using the reconstructed signals. The study demonstrates that the special steering capabilities of FSATs not only facilitate the detection of damage but also permit transmitting the damage information to a chosen area in a specific direction of the investigated structure.

Keywords: Data compression, ultrasonic communication, guided waves, FEM analysis.
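
The compressed-sensing recovery step can be sketched with a plain orthogonal matching pursuit on synthetic data; the QAM modulation and the network hops described above are not sketched:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x with y ~ A @ x."""
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))   # most correlated atom
        sub = A[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        r = y - sub @ coef                             # update residual
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                     # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m) # random sensing matrix
y = A @ x_true                           # compressed measurements
x_hat = omp(A, y, k)
print("max reconstruction error:", np.abs(x_hat - x_true).max())
```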

44 Cirrhosis Mortality Prediction as Classification Using Frequent Subgraph Mining

Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride

Abstract:

In this work, we use machine learning and data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models are applied to predict one-year mortality, analyzing a comprehensive feature space including demographic information, comorbidities, clinical procedures and laboratory tests. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used, and the Model for End-stage Liver Disease (MELD) prediction of mortality serves as a comparator. All of our models statistically significantly outperform the MELD-score model, showing an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, but FSM together with a machine learning technique called an ensemble further improves model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model alone does not yield significant improvements. Our work applies modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients and builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.

Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning
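
The evaluation pattern the abstract reports, a single-score baseline versus an ensemble compared by AUC, can be sketched as follows. The data are synthetic and the baseline is only a stand-in for MELD:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2322                                  # cohort size quoted in the abstract
meld = rng.uniform(6, 40, size=n)         # stand-in MELD-like score
extra = rng.normal(size=(n, 10))          # comorbidity / lab / procedure feats
logit = 0.08 * (meld - 20) + 0.6 * extra[:, 0] + 0.4 * extra[:, 1]
death = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([meld, extra])
X_tr, X_te, y_tr, y_te = train_test_split(X, death, random_state=0)

print("baseline AUC:", roc_auc_score(y_te, X_te[:, 0]))   # score alone
ens = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("ensemble AUC:", roc_auc_score(y_te, ens.predict_proba(X_te)[:, 1]))
```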

43 Traumatic Ankle Pain: Adequacy of Clinical Information in X-Ray Request with Reference to the Ottawa Ankle Rule

Authors: Rania Mustafa

Abstract:

This audit was conducted at Manchester University NHS Foundation Trust, Wythenshawe Hospital Radiology and Accident and Emergency [A&E] Department to assess the appropriateness of clinical information in X-ray requests, specifically in cases of acute ankle injuries. As per the Ottawa Ankle Rules and the recommendations of the National Institute for Health and Care Excellence [NICE] and the Royal College of Radiology, we aimed to evaluate the appropriateness of referrals and the thoroughness of clinical information provided by Emergency Department [ED] clinicians for ankle radiography, with the goal of achieving 100% compliance with these guidelines. The audit involved a comprehensive analysis spanning the period from August 2022 to January 2023, encompassing patient records, radiographic orders, and clinical assessments. Data collection included patient demographics, presenting complaints, clinical assessments, adherence to Ottawa Ankle Rules criteria, and subsequent radiography orders. Two audit cycles were conducted, involving 38 patients in the first cycle and 86 patients in the second cycle. The data were further filtered to include all patients referred from the ED for an ankle X-ray with a history of acute trauma and age over 18 years. The key finding was that in August 2022, 60% of cases met the Ottawa Ankle Rules criteria accurately, indicating a need for improvement in adherence. By January 2023, however, there was a notable improvement, with 95% of cases accurately meeting the criteria. This significant change reflects increased alignment with best practices for ankle radiography referrals.

Keywords: Ankle, injuries, Ottawa Ankle Rule, X-rays.
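
The audit's reference standard, the Ottawa Ankle Rules, reduces to a short predicate: an ankle X-ray series is indicated only with malleolar-zone pain plus bone tenderness at the posterior edge or tip of either malleolus, or inability to bear weight for four steps:

```python
def ankle_xray_indicated(malleolar_pain: bool,
                         lateral_malleolus_tenderness: bool,
                         medial_malleolus_tenderness: bool,
                         cannot_bear_weight_4_steps: bool) -> bool:
    """Ottawa Ankle Rules decision for an ankle X-ray series."""
    return malleolar_pain and (lateral_malleolus_tenderness
                               or medial_malleolus_tenderness
                               or cannot_bear_weight_4_steps)

# Example referral check of the kind the audit reviewed:
print(ankle_xray_indicated(True, False, False, True))   # -> True
```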

42 Improvement of Overall Equipment Effectiveness through Total Productive Maintenance

Authors: S. Fore, L. Zuze

Abstract:

Frequent machine breakdowns, low plant availability and increased overtime are a great threat to a manufacturing plant, as they increase the operating costs of an industry. The main aim of this study was to improve Overall Equipment Effectiveness (OEE) at a manufacturing company through the implementation of innovative maintenance strategies. A case study approach was used; the paper focuses on improving maintenance in a manufacturing setup using an innovative maintenance regime mix to improve overall equipment effectiveness. Interviews, reviews of documentation and historical records, and direct and participatory observation were used as data collection methods during the research. Production is usually measured by the total kilowatts of motors produced per day; the target at 91% availability is 75 kilowatts a day. Reduced demand and lack of raw materials, particularly imported items, are adversely affecting the manufacturing operations. The company had to reset its targets from the usual figure of 250 kilowatts per day to a mere 75 per day due to lower availability of machines as a result of breakdowns, as well as lack of raw materials; price reductions and uncertainties further lowered production. Some recommendations were given. For instance, employee empowerment in the company will enhance the responsibility and authority needed to improve and totally eliminate the six big losses. If the maintenance department is to realise its proper function in a progressive, innovative industrial society, then its personnel must be continuously trained to meet current needs as well as future requirements. To make the maintenance planning system effective, it is essential to keep track of all corrective maintenance jobs and preventive maintenance inspections; for large processing plants these cannot be handled manually. It was therefore recommended that the company implement a Computerised Maintenance Management System (CMMS).

Keywords: Maintenance, Manufacturing, Overall Equipment Effectiveness
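
OEE, the study's target metric, is conventionally the product of availability, performance, and quality. The worked numbers below reuse the figures quoted in the abstract where available and assume the rest:

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness as the standard three-factor product."""
    return availability * performance * quality

availability = 0.91          # uptime fraction quoted in the abstract
performance = 75 / 250       # actual vs. usual daily kilowatt output
quality = 0.98               # assumed good-output ratio (not in the abstract)
print(f"OEE = {oee(availability, performance, quality):.1%}")   # ~26.8%
```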

41 Seismic Vulnerability of Structures Designed in Accordance with the Allowable Stress Design and Load Resistant Factor Design Methods

Authors: Mohammadreza Vafaei, Amirali Moradi, Sophia C. Alih

Abstract:

The method selected for the design of structures affects not only their seismic vulnerability but also their construction cost. For the design of steel structures, two distinct methods have been introduced by existing codes, namely allowable stress design (ASD) and load resistant factor design (LRFD). This study investigates the effect of using these design methods on the seismic vulnerability and construction cost of steel structures. Specifically, a 20-story building equipped with a special moment resisting frame and an eccentrically braced system was selected for this study. The building was designed for three different intensities of peak ground acceleration, namely 0.2 g, 0.25 g, and 0.3 g, using the ASD and LRFD methods. The required sizes of beams, columns, and braces were obtained using response spectrum analysis. The designed frames were then subjected to nine natural earthquake records scaled to the design response spectrum. For each frame, the base shear, story shears, and inter-story drifts were calculated and compared. Results indicated that the LRFD method led to a more economical design for the frames. In addition, the LRFD method resulted in lower base shears and larger inter-story drifts when compared with the ASD method. It was concluded that the application of the LRFD method not only reduced the weights of structural elements but also provided a higher safety margin against seismic actions when compared with the ASD method.

Keywords: Allowable stress design, load resistant factor design, nonlinear time history analysis, seismic vulnerability, steel structures.
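
The comparison metric reported above, inter-story drift, is computed from story displacements; a minimal sketch with hypothetical analysis results and an assumed story height:

```python
import numpy as np

story_disp = np.array([0.00, 0.012, 0.028, 0.047, 0.060])  # meters, roof last
story_height = 3.2                                          # meters, assumed

drift = np.diff(story_disp)                 # relative displacement per story
drift_ratio = drift / story_height          # inter-story drift ratio
print("max inter-story drift ratio: %.4f" % drift_ratio.max())
```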

40 Comparative Usability Study of the Websites of Top Universities in Three Continents: A Case Study of the University of Cape Town, Oxford University, and Harvard University

Authors: Stephen Akuma, Racheal Aluma, Abraham Undu

Abstract:

Academic websites play an important role in promoting education for all. They allow universities to provide users with digital academic services to save time and resources. A university website is not only a cost-effective and timely way to communicate with a variety of stakeholders, such as students, faculty, and visitors, but also a vehicle for the university to shape its image. The quality of a website is a major factor that universities consider in cyberspace: potential students can easily apply to universities whose websites provide useful and clear information. This has made website usability an important area in meeting the needs and expectations of users. In this paper, a comparative usability study of the University of Cape Town, Oxford University, and Harvard University academic websites (http://www.uct.ac.za/, https://www.ox.ac.uk/, and https://www.harvard.edu/) was carried out. The proactive user feedback technique was adopted for the comparative usability assessment, allowing the researchers to collect and log records from the participants in real time. The results show that the average dwell times on the websites of Harvard University, Oxford University, and Cape Town University for the three tasks were 51.58, 33.28, and 54.82 seconds, respectively. The System Usability Scale (SUS) scores for Harvard, Oxford, and the University of Cape Town were 49.81, 69.43, and 54.14, respectively. An analysis of variance on the dwell time data shows a significant difference (p = .009) across the three websites. Our findings show that Oxford University has the most usable website in terms of usability factors and other metrics among the websites investigated. Practical implications are highlighted, and recommendations for improved website usability are suggested.

Keywords: Usability factors, user feedback, university websites, University of Cape Town, Harvard University, Oxford University.
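
SUS scoring follows a fixed recipe (odd items contribute score - 1, even items 5 - score, the sum scaled by 2.5), and the dwell-time ANOVA can be reproduced with scipy; the responses and timings below are made up:

```python
import numpy as np
from scipy.stats import f_oneway

def sus_score(responses):
    """SUS score from ten 1-5 Likert responses (item 1 first)."""
    odd = sum(r - 1 for r in responses[0::2])    # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])   # items 2, 4, 6, 8, 10
    return 2.5 * (odd + even)

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))   # one participant -> 80.0

# One-way ANOVA over per-task dwell times (seconds) on the three sites:
harvard = np.array([50.1, 53.4, 51.2])
oxford = np.array([32.8, 34.1, 33.0])
cape_town = np.array([55.3, 54.0, 55.1])
print(f_oneway(harvard, oxford, cape_town).pvalue)
```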

39 Influence of Environment-Friendly Organic Wastes on the Properties of Sandy Soil under Growing Zea mays L. in Arid Regions

Authors: Mohamed Rashad, Mohamed Hafez, Mohamed Emran, Emad Aboukila, Ibrahim Nassar

Abstract:

Environment-friendly organic wastes of brewers' spent grain, a byproduct of the brewing process, have recently been used as a soil amendment to improve soil fertility and plant production. In this work, treatments of 1% (T1) and 2% (T2) spent grains, 1% (C1) and 2% (C2) compost, and a mix of both sources (C1T1) were used and compared to the control for growing Zea mays L. on sandy soil under an arid Mediterranean climate. Soils were previously incubated at 65% saturation capacity for a month, and the most relevant soil physical and chemical parameters were analysed. Water holding capacity and soil organic matter (OM) increased significantly along the treatments, with the highest values in T2. Soil pH decreased along the treatments, with the lowest pH in C1T1. Bicarbonate decreased by 69% in C1T1 compared to the control. Total nitrogen (TN) and available P varied significantly among all treatments: the T2, C1T1 and C2 treatments showed 25-, 17- and 11-fold increases in TN and 1.2-, 0.6- and 0.3-fold increases in P, respectively, relative to the control. Available K showed the highest values in C1T1. Soil micronutrients increased significantly along all treatments, with the highest values in T2. After corn germination, significant variation was observed in the velocity of germination coefficients (VGC) among all treatments, in the order C1T1 > T2 > T1 > C2 > C1 > control. The highest final germination and germination index values were recorded in C1T1 and T2. The spent grains may compensate for deficiencies of macro- and micronutrients in newly reclaimed sandy soils without adverse effects, sustaining crop production, with the caveat that excessive or continuous use should be avoided.

Keywords: Spent grain, compost, micronutrients, macronutrients, water holding capacity, plant growth.
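
The germination statistics compared above have common textbook formulations, sketched below; conventions vary, so these equations are an assumption rather than necessarily the authors' exact ones:

```python
# Common formulations:
#   final germination %        = 100 * germinated / sown
#   coefficient of velocity    = 100 * sum(N_i) / sum(N_i * t_i)
#   germination index          = sum(N_i / t_i)
# where N_i is the number of seeds newly germinated on day t_i.
daily_counts = {1: 0, 2: 4, 3: 9, 4: 5, 5: 2}   # day -> newly germinated seeds
sown = 25

germinated = sum(daily_counts.values())
fgp = 100 * germinated / sown
cvg = 100 * germinated / sum(n * t for t, n in daily_counts.items())
gi = sum(n / t for t, n in daily_counts.items())
print(f"FGP={fgp:.0f}%  CVG={cvg:.1f}  GI={gi:.2f}")
```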

38 Influence of Local Soil Conditions on Optimal Load Factors for Seismic Design of Buildings

Authors: Miguel A. Orellana, Sonia E. Ruiz, Juan Bojórquez

Abstract:

Optimal load factors (dead, live and seismic) used for the design of buildings may differ depending on the characteristics of the seismic ground motions to which the buildings are subjected, which are closely related to the type of soil at the site. The influence of the type of soil on those load factors is analyzed in the present study. A methodology is employed for establishing optimal load factors that minimize the cost over the life cycle of the structure, under the restriction that the probability of structural failure be less than or equal to a prescribed value. The life-cycle cost model used here includes different types of costs. The optimization methodology is applied to two groups of reinforced concrete buildings: one set (consisting of 4-, 7-, and 10-story buildings) located on firm ground (with a dominant period Ts = 0.5 s) and the other (consisting of 6-, 12-, and 16-story buildings) on soft soil (Ts = 1.5 s) in Mexico City. Each group of buildings is designed using different combinations of load factors. The statistics of the maximum inter-story drifts (associated with the structural capacity) are found by means of incremental dynamic analyses. The buildings located on the firm zone are analyzed under the action of 10 strong seismic records, and those on the soft zone under 13 strong ground motions; all the motions correspond to seismic subduction events of magnitude M = 6.9. The structural damage and the expected total costs corresponding to each group of buildings are then estimated. It is concluded that the optimal load factor combination for the design of buildings located on firm ground differs from that for buildings located on soft soil.

Keywords: Life-cycle cost, optimal load factors, reinforced concrete buildings, total costs, type of soil.
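
The optimization idea, choosing load factors that minimize expected life-cycle cost subject to a failure-probability cap, can be sketched with invented placeholder cost and fragility curves (not the paper's calibrated models):

```python
import numpy as np

gammas = np.linspace(1.0, 2.0, 101)            # candidate seismic load factors
initial_cost = 1.0 + 0.35 * (gammas - 1.0)     # stronger design costs more
p_fail = 0.02 * np.exp(-3.0 * (gammas - 1.0))  # placeholder fragility curve
failure_cost = 8.0                             # damage + downtime, normalized

expected_total = initial_cost + p_fail * failure_cost
feasible = p_fail <= 0.01                      # prescribed reliability limit
best = np.argmin(np.where(feasible, expected_total, np.inf))
print(f"optimal factor = {gammas[best]:.2f}, "
      f"E[cost] = {expected_total[best]:.3f}, Pf = {p_fail[best]:.4f}")
```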

37 Methane and Other Hydrocarbon Gas Emissions Resulting from Flaring in Kuwait Oilfields

Authors: Khaireyah Kh. Al-Hamad, V. Nassehi, A. R. Khan

Abstract:

Air pollution is a major environmental health problem affecting developed and developing countries around the world. Increasing amounts of potentially harmful gases and particulate matter are being emitted into the atmosphere on a global scale, resulting in damage to human health and the environment. Petroleum-related air pollutants can have a wide variety of adverse environmental impacts. In the crude oil production sector, there is a strong need for thorough knowledge of the gaseous emissions resulting from the daily flaring of associated gas of known composition through combustion activities under several operating conditions. This can help in the control of gaseous emissions from flares and thus in the protection of their immediate and distant surroundings against environmental degradation. The impacts of methane and non-methane hydrocarbon emissions from flaring activities at oil production facilities in the Kuwait Oilfields have been assessed through a screening study using records of flaring operations taken at the gas and oil production sites, and by analyzing available meteorological and air quality data measured at stations located near anthropogenic sources. In the present study, the Industrial Source Complex (ISCST3) dispersion model is used to calculate the ground-level concentrations of methane and non-methane hydrocarbons emitted due to flaring across the Kuwait Oilfields. The simulation of real hourly air quality in and around oil production facilities in the State of Kuwait for the year 2006, inserting the respective source emission data into the ISCST3 software, indicates that the levels of non-methane hydrocarbons from flaring activities exceed the allowable ambient air standard set by the Kuwait EPA. There is therefore a strong need to address this acute problem and minimize the impact of methane and non-methane hydrocarbons released from flaring activities over the urban area of Kuwait.

Keywords: Kuwait Oilfields, ISCST3 model, flaring, air pollution, methane and non-methane hydrocarbons.
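
ISCST3 is built on the steady-state Gaussian plume model; a textbook ground-level concentration sketch for a single flare treated as a point source is shown below, with crude power-law dispersion coefficients standing in for the model's stability-class tables:

```python
import numpy as np

def ground_level_conc(Q, u, x, y, H):
    """C(x, y, 0) in g/m^3 for emission Q (g/s), wind u (m/s), stack height H (m)."""
    sigma_y = 0.08 * x ** 0.9           # placeholder dispersion curves
    sigma_z = 0.06 * x ** 0.85
    lateral = np.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = 2 * np.exp(-H ** 2 / (2 * sigma_z ** 2))   # ground reflection
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical methane source 2 km directly downwind of a 30 m flare stack:
print(ground_level_conc(Q=120.0, u=4.0, x=2000.0, y=0.0, H=30.0))
```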

36 A Retrospective Cross-Sectional Study on the Prevalence and Factors Associated with Virological Non-Suppression among HIV-Positive Adult Patients on Antiretroviral Therapy in Woliso Town, Oromia, Ethiopia

Authors: Teka Haile, Behailu Hawulte, Solomon Alemayehu

Abstract:

Background: HIV virological failure remains a problem in HIV/AIDS treatment and care. This study aimed to describe the prevalence and identify the factors associated with viral non-suppression among HIV-positive adult patients on antiretroviral therapy in Woliso Town, Oromia, Ethiopia. Methods: A retrospective cross-sectional study was conducted among 424 HIV-positive patients attending antiretroviral therapy (ART) in Woliso Town during the period from August 25, 2020 to August 30, 2020. Data collected from patient medical records were entered into Epi Info version 2.3.2.1 and exported to SPSS version 21.0 for analysis. Logistic regression analysis was done to identify factors associated with viral load non-suppression, and the statistical significance of odds ratios was declared using a 95% confidence interval and p-value < 0.05. Results: A total of 424 patients were included in this study. The mean age (± SD) of the study participants was 39.88 (± 9.995) years. The prevalence of HIV viral load non-suppression was 55 (13.0%), 95% CI (9.9-16.5). Second-line ART treatment regimen (Adjusted Odds Ratio (AOR) = 8.98, 95% Confidence Interval (CI): 2.64, 30.58) and routine viral load testing (AOR = 0.01, 95% CI: 0.001, 0.02) were significantly associated with virological non-suppression. Conclusion: Virological non-suppression was high, which hinders the achievement of the third of the global 95-95-95 targets. The second-line regimen and routine viral load testing were significantly associated with virological non-suppression. This suggests the need to assess the effectiveness of antiretroviral drugs for epidemic control, and clearly shows the need to decentralize third-line ART treatment for those patients in need.

Keywords: Virological non-suppression, HIV-positive, ART, Woliso Town, Ethiopia.
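
Adjusted odds ratios of this kind come from logistic regression; a minimal statsmodels sketch on synthetic data (variable names illustrative, not the study's dataset) shows how AORs and 95% CIs are obtained:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 424                                    # sample size from the abstract
df = pd.DataFrame({
    "second_line": rng.integers(0, 2, n),  # 1 = second-line ART regimen
    "routine_vl": rng.integers(0, 2, n),   # 1 = routine viral load testing
})
logit_p = -2.0 + 2.0 * df.second_line - 1.5 * df.routine_vl
df["non_suppressed"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(df[["second_line", "routine_vl"]])
fit = sm.Logit(df["non_suppressed"], X).fit(disp=0)
print(np.exp(fit.params))        # adjusted odds ratios
print(np.exp(fit.conf_int()))    # 95% confidence intervals
```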

35 Clustering for Detection of Population Groups at Risk from Anticholinergic Medication

Authors: Amirali Shirazibeheshti, Tarik Radwan, Alireza Ettefaghian, Farbod Khanizadeh, George Wilson, Cristina Luca

Abstract:

Anticholinergic medication has been associated with events such as falls, delirium, and cognitive impairment in older patients. To further assess this, anticholinergic burden scores have been developed to quantify risk. A risk model based on clustering was deployed in a healthcare management system to cluster patients into multiple risk groups according to anticholinergic burden scores of multiple medicines prescribed to patients to facilitate clinical decision-making. To do so, anticholinergic burden scores of drugs were extracted from the literature which categorizes the risk on a scale of 1 to 3. Given the patients’ prescription data on the healthcare database, a weighted anticholinergic risk score was derived per patient based on the prescription of multiple anticholinergic drugs. This study was conducted on 300,000 records of patients currently registered with a major regional UK-based healthcare provider. The weighted risk scores were used as inputs to an unsupervised learning algorithm (mean-shift clustering) that groups patients into clusters that represent different levels of anticholinergic risk. This work evaluates the association between the average risk score and measures of socioeconomic status (index of multiple deprivation) and health (index of health and disability). The clustering identifies a group of 15 patients at the highest risk from multiple anticholinergic medication. Our findings show that this group of patients is located within more deprived areas of London compared to the population of other risk groups. Furthermore, the prescription of anticholinergic medicines is more skewed to female than male patients, suggesting that females are more at risk from this kind of multiple medication. The risk may be monitored and controlled in a healthcare management system that is well-equipped with tools implementing appropriate techniques of artificial intelligence.

Keywords: Anticholinergic medication, socioeconomic status, deprivation, clustering, risk analysis.
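
The clustering step is directly reproducible with scikit-learn's MeanShift; the sketch below builds weighted per-patient burden scores from the 1-3 scale described above, on synthetic counts:

```python
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(0)
# Each patient: number of prescribed drugs with burden scores 1, 2, 3.
counts = rng.poisson(lam=[1.2, 0.5, 0.2], size=(1000, 3))
weighted_score = counts @ np.array([1, 2, 3])          # per-patient burden

ms = MeanShift().fit(weighted_score.reshape(-1, 1))
labels, centers = ms.labels_, ms.cluster_centers_.ravel()
for c in np.argsort(centers):                          # low- to high-risk
    print(f"cluster center {centers[c]:.1f}: {np.sum(labels == c)} patients")
```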

34 Machine Learning Techniques in Bank Credit Analysis

Authors: Fernanda M. Assef, Maria Teresinha A. Steiner

Abstract:

The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study draws on a database of 5,432 companies that are clients of the bank, of which 2,600 are classified as non-defaulters, 1,551 as defaulters and 1,281 as temporarily defaulters, meaning that the clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR) and Support Vector Machines (SVM). For each method, different parameters were analyzed, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives and true negatives. The data were first coded in thermometer code (numerical attributes) or dummy coding (nominal attributes). The comparison showed that the best method in terms of accuracy was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters and 75.37% for the temporarily defaulter classification). However, the best accuracy does not always indicate the best technique: for instance, in the classification of temporarily defaulters, ANN-RBF was surpassed in terms of false positives by SVM, which had the lowest rate (0.07%) of false positive classifications. These details are discussed in light of the results found, and an overview of the work is given in the conclusion of this study.

Keywords: Artificial Neural Networks, ANNs, classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines.
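
The one-against-all comparison pattern can be sketched with scikit-learn on synthetic data; only LR and SVM stand-ins are shown (the ANN variants are omitted), and the metrics mirror those compared in the study:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# 3 classes as in the study: non-defaulter, defaulter, temporarily defaulter.
X, y = make_classification(n_samples=5432, n_features=15, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, est in [("LR", LogisticRegression(max_iter=1000)), ("SVM", SVC())]:
    pred = OneVsRestClassifier(est).fit(X_tr, y_tr).predict(X_te)
    cm = confusion_matrix(y_te, pred)
    fp = cm.sum(axis=0) - np.diag(cm)        # false positives per class
    print(name, "acc=%.3f" % accuracy_score(y_te, pred), "FP per class:", fp)
```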

33 A Secure Auditing Framework for Load Balancing in Cloud Environment

Authors: R. Geetha, T. Padmavathy

Abstract:

Security auditing is an important feature to be considered by cloud service customers. It is essentially a certification process that audits the controls delivering the security requirements. Security audits are conducted by trained and qualified staff belonging to an independent auditing organization, and must be carried out against a standard of security controls. Proper checks must be made that the cloud user has appropriate reporting and logging facilities within the customer's system, thereby ensuring the correct business and operational flow of data through the cloud service. We propose a cloud-based secure auditing framework that enables data owners to store their secret data securely on semi-trusted cloud service providers and to share it selectively with a wide range of data recipients, reducing the key management complexity for data owners and recipients. Unlike previous cloud-based data frameworks, data owners upload their secret data to the cloud under both static and dynamic auditing schemes. A further refinement is that when a data recipient needs to download an individual record, the request is sent to the data owner, who retains access control. If the owner chooses to share the record, the request is acknowledged; the recipient then downloads the record, and the transfer and download times and dates are monitored by the auditor. In addition to the deduplication concept, reduced cloud storage through dynamic document distribution is proposed.

Keywords: Cloud computing, cloud storage auditing, data integrity, key exposure.
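
The core integrity-checking idea behind storage auditing can be sketched with digests; real schemes use homomorphic authenticators or Merkle trees, so this is only the essential comparison, with hypothetical record contents:

```python
import hashlib

def digest(record: bytes) -> str:
    """SHA-256 digest of a stored record."""
    return hashlib.sha256(record).hexdigest()

ledger = {}                                   # record id -> trusted digest
ledger["rec-001"] = digest(b"patient billing data v1")

def audit(record_id: str, retrieved: bytes) -> bool:
    """True if the cloud copy still matches the trusted digest."""
    return ledger[record_id] == digest(retrieved)

print(audit("rec-001", b"patient billing data v1"))   # -> True
print(audit("rec-001", b"tampered content"))          # -> False
```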

32 IT Systems of the US Federal Courts, Justice, and Governance

Authors: Joseph Zernik

Abstract:

The validity, integrity, and impacts of the IT systems of the US federal courts have been studied as part of the Human Rights Alert-NGO (HRA) submission for the 2015 Universal Periodic Review (UPR) of human rights in the United States by the Human Rights Council (HRC) of the United Nations (UN). The current report includes an overview of IT system analysis, data mining and case studies. System analysis and data mining show: development and implementation with no lawful authority; servers of unverified identity; invalidity in the implementation of electronic signatures, authentication instruments and procedures, authorities and permissions; discrimination in access against the public and unrepresented (pro se) parties and in favor of attorneys; and widespread publication of invalid judicial records and dockets, leading to their false representation and false enforcement. A series of case studies documents the impacts on individuals' human rights, on banking regulation, and on international matters. Significance is discussed in the context of various media and expert reports, which allege unprecedented corruption of the US justice system today and question whether the US Constitution has in fact been suspended. Similar findings were previously reported in the IT systems of the State of California and the State of Israel, and were incorporated, subject to professional HRC staff review, into the UN UPR reports (2010 and 2013). Solutions are proposed based on the principles of publicity of the law and the separation of powers: reliance on US IT and legal experts accountable to the legislative branch, enhanced transparency, and ongoing vigilance by human rights and internet activists. IT experts should assume more prominent civic duties in safeguarding civil society in our era.

Keywords: E-justice, federal courts, United States, human rights, banking regulation.

31 Comparison of Statins Dose Intensity on HbA1c Control in Outpatients with Type 2 Diabetes: A Prospective Cohort Study

Authors: Mohamed A. Hammad, Dzul Azri Mohamed Noor, Syed Azhar Syed Sulaiman, Ahmed A. Khamis, Abeer Kharshid, Nor Azizah Aziz

Abstract:

The effect of statins dose intensity (SDI) on glycemic control in patients with existing diabetes is unclear, and many contradictory findings have been reported in the literature, limiting the possibility of drawing conclusions. This project was designed to compare the effect of SDI on glycated hemoglobin (HbA1c%) control in outpatients with Type 2 diabetes in the endocrine clinic at Hospital Pulau Pinang, Malaysia, between July 2015 and August 2016. A prospective cohort study was conducted, in which records of 345 patients with Type 2 diabetes (289 patients in the moderate-SDI group and 56 in the high-SDI cohort) were reviewed to identify demographics and laboratory tests. The target of glycemic control (HbA1c < 7% for patients < 65 years, and < 8% for patients ≥ 65 years) was estimated, and the results were presented as descriptive statistics. Of the 289 moderate-SDI patients, with a mean age of 57.3 ± 12.4 years, only 86 (29.8%) cases showed controlled glycemia, while 203 (70.2%) had uncontrolled glycemia, with a confidence interval (CI) of 95% (6.2–10.8). On the other hand, the high-SDI group of 56 patients with Type 2 diabetes, with a mean age of 57.7 ± 12.4 years, comprised 11 (19.6%) patients with controlled glycemia and 45 (80.4%) with uncontrolled glycemia, CI: 95% (7.1–11.9). The study demonstrated that the relative risk (RR) of uncontrolled glycemia in patients with Type 2 diabetes using high-SDI is 1.15, and the excess relative risk (ERR) is 15%. The absolute risk (AR) is 10.2%, and the number needed to harm (NNH) is 10. Outpatients with Type 2 diabetes who use a high SDI of statins have a higher risk of uncontrolled glycemia than outpatients treated with a moderate SDI.

Keywords: Cohort study, diabetes control, dose intensity, HbA1c, Malaysia, statin, Type 2 diabetes mellitus, uncontrolled glycemia.
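
The abstract's risk arithmetic can be reproduced (to within rounding) directly from the two cohorts' uncontrolled-glycemia counts:

```python
high_uncontrolled, high_total = 45, 56        # high-SDI cohort
mod_uncontrolled, mod_total = 203, 289        # moderate-SDI cohort

risk_high = high_uncontrolled / high_total    # ~0.804
risk_mod = mod_uncontrolled / mod_total       # ~0.702

rr = risk_high / risk_mod                     # relative risk (reported as 1.15)
err = (rr - 1) * 100                          # excess relative risk (~15%)
ar = (risk_high - risk_mod) * 100             # absolute risk (~10.2%)
nnh = 1 / (risk_high - risk_mod)              # number needed to harm (~10)

print(f"RR={rr:.2f}  ERR={err:.0f}%  AR={ar:.1f}%  NNH={nnh:.0f}")
```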

30 Vancomycin and Rifaximin Combination Therapy for Diarrhoea Predominant Irritable Bowel Syndrome: An Observational Study

Authors: P. Murphy, D. Vasic, A. W. Gunaratne, T. Tugonon, M. Ison, C. Pagonis, E. T. Sitchon, A. Le Busque, T. J. Borody

Abstract:

Irritable bowel syndrome (IBS) is a gastrointestinal disorder characterized by an alteration in bowel movements. There are three different types of IBS: diarrhea-predominant IBS (IBS-D), constipation-predominant IBS (IBS-C) and IBS with mixed bowel habit (IBS-M). Antimicrobials are increasingly being used as treatment for all types of IBS, and with this increased use and subsequent success, the gut microbiome as a factor in the etiology of IBS is becoming more apparent. Accepted standard treatment has focused on IBS-C and involves either vancomycin or rifaximin. Here, we report on a cohort of 18 patients treated with both vancomycin and rifaximin for IBS-D, whose records were reviewed retrospectively. Patients in this cohort were aged between 24 and 74 years (mean 44 years), and nine were female. At baseline all patients had diarrhea, four with mucus and one with blood. Other reported symptoms included abdominal pain (n = 11), bloating (n = 9), flatulence (n = 7), fatigue (n = 4) and nausea (n = 3). Treatment was personalized according to symptom severity and tolerability, using a combination of rifaximin (500-3000 mg/d) and vancomycin (500-1500 mg/d) for an ongoing period, with follow-ups conducted between 2 and 32 weeks. Of all patients, 89% reported improvement of at least one symptom, one reported no change, and one patient's symptoms worsened. The success of this combination treatment could be due to the different mechanisms of action of each medication: vancomycin works by inhibiting the cell wall of the bacteria, and rifaximin by inhibiting protein synthesis. This success in treatment supports the idea that IBS-D may be driven by a bacterial infection of the gastrointestinal microbiome. As IBS-D presents similarly to Clostridium difficile infection, and symptom improvement can occur with the same rifaximin and vancomycin treatment used for Clostridium difficile, there is reason to suggest that the infectious agent could be an unidentified strain of Clostridium. Although these results offer some support for the theory, more research is required.

Keywords: Clostridium difficile infection, diarrhea predominant irritable bowel syndrome, microbiome, vancomycin/rifaximin combination.

29 The Effects of Placement and Cross-Section Shape of Shear Walls in Multi-Story RC Buildings with Plan Irregularity on Their Seismic Behavior by Using Nonlinear Time History Analyses

Authors: Mohammad Aminnia, Mahmood Hosseini

Abstract:

Environmental and functional conditions sometimes necessitate an asymmetric architectural plan, which results in an asymmetric structure. In such cases, finding an optimal pattern for locating the components of the lateral load-bearing system, including shear walls, in the building's plan is desired. In the case of shear walls, in addition to location, the shape of the wall cross-section is also an effective factor. Various types of shear walls and their proper layout can give better stiffness distribution and a more appropriate seismic response of the building. Several studies have been conducted on the analysis and design of shear walls; however, few have addressed decisions about the location and form of shear walls in multistory buildings, especially those with irregular plans. In this study, an attempt has been made to obtain the most reliable seismic behavior of multi-story reinforced concrete vertically chamfered buildings by using more appropriate shear wall forms and arrangements in 7-, 10-, 12-, and 15-story buildings. The considered forms and arrangements include common rectangular walls and L-, T-, U- and Z-shaped plan sections, located as the core or in the outer frames of the building structure. The seismic behaviors of the buildings, including maximum roof displacement and particularly the formation of plastic hinges and their distribution in the buildings' structures, were compared based on the results of a series of nonlinear time history analyses using a set of selected earthquake records. Results show that shear walls with U-shaped cross-section placed as the building's central core, and walls with Z-shaped cross-section placed at the corners, give the building more reliable seismic behavior.

Keywords: Vertically chamfered buildings, non-linear time history analyses, L-, T-, U- and Z-shaped plan walls.
