Search results for: encrypted traffic classification
164 Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model
Authors: T. Thein, S. Kalyar Myo
Abstract:
Humans use visual information to understand speech contents in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by acoustic noise and cross-talk among speakers. Using visual information from lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge for most automatic lip reading systems is finding a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, camera settings and the inherent low luminance and chrominance contrast between lip and non-lip regions. Several researchers have been developing methods to overcome these problems; one of them is lip reading. Moreover, it is well known that visual information about speech obtained through lip reading is very useful for human speech recognition. Lip reading is the technique of comprehensively understanding underlying speech by processing the movement of the lips. Therefore, a lip reading system is one of the supportive technologies for hearing-impaired or elderly people, and it is an active research area. The need for lip reading systems is ever increasing for every language. This research aims to develop a visual teaching method system for hearing-impaired persons in Myanmar, showing how to pronounce words precisely by identifying the features of lip movement. The proposed research will develop a lip reading system for Myanmar consonants: one-syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah)) and two-syllable consonants (က (Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe)၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi)).
The proposed system has three subsystems. The first is the lip localization system, which localizes the lips in the digital inputs. The next is the feature extraction system, which extracts features of lip movement suitable for visual speech recognition. The final one is the classification system. In the proposed research, the Two-Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA) with an Active Contour Model (ACM) will be used for lip movement feature extraction. A Support Vector Machine (SVM) classifier is used for finding the class parameters and class numbers in the training and testing sets. Then, experiments will be carried out on the recognition accuracy of Myanmar consonants using only the visual information on lip movements, which is useful for visual speech of the Myanmar language. The result will show the effectiveness of lip movement recognition for Myanmar consonants. This system will help hearing-impaired persons use it as a language learning application. It can also be useful for normal-hearing persons in noisy environments or conditions where they can find out what was said by other people without hearing their voice.
Keywords: feature extraction, lip reading, lip localization, Active Contour Model (ACM), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Two Dimensional Discrete Cosine Transform (2D-DCT)
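As a rough illustration of the feature-extraction and classification chain described in this abstract (not the authors' implementation; the lip ROIs, block size and coefficient count here are invented stand-ins), a minimal 2D-DCT + LDA + SVM pipeline can be sketched on synthetic data:

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def dct2(block):
    # 2D-DCT: apply the 1D type-II DCT along rows, then along columns
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def lip_features(roi, k=8):
    # Keep the low-frequency k x k corner of the 2D-DCT coefficients,
    # which concentrates most of the image energy
    return dct2(roi)[:k, :k].ravel()

rng = np.random.default_rng(0)
# Synthetic stand-ins for 32x32 grayscale lip ROIs of two consonant classes
X = np.array([lip_features(rng.normal(loc=c, size=(32, 32)))
              for c in (0.0, 1.0) for _ in range(20)])
y = np.repeat([0, 1], 20)

# LDA reduces the DCT features to a discriminative subspace; SVM classifies
lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
clf = SVC(kernel='rbf').fit(lda.transform(X), y)
acc = clf.score(lda.transform(X), y)
print(acc)
```

In a real system the synthetic arrays would be replaced by lip regions located by the ACM step the abstract describes.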
Procedia PDF Downloads 286
163 Quality Assessment of Pedestrian Streets in Iran: Case Study of Saf, Tehran
Authors: Fstemeh Rais Esmaili, Ehsan Ranjbar
Abstract:
Pedestrian streets, as one type of urban public space, have an important role in improving the quality of urban life. In Iran, the planning and designing of pedestrian streets is in its early steps. In spite of starting this approach in Iran and designing several pedestrian streets, there are still no organized studies on the quality assessment of pedestrian streets. As a result, the strengths and weaknesses of the initial experiences have not been utilized. This inattention to quality assessment has caused the design of pedestrian streets to be limited to mere vehicle traffic control and preliminary actions like paving, so that the special potential of pedestrian streets for creating social, livable and dynamic public spaces has not been used. This article, as an organized study on the quality assessment of pedestrian streets in Iran, tries to reach two main goals: first, introducing a framework for the quality assessment of pedestrian streets in Iran, and second, creating a context for improving the quality of pedestrian streets, especially for further experiences. The main research methods are description and context analysis. With respect to a comparative analysis of ideas about quality, considering international and local case studies and analyzing the existing condition of Saf Pedestrian Street, a particular model for quality assessment has been introduced. In this model, main components and assessment criteria have been presented. On the basis of this model, a questionnaire and checklist for assessment have been prepared. The questionnaire and interviews have been used to assess qualities which are in direct contact with people, and the checklist has been used by the authors to analyze visual qualities through observation. Some results of the questionnaire and checklist show that 7 of the 11 primary components (diversity; flexibility; cleanness; legibility and imageability; identity; livability; form and physical setting) are rated low and very low in quality degree.
Three components (efficiency, comfort and distinctiveness) have medium and low quality degrees, and one component (access, linkage and permeability) has a high quality degree. Therefore, based on the implemented analysis process, Saf Pedestrian Street needs to be improved, and its quality improvement priorities are determined based on the presented criteria. Comparing the final results with the existing condition illustrates: a shortage of services for satisfying users' needs; inflexibility and the impossibility of using spaces at various times; a lack of facilities for different climatic conditions; a lack of facilities such as drinking fountains; inappropriate design of existing urban furniture, like garbage cans, creating pollution and an unpleasant view; a lack of visual attractions; the neglect of disabled persons in the design of entrances; a shortage of benches and their undesirable design; a lack of vegetation; the absence of special characteristics making it different from other streets; barriers to people taking part in the space, causing a lack of affiliation; a lack of appropriate elements for leisure time; and a lack of exhilaration in the space. On the other hand, these results present high access and permeability, high safety, less sound pollution and a greater sense of calm, comfortable movement along the way due to suitable pavement, and economic efficiency as the strength points of Saf Pedestrian Street.
Keywords: pedestrian streets, quality assessment, quality criteria, Saf Pedestrian Street
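The abstract reports component ratings as quality degrees (very low to high) without stating the numeric scale behind them. A minimal sketch of how mean questionnaire ratings could be binned into such degrees, assuming a 1-5 Likert scale and illustrative cut-points (both are assumptions, not the authors' model):

```python
def quality_degree(mean_score):
    """Map a mean questionnaire rating (assumed 1-5 Likert) to a quality degree.

    The cut-points are illustrative; the abstract reports degrees
    but not the numeric scale behind them.
    """
    for limit, label in [(1.8, 'very low'), (2.6, 'low'),
                         (3.4, 'medium'), (4.2, 'high')]:
        if mean_score < limit:
            return label
    return 'very high'

# Hypothetical mean ratings for two of the assessed components
ratings = {'diversity': 2.1, 'access, linkage and permeability': 4.4}
print({k: quality_degree(v) for k, v in ratings.items()})
```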
Procedia PDF Downloads 256
162 Development and Obtaining of Solid Dispersions to Increase the Solubility of Efavirenz in Anti-HIV Therapy
Authors: Salvana P. M. Costa, Tarcyla A. Gomes, Giovanna C. R. M. Schver, Leslie R. M. Ferraz, Cristovão R. Silva, Magaly A. M. Lyra, Danilo A. F. Fonte, Larissa A. Rolim, Amanda C. Q. M. Vieira, Miracy M. Albuquerque, Pedro J. Rolim-neto
Abstract:
Efavirenz (EFV) is considered one of the most widely used anti-HIV drugs. However, it is classified as a class II drug (poorly soluble, highly permeable) according to the Biopharmaceutical Classification System, presenting absorption problems in the gastrointestinal tract and thereby inadequate bioavailability for its therapeutic action. This study aimed to overcome these barriers by developing and obtaining solid dispersions (SD) in order to increase EFV bioavailability. For the development and obtaining of SD with EFV, theoretical and practical studies were initially performed, starting with the choice of a carrier. For this, various criteria were analyzed, such as the glass transition temperature of the polymer, intra- and intermolecular hydrogen-bond interactions between drug and polymer, and the miscibility between the polymer and EFV. The method for obtaining the SD was chosen by analyzing which method is the most consolidated in both industry and the literature. Subsequently, the drug and carrier concentrations in the dispersions were selected. In order to obtain SD presenting the drug in its amorphous form, the SD were analyzed by X-ray diffraction (XRD) as they were obtained. SD are more stable the higher the amount of polymer present in the formulation. On this assumption, an SD containing 10% of drug was initially prepared, and this proportion was then increased until the XRD showed the presence of EFV in its crystalline form; from that point, no SD with a higher drug concentration was produced. Thus, PVP-K30, PVPVA 64 and SOLUPLUS were selected as carriers, since the formation of hydrogen bonds between EFV and the polymers was possible, as these have hydrogen-acceptor groups capable of interacting with the hydrogen-donor group of the drug. It is also worth mentioning that the films obtained, independent of the concentration used, were homogeneous and transparent.
Thus, it can be said that EFV is miscible in the three polymers used in the study. The SD and physical mixtures (PM) with these polymers were prepared by the solvent method. The EFV diffraction profile showed a main peak at around 2θ = 6.24°, in addition to other minor peaks at 14.34°, 17.08°, 20.3°, 21.36° and 25.06°, evidencing its crystalline character. Furthermore, the polymers showed an amorphous nature, as evidenced by the absence of peaks in their XRD patterns. The XRD patterns of the PM showed the overlapping profiles of the drug and the polymer, indicating the presence of EFV in its crystalline form. Regardless of the proportion of drug used in the SD, all the samples showed the same characteristics, with no EFV diffraction peaks, demonstrating the amorphous behavior of the products. Thus, the polymers effectively enabled the formation of amorphous SD, probably due to the potential hydrogen bonds between them and the drug. Moreover, the XRD analysis showed that the polymers were able to maintain the amorphous form at drug concentrations of up to 80%.
Keywords: amorphous form, Efavirenz, solid dispersions, solubility
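For readers interpreting the 2θ peak positions above, Bragg's law converts a diffraction angle to an interplanar d-spacing. A small sketch, assuming Cu Kα radiation (1.5406 Å) since the abstract does not state the X-ray source:

```python
import math

def d_spacing(two_theta_deg, wavelength=1.5406):
    """Bragg d-spacing (angstrom) from a 2-theta diffraction peak.

    d = lambda / (2 sin(theta)); assumes Cu K-alpha radiation
    (1.5406 angstrom), a common XRD source, as an illustration only.
    """
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# Main EFV peak reported at 2-theta = 6.24 degrees
print(round(d_spacing(6.24), 2))
```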
Procedia PDF Downloads 571
161 Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs
Authors: Michela Quadrini
Abstract:
Chord diagrams occur in mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite type invariants, whereas in molecular biology one important motivation for studying chord diagrams is the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, that consists of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for one interaction (a Watson-Crick base pair) between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation-preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram. It consists of a line segment, called its backbone, to which are attached a number of chords with distinct endpoints. There is a natural fattening of any linear chord diagram: the backbone lies on the real axis, while all the chords are in the upper half-plane. Each linear chord diagram has a natural genus, that of its associated surface. To each chord diagram and linear chord diagram, it is possible to associate an intersection graph. It is a graph whose vertices correspond to the chords of the diagram, and whose edges represent chord intersections. Such an intersection graph carries a lot of information about the diagram. Our goal is to define an LCD equivalence class in terms of identity of intersection graphs, on which many chord diagram invariants depend. For studying these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that permits modeling LCDs in terms of the relations among chords. This set is composed of three operators: crossing, nesting, and concatenation.
The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar is defined that uniquely generates each LCD, starting from a linear chord diagram and adding a chord for each production of the grammar. In other words, it allows associating a unique algebraic term with each linear chord diagram, while the remaining operators allow rewriting the term through a set of appropriate rewriting rules. Such rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modelled RNA molecule and its linear chord diagram, some authors have proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem, leading to the definition of an algorithm that calculates the free energy of the molecule more accurately than the existing ones. Such an LCD equivalence class could also be useful for obtaining a more accurate estimate of the link between the crossing number and the topological genus, and for studying the relations among other invariants.
Keywords: chord diagrams, linear chord diagram, equivalence class, topological language
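The three relations named in this abstract (crossing, nesting, concatenation) and the intersection graph have simple combinatorial definitions: chords on a backbone are interval pairs, and two chords cross exactly when their endpoints interleave. A minimal sketch, independent of the authors' grammar:

```python
from itertools import combinations

def relation(c1, c2):
    """Classify the relation between two chords on a backbone.

    Chords are (start, end) pairs of distinct backbone positions.
    Returns 'crossing' (interleaved endpoints), 'nesting' (one chord
    inside the other), or 'concatenation' (disjoint intervals).
    """
    (a, b), (c, d) = sorted([tuple(sorted(c1)), tuple(sorted(c2))])
    if a < c < b < d:
        return 'crossing'
    if a < c and d < b:
        return 'nesting'
    return 'concatenation'

def intersection_graph(chords):
    # Vertices are chord indices; edges connect crossing chords
    edges = {i: set() for i in range(len(chords))}
    for i, j in combinations(range(len(chords)), 2):
        if relation(chords[i], chords[j]) == 'crossing':
            edges[i].add(j)
            edges[j].add(i)
    return edges

# Backbone positions 0..5: chord 0 crosses chord 1, chord 1 crosses chord 2
chords = [(0, 2), (1, 4), (3, 5)]
graph = intersection_graph(chords)
print(graph)
```

Two LCDs would then be equivalent in the abstract's sense when their intersection graphs are isomorphic.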
Procedia PDF Downloads 203
160 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach
Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier
Abstract:
Emotion plays a key role in many applications, such as healthcare, where it helps to gather patients' emotional behavior. Unlike typical ASR (Automated Speech Recognition) problems, which focus on 'what was said', it is equally important to understand 'how it was said.' Certain emotions are given more importance due to their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it in a precise manner. Supervised machine learning models, including state-of-the-art deep learning classification methods, rely on the availability of clean and labelled data. One of the problems in affective computing is the limited amount of annotated data; the existing labelled emotion datasets are highly dependent on the perception of the annotator. We address the first issue, feature selection, by exploiting traditional MFCC (Mel-Frequency Cepstral Coefficients) features in a Convolutional Neural Network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN-based approach by 10.2%. To tackle the second problem, the subjectivity of stress labels, we use Lovheim's cube, which is a 3-dimensional projection of emotions.
Monoamine neurotransmitters are a type of chemical messenger in the brain that transmit signals on perceiving emotions. The cube aims at explaining the relationship between these neurotransmitters and the positions of emotions in 3D space. The learnt emotion representations from the Emo-CNN are mapped to the cube using three-component PCA (Principal Component Analysis), which is then used to model human stress. This proposed approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim's cube. We believe that this work is the first step towards creating a connection between Artificial Intelligence and the chemistry of human emotions.
Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube
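The mapping step described above (learnt embeddings projected onto three principal components, interpreted against the cube's three neurotransmitter axes) can be sketched with a NumPy-only PCA. The 64-dimensional embeddings here are random stand-ins, not actual Emo-CNN outputs:

```python
import numpy as np

def pca_3d(embeddings):
    """Project high-dimensional embeddings onto their first 3 principal components.

    PCA via SVD of the mean-centered data; in the paper's scheme the
    three axes would be read against Lovheim's cube.
    """
    X = embeddings - embeddings.mean(axis=0)
    # Rows of Vt are principal directions, ordered by explained variance
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:3].T

rng = np.random.default_rng(1)
# Hypothetical 64-dim embeddings for 100 utterances
emb = rng.normal(size=(100, 64))
coords = pca_3d(emb)
print(coords.shape)
```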
Procedia PDF Downloads 157
159 Intelligent Campus Monitoring: YOLOv8-Based High-Accuracy Activity Recognition
Authors: A. Degale Desta, Tamirat Kebamo
Abstract:
Background: Recent advances in computer vision and pattern recognition have significantly improved activity recognition through video analysis, particularly with the application of deep convolutional neural networks (CNNs). One-stage detectors now enable efficient video-based recognition by simultaneously predicting object categories and locations. Such advancements are highly relevant in educational settings, where CCTV surveillance could automatically monitor academic activities, enhancing security and classroom management. However, current datasets and recognition systems lack the specific focus on campus environments necessary for practical application in these settings.
Objective: This study aims to address this gap by developing a dataset and testing an automated activity recognition system specifically tailored for educational campuses. The EthioCAD dataset was created to capture various classroom activities and teacher-student interactions, facilitating reliable recognition of academic activities using deep learning models.
Method: EthioCAD, a novel video-based dataset, was created with a design science research approach to encompass teacher-student interactions across three domains and 18 distinct classroom activities. Using the Roboflow AI framework, the data was processed, with 4.224 KB of frames and 33.485 MB of images managed for frame extraction, labeling, and organization. The Ultralytics YOLOv8 model was then implemented within Google Colab to evaluate the dataset's effectiveness, achieving high mean Average Precision (mAP) scores.
Results: The YOLOv8 model demonstrated robust activity recognition within campus-like settings, achieving an mAP50 of 90.2% and an mAP50-95 of 78.6%. These results highlight the potential of EthioCAD, combined with YOLOv8, to provide reliable detection and classification of classroom activities, supporting automated surveillance needs on educational campuses.
Discussion: The high performance of YOLOv8 on the EthioCAD dataset suggests that automated activity recognition for surveillance is feasible within educational environments. This system addresses current limitations in campus-specific data and tools, offering a tailored solution for academic monitoring that could enhance the effectiveness of CCTV systems in these settings.
Conclusion: The EthioCAD dataset, alongside the YOLOv8 model, provides a promising framework for automated campus activity recognition. This approach lays the groundwork for future advancements in CCTV-based educational surveillance systems, enabling more refined and reliable monitoring of classroom activities.
Keywords: deep CNN, EthioCAD, deep learning, YOLOv8, activity recognition
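For readers unfamiliar with the mAP50 metric reported above: a detection counts as correct at that threshold when its Intersection-over-Union (IoU) with a ground-truth box is at least 0.5. A minimal IoU sketch (not Ultralytics' internal implementation):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned (x1, y1, x2, y2) boxes.

    At mAP50, a detection matches a ground-truth box when IoU >= 0.5;
    mAP50-95 averages over IoU thresholds from 0.5 to 0.95.
    """
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping by half their width
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```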
Procedia PDF Downloads 18
158 Motivation of Doctors and its Impact on the Quality of Working Life
Authors: E. V. Fakhrutdinova, K. R. Maksimova, P. B. Chursin
Abstract:
At the present stage of society's progress, health care is an integral part of both the economic and the social system; in the latter case, medicine is a major component of a number of basic and necessary social programs. Since highly qualified health professionals are the foundation of the health system, it is a logical proposition that increasing doctors' professionalism improves the effectiveness of the system as a whole. The professionalism of a doctor is a collection of many components, with an essential role played by such personal-psychological factors as honesty, the willingness and desire to help people, and motivation. A number of researchers consider motivation as an expression of basic human needs that have passed through the 'filter' of the worldview and values learned by the individual in the process of socialization, leading to certain actions designed to achieve the expected result. From this point of view, a number of researchers propose the following classification of a highly skilled employee's needs: 1. the need for confirmation of competence (setting goals that match one's professionalism and receiving positive emotions from achieving them); 2. the need for independence (the ability to make one's own choices in contentious situations arising while carrying out specialist functions); 3. the need for belonging (in the case of health care workers, to the profession and, accordingly, to the high public status of the doctor). Nevertheless, it is important to understand that in a market economy a significant motivator for physicians (both legal entities and natural persons) is maximizing their own profit. In the case of health professionals, this duality of the motivational structure creates an additional contrast with the public image of the ideal physician: usually an altruistically minded person thinking not primarily about their own benefit, but about assisting others.
In this context, the question of the real motivation of health workers deserves special attention. A survey conducted by the American researcher Harrison Terni for the magazine "Med Tech" in 2010 gathered the opinions of more than 200 medical students starting their courses: the primary motivation for choosing the profession was the "desire to help people", and only 15% said that they wanted to become a doctor "to earn a lot". From the point of view of most classical theories of motivation, this trend can be called positive, as intangible incentives are more effective. However, it is likely that over time the opinion of the respondents may change in the direction of mercantile motives. Thus, it is logical to assume that a well-designed system of motivation for doctors' labor should be based on motivational foundations laid during training in higher education.
Keywords: motivation, quality of working life, health system, personal-psychological factors, motivational structure
Procedia PDF Downloads 360
157 Association of Brain Derived Neurotrophic Factor with Iron as well as Vitamin D, Folate and Cobalamin in Pediatric Metabolic Syndrome
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
The impact of metabolic syndrome (MetS) on cognition and brain function is being investigated. Iron deficiency and deficiencies of vitamin B9 (folate) as well as vitamin B12 (cobalamin) are the best-known nutritional anemias. They are associated with cognitive disorders and learning difficulties. The antidepressant effects of vitamin D are known, and a deficiency state affects mental functions negatively. The aim of this study is to investigate possible correlations of MetS with serum brain-derived neurotrophic factor (BDNF), iron, folate, cobalamin and vitamin D in pediatric patients. The study population consisted of 30 children whose age- and sex-dependent body mass index (BMI) percentiles varied between 15 and 85, and 60 morbidly obese (MO) children above the 99th percentile. Anthropometric measurements were taken and BMI values were calculated. Age- and sex-dependent BMI percentile values were obtained using the appropriate tables prepared by the World Health Organization (WHO). Obesity classification was performed according to WHO criteria. Those with MetS were evaluated according to MetS criteria. Serum BDNF was determined by enzyme-linked immunosorbent assay. Serum folate was analyzed by an immunoassay analyzer. Serum cobalamin concentrations were measured using electrochemiluminescence immunoassay. Vitamin D status was determined by the measurement of 25-hydroxycholecalciferol [25-hydroxy vitamin D3, 25(OH)D] using high performance liquid chromatography. Statistical evaluations were performed using SPSS for Windows, version 16. P values less than 0.05 were accepted as statistically significant. Although statistically insignificant, lower folate and cobalamin values were found in MO children compared to those observed for children with normal BMI. For iron and BDNF values, no alterations were detected among the groups. Significantly decreased vitamin D concentrations were noted in MO children with MetS in comparison with those in children with normal BMI (p ≤ 0.05).
The positive correlation observed between iron and BDNF in the normal-BMI group was not found in the two MO groups. In the MetS group, the partial correlation among iron, BDNF, folate, cobalamin and vitamin D, controlling for waist circumference and BMI, was r = -0.501 (p ≤ 0.05). No such correlation was calculated in the MO and normal-BMI groups. In conclusion, vitamin D should also be considered during the assessment of pediatric MetS. Waist circumference and BMI should be evaluated collectively during the evaluation of MetS in children. Within this context, BDNF appears to be a key biochemical parameter during the examination of the degree of obesity in terms of mental functions, cognition and learning capacity. The association observed between iron and BDNF in children with normal BMI was not detected in the MO groups, possibly due to the development of inflammation and other obesity-related pathologies. It is suggested that this finding may contribute to the mental function impairments commonly observed among obese children.
Keywords: brain-derived neurotrophic factor, iron, vitamin B9, vitamin B12, vitamin D
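The partial correlation reported above (correlation between two biomarkers after controlling for waist circumference and BMI) amounts to correlating the residuals of each variable after regressing out the covariates. A NumPy-only sketch on synthetic data (the variables and effect sizes here are invented, not the study's measurements):

```python
import numpy as np

def partial_corr(x, y, covars):
    """Correlation between x and y after regressing out covariates.

    Fits least-squares regressions of x and y on the covariates
    (plus an intercept) and correlates the residuals.
    """
    Z = np.column_stack([np.ones(len(x)), covars])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(2)
bmi = rng.normal(size=200)
# Hypothetical biomarkers that both track the covariate: their raw
# correlation is driven almost entirely by the shared BMI signal
iron = bmi + rng.normal(scale=0.3, size=200)
bdnf = bmi + rng.normal(scale=0.3, size=200)
raw = np.corrcoef(iron, bdnf)[0, 1]
partial = partial_corr(iron, bdnf, bmi[:, None])
print(round(raw, 2), round(partial, 2))
```

Controlling for the covariate removes the spurious association, which is the logic behind the abstract's r = -0.501 being reported with waist circumference and BMI held fixed.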
Procedia PDF Downloads 122
156 Patterns of TV Simultaneous Interpreting of Emotive Overtones in Trump's Victory Speech from English into Arabic
Authors: Hanan Al-Jabri
Abstract:
Simultaneous interpreting is deemed by many scholars to be the most challenging mode of interpreting. The special constraints involved in this task, including time constraints, different linguistic systems, and stress, pose a great challenge to most interpreters. These constraints are likely to be maximised when the interpreting task is done live on TV. The TV interpreter is exposed to a wide variety of audiences with different backgrounds and needs and is mostly asked to interpret high-profile tasks, which raises his/her levels of stress and further complicates the task. Under these constraints, which require fast and efficient performance, TV interpreters of four TV channels were asked to render Trump's victory speech into Arabic. However, they also had to deal with the burden of rendering the English emotive overtones employed by the speaker into a wholly different linguistic system. The current study aims at investigating the way TV interpreters, who worked in the simultaneous mode, handled this task; it aims at exploring and evaluating the TV interpreters' linguistic choices and whether the original emotive effect was maintained, upgraded, downgraded or abandoned in their renditions. It also aims at exploring the possible difficulties and challenges that emerged during this process and might have influenced the interpreters' linguistic choices. To achieve its aims, the study analysed Trump's victory speech delivered on November 6, 2016, along with four Arabic simultaneous interpretations produced by four TV channels: Al-Jazeera, RT, CBC News, and France 24. The analysis relied on two frameworks: a macro and a micro framework. The former presents an overview of the wider context of the English speech as well as an overview of the speaker and his political background, to help understand the linguistic choices he made in the speech; the latter investigates the linguistic tools which were employed by the speaker to stir people's emotions.
These tools were investigated based on Shamaa's (1978) classification of emotive meaning according to linguistic level: the phonological, morphological, syntactic, and semantic and lexical levels. Moreover, the micro framework investigates the patterns of rendition which were detected in the Arabic deliveries. The results of the study identified different rendition patterns in the Arabic deliveries, including parallel rendition, approximation, condensation, elaboration, transformation, expansion, generalisation, explicitation, paraphrase, and omission. The emerging patterns, as suggested by the analysis, were influenced by factors such as the speedy and continuous delivery of some stretches and highly dense segments, among other factors. The study aims to contribute to a better understanding of TV simultaneous interpreting between English and Arabic, as well as the practices of TV interpreters when rendering emotiveness, especially since little is known about interpreting practices in the field of TV, particularly between Arabic and English.
Keywords: emotive overtones, interpreting strategies, political speeches, TV interpreting
Procedia PDF Downloads 162
155 Spatial Distribution of Land Use in the North Canal of Beijing Subsidiary Center and Its Impact on the Water Quality
Authors: Alisa Salimova, Jiane Zuo, Christopher Homer
Abstract:
The objective of this study is to analyse land use in the North Canal riparian zone with the help of remote sensing analysis in ArcGIS, using 30 cloudless Landsat 8 open-source satellite images from May to August of 2013 and 2017. Land cover, urban construction, the heat island effect, vegetation cover, and water system change were chosen as the main parameters and further analysed to evaluate their impact on the North Canal water quality. The methodology involved the following steps. Firstly, 30 cloudless satellite images were collected from the Landsat open-source database. The visual interpretation method was used to determine different land types in the catchment area; after primary and secondary classification, 28 land cover types in total were classified. The visual interpretation method was used with the help of ArcGIS for grassland monitoring, and Landsat remote sensing image processing with a resolution of 30 meters was used to analyse the vegetation cover. The water system was analysed using the visual interpretation method on the GIS software platform to decode the target area, water use and coverage. Monthly measurements of water temperature, pH, BOD, COD, ammonia nitrogen, total nitrogen and total phosphorus in 2013 and 2017 were taken from three locations on the North Canal in Tongzhou district. These parameters were used for water quality index calculation and compared to land-use changes. The results of this research were promising. The vegetation coverage of the North Canal riparian zone in 2017 was higher than the vegetation coverage in 2013. The surface brightness temperature value was positively correlated with the vegetation coverage density and the distance from the surface of the water bodies. This indicates that vegetation coverage and the water system have a great effect on temperature regulation and the urban heat island effect. The surface temperature in 2017 was higher than in 2013, consistent with a warming trend.
The water volume in the river area has been partially reduced, indicating a potential water scarcity risk in the North Canal watershed. Between 2013 and 2017, urban residential, industrial and mining storage land areas increased significantly compared to other land use types; however, water quality significantly improved in 2017 compared to 2013. This observation indicates that the Tongzhou Water Restoration Plan showed positive results and that the water management of Tongzhou district had improved.
Keywords: North Canal, land use, riparian vegetation, river ecology, remote sensing
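The abstract mentions a water quality index computed from the monthly parameters but does not name the formula. A minimal sketch of one common form, a weighted-arithmetic index where each parameter's sub-index is its measured value relative to a standard limit (the parameter values, limits and weights below are invented placeholders, not the study's data):

```python
def weighted_wqi(measurements, standards, weights):
    """Simple weighted-arithmetic water quality index.

    Each parameter contributes 100 * measured / standard, weighted by
    its relative importance; lower scores indicate cleaner water.
    """
    total_w = sum(weights[p] for p in measurements)
    score = sum(weights[p] * 100.0 * measurements[p] / standards[p]
                for p in measurements)
    return score / total_w

# Hypothetical monthly means for two monitored parameters (mg/L)
measurements = {'COD': 30.0, 'ammonia_N': 1.0}
standards = {'COD': 40.0, 'ammonia_N': 2.0}   # assumed class limits
weights = {'COD': 2.0, 'ammonia_N': 1.0}      # assumed relative weights
wqi = weighted_wqi(measurements, standards, weights)
print(round(wqi, 2))
```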
Procedia PDF Downloads 116154 Fire Risk Information Harmonization for Transboundary Fire Events between Portugal and Spain
Authors: Domingos Viegas, Miguel Almeida, Carmen Rocha, Ilda Novo, Yolanda Luna
Abstract:
Forest fires along the more than 1200 km of the Spanish-Portuguese border are increasingly frequent, currently reaching around 2000 fire events per year. Some of these events develop into large international wildfires requiring concerted operations based on information shared between the two countries. The fire event of Valencia de Alcantara (2003), which caused several fatalities and more than 13000 ha burnt, is a reference example of these international events. Currently, Portugal and Spain have a specific cross-border cooperation protocol on wildfire response for a strip of about 30 km (15 km on each side). Public authorities recognize the success of this collaboration; however, it is also agreed that the cooperation should include more functionalities, such as the development of a common risk information system for transboundary fire events. Since Portuguese and Spanish authorities use different approaches to determine the inputs of the fire risk indexes and different methodologies to assess the fire risk, joint firefighting operations are sometimes jeopardized because the information is not harmonized and the civil protection agents from the two countries do not share a common understanding of the situation. Thus, a methodology aiming at the harmonization of fire risk calculation and perception by the Portuguese and Spanish civil protection authorities is hereby presented, together with the final results. The fire risk index used in this work is the Canadian Fire Weather Index (FWI), which is based on meteorological data. The FWI is limited in its application, as it does not take into account other important factors with great effect on fire ignition and development. The combination of these factors is very complex since, besides meteorology, it addresses several parameters from different fields, namely sociology, topography, vegetation and soil cover.
Therefore, the meaning of FWI values differs from region to region, according to the specific characteristics of each region. In this work, a methodology for FWI calibration based on the number of fire occurrences and on the burnt area in the transboundary regions of Portugal and Spain is proposed, in order to assess the fire risk based on calibrated FWI values. As previously mentioned, cooperative firefighting operations require a common perception of the information shared. Therefore, a common classification of fire risk for fire events occurring in the transboundary strip is proposed, with the objective of harmonizing this type of information. This work is part of the ECHO project SpitFire - Spanish-Portuguese Meteorological Information System for Transboundary Operations in Forest Fires, which aims at the development of a web platform for sharing information and decision-support tools to be used in international fire events involving Portugal and Spain. Keywords: data harmonization, FWI, international collaboration, transboundary wildfires
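The regional FWI calibration idea above can be sketched in a few lines. This is an assumption about the general approach (percentile-based thresholds from each region's historical FWI distribution), not the project's published calibration, and the percentile cut-offs and class names are illustrative.

```python
# Hedged sketch of percentile-based FWI calibration: danger-class
# thresholds are derived from a region's own historical FWI values so that
# "high" means the same relative rarity on both sides of the border.
# Percentiles (50/90/97) and class names are illustrative assumptions.

def percentile(values, p):
    """Nearest-rank percentile of a list of numbers (0 < p <= 100)."""
    ordered = sorted(values)
    k = max(0, int(round(p / 100.0 * len(ordered))) - 1)
    return ordered[k]

def calibrate_thresholds(historical_fwi, percentiles=(50, 90, 97)):
    return [percentile(historical_fwi, p) for p in percentiles]

def classify(fwi, thresholds):
    classes = ["low", "moderate", "high", "extreme"]
    for level, t in zip(classes, thresholds):
        if fwi <= t:
            return level
    return classes[-1]

# Toy historical record: FWI values 1..100 observed in one region
thresholds = calibrate_thresholds(list(range(1, 101)))
print(thresholds, classify(35.0, thresholds))
```

Running the same calibration on Portuguese and Spanish historical records would give each side thresholds that map raw FWI onto a shared, comparable danger scale.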
Procedia PDF Downloads 254153 Avoidance of Brittle Fracture in Bridge Bearings: Brittle Fracture Tests and Initial Crack Size
Authors: Natalie Hoyer
Abstract:
Bridges in both roadway and railway systems depend on bearings to ensure an extended service life and functionality. These bearings enable proper load distribution from the superstructure to the substructure while permitting controlled movement of the superstructure. The design of bridge bearings according to Eurocode DIN EN 1337 and the relevant sections of DIN EN 1993 increasingly requires the use of thick plates, especially for long-span bridges. However, these plate thicknesses exceed the limits specified in the national annex of DIN EN 1993-2. Furthermore, compliance with the DIN EN 1993-1-10 regulations regarding material toughness and through-thickness properties necessitates further modifications. Consequently, these standards cannot be directly applied to the selection of bearing materials without supplementary guidance and design rules. In this context, a recommendation was developed in 2011 to regulate the selection of appropriate steel grades for bearing components. Prior to the initiation of the research project underlying this contribution, this recommendation had only been available as a technical bulletin; since July 2023, it has been integrated into guideline 804 of the German railway. However, recent findings indicate that certain bridge-bearing components are exposed to high fatigue loads, which must be considered in structural design, material selection, and calculations. Therefore, the German Centre for Rail Traffic Research commissioned a research project with the objective of proposing an extension of the current standards that ensures an adequate choice of steel material for bridge bearings to avoid brittle fracture, even for thick plates and components subjected to specific fatigue loads. The results obtained from theoretical considerations, such as finite element simulations and analytical calculations, are validated through large-scale component tests.
Additionally, experimental observations are used to calibrate the calculation models and modify the input parameters of the design concept. Within the large-scale component tests, brittle failure is artificially induced in a bearing component. For this purpose, an artificially generated initial defect is introduced into the specimen by spark erosion at a previously defined hotspot. A dynamic load is then applied until crack initiation occurs, producing realistic conditions in the form of a sharp notch similar to a fatigue crack; this initiation process continues until the crack reaches a predetermined length. Afterward, the actual test begins: the specimen is cooled with liquid nitrogen until it reaches a temperature at which brittle fracture failure is expected. In the next step, the component is subjected to a quasi-static tensile test until brittle failure occurs. The proposed paper will present the latest research findings, including the results of the conducted component tests and the derived definition of the initial crack size in bridge bearings. Keywords: bridge bearings, brittle fracture, fatigue, initial crack size, large-scale tests
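The relationship between initial crack size and brittle fracture can be illustrated with the standard linear-elastic fracture mechanics estimate. This is a textbook formula, not the project's calibrated model; the toughness, stress level and geometry factor below are illustrative assumptions.

```python
import math

# Hedged LEFM sketch: the stress intensity factor K_I = Y * sigma * sqrt(pi * a)
# links applied stress sigma (MPa), crack depth a (m) and a geometry factor Y;
# brittle fracture is expected when K_I reaches the fracture toughness K_Ic.
# Numbers below are illustrative, not the study's measured values.

def stress_intensity(sigma_mpa, a_m, Y=1.0):
    """K_I in MPa*sqrt(m) for a crack of depth a_m under stress sigma_mpa."""
    return Y * sigma_mpa * math.sqrt(math.pi * a_m)

def critical_crack_size(k_ic, sigma_mpa, Y=1.0):
    """Crack size (m) at which K_I equals the fracture toughness K_Ic."""
    return (k_ic / (Y * sigma_mpa)) ** 2 / math.pi

# Example: K_Ic = 50 MPa*sqrt(m), applied stress 200 MPa
a_crit = critical_crack_size(50.0, 200.0)
print(f"critical crack size: {a_crit * 1000:.1f} mm")
```

Component tests like those described above serve exactly to replace such idealized estimates with calibrated inputs (real Y, toughness at the test temperature, measured initial defect size).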
Procedia PDF Downloads 47152 Analyzing the Construction of Collective Memories by History Movies/TV Programs: Case Study of Masters in the Forbidden City
Authors: Lulu Wang, Yongjun Xu, Xiaoyang Qiao
Abstract:
The Forbidden City is well known for being full of Chinese cultural and historical relics. However, Masters in the Forbidden City, a documentary film, does not just dwell on stories of the past. Instead, it focuses on ordinary people, the restorers of the relics and antiquities, and this focus has captured the attention of Chinese audiences. This popular documentary suggests a new way to present relics, antiquities and paintings with a modern humanistic character through films and TV programs. Of course, this cannot be just a simple explanation like that of a tour guide in a museum; it should be a perfect combination of scenes, heritage, stories, storytellers and background music. The aim is to dig up the humanity behind the heritage and then create a virtual scene in which the audience can feel emotional resonance with that humanity. Two problems are considered. One is why, compared with entertainment shows, people prefer to watch seemingly tedious restoration work. The other is what interaction exists between such history documentary films, the heritage, the audiences and collective memory. This paper mainly used the methods of text analysis and data analysis. Audience comment texts were collected from popular video sites. By analyzing those texts, a word cloud chart was produced showing which words people preferred to use when commenting on the film, and the usage rate of all comment words was calculated. A radar chart was then used to show the ranked results, and each comment was finally assigned an emotional value classification according to its tone and content. Based on the above analysis results, an interaction model among the audience, history films/TV programs and collective memory can be summarized. According to the word cloud chart, people prefer to use words such as moving, history, love, family, celebrity, tone...
From those emotional words, we can see that the Chinese audience felt proud and shared a sense of collective identity, leaving comments such as: To our great motherland! Chinese traditional culture is really profound! It is found that, in the construction of collective memory symbology, the films formed an imaginary system by organizing a ‘personalized audience’. The audience is not just a recipient of information, but a participant in the documentary films and a co-creator of collective memory. At the same time, it is believed that the traditional background music, the spectacular present-day scenes and the tone of the storytellers/hosts are also important, so it is suggested that museums could try to cooperate with the producers of movies and TV programs to create a vivid scene for people. Perhaps this is a more artistic way for heritage to be opened to the whole world. Keywords: audience, heritages, history movies, TV programs
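The word-frequency step behind the word cloud above can be sketched simply. The comments and stopword list below are invented examples, not the study's data, and the cleaning is deliberately minimal.

```python
from collections import Counter

# Hedged sketch of the comment-text frequency analysis: count word usage
# across comments and rank the most frequent words (the inputs here are
# toy examples, not the collected video-site comments).

comments = [
    "so moving, the history behind every relic",
    "moving story, proud of our history",
    "the restorers' love for tradition is moving",
]

stopwords = {"the", "of", "for", "is", "so", "our", "every", "behind"}

counts = Counter(
    word.strip(",.'")
    for comment in comments
    for word in comment.lower().split()
    if word.strip(",.'") not in stopwords
)

for word, n in counts.most_common(3):
    print(word, n)
```

Relative usage rates (count divided by total words) would feed the radar chart, and a per-comment sentiment label would give the emotional value classification the abstract mentions.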
Procedia PDF Downloads 165151 The Cooperation among Insulin, Cortisol and Thyroid Hormones in Morbid Obese Children and Metabolic Syndrome
Authors: Orkide Donma, Mustafa M. Donma
Abstract:
Obesity, a disease associated with low-grade inflammation, is a risk factor for the development of metabolic syndrome (MetS). So far, MetS risk factors such as parameters related to glucose and lipid metabolism, as well as blood pressure, have been considered for the evaluation of this disease. There are still some ambiguities related to the characteristic features of MetS observed particularly in the pediatric population. Hormonal imbalance is also important, and a great deal of information exists about the behaviour of some hormones in adults; however, the hormonal profiles of pediatric metabolism have not yet been clarified. The aim of this study is to investigate the profiles of cortisol, insulin, and thyroid hormones in children with MetS. The study population was composed of morbid obese (MO) children without (Group 1) and with (Group 2) MetS components. WHO BMI-for-age and sex percentiles were used for the classification of obesity; values above the 99th percentile were defined as morbid obesity. Components of MetS (central obesity, glucose intolerance, high blood pressure, high triacylglycerol levels, low levels of high density lipoprotein cholesterol) were determined. Anthropometric measurements were performed, and ratios as well as obesity indices were calculated. Insulin, cortisol, thyroid stimulating hormone (TSH), free T3 and free T4 analyses were performed by electrochemiluminescence immunoassay. Data were evaluated using the Statistical Package for the Social Sciences; p<0.05 was accepted as the level of statistical significance. The mean ages±SD of Group 1 and Group 2 were 9.9±3.1 years and 10.8±3.2 years, respectively. Body mass index (BMI) values were 27.4±5.9 kg/m2 and 30.6±8.1 kg/m2, respectively. There were no statistically significant differences between the ages and BMI values of the groups. Insulin levels were statistically significantly increased in MetS in comparison with the levels measured in MO children.
There was no difference between MO children and those with MetS in terms of cortisol, T3, T4 and TSH. However, T4 levels were positively correlated with cortisol and negatively correlated with insulin; neither of these correlations was observed in MO children. Cortisol levels in the MO and MetS groups were significantly correlated. Cortisol, insulin, and thyroid hormones are essential for life. Cortisol, called the control system for hormones, orchestrates the performance of other key hormones and seems to establish a connection between hormone imbalance and inflammation. During an inflammatory state, more cortisol is produced to fight inflammation. High cortisol levels prevent the conversion of the inactive thyroid hormone T4 into its active form T3, and insulin is reduced due to low thyroid hormone. T3, which is essential for blood sugar control, requires cortisol levels within the normal range. The positive association of T4 with cortisol and its negative association with insulin are indicators of such a delicate balance among these hormones, also in children with MetS. Keywords: children, cortisol, insulin, metabolic syndrome, thyroid hormones
Procedia PDF Downloads 150150 Evaluation of Airborne Particulate Matter Early Biological Effects in Children with Micronucleus Cytome Assay: The MAPEC_LIFE Project
Authors: E. Carraro, Sa. Bonetta, Si. Bonetta, E. Ceretti, G. C. V. Viola, C. Pignata, S. Levorato, T. Salvatori, S. Vannini, V. Romanazzi, A. Carducci, G. Donzelli, T. Schilirò, A. De Donno, T. Grassi, S. Bonizzoni, A. Bonetti, G. Gilli, U. Gelatti
Abstract:
In 2013, air pollution and particulate matter (PM) were classified as carcinogenic to humans by the IARC. At present, PM is Europe's most problematic pollutant in terms of harm to health, as reported by the European Environment Agency (EEA) in its Technical Report on Air Quality in Europe, 2015: between 17% and 30% of the EU urban population lives in areas where the EU 24-hour air quality limit value for PM10 is exceeded. Many studies have found a consistent association between exposure to PM and the incidence of, and mortality from, some chronic diseases (i.e. lung cancer, cardiovascular diseases). Among the mechanisms responsible for these adverse effects, genotoxic damage is of particular concern. Children are a high-risk group in terms of the health effects of air pollution, and early exposure during childhood can increase the risk of developing chronic diseases in adulthood. MAPEC_LIFE (Monitoring Air Pollution Effects on Children for supporting public health policy) is a project funded by the EU Life+ Programme (LIFE12 ENV/IT/000614) which intends to evaluate the associations between air pollution and early biological effects in children and to propose a model for estimating the global risk of early biological effects due to air pollutants and other factors in children. This work focuses on micronucleus frequency in children's buccal cells in association with airborne PM levels, taking into account the influence of other factors associated with the children's lifestyle. The micronucleus test was performed on exfoliated buccal cells of 6-8-year-old children from 5 Italian towns with different air pollution levels. Data on air quality during the study period were obtained from the Regional Agency for Environmental Protection. A questionnaire administered to the children's parents was used to obtain details on family socio-economic status, children's health condition, exposure to other indoor and outdoor pollutants (i.e.
passive smoke) and lifestyle, with particular reference to eating habits. During the first sampling campaign (winter 2014-15), 1315 children were recruited and sampled for the micronucleus test in buccal cells. In the sampling period, the levels of the main pollutants and of PM10 were, as expected, higher in the north of Italy (PM10 mean values of 62 μg/m3 in Torino and 40 μg/m3 in Brescia) than in the other towns (Pisa, Perugia, Lecce). A higher micronucleus frequency in children's buccal cells was found in Brescia (0.6/1000 cells) than in the other towns (range 0.3-0.5/1000 cells). The statistical analysis underlines a relation of micronucleus frequency with PM concentrations, traffic level near the child's residence, and the parents' level of education. The results suggest that, in addition to air pollution exposure, some other factors, related to lifestyle or further exposures, may influence micronucleus frequency and the cellular response to air pollutants. Keywords: air pollution, buccal cells, children, micronucleus cytome assay
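The micronucleus frequency measure reported above (micronuclei per 1,000 scored cells) is a simple ratio; a minimal sketch with invented counts:

```python
# Hedged sketch of the micronucleus (MN) frequency metric: MN counted per
# 1,000 scored buccal cells, computed per town. The counts below are toy
# numbers chosen to mirror the reported 0.3-0.6/1000 range, not study data.

def mn_frequency(micronuclei, cells_scored):
    """Micronucleus frequency per 1,000 cells."""
    return 1000.0 * micronuclei / cells_scored

towns = {"Brescia": (12, 20000), "Pisa": (6, 20000)}
for town, (mn, cells) in towns.items():
    print(town, mn_frequency(mn, cells))
```

Per-town frequencies like these would then be related to PM10 levels and questionnaire covariates in the statistical analysis the abstract describes.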
Procedia PDF Downloads 256149 Profitability and Productivity Performance of the Selected Public Sector Banks in India
Authors: Sudipto Jana
Abstract:
Background and significance of the study: The banking industry acts as a catalyst for industrial and agricultural growth, and it also concerns the existence and welfare of the citizens. The banking system in India was characterized by unmatched growth and extensive branch expansion in the pre-liberalization era. At the time of the financial sector reforms, the Reserve Bank of India issued regulatory norms concerning capital adequacy, income recognition, asset classification and provisioning that have increasingly converged with international best practices. Bank management continuously monitors the success, effectiveness, productivity and performance of the bank, as good performance, high productivity and efficiency support the achievement of the bank management's targets as well as the aims of the bank. In a similar way, the performance of any economy depends upon the expediency and effectiveness of its financial system, which shapes its economic growth indicators. Profitability and productivity are the most relevant parameters of any banking group. Keeping this in view, this study examines the profitability and productivity performance of selected public sector banks in India. Methodology: This study is based on secondary data obtained from the Reserve Bank of India database for the period between 2006 and 2015. The study purposively selects four commercial banks, namely State Bank of India, United Bank of India, Punjab National Bank and Allahabad Bank. In order to analyze performance in relation to profitability and productivity, productivity performance indicators in terms of capital adequacy ratio, burden ratio, business per employee, spread per employee and advances per employee, and profitability performance indicators in terms of return on assets, return on equity, return on advances and return on branch, have been considered.
In the course of the analysis, descriptive statistics, correlation statistics and multiple regression have been used. Major findings: Descriptive statistics indicate that the productivity performance of State Bank of India is more satisfactory than that of the other public sector banks in India, but the management of productivity is unsatisfactory in all the public sector banks under study. Correlation statistics point out that profitability is strongly positively related to productivity performance in all the public sector banks under study. Multiple regression results show that as profitability increases, profit per employee increases and net non-performing assets decrease. Concluding statements: The productivity and profitability performance of United Bank of India, Allahabad Bank and Punjab National Bank is unsatisfactory due to poor management of asset quality as well as poor management efficiency. Government intervention is needed so that profitability and productivity performance increase in the near future. Keywords: India, productivity, profitability, public sector banks
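A few of the indicators listed in the methodology are straightforward ratios; a minimal sketch, with invented field names and figures (not RBI data):

```python
# Hedged sketch of some productivity/profitability indicators named above.
# Field names and the sample figures are illustrative assumptions, not
# Reserve Bank of India data.

def indicators(bank):
    return {
        "business_per_employee": (bank["deposits"] + bank["advances"]) / bank["employees"],
        "advances_per_employee": bank["advances"] / bank["employees"],
        "return_on_assets": bank["net_profit"] / bank["total_assets"],
        "return_on_equity": bank["net_profit"] / bank["equity"],
    }

# Toy figures for one bank (arbitrary units)
sample_bank = {
    "deposits": 1_500_000, "advances": 1_200_000, "employees": 200_000,
    "net_profit": 10_000, "total_assets": 2_000_000, "equity": 100_000,
}
print(indicators(sample_bank))
```

Computing these per bank per year, then running descriptive statistics, correlations and regression over the panel, mirrors the analysis pipeline the abstract describes.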
Procedia PDF Downloads 433148 Study the Effect of Liquefaction on Buried Pipelines during Earthquakes
Authors: Mohsen Hababalahi, Morteza Bastami
Abstract:
Buried pipeline damage correlations are a critical part of loss estimation procedures applied to lifelines for future earthquakes. The vulnerability of buried pipelines to earthquakes and liquefaction has been observed during several previous earthquakes, and there are many comprehensive reports about such events. One of the main causes of damage to buried pipelines during earthquakes is liquefaction, for which the necessary conditions are loose sandy soil, saturation of the soil layer and sufficient earthquake intensity. Because pipeline structures are very different from other structures (being long and having light mass), comparing the results of previous earthquakes for pipelines with those for other structures shows that the liquefaction hazard for buried pipelines is not high unless governing parameters such as earthquake intensity and soil looseness are severe. Recent liquefaction research on buried pipelines includes experimental and theoretical studies as well as damage investigations during actual earthquakes. The damage investigations have revealed that the damage ratio of pipelines (number/km) has much larger values in liquefied ground than in shaken ground without liquefaction, according to damage statistics from past severe earthquakes, and that damage to joints and to pipelines connected to manholes was remarkable. The purpose of this research is a numerical study of buried pipelines under the effect of liquefaction, using a case study of the 2013 Dashti (Iran) earthquake. The water supply and electrical distribution systems of this township were interrupted during the earthquake, and the water transmission pipelines were damaged severely due to the occurrence of liquefaction. The model consists of a polyethylene pipeline, 100 meters long and 0.8 meters in diameter, covered by light sandy soil at a burial depth of 2.5 meters from the surface.
Since the finite element method has been used relatively successfully to solve geotechnical problems, we used this method for the numerical analysis. Evaluating this case requires geotechnical information, a classification of earthquake levels, determination of the effective parameters in the probability of liquefaction, and three-dimensional numerical finite element modeling of the interaction between soil and pipelines. The results of this study indicate that the effect of liquefaction on buried pipelines is a function of pipe diameter, soil type, and peak ground acceleration, with a clear increase in the percentage of damage with increasing liquefaction severity. The results also indicate that although in this form of the analysis the damage is always associated with a certain pipe material, the nominally defined “failures” are dominated by failures of particular components (joints, connections, fire hydrant details, crossovers, laterals) rather than material failures. Finally, some retrofit suggestions are given in order to decrease the liquefaction risk for buried pipelines. Keywords: liquefaction, buried pipelines, lifelines, earthquake, finite element method
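The "probability of liquefaction" parameters mentioned above are commonly screened with the Seed–Idriss simplified procedure before any finite element modeling. The sketch below shows that general screening method, not the paper's FE model; the stresses and cyclic resistance value are illustrative assumptions.

```python
# Hedged sketch of the Seed-Idriss simplified liquefaction screening:
# CSR = 0.65 * (a_max / g) * (sigma_v / sigma_v_eff) * r_d,
# compared against the soil's cyclic resistance ratio (CRR).
# Input values below are toy numbers, not the Dashti case-study data.

def stress_reduction(z_m):
    """Depth reduction coefficient r_d (linear approximation, z <= 9.15 m)."""
    return 1.0 - 0.00765 * z_m

def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, z_m):
    """CSR from peak ground acceleration (in g) and total/effective stress (kPa)."""
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * stress_reduction(z_m)

def factor_of_safety(crr, csr):
    """FS < 1 indicates liquefaction is expected."""
    return crr / csr

# Pipe buried 2.5 m deep in saturated sand, PGA = 0.3 g (toy stresses in kPa)
csr = cyclic_stress_ratio(0.3, sigma_v=45.0, sigma_v_eff=30.0, z_m=2.5)
print(round(csr, 3), factor_of_safety(0.20, csr) < 1.0)
```

A factor of safety below 1 at the pipe's burial depth would flag the site for the detailed soil-pipe interaction modeling the study performs.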
Procedia PDF Downloads 513147 Exploring the Spatial Characteristics of Mortality Map: A Statistical Area Perspective
Authors: Jung-Hong Hong, Jing-Cen Yang, Cai-Yu Ou
Abstract:
The analysis of geographic inequality heavily relies on the use of location-enabled statistical data and quantitative measures to present the spatial patterns of the selected phenomena and analyze their differences. To protect the privacy of individual instances and to link to administrative units, point-based datasets are spatially aggregated into area-based statistical datasets, where only the overall status for the selected levels of spatial units is used for decision making. The partition of the spatial units thus has a dominant influence on the outcomes of the analysis, well known as the Modifiable Areal Unit Problem (MAUP). A new spatial reference framework, the Taiwan Geographical Statistical Classification (TGSC), was recently introduced in Taiwan based on spatial partition principles that aim for homogeneity in the number of population and households. Compared to the traditional township units, TGSC provides additional levels of spatial units with finer granularity for presenting spatial phenomena and enables domain experts to select an appropriate dissemination level for publishing statistical data. This paper compares the results of using TGSC and township units respectively on mortality data and examines the spatial characteristics of the outcomes. For the mortality data of Taitung County between January 1st, 2008 and December 31st, 2010, the all-cause age-standardized death rate (ASDR) at the township level ranges from 571 to 1757 per 100,000 persons, whereas the 2nd dissemination area (TGSC) shows greater variation, ranging from 0 to 2222 per 100,000. The finer granularity of the TGSC spatial units clearly provides better outcomes for identifying and evaluating geographic inequality and can be further analyzed with statistical measures from other perspectives (e.g., population, area, environment).
The management and analysis of the statistical data referring to the TGSC in this research is strongly supported by the use of Geographic Information System (GIS) technology. An integrated workflow was developed that consists of the processing of death certificates, the geocoding of street addresses, the quality assurance of geocoded results, the automatic calculation of statistical measures, the standardized encoding of measures and the geo-visualization of statistical outcomes. This paper also introduces a set of auxiliary measures from a geographic distribution perspective to further examine the hidden spatial characteristics of mortality data and justify the analyzed results. With a common statistical area framework like TGSC, the preliminary results demonstrate promising potential for developing a web-based statistical service that can effectively access domain statistical data and present the analyzed outcomes in meaningful ways to avoid wrong decision making. Keywords: mortality map, spatial patterns, statistical area, variation
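The ASDR figure reported above comes from direct age standardization: age-specific death rates weighted by a standard population. A minimal sketch with toy numbers (not the Taitung County data):

```python
# Hedged sketch of the all-cause age-standardized death rate (ASDR):
# direct standardization, i.e. age-specific rates weighted by standard
# population shares. Age groups and counts below are illustrative.

def asdr(deaths, population, standard_weights, per=100_000):
    """Direct age standardization; weights must sum to 1."""
    rate = sum(
        (d / p) * w
        for d, p, w in zip(deaths, population, standard_weights)
    )
    return rate * per

deaths = [5, 20, 120]             # toy age groups: young, middle, old
population = [10_000, 8_000, 4_000]
weights = [0.4, 0.4, 0.2]         # standard population shares

print(round(asdr(deaths, population, weights), 1))
```

Computing this per spatial unit (township vs. TGSC dissemination area) is what exposes the difference in variation the abstract reports.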
Procedia PDF Downloads 260146 Maternal, Delivery and Neonatal Outcomes in Women with Cervical Cancer. A Study of a Population Database
Authors: Aaron Samuels, Ahmad Badeghiesh, Haitham Baghlaf, Michael H. Dahan
Abstract:
Importance: Cervical cancer is the fourth most common cancer among women globally and a significant cause of cancer-related deaths. Understanding the impact of cervical cancer diagnosed during pregnancy on maternal, delivery, and neonatal outcomes is crucial for improving clinical management and outcomes for affected women and their children. Objective: The goal is to determine the effects of cervical cancer diagnosed during pregnancy on maternal, delivery, and neonatal outcomes using a population-based American database. Design: This study is a retrospective analysis of the Healthcare Cost and Utilization Project Nationwide Inpatient Sample (HCUP-NIS) database. The study period spans 2004-2014, and the analysis was conducted in 2023. Setting: The study used the HCUP-NIS database, which includes data from hospital stays across the United States, covering 48 states and the District of Columbia. Participants: The study included all women who delivered a child or had a maternal death from 2004-2014, with pregnancies at 24 weeks or above. The population comprised 9,096,788 pregnant women, including 222 diagnosed with cervical cancer prior to delivery. Exposures: The exposure was a diagnosis of cervical cancer during pregnancy, identified using International Classification of Diseases 9th Revision codes 180.0, 180.1, 180.8, and 180.9. Main Outcomes and Measures: Primary outcomes included maternal, delivery, and neonatal complications, including preterm delivery, cesarean section, hysterectomy, blood transfusion, deep venous thrombosis, pulmonary embolism, congenital anomalies, intrauterine fetal demise, and small-for-gestational-age neonates. Logistic regression analyses were conducted to evaluate the association between cervical cancer diagnosis and these outcomes, adjusting for potential confounding factors. Results: Women with cervical cancer were older (25.2% ≥35 years vs. 14.7%, p=0.001) and more likely to have Medicare insurance (1.4% vs.
0.6%, p=0.005); to use illicit drugs (4.1% vs. 1.4%, p=0.001); to smoke tobacco during pregnancy (14.9% vs. 4.9%, p=0.001); and to have chronic hypertension (3.6% vs. 1.8%, p=0.046). These women also had higher rates of preterm delivery (OR = 4.73, 95% CI (3.53-6.36), p=0.001); cesarean section (OR = 5.40, 95% CI (4.00-7.30), p=0.001); hysterectomy (OR = 390.23, 95% CI (286.43-531.65), p=0.001); blood transfusions (OR = 19.23, 95% CI (13.57-27.25), p=0.001); deep venous thrombosis (OR = 9.42, 95% CI (1.32-67.20), p=0.025); and pulmonary embolism (OR = 20.22, 95% CI (2.83-144.48), p=0.003). Neonatal outcomes, including congenital anomalies, intrauterine fetal demise, and small-for-gestational-age neonates, were comparable between groups. Conclusions and Relevance: Cervical cancer during pregnancy is associated with significant maternal and delivery risks; however, neonatal outcomes are largely unaffected. These findings highlight the need for a multidisciplinary approach to managing pregnant cervical cancer patients, involving oncological, obstetrical, and neonatal care specialists. Keywords: cervical cancer, maternal outcomes, neonatal outcomes, delivery outcomes
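Results like "OR = 4.73, 95% CI (3.53-6.36)" come from logistic regression; the unadjusted version of such an odds ratio can be sketched from a 2x2 table. The counts below are toy numbers, not HCUP-NIS data, and the study's reported estimates are adjusted for confounders, which this sketch omits.

```python
import math

# Hedged sketch: unadjusted odds ratio with a Wald 95% confidence interval
# from a 2x2 table (toy counts, not the study's data; the paper's ORs are
# adjusted via logistic regression).

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: exposed with/without outcome; c/d: unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Toy table: 60 of 200 exposed vs. 30 of 250 unexposed had preterm delivery
or_, lo, hi = odds_ratio_ci(60, 140, 30, 220)
print(f"OR={or_:.2f}, 95% CI ({lo:.2f}-{hi:.2f})")
```

A CI that excludes 1 corresponds to the p<0.05 significance reported for the outcomes above.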
Procedia PDF Downloads 15145 Finding the Association Rule between Nursing Interventions and Early Evaluation Results of In-Hospital Cardiac Arrest to Improve Patient Safety
Authors: Wei-Chih Huang, Pei-Lung Chung, Ching-Heng Lin, Hsuan-Chia Yang, Der-Ming Liou
Abstract:
Background: In-Hospital Cardiac Arrest (IHCA) threatens the lives of inpatients and seriously affects patient safety, the quality of inpatient care and hospital service. Health providers must identify the signs of IHCA early to avoid its occurrence. This study considers the potential association between early signs of IHCA and the patient care provided by nurses and other professionals before an IHCA occurs. The aim of this study is to identify significant associations between nursing interventions and abnormal early evaluation results of IHCA that can assist health care providers in monitoring inpatients at risk of IHCA, increasing the opportunities for early detection and prevention of IHCA. Materials and Methods: This study used a data mining technique called association rule mining to compute associations between nursing interventions and abnormal early evaluation results of IHCA. A nursing intervention and an abnormal early evaluation result of IHCA were considered to be co-occurring if the nursing intervention was provided within 24 hours of the abnormal early evaluation result last being observed. The rule-based methods were applied to 23.6 million electronic medical records (EMR) from a medical center in Taipei, Taiwan. This dataset includes 733 concepts of nursing interventions coded by Clinical Care Classification (CCC) codes and 13 early evaluation results of IHCA with binary codes. The values of interestingness and lift were computed as Q values to measure the co-occurrence and the strength of the associations between all in-hospital patient care measures and abnormal early evaluation results of IHCA. The associations were evaluated by comparing the Q values and were verified by medical experts. Results and Conclusions: The results show 4195 pairs of associations between nursing interventions and abnormal early evaluation results of IHCA, each with a Q value.
Of these, 203 pairs show a positive association, with Q values greater than 5. Inpatients with high blood sugar levels (hyperglycemia) have a positive association with a heart rate lower than 50 beats per minute or higher than 120 beats per minute (Q value 6.636). Inpatients with a temporary pacemaker (TPM) have a significant association with high risk of IHCA (Q value 47.403). There is a significant positive correlation between inpatients with hypovolemia and the occurrence of abnormal heart rhythms (arrhythmias) (Q value 127.49). The results of this study can help prevent IHCA by enabling health care providers to recognize inpatients at risk of IHCA early, assist with monitoring patients in order to provide quality care, and improve IHCA surveillance and the quality of in-hospital care. Keywords: in-hospital cardiac arrest, patient safety, nursing intervention, association rule mining
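The lift measure underlying the association-rule mining above has a simple definition: lift(A → B) = P(A and B) / (P(A) · P(B)), with lift > 1 meaning the pair co-occurs more often than chance. A minimal sketch on an invented record set (not the 23.6 million-record EMR data; item names are hypothetical):

```python
# Hedged sketch of the lift measure used in association rule mining.
# Each record is the set of items (interventions / evaluation results)
# observed for one patient window; items and records are toy examples.

def lift(records, a, b):
    n = len(records)
    p_a = sum(1 for r in records if a in r) / n
    p_b = sum(1 for r in records if b in r) / n
    p_ab = sum(1 for r in records if a in r and b in r) / n
    return p_ab / (p_a * p_b)

records = [
    {"TPM", "arrest_sign"}, {"TPM", "arrest_sign"},
    {"TPM"}, {"hydration"}, {"hydration"}, {"hydration", "arrest_sign"},
]
print(round(lift(records, "TPM", "arrest_sign"), 2))
```

Ranking all intervention/result pairs by such a score, then keeping those above a threshold (the abstract's Q > 5), reproduces the shape of the analysis described.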
Procedia PDF Downloads 272
144 Incidences and Factors Associated with Perioperative Cardiac Arrest in Trauma Patient Receiving Anesthesia
Authors: Visith Siriphuwanun, Yodying Punjasawadwong, Suwinai Saengyo, Kittipan Rerkasem
Abstract:
Objective: To determine the incidence of and factors associated with perioperative cardiac arrest in trauma patients who received anesthesia for emergency surgery. Design and setting: Retrospective cohort study of trauma patients undergoing anesthesia for emergency surgery at a university hospital in northern Thailand. Patients and methods: This study was approved by the medical ethics committee, Faculty of Medicine, Maharaj Nakorn Chiang Mai Hospital, Thailand. We reviewed the data of 19,683 trauma patients who received anesthesia over roughly a decade, from January 2007 to March 2016. The data analyzed included patient characteristics, trauma surgery procedures, and anesthesia information such as ASA physical status classification, anesthesia techniques, anesthetic drugs, location where anesthesia was performed, and cardiac arrest outcomes. The study excluded trauma patients who received local anesthesia by surgeons or monitored anesthesia care (MAC) and patients with missing information. Factors associated with perioperative cardiac arrest were identified with univariate analyses. A multiple regression model with risk ratios (RR) and 95% confidence intervals (CI) was used to identify factors correlated with perioperative cardiac arrest. Multicollinearity among all variables was examined with a bivariate correlation matrix. A stepwise algorithm with a p-value threshold of less than 0.02 was used to select variables for the multivariate analysis. A p-value of less than 0.05 was considered statistically significant. Measurements and results: The incidence of perioperative cardiac arrest in trauma patients receiving anesthesia for emergency surgery was 170.04 per 10,000 cases.
Factors associated with perioperative cardiac arrest in trauma patients were age over 65 years (RR=1.41, CI=1.02–1.96, p=0.039), ASA physical status 3 or higher (RR=4.19–21.58, p < 0.001), site of surgery (intracranial, intrathoracic, upper intra-abdominal, and major vascular, each p < 0.001), cardiopulmonary comorbidities (RR=1.55, CI=1.10–2.17, p=0.012), hemodynamic instability with shock prior to receiving anesthesia (RR=1.60, CI=1.21–2.11, p < 0.001), special surgical techniques such as cardiopulmonary bypass (CPB) and hypotensive techniques (RR=5.55, CI=2.01–15.36, p=0.001; RR=6.24, CI=2.21–17.58, p=0.001, respectively), and a history of alcoholism (RR=5.27, CI=4.09–6.79, p < 0.001). Conclusion: The incidence of perioperative cardiac arrest in trauma patients receiving anesthesia for emergency surgery was very high and correlated with many factors, especially patient age, cardiopulmonary comorbidities, a history of alcohol addiction, higher ASA physical status, preoperative shock, special surgical techniques, and surgical sites including the brain, thorax, abdomen, and major vascular regions. Anesthesiologists and multidisciplinary teams in the pre- and perioperative periods should remain alert for warning signs of impending cardiac arrest and be quick to manage high-risk surgical trauma patients. Furthermore, healthcare policy should promote the prevention of accidents in high-risk groups of the population.
Keywords: perioperative cardiac arrest, trauma patients, emergency surgery, anesthesia, risk factors, incidence
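The risk ratios with 95% confidence intervals reported above can be illustrated on a 2x2 table. A minimal sketch follows; the counts are invented for illustration (they are not the study's data), and the published RRs come from a multivariable model, whereas this computes an unadjusted RR:

```python
import math

def risk_ratio(exposed_events, exposed_n, unexposed_events, unexposed_n):
    """Unadjusted risk ratio with a 95% CI from the log-RR standard error."""
    r1 = exposed_events / exposed_n
    r0 = unexposed_events / unexposed_n
    rr = r1 / r0
    se = math.sqrt(1 / exposed_events - 1 / exposed_n
                   + 1 / unexposed_events - 1 / unexposed_n)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical 2x2 counts: arrests among patients aged >65 vs. younger.
rr, lo, hi = risk_ratio(60, 3000, 280, 16683)
print(f"RR={rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A CI that excludes 1 corresponds to a statistically significant association at the 5% level.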
Procedia PDF Downloads 170
143 A Study of Lapohan Traditional Pottery Making in Selakan Island, Semporna Sabah: An Initial Framework
Authors: Norhayati Ayob, Shamsu Mohamad
Abstract:
This paper aims to provide an initial background on the process of making traditional ceramic pottery, focusing on the materials and the influence of cultural heritage. Ceramic pottery is one of the hallmarks of Sabah's heirloom, not only used as cooking and storage containers but also closely linked with folk cultures and heritage. The Bajau Laut ethnic community of Semporna, better known as the Sea Gypsies, are mostly boat dwellers who work as fishermen along the coast. This community is famous for its own artistic traditional heirloom, especially the traditional hand-made clay stove called Lapohan. In the daily life of the Bajau Laut community, the Lapohan (clay stove) is used to prepare meals and as a food warmer while they are at sea. Lapohan pottery also conveys the symbolic meaning of natural objects, portraying the identity and values of the Bajau Laut community. It is acknowledged that the basic process of making potterywares was much the same for people all across the world; nevertheless, it is crucial to consider that different ethnic groups may have their own styles and choices of raw materials. Furthermore, it is still unknown why and how the Bajau Laut of Semporna began making their own pottery and how the craft has survived to this day by relying heavily on the raw materials available in Semporna. In addition, an emerging problem faced by pottery makers in Sabah is the absence of young successors to continue the heirloom legacy. Therefore, this research aims to explore traditional pottery making in Sabah by investigating the background history of Lapohan pottery and to propose a classification of Lapohan based on the designs and motifs of traditional pottery identified throughout the study. It is postulated that different techniques and forms of making traditional pottery may produce different types of pottery in terms of surface decoration, shape, and size, reflecting different cultures.
This study will be conducted at Selakan Island, Semporna, the only location where Lapohan making still survives. The study also covers the chronological process of making pottery and the taboos involved in preparing the clay, forming, decoration techniques, motif application, and firing techniques. The relevant information will be gathered through field study, including observation, in-depth interviews, and video recording. In-depth interviews will be conducted with several potters, and the conversations and pottery-making process will be recorded in order to understand the actual process of making Lapohan. The findings are expected to yield several types of Lapohan based on different designs and cultures; for example, a stove with a flat-shaped design or a round shape on top will be labeled with a suitable name based on the associated culture. In conclusion, it is hoped that this study will contribute to the conservation of traditional pottery making in Sabah as well as preserve the community's culture and heirloom for future generations.
Keywords: Bajau Laut, culture, Lapohan, traditional pottery
Procedia PDF Downloads 191
142 Spatial Heterogeneity of Urban Land Use in the Yangtze River Economic Belt Based on DMSP/OLS Data
Authors: Liang Zhou, Qinke Sun
Abstract:
Taking the Yangtze River Economic Belt as an example and using long-term nighttime light data from DMSP/OLS for 1992 to 2012, support vector machine (SVM) classification was used to quantitatively extract the urban built-up areas of the economic belt, and spatial analysis methods such as the expansion intensity index and the standard deviational ellipse were introduced. The model examines in detail the strength, direction, and type of expansion in the middle and lower reaches of the economic belt and its key node cities. The results show that: (1) From 1992 to 2012, the built-up areas of the major cities in the Yangtze River valley expanded rapidly, growing from 9,615 km² in 1992 to 70,007 km² in 2012, an expansion of 60,392 km² at an average annual rate of 31%. The spatial gradient analysis of the watershed shows that the expansion of urban built-up areas is led by Shanghai and declines in the order 'upstream-downstream-midstream', with average annual expansion rates of 36% and 35% for the upstream and downstream, respectively, and 17% for the midstream, which is about 50% of the upstream and downstream rates. (2) The analysis of expansion intensity shows that urban expansion intensity in the Yangtze River basin has generally trended upward: the downstream region has risen continuously, while the upper and middle reaches have fluctuated with different amplitudes. Among the key node cities, Chengdu, Chongqing, and Wuhan in the upper and middle reaches track the regional expansion intensity closely, while the node cities centered on Shanghai downstream continue to maintain a high level of expansion.
(3) The standard deviational ellipse analysis shows that the overall urban center of gravity of the Yangtze River basin is located in Anqing City, Anhui Province, and moved back and forth between 1992 and 2012. The distribution range of the nighttime-light standard deviational ellipse increased from 61.96 km² to 76.52 km². The major axis of the ellipse grew significantly more than the minor axis, giving the ellipse a pronounced east-west orientation, and the nighttime lights of the downstream area occupied the leading position in the basin's urban luminosity hierarchy.
Keywords: urban space, support vector machine, spatial characteristics, night lights, Yangtze River Economic Belt
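The standard deviational ellipse used in point (3) can be computed from pixel coordinates weighted by light intensity. Below is a minimal sketch of one common formulation; conventions for the rotation angle vary between GIS packages, and the function and demo data are illustrative, not the study's:

```python
import numpy as np

def std_dev_ellipse(x, y, w=None):
    """Weighted standard deviational ellipse: center, rotation angle, and
    semi-axis lengths; w can be nighttime-light intensities so that bright
    pixels pull the ellipse toward them."""
    w = np.ones_like(x, dtype=float) if w is None else np.asarray(w, float)
    xm, ym = np.average(x, weights=w), np.average(y, weights=w)
    dx, dy = x - xm, y - ym
    # Rotation of the major axis (one common SDE formulation).
    a = np.sum(w * (dx ** 2 - dy ** 2))
    c = 2 * np.sum(w * dx * dy)
    b = np.sqrt(a ** 2 + c ** 2)
    theta = np.arctan((a + b) / c) if c != 0 else 0.0
    # Standard deviations along and across the rotated direction.
    sx = np.sqrt(np.sum(w * (dx * np.cos(theta) - dy * np.sin(theta)) ** 2) / w.sum())
    sy = np.sqrt(np.sum(w * (dx * np.sin(theta) + dy * np.cos(theta)) ** 2) / w.sum())
    return (xm, ym), theta, sx, sy

# Demo: points spread along the x-axis yield an east-west oriented ellipse.
(cx, cy), theta, sx, sy = std_dev_ellipse(np.arange(5.0), np.zeros(5))
print((cx, cy), theta, sx, sy)
```

A major axis much longer than the minor axis, as in the demo, corresponds to the pronounced east-west axiality reported for the basin.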
Procedia PDF Downloads 115
141 Mapping and Mitigation Strategy for Flash Flood Hazards: A Case Study of Bishoftu City
Authors: Berhanu Keno Terfa
Abstract:
Flash floods are among the most dangerous natural disasters that pose a significant threat to human existence. They occur frequently and can cause extensive damage to homes, infrastructure, and ecosystems while also claiming lives. Although flash floods can happen anywhere in the world, their impact is particularly severe in developing countries due to limited financial resources, inadequate drainage systems, substandard housing options, lack of early warning systems, and insufficient preparedness. To address these challenges, a comprehensive study has been undertaken to analyze and map flood inundation using Geographic Information System (GIS) techniques by considering various factors that contribute to flash flood resilience and developing effective mitigation strategies. Key factors considered in the analysis include slope, drainage density, elevation, Curve Number, rainfall patterns, land-use/cover classes, and soil data. These variables were computed using ArcGIS software platforms, and data from the Sentinel-2 satellite image (with a 10-meter resolution) were utilized for land-use/cover classification. Additionally, slope, elevation, and drainage density data were generated from the 12.5-meter resolution of the ALOS Palsar DEM, while other relevant data were obtained from the Ethiopian Meteorological Institute. By integrating and regularizing the collected data through GIS and employing the analytic hierarchy process (AHP) technique, the study successfully delineated flash flood hazard zones (FFHs) and generated a suitable land map for urban agriculture. The FFH model identified four levels of risk in Bishoftu City: very high (2106.4 ha), high (10464.4 ha), moderate (1444.44 ha), and low (0.52 ha), accounting for 15.02%, 74.7%, 10.1%, and 0.004% of the total area, respectively. The results underscore the vulnerability of many residential areas in Bishoftu City, particularly the central areas that have been previously developed. 
Accurate spatial representation of flood-prone areas and potential agricultural zones is crucial for designing effective flood mitigation and agricultural production plans. The findings of this study emphasize the importance of flood risk mapping in raising public awareness, demonstrating vulnerability, strengthening financial resilience, protecting the environment, and informing policy decisions. Given the susceptibility of Bishoftu City to flash floods, it is recommended that the municipality prioritize urban agriculture adaptation, proper settlement planning, and drainage network design.
Keywords: remote sensing, flash flood hazards, Bishoftu, GIS
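The analytic hierarchy process (AHP) step described above can be illustrated numerically: factor weights are taken from the principal eigenvector of a pairwise comparison matrix, and a consistency ratio checks the judgments. The matrix below is an invented example for three of the factors, not the weights used in the study:

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty's 1-9 scale) for three of
# the flood factors, e.g., slope vs. drainage density vs. rainfall; these
# judgments are invented for illustration, not taken from the study.
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
])

# Factor weights = normalized principal eigenvector of A.
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = vecs[:, k].real
w = w / w.sum()

# Consistency ratio CR = CI / RI (RI = 0.58 is Saaty's random index for n=3);
# CR < 0.10 means the pairwise judgments are acceptably consistent.
lam = vals.real[k]
ci = (lam - len(A)) / (len(A) - 1)
cr = ci / 0.58
print(w.round(3), round(cr, 3))
```

In a GIS workflow, the resulting weights multiply the reclassified factor rasters before summation into the hazard index.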
Procedia PDF Downloads 38
140 The Effects of a Hippotherapy Simulator in Children with Cerebral Palsy: A Pilot Study
Authors: Canan Gunay Yazici, Zubeyir Sarı, Devrim Tarakci
Abstract:
Background: Hippotherapy is regarded as a well-established technique in the rehabilitation of children with cerebral palsy, as it improves gait pattern, balance, postural control, and gross motor skill development, but it faces practical problems (such as the high cost of the horses' care, nutrition, and housing). Hippotherapy simulators have been developed in recent years to overcome these problems: these devices aim to reproduce the effects of hippotherapy with a real horse by simulating the horse's movements. Objectives: To evaluate the efficacy of a hippotherapy simulator on the gross motor functions, sitting postural control, and dynamic balance of children with cerebral palsy (CP). Methods: Fourteen children with CP participated, aged 6–15 years: seven with a diagnosis of spastic hemiplegia, five with diplegia, and two with triplegia, all at Gross Motor Function Classification System levels I-III. The Horse Riding Simulator (HRS), with a four-speed program (warm-up, levels 1-3), was used as the hippotherapy simulator. First, each child received Neurodevelopmental Therapy (NDT; 45 min, twice weekly for eight weeks). Subsequently, the same children completed HRS+NDT (30 min and 15 min, respectively, twice weekly for eight weeks). Children were assessed pre-treatment and at the end of the 8th and 16th weeks. Gross motor function, sitting postural control, and dynamic sitting and standing balance were evaluated by the Gross Motor Function Measure-88 (GMFM-88; Dimensions B, D, E and Total Score), the Trunk Impairment Scale (TIS), the Pedalo® Sensamove Balance Test, and the Pediatric Balance Scale (PBS), respectively. The study was supported by the Scientific Research Projects Unit of Marmara University. Results: All measured variables increased significantly over baseline after both interventions (NDT and HRS+NDT), except for dynamic sitting balance evaluated by Pedalo®. After HRS+NDT in particular, the increase in the measured variables was considerably higher than after NDT alone.
After NDT, the total scores of the GMFM-88 (mean baseline 62.2 ± 23.5; mean NDT 66.6 ± 22.2; p < 0.05), TIS (10.4 ± 3.4; 12.1 ± 3; p < 0.05), PBS (37.4 ± 14.6; 39.6 ± 12.9; p < 0.05), Pedalo® sitting (91.2 ± 6.7; 92.3 ± 5.2; p > 0.05) and Pedalo® standing balance scores (80.2 ± 10.8; 82.5 ± 11.5; p < 0.05) increased by 7.1%, 2%, 3.9%, 5.2% and 6%, respectively. After the HRS+NDT treatment, the total scores of the GMFM-88 (mean baseline 62.2 ± 23.5; mean HRS+NDT 71.6 ± 21.4; p < 0.05), TIS (10.4 ± 3.4; 15.6 ± 2.9; p < 0.05), PBS (37.4 ± 14.6; 42.5 ± 12; p < 0.05), Pedalo® sitting (91.2 ± 6.7; 93.8 ± 3.7; p > 0.05) and standing balance scores (80.2 ± 10.8; 86.2 ± 5.6; p < 0.05) increased by 15.2%, 6%, 7.3%, 6.4%, and 11.9%, respectively, compared to the initial values. Conclusion: Neurodevelopmental therapy provided significant improvements in gross motor function, sitting postural control, and sitting and standing balance in children with CP. When the hippotherapy simulator was added to the treatment program, these functions improved further (especially gross motor function and dynamic balance). This pilot study thus showed that the hippotherapy simulator could be a useful adjunct to neurodevelopmental therapy for improving gross motor function, sitting postural control, and dynamic balance in children with CP.
Keywords: balance, cerebral palsy, hippotherapy, rehabilitation
Procedia PDF Downloads 146
139 Repair of Thermoplastic Composites for Structural Applications
Authors: Philippe Castaing, Thomas Jollivet
Abstract:
As a result of their advantages, i.e., recyclability, weldability, and environmental compatibility, long (continuous) fiber thermoplastic composites (LFTPC) are increasingly used in many industrial sectors (mainly automotive and aeronautic) for structural applications. Indeed, over the next ten years, environmental rules will put pressure on the use of new structural materials like composites. In aerospace, more than 50% of damage is due to impact stress, and 85% of damage is repaired on the fuselage (fuselage skin panels and around doors). With the arrival of airplanes made mainly of composite materials, replacing sections or panels seems economically difficult, and repair becomes essential. The objective of the present study is to propose a repair solution that avoids replacing the damaged part of a thermoplastic composite, in order to recover the initial mechanical properties. The classification of impact damage is not easy: speaking of low-energy impact (less than 35 J) can be misleading when high speeds, small thicknesses, or thermoplastic resins are considered. Crashes and perforation at higher energies create extensive damage, and such structures are replaced rather than repaired, so we consider here only damage due to low-energy impacts, which for laminates comprises: transverse cracking; delamination; fiber rupture. At low energy, the damage is barely visible but can nevertheless significantly reduce the mechanical strength of the part due to resin cracks, while little fiber rupture is observed. The patch repair solution remains the standard one but may lead to fiber rupture and consequently create more damage. That is the reason why we investigate the repair of thermoplastic composites impacted at low energy. Indeed, thermoplastic resins are interesting as they absorb impact energy through plastic strain.
The methodology is as follows: impact tests at low energy on thermoplastic composites; identification of the damage by micrographic observations; evaluation of the harmfulness of the damage; repair by reconsolidation according to the extent of the damage; validation of the repair by mechanical characterization (compression). In this study, impact tests are performed at various energy levels on thermoplastic composites (PA/C, PEEK/C and PPS/C, woven 50/50 and unidirectional) to determine the level of impact energy that creates damage in the resin without fiber rupture. We identify the extent of the damage by ultrasonic inspection and micrographic observations through the thickness of the plane part. The samples were additionally characterized in compression to evaluate the loss of mechanical properties. The repair strategy then consists in reconsolidating the damaged parts by thermoforming; after reconsolidation, the laminates are characterized in compression for validation. To conclude, the study demonstrates the feasibility of repairing low-energy impact damage in thermoplastic composites, as the samples recover their properties. In this first step of the study, the 'repair' is made by reconsolidation on a thermoforming press, but one could imagine an in-situ process to reconsolidate damaged parts.
Keywords: aerospace, automotive, composites, compression, damages, repair, structural applications, thermoplastic
Procedia PDF Downloads 305
138 Rethinking Urban Voids: An Investigation beneath the Kathipara Flyover, Chennai into a Transit Hub by Adaptive Utilization of Space
Authors: V. Jayanthi
Abstract:
Urbanization and its pace have increased tremendously in the last few decades, and more towns are being converted into cities. The urbanization trend is seen all over the world but is most dominant in Asia. Today, the scale of urbanization in India is so large that Indian cities are among the fastest-growing in the world, including Bangalore, Hyderabad, Pune, Chennai, Delhi, and Mumbai. Urbanization remains the single predominant factor continuously linked to the destruction of urban green spaces. With Chennai as a case study, a city suffering rapid deterioration of its green spaces, this paper seeks to fill this gap by exploring key factors besides urbanization that are responsible for the destruction of green spaces. The paper relies on triangulated data collection techniques such as interviews, focus group discussions, personal observation, and the retrieval of archival data. It was observed that, apart from urbanization, problems of green-space land ownership, the low priority given to green spaces, poor maintenance, weak enforcement of development controls, wasted underpass spaces, and uncooperative attitudes among the general public play a critical role in the destruction of urban green spaces. The paper therefore concludes that broader city development plans are essential for a city to sustain proper urban green spaces. Though rapid urbanization is an indicator of positive development, it is also accompanied by a host of challenges. Chennai lost much of its greenery as the city urbanized rapidly, leading to a steep fall in vegetation cover. Environmental deterioration will be the heavy price paid if Chennai continues to grow at the expense of its greenery.
Soaring skyscrapers, multistoried complexes, gated communities, and villas frame the iconic skyline of today's Chennai, revealing how we overlook the importance of the green cover needed to balance the city's urban and lung spaces. Chennai, with a clumped landscape at the center of the city, is predicted to convert 36% of its total area into urban areas by 2026. One major issue is that a city designed and planned in isolation creates neglected, underused spaces throughout. These urban voids are dead, underused, or unused spaces formed by inefficient decision making, poor land management, and poor coordination. Urban voids have huge potential to create a stronger urban fabric when exploited as public gathering spaces, pocket parks, or plazas, or simply to enhance the public realm, rather than serving as sites for dumped debris and encroachments. Flyovers, too, need to justify their existence by being more than just traffic and transport solutions. The vast, unused space below the Kathipara flyover is a case in point: this flyover connects three major routes, Tambaram, Koyambedu, and Adyar. This research focuses on the concept of urban voids and on how these neglected spaces under flyovers can become part of the urban realm through place-making, urban design, and landscaping.
Keywords: landscape design, flyovers, public spaces, reclaiming lost spaces, urban voids
Procedia PDF Downloads 287
137 Mobile App versus Website: A Comparative Eye-Tracking Case Study of Topshop
Authors: Zofija Tupikovskaja-Omovie, David Tyler, Sam Dhanapala, Steve Hayes
Abstract:
The UK leads in online retail and mobile adoption. However, there is a dearth of information relating to mobile apparel retail, and developing an understanding of consumer browsing and purchase behavior in the m-retail channel would provide apparel marketers, mobile website developers, and app developers with the necessary understanding of consumers' needs. Despite the rapid growth of mobile retail businesses, no published study has examined shopping behaviour on fashion mobile websites and apps. A mixed-method approach helped to understand why fashion consumers prefer websites on mobile devices when mobile apps are also available. The following research methods were employed: survey, eye-tracking experiments, observation, and interview with retrospective think-aloud. The mobile gaze-tracking device by SensoMotoric Instruments was used to understand frustrations in navigation and other issues facing consumers in the mobile channel. This method helped to validate and complement other traditional user-testing approaches in order to optimize the user experience and enhance the development of the mobile retail channel. The study involved eight participants: females aged 18 to 35 years old who are existing mobile shoppers. The participants used the Topshop mobile app and website on a smartphone to complete a task according to a specified scenario leading to a purchase. The comparative study was based on: duration and time spent at different stages of the shopping journey, the number of steps involved and product pages visited, the search approaches used, layout and visual cues, as well as consumer perceptions and expectations. The results from the data analysis show significant differences in consumer behaviour when using a mobile app or website on a smartphone. Moreover, two types of problems were identified, namely technical issues and human errors. Having a mobile app does not guarantee success in satisfying mobile fashion consumers.
The differences in layout and visual cues seem to influence the overall shopping experience on a smartphone. The layout of search results on the website was different from that on the mobile app; therefore, participants in most cases behaved differently on the two platforms. The number of product pages visited on the mobile app was triple the number visited on the website, due to the limited visibility of products in the search results. Although the data on traffic trends held by retailers to date, including retail sector breakdowns for visits and views and data on device splits and duration, might seem a valuable source of information, it cannot explain why consumers visit many product pages, stay longer on the website or mobile app, or abandon the basket. A comprehensive list of pros and cons was developed highlighting issues for the website and mobile app, and recommendations were provided. The findings suggest that fashion retailers need to be aware of consumers' actual behaviour on the mobile channel and their expectations in order to offer a seamless shopping experience, added to which is the challenge of retaining existing and acquiring new customers. There seem to be differences in the way fashion consumers search and shop on mobile, which need to be explored in further studies.
Keywords: consumer behavior, eye-tracking technology, fashion retail, mobile app, m-retail, smart phones, Topshop, user experience, website
Procedia PDF Downloads 460
136 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments
Authors: Skyler Kim
Abstract:
An early diagnosis of leukemia has always been a challenge for doctors and hematologists. Worldwide, approximately 350,000 new cases were reported in 2012, and diagnosing leukemia remains time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnostic tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods is the AI approach, which has become a major trend in recent years, and several research groups have been working on such diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger datasets remains complex. Leukemia is a major hematological malignancy that causes mortality and morbidity across different ages. We selected acute lymphocytic leukemia to develop our diagnostic system, since it is the most common type of leukemia in children, accounting for 74% of all childhood leukemia diagnoses; the results of this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15,135 images in total: 8,491 images of abnormal cells and 5,398 normal images. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger datasets. The proposed diagnostic system detects and classifies leukemia. Unlike other AI approaches, we explore hybrid architectures to improve on current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50.
Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, features fused from specific abstraction layers can be treated as auxiliary features and lead to further improvement in classification accuracy. Features extracted from the lower levels are combined into higher-dimension feature maps to help improve the discriminative capability of the intermediate features and also mitigate the problem of vanishing or exploding gradients. By comparing VGG19, ResNet50, and the proposed hybrid model, we concluded that the hybrid model has a significant advantage in accuracy. The detailed results of each model's performance and their pros and cons will be presented at the conference.
Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning
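The late-fusion idea, concatenating feature vectors from two backbones before classification, can be sketched without the actual networks. The random projections below merely stand in for frozen VGG19/ResNet50 feature extractors; all dimensions and names are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random projections standing in for two frozen backbones: each maps a
# flattened image to a feature vector, as the penultimate layers of VGG19
# and ResNet50 would (512- and 2048-dimensional, respectively).
W_vgg = rng.normal(size=(4096, 512))
W_res = rng.normal(size=(4096, 2048))

def extract_fused_features(images):
    """Run both 'backbones' and concatenate their features (late fusion)."""
    f1 = np.maximum(images @ W_vgg, 0)  # ReLU features from backbone 1
    f2 = np.maximum(images @ W_res, 0)  # ReLU features from backbone 2
    return np.concatenate([f1, f2], axis=1)

batch = rng.normal(size=(8, 4096))      # 8 flattened 64x64 grayscale images
fused = extract_fused_features(batch)
print(fused.shape)  # (8, 2560): 512 VGG-style + 2048 ResNet-style columns
```

In the real system, the fused vector would feed a trainable classification head, with the concatenation supplying the auxiliary features described above.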
Procedia PDF Downloads 188
135 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation
Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk
Abstract:
The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition, and security are among the possible fields of application. In all these fields, the amount of collected data is increasing quickly, and as the data grow, computation speed becomes the critical factor. Data reduction is one solution to this problem. In rough sets, redundancy can be removed by computing a reduct. Many algorithms for generating reducts have been developed, but most of them are software-only implementations and therefore have many limitations. A microprocessor uses a fixed word length and consumes considerable time both fetching and processing instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process the data faster than software. A reduct is a subset of the condition attributes that preserves the discernibility of the objects; for a given decision table, there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of the full set of condition attributes, and every reduct contains all the attributes of the core. In this paper, a hardware implementation of a two-stage greedy algorithm for finding one reduct is presented. The decision table is used as the input; the output of the algorithm is a superreduct, i.e., a reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that appear most frequently in the decision table.
The algorithm described above has two disadvantages: (i) it generates a superreduct instead of a reduct, and (ii) the first stage may be unnecessary if the core is empty. For systems focused on fast computation of the reduct, however, the first disadvantage is not the key problem. The core calculation can be achieved with a combinational logic block and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called the 'singleton detector', which detects whether the input word contains only a single 'one'. Counting the occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus requires a sequential circuit to control the calculations. For research purposes, the algorithm was also implemented in C and run on a PC, and the execution times of the reduct calculation in hardware and in software were compared. The results show a clear increase in the speed of data processing.
Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set
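A software sketch of the two-stage greedy scheme described above, on a toy decision table with invented attribute names. Stage one extracts the core from singleton entries of the discernibility matrix; stage two greedily covers the remaining entries. Here the attribute frequency is counted over the still-uncovered discernibility entries, a common variant of the paper's 'most common attribute' heuristic:

```python
from collections import Counter
from itertools import combinations

# Toy decision table: a, b, c, e are condition attributes, 'd' is the
# decision attribute (values are invented for illustration).
table = [
    {"a": 0, "b": 0, "c": 0, "e": 0, "d": 0},
    {"a": 1, "b": 0, "c": 0, "e": 0, "d": 1},
    {"a": 0, "b": 1, "c": 1, "e": 0, "d": 1},
    {"a": 0, "b": 1, "c": 0, "e": 1, "d": 1},
]
conds = ["a", "b", "c", "e"]

# Discernibility matrix: for every pair of objects with different decisions,
# the set of condition attributes on which the pair differs.
matrix = [
    {a for a in conds if x[a] != y[a]}
    for x, y in combinations(table, 2)
    if x["d"] != y["d"]
]

# Stage 1: the core consists of attributes appearing as singleton entries.
core = {next(iter(e)) for e in matrix if len(e) == 1}

# Stage 2: greedily enrich the core with the most frequent attribute among
# the still-uncovered entries until every entry is covered (a superreduct).
superreduct = set(core)
while any(not (e & superreduct) for e in matrix):
    freq = Counter(a for e in matrix if not (e & superreduct) for a in e)
    superreduct.add(freq.most_common(1)[0][0])

print(sorted(superreduct))  # → ['a', 'b']
```

In the hardware version, the singleton test of stage 1 corresponds to the 'singleton detector' block and the frequency count of stage 2 to the cascade of adders.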
Procedia PDF Downloads 220