Search results for: computer aided navigation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2804

1784 Placement of English Lexical Stress by Arabic-Speaking EFL Learners: How Computer-Generated Spectrographic Representations of Correct Pronunciations Can Provide a Visual Aid to Learners

Authors: Rami Al-Sadi

Abstract:

The assignment of lexical stress to the correct syllable in English words is an enormous challenge for EFL learners, especially if their first language (L1) phonology is very different from English phonology. Arabic-speaking EFL learners not only stumble very frequently when placing the lexical stress in a given word, but they also tend to dismiss lexical stress as unimportant, mainly because in Arabic, unlike in English, lexical stress is not phonemic. This study aims to explore the possible benefits of utilizing spectrographic representations of correctly pronounced English words, in order to find out how these spectrograms can provide a visual aid to learners and help them rectify their stress placement errors as they see, in real time, spectrograms of the correct pronunciations juxtaposed on a computer screen with spectrograms of their own pronunciations for easy comparison. The study involved 120 students from the English Department at Prince Sattam bin Abdulaziz University in Saudi Arabia. Sixty participants were taught the English lexical stress rules and also received spectrographic guidance on pronunciation; the other 60 received only verbal instruction on the stress rules and verbal feedback on their pronunciations. Statistical results showed that when the learners had the opportunity to ‘see’ their pronunciation mistakes, they were three times more likely to rectify their placement of lexical stress.
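
A minimal sketch (not the authors' tool) of how a reference pronunciation and a learner's pronunciation could be rendered as side-by-side spectrograms for visual comparison; the file names and spectrogram settings are hypothetical.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

def plot_pair(reference_wav, learner_wav):
    fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
    labels = ("Correct pronunciation", "Learner pronunciation")
    for ax, path, title in zip(axes, (reference_wav, learner_wav), labels):
        fs, x = wavfile.read(path)
        if x.ndim > 1:                      # mix stereo down to mono
            x = x.mean(axis=1)
        f, t, sxx = spectrogram(x, fs, nperseg=512, noverlap=384)
        ax.pcolormesh(t, f, 10 * np.log10(sxx + 1e-12), shading="gouraud")
        ax.set(title=title, xlabel="Time (s)")
    axes[0].set_ylabel("Frequency (Hz)")
    plt.tight_layout()
    plt.show()

# plot_pair("reference_word.wav", "learner_word.wav")  # hypothetical recordings
```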

Keywords: Arabic-speaking EFL learners, lexical stress, pronunciation, spectrographic representation, stress placement

Procedia PDF Downloads 123
1783 Paddy/Rice Singulation for Determination of Husking Efficiency and Damage Using Machine Vision

Authors: M. Shaker, S. Minaei, M. H. Khoshtaghaza, A. Banakar, A. Jafari

Abstract:

In this study, a machine vision and singulation system was developed to separate paddy from rice and determine paddy husking and rice breakage percentages. The machine vision system consists of three main components: an imaging chamber, a digital camera, and a computer equipped with image-processing software. The singulation device consists of a kernel-holding surface, a motor with a vacuum fan, and a dimmer. To separate paddy from rice in the image, it was necessary to set a threshold. Therefore, some images of paddy and rice were sampled and the RGB values of the images were extracted using MATLAB software. Then the mean and standard deviation of the data were determined. An image-processing algorithm was developed in MATLAB to determine paddy/rice separation and rice breakage and paddy husking percentages using the blue-to-red ratio. Tests showed that a threshold of 0.75 is suitable for separating paddy from rice kernels. Evaluation of the image-processing algorithm showed accuracies of 98.36% and 91.81% for paddy husking and rice breakage percentage, respectively. Analysis also showed that a suction of 45 mmHg to 50 mmHg, yielding 81.3% separation efficiency, is appropriate for operation of the kernel singulation system.
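
A minimal Python sketch (an illustration, not the authors' MATLAB code) of the blue-to-red ratio thresholding described above; which side of the 0.75 threshold corresponds to paddy versus rice is an assumption.

```python
import numpy as np

def classify_kernels(rgb_image, threshold=0.75):
    """rgb_image: H x W x 3 array; returns a boolean mask of candidate paddy pixels."""
    r = rgb_image[..., 0].astype(float) + 1e-9   # avoid division by zero
    b = rgb_image[..., 2].astype(float)
    ratio = b / r                                # blue-to-red ratio per pixel
    paddy_mask = ratio < threshold               # assumed orientation of the threshold
    return paddy_mask
```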

Keywords: breakage, computer vision, husking, rice kernel

Procedia PDF Downloads 381
1782 Exploring Data Leakage in EEG Based Brain-Computer Interfaces: Overfitting Challenges

Authors: Khalida Douibi, Rodrigo Balp, Solène Le Bars

Abstract:

In the medical field, applications involving human experiments are frequently limited by small sample sizes, which makes the training of machine learning models quite sensitive and therefore neither very robust nor generalizable. This is notably the case in Brain-Computer Interface (BCI) studies, where the sample size rarely exceeds 20 subjects or a small number of trials. To address this problem, several resampling approaches are often used during the data preparation phase, which is a critical step in a data science analysis process. One naive approach commonly applied by data scientists consists of transforming the entire dataset before the resampling phase. However, this can cause the model’s performance to be incorrectly estimated when making predictions on unseen data. In this paper, we explored the effect of data leakage observed during our BCI experiments for device control through the real-time classification of SSVEPs (Steady State Visually Evoked Potentials). We also studied potential ways to ensure optimal validation of the classifiers during the calibration phase to avoid overfitting. The results show that the scaling step is crucial for some algorithms and should be applied after the resampling phase to avoid data leakage and improve results.
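
A minimal scikit-learn sketch of the point made above: fitting the scaler on the whole dataset before resampling leaks test-fold statistics into training, whereas fitting it inside each training fold does not. The data, classifier choice, and variable names are illustrative, not those used in the study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 64))        # e.g. 40 SSVEP trials, 64 features (synthetic)
y = rng.integers(0, 2, size=40)      # two stimulation classes

# Leaky: the scaler sees the whole dataset (including future test folds) before CV.
X_leaky = StandardScaler().fit_transform(X)
leaky_scores = cross_val_score(SVC(), X_leaky, y, cv=5)

# Correct: the scaler is refit on the training portion of every fold only.
pipeline = make_pipeline(StandardScaler(), SVC())
clean_scores = cross_val_score(pipeline, X, y, cv=5)
print(leaky_scores.mean(), clean_scores.mean())
```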

Keywords: data leakage, data science, machine learning, SSVEP, BCI, overfitting

Procedia PDF Downloads 153
1781 The Ethics of Documentary Filmmaking Discuss the Ethical Considerations and Responsibilities of Documentary Filmmakers When Portraying Real-life Events and Subjects

Authors: Batatunde Kolawole

Abstract:

Documentary filmmaking stands as a distinctive medium within the cinematic realm, carrying a unique responsibility: the portrayal of real-life events and subjects. This research delves into the profound ethical considerations and responsibilities that documentary filmmakers shoulder as they embark on the quest to unveil truth and weave compelling narratives. The exploration comprises a comprehensive review of ethical frameworks and real-world case studies, illuminating the intricate web of challenges that documentarians confront. These challenges encompass an array of ethical intricacies, from securing informed consent to safeguarding privacy, maintaining unwavering objectivity, and sidestepping the snares of narrative manipulation when crafting stories from reality. Furthermore, the study dissects the contemporary ethical terrain, acknowledging the emergence of novel dilemmas in the digital age, such as deepfakes and digital alterations. Through a meticulous analysis of ethical quandaries faced by distinguished documentary filmmakers and their strategies for ethical navigation, this study offers valuable insights into the evolving role of documentaries in molding public discourse. It underscores the indispensable significance of transparency, integrity, and an indomitable commitment to encapsulating the intricacies of reality within the realm of ethical documentary filmmaking. In a world increasingly reliant on visual narratives, an understanding of the subtle ethical dimensions of documentary filmmaking holds relevance not only for those behind the camera but also for the diverse audiences who engage with and interpret the realities unveiled on screen. This research stands as a rigorous examination of the moral compass that steers this potent form of cinematic expression. It emphasizes the capacity of ethical documentary filmmaking to enlighten, challenge, and inspire, all while unwaveringly upholding the core principles of truthfulness and respect for the human subjects under scrutiny. Through this holistic analysis, the study illuminates the enduring significance of upholding ethical integrity while uncovering the truths that shape our world. Ethical documentary filmmaking, as exemplified by "Rape" and countless other powerful narratives, serves as a testament to the enduring potential of cinema to inform, challenge, and drive meaningful societal discourse.

Keywords: filmmaking, documentary, human rights, film

Procedia PDF Downloads 66
1780 Investigating the Effect of Metaphor Awareness-Raising Approach on the Right-Hemisphere Involvement in Developing Japanese Learners’ Knowledge of Different Degrees of Politeness

Authors: Masahiro Takimoto

Abstract:

The present study explored how the metaphor awareness-raising approach affects the involvement of the right hemisphere in developing EFL learners’ knowledge of the different degrees of politeness embedded within different request expressions. The study was motivated by theoretical considerations regarding conceptual projection and the metaphorical idea that politeness is distance, as previously proposed; it applied these considerations to develop Japanese learners’ knowledge of the different politeness degrees and to explore the connection between metaphorical concept projection and right-hemisphere dominance. Japanese EFL learners do not know certain language strategies (e.g., English requests can be mitigated with biclausal downgraders, including the if-clause with past-tense modal verbs) and have difficulty adjusting the politeness degrees attached to request expressions according to situations. The present study used a pre/post-test design to reaffirm the efficacy of the cognitive technique and its connection to right-hemisphere involvement through the mouth asymmetry technique. Mouth asymmetry measurement was utilized because speech articulation, normally controlled mainly by one side of the brain, causes muscles on the opposite side of the mouth to move more during speech production. The present research did not administer a delayed post-test because it emphasized determining whether metaphor awareness-raising approaches for developing EFL learners’ pragmatic proficiency entail right-hemisphere activation. Each test contained an acceptability judgment test (AJT), along with a speaking test in the post-test. The results show that the metaphor awareness-raising group performed significantly better than the control group on the post-test acceptability judgment and speaking tests. These data revealed that the metaphor awareness-raising approach could promote L2 learning because it aided input enhancement and concept projection; through these, the participants were able to comprehend an abstract concept, the degree of politeness, in terms of the spatial concept of distance. Accordingly, the proximal-distal metaphor enabled the study participants to connect the newly spatio-visualized concept of distance to the different politeness degrees attached to different request expressions; furthermore, they could recall them with the left side of the mouth opening wider than the right. This supports findings from previous studies indicating the possible involvement of the brain's right hemisphere in metaphor processing.

Keywords: metaphor awareness-raising, right hemisphere, L2 politeness, mouth asymmetry

Procedia PDF Downloads 154
1779 Intrusion Detection in Computer Networks Using a Hybrid Model of Firefly and Differential Evolution Algorithms

Authors: Mohammad Besharatloo

Abstract:

Intrusion detection is an important research topic in network security because of the increasing growth in the use of computer network services. Intrusion detection aims to detect unauthorized use or abuse of networks and systems by intruders. Therefore, an intrusion detection system is an efficient tool for controlling user access through predefined regulations. Since the data used in intrusion detection systems are high-dimensional, a proper representation is required to expose the underlying structure of these data. Therefore, it is necessary to eliminate redundant features to create the best representation subset. In the proposed method, a hybrid model of the differential evolution and firefly algorithms was employed to choose the best subset of features. In addition, a decision tree and a support vector machine (SVM) are adopted to evaluate the quality of the selected features. First, the sorted population is divided into two sub-populations, and the two optimization algorithms are applied to these sub-populations, respectively. Then, the sub-populations are merged to create the population for the next iteration. The performance of the proposed method is evaluated on the KDD Cup 99 dataset. The simulation results show that the proposed method outperforms the other methods in this context.
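
A highly simplified sketch of the co-evolution scheme described above: a population of binary feature masks is sorted by fitness, split into two halves, one half updated with a differential-evolution-style move and the other with a firefly-style move, then merged for the next iteration. The operators, parameters, and the fitness measure (decision-tree cross-validation accuracy) are schematic assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def fitness(mask, X, y):
    if not mask.any():
        return 0.0
    return cross_val_score(DecisionTreeClassifier(), X[:, mask], y, cv=3).mean()

def de_move(sub_pop, rng):                    # crude binary differential-evolution step
    a, b, c = sub_pop[rng.permutation(len(sub_pop))[:3]]
    return (a ^ (b & ~c)) if rng.random() < 0.9 else a

def firefly_move(pop, scores, rng):           # move a random solution toward the brightest
    best = pop[np.argmax(scores)]
    candidate = pop[rng.integers(len(pop))].copy()
    flip = rng.random(candidate.size) < 0.3
    candidate[flip] = best[flip]
    return candidate

def select_features(X, y, pop_size=10, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, X.shape[1])) < 0.5        # random binary feature masks
    for _ in range(iters):
        scores = np.array([fitness(m, X, y) for m in pop])
        order = np.argsort(scores)[::-1]                  # sort population by fitness
        pop, scores = pop[order], scores[order]
        half = pop_size // 2
        new_top = [de_move(pop[:half], rng) for _ in range(half)]
        new_bottom = [firefly_move(pop, scores, rng) for _ in range(pop_size - half)]
        # keep the elite mask, merge the two updated sub-populations
        pop = np.vstack([pop[:1], np.array(new_top + new_bottom)[: pop_size - 1]])
    scores = np.array([fitness(m, X, y) for m in pop])
    return pop[np.argmax(scores)]
```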

Keywords: intrusion detection system, differential evolution, firefly algorithm, support vector machine, decision tree

Procedia PDF Downloads 91
1778 Different Approaches to Teaching a Database Course to Undergraduate and Graduate Students

Authors: Samah Senbel

Abstract:

Database design is a fundamental part of the computer science and information technology curricula in any school, as well as in the study of management, business administration, and data analytics. In this study, we compare the performance of two groups of students studying the same database design and implementation course at Sacred Heart University in the fall of 2018. Both courses used the same textbook and were taught by the same professor, one to seven graduate students and one to 26 undergraduate students (juniors). The undergraduate students were around 20 years old with little work experience, while the graduate students averaged 35 years old and all were employed in computer-related or management-related jobs. The textbook used was 'Database Systems: Design, Implementation, and Management' by Coronel and Morris, and the course was designed to follow the textbook at roughly a chapter per week. The first six weeks covered the design aspect of a database, followed by a paper exam. The next six weeks covered the implementation aspect of the database using SQL, followed by a lab exam. Since the undergraduate students are on a 16-week semester, the last three weeks of their course covered NoSQL; this part of the course was not included in this study. After the course was over, we analyzed the results of the two groups of students. An interesting discrepancy was observed: in the database design part of the course, the average grade of the graduate students was 92%, while that of the undergraduate students was 77% on the same exam. In the implementation part of the course, we observed the opposite: the average grade of the graduate students was 65%, while that of the undergraduate students was 73%. The overall grades were quite similar: the graduate average was 78% and the undergraduate average was 75%. Based on these results, we concluded that having both classes follow the same time schedule was not beneficial, and an adjustment was needed: the graduates could spend less time on design, and the undergraduates would benefit from more design time. In the fall of 2019, 30 students registered for the undergraduate course and 15 students registered for the graduate course. To test our conclusion, the undergraduates spent about 67% of the time (eight classes) on the design part of the course and 33% (four classes) on the implementation part, using the exact same exams as the previous year. This resulted in an improvement in their average grade on the design part from 77% to 83%, and in their average grade on the implementation part from 73% to 79%. In conclusion, we recommend using two separate schedules for teaching the database design course. For undergraduate students, it is important to spend more time on the design part rather than the implementation part of the course, while for the older graduate students, we recommend spending more time on the implementation part, as that seems to be the part they struggle with, even though they have a higher understanding of the design component of databases.

Keywords: computer science education, database design, graduate and undergraduate students, pedagogy

Procedia PDF Downloads 121
1777 Effect of Inoculum Ratio on Dark Fermentative Hydrogen Production

Authors: Zeynep Yilmazer Hitit, Patrick C. Hallenbeck

Abstract:

Fuel reserve requirements due to the depletion of fossil fuels have increased interest in biohydrogen since the 1990s. In fermentative hydrogen production, pure cultures, mixed cultures, and co-cultures can be used to produce hydrogen. Several previous studies have evaluated hydrogen production by pure cultures of Clostridium butyricum or Enterobacter aerogenes. Evaluating hydrogen production by a co-culture of these microorganisms is an interesting approach, since E. aerogenes is a facultative microorganism with resistance to oxygen, in contrast to the strict anaerobe C. butyricum, and therefore has the ability to maintain anaerobic conditions. It was found that using co-cultures of the facultative E. aerogenes (as a reducing agent and H2 producer) and the obligate anaerobe C. butyricum increases the hydrogen yield by about 50% compared to C. butyricum alone. Also, using different types of microorganisms for hydrogen production eliminates the need to use expensive reducing agents. The C. butyricum strain was pre-cultured anaerobically at 37 °C for 15 h by inoculating 100 mL of GP medium (pH 6.8), consisting of 1% glucose, 2% polypeptone, 0.2% KH2PO4, 0.05% yeast extract, and 0.05% MgSO4·7H2O, and the E. aerogenes strain was pre-cultured aerobically at 30 °C and 150 rpm for 9 h by inoculating 100 mL of TGY medium (pH 6.8), consisting of 0.1% glucose, 0.5% tryptone, 0.1% K2HPO4, and 0.5% yeast extract. All duplicate batch experiments were conducted in 100 mL bottles with different inoculum ratios of Clostridium butyricum and Enterobacter aerogenes (C:E), using 5x diluted rich medium (GP) consisting of 2 g/L glucose, 4 g/L polypeptone, 0.4 g/L KH2PO4, 0.1 g/L yeast extract, and 0.1 g/L MgSO4·7H2O. The inoculum ratios of C. butyricum to E. aerogenes were 2:1, 4:1, 8:1, 1:2, 1:4, 1:8, 1:0, and 0:1. Using glucose as a carbon source aided in the observation of microbial behavior and made the effect of the inoculum ratio more evident. Nearly all the glucose in the medium was used to produce hydrogen, except at an inoculum ratio of 1:0 (i.e., containing only C. butyricum). Low glucose consumption leads to a higher hydrogen yield, since the yield relates cumulative hydrogen production to the glucose consumed, but not as high as for a C:E ratio of 8:1. The lowest hydrogen yield was achieved at a C:E inoculum ratio of 1:8 (71.9 mL, 1.007±0.01 mol H2/mol glucose), and the highest cumulative hydrogen, hydrogen yield, and dry cell weight were achieved at a C:E inoculum ratio of 8:1 (117.4 mL, 2.035±0.082 mol H2/mol glucose, and 0.4 g/L, respectively). In this study, the effect of the inoculum ratio on dark fermentative biohydrogen production using C. butyricum and E. aerogenes was investigated. The maximum hydrogen yield of 2.035 mol H2/mol glucose was obtained using 2 g/L glucose, an initial pH of 6, and a C. butyricum to E. aerogenes inoculum ratio of 8:1. The results showed that the inoculum ratio is an important parameter in hydrogen production due to competition between the two microorganisms in using the substrate for growth and production of by-products. The results presented here could be of great significance for further waste management studies using co-culture hydrogen production.

Keywords: biohydrogen, Clostridium butyricum, dark fermentation, Enterobacter aerogenes, inoculum ratio in biohydrogen production

Procedia PDF Downloads 236
1776 The Accuracy of an In-House Developed Computer-Assisted Surgery Protocol for Mandibular Micro-Vascular Reconstruction

Authors: Christophe Spaas, Lies Pottel, Joke De Ceulaer, Johan Abeloos, Philippe Lamoral, Tom De Backer, Calix De Clercq

Abstract:

We aimed to evaluate the accuracy of an in-house developed low-cost computer-assisted surgery (CAS) protocol for osseous free flap mandibular reconstruction. All patients who underwent primary or secondary mandibular reconstruction with a free (solely or composite) osseous flap, either a fibula free flap or an iliac crest free flap, between January 2014 and December 2017 were evaluated. The low-cost protocol consisted of a virtual surgical plan, a pre-bent custom reconstruction plate, and an individualized free flap positioning guide. The accuracy of the protocol was evaluated by comparing the postoperative outcome with the 3D virtual plan, based on measurement of the following parameters: intercondylar distance, mandibular angle (axial and sagittal), inner angular distance, anterior-posterior distance, length of the fibular/iliac crest segments, and osteotomy angles. A statistical analysis of the obtained values was performed. Virtual 3D surgical planning and cutting guide design were performed with Proplan CMF® software (Materialise, Leuven, Belgium) and IPS Gate (KLS Martin, Tuttlingen, Germany). Segmentation of the DICOM data as well as outcome analysis were done with BrainLab iPlan® software (Brainlab AG, Feldkirchen, Germany). A cost analysis of the protocol was also performed. Twenty-two patients (11 fibula / 11 iliac crest) were included and analyzed. Based on voxel-based registration on the cranial base, 3D virtual planning landmark parameters did not significantly differ from those measured on the actual treatment outcome (p-values > 0.05). A cost evaluation of the in-house developed CAS protocol revealed a 1750 euro cost reduction in comparison with a standard CAS protocol with a patient-specific reconstruction plate. Our results indicate that an accurate transfer of the plan with our in-house developed low-cost CAS protocol is feasible at a significantly lower cost.

Keywords: CAD/CAM, computer-assisted surgery, low-cost, mandibular reconstruction

Procedia PDF Downloads 141
1775 Characteristics and Quality of Chilean Abalone Undergoing Different Drying Emerging Technologies

Authors: Mario Pérez-Won, Anais Palma-Acevedo, Luis González-Cavieres, Roberto Lemus-Mondaca, Gipsy Tabilo-Munizaga

Abstract:

The Chilean abalone (Concholepas concholepas) is a gastropod mollusk with a high commercial value due to the qualities of its meat, especially hardness, which is a critical acceptance parameter. However, its main problem is its short shelf life, which is usually extended using traditional technologies with high energy consumption. Therefore, applying different technologies for the pre-treatment and drying process is necessary. In this research, a pulsed electric field (PEF) was used as a pre-treatment for vacuum microwave drying (VMD), freeze-drying (FD), and hot-air drying (HAD). Drying conditions and characteristics were set according to previous experiments. The dried samples were analyzed in terms of physical quality (color, texture, microstructure, and rehydration capacity), protein quality (degree of hydrolysis and computed protein efficiency ratio), and energy parameters. Regarding quality, the treatment that yielded the lowest hardness was PEF+FD (195 ± 10 N), the smallest color change was obtained with PEF+VMD (ΔE: 17 ± 1.5), and the best rehydration capacity was obtained with PEF+VMD (1.2 h to equilibrium). For protein quality, the highest computed protein efficiency ratio was obtained for the sample treated with PEF at 2.0 kV/cm (index of 4.18 ± 0.26 at the end of digestion). Moreover, regarding energy consumption, the results show that VMD shortens the drying process by 97%, whether or not PEF was used. Consequently, it is possible to conclude that using PEF as a pre-treatment for VMD and FD treatments has advantages that can be exploited according to consumers’ needs or preferences.
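
A minimal sketch of the standard CIELAB colour-difference (ΔE*ab) computation underlying the reported colour-change values; the sample L*a*b* readings below are made up for illustration only.

```python
import math

def delta_e(lab_fresh, lab_dried):
    """Euclidean distance between two (L*, a*, b*) triplets."""
    return math.sqrt(sum((f - d) ** 2 for f, d in zip(lab_fresh, lab_dried)))

print(delta_e((52.0, 9.5, 14.0), (40.1, 16.2, 22.5)))  # hypothetical readings
```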

Keywords: chilean abalone, freeze-drying, proteins, pulsed electric fields

Procedia PDF Downloads 109
1774 An Examination of the Benefits of Disciplinary Classroom Support of Word Study, Vocabulary and Comprehension for Adolescent Students

Authors: Amanda Watson

Abstract:

The goal of this project is to create the conditions wherein every teacher, especially subject-area experts, sees themselves as a teacher of language and vocabulary. Assessment and observational data suggest that students are not getting the support they need in vocabulary and reading comprehension, and secondary teachers do not currently have the confidence or expertise to provide this support. This study examines the impact of 10-20 minutes of daily, targeted instruction in orthography and vocabulary on student competence with the navigation of complex vocabulary and comprehension of subject-specific concepts and texts. The first phase of the pilot included six participating classroom teachers of grades 9 and 10 English (95 students in total) who administered an initial reading comprehension assessment. The results of this assessment indicated that the vast majority of students were reading below grade level. Teachers were then provided with a slide deck of complete lessons on orthography, vocabulary (etymology, roots and affixes) and reading comprehension strategies. For five weeks, teachers delivered lessons with their students, implementing the recommended evidence-based teaching strategies. Students and teachers completed surveys to provide feedback on the value and impact of the method. The results confirmed that this was new learning for the students and that the teaching strategies improved engagement. The lessons succeeded in providing equitable access to challenge by simultaneously offering theoretical learning to proficient readers and exposure and practice to weaker readers. A second reading comprehension assessment was administered after five weeks of daily instruction. Average scores increased by 41%, and almost every student experienced progress. The first phase was not long enough, however, to measure the impact of the method on vocabulary acquisition or reading comprehension of subject-specific texts. The project will use the results of the first phase to design the second phase, and new teaching and learning strategies will be added. The goals of the second phase are to increase motivation and to grow the daily practice beyond English class and into science and/or mathematics. The team will continue to document the daily lessons, measure the impact of the strategies, and address questions about the correlation between daily practice and improvements in the skills students need for vocabulary acquisition and disciplinary reading comprehension.

Keywords: adolescent, comprehension, orthography, reading, vocabulary, etymology, word study, disciplinary, teaching strategies

Procedia PDF Downloads 76
1773 Value-Based Argumentation Frameworks and Judicial Moral Reasoning

Authors: Sonia Anand Knowlton

Abstract:

As the use of artificial intelligence becomes increasingly integrated into virtually every area of life, the need and interest to logically formalize the law and judicial reasoning are growing tremendously. The study of argumentation frameworks (AFs) provides promise in this respect. AFs provide a way of structuring human reasoning using a formal system of non-monotonic logic. P. M. Dung first introduced this framework and demonstrated that certain arguments must prevail and certain arguments must perish based on whether they are logically “attacked” by other arguments. Dung labelled the set of prevailing arguments the “preferred extension” of the given argumentation framework. Trevor Bench-Capon’s value-based argumentation frameworks (VAFs) extended Dung’s AF system by allowing arguments to derive their force from the promotion of “preferred” values. In VAF systems, the success of an attack from argument A on argument B (i.e., the triumph of argument A) requires that argument B does not promote a value that is preferred to the value promoted by argument A. There has been thorough discussion of the application of VAFs to the law within the computer science literature, mainly demonstrating that legal cases can be effectively mapped out using VAFs. This article analyses VAFs from a jurisprudential standpoint to provide a philosophical and theoretical account of what VAFs tell the legal community about judicial reasoning, specifically distinguishing between legal and moral reasoning. It highlights the limitations of using VAFs to account for judicial moral reasoning in theory and in practice.
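
A small illustrative sketch (not taken from the article) of the VAF attack rule described above: for a given audience's value ordering, an attack from A on B succeeds unless the value promoted by B is strictly preferred to the value promoted by A. The example arguments and values are invented.

```python
from dataclasses import dataclass

@dataclass
class Argument:
    name: str
    value: str                              # the value this argument promotes

def attack_succeeds(attacker: Argument, target: Argument, preference: list) -> bool:
    """preference lists values from most preferred to least preferred for one audience."""
    # The attack fails only if the target's value is strictly preferred to the attacker's.
    return preference.index(target.value) >= preference.index(attacker.value)

a = Argument("A: enforce the agreement", "property")
b = Argument("B: excuse performance to save a life", "life")
audience = ["life", "property"]             # this audience ranks life above property
print(attack_succeeds(a, b, audience))      # False: B promotes the preferred value
print(attack_succeeds(b, a, audience))      # True: A's value is not preferred to B's
```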

Keywords: nonmonotonic logic, legal formalization, computer science, artificial intelligence, morality

Procedia PDF Downloads 74
1772 The Practice of Teaching Chemistry by the Application of Online Tests

Authors: Nikolina Ribarić

Abstract:

E-learning is most commonly defined as a set of applications and processes, such as web-based learning, computer-based learning, virtual classrooms, and digital collaboration, that enable access to instructional content through a variety of electronic media. The main goal of an e-learning system is learning, and the way to evaluate the impact of an e-learning system is by examining whether students learn effectively with its help. Testmoz is a program for the online preparation of knowledge evaluation assignments. The program provides teachers with computer support during the design and evaluation of assignments. Students can review and solve assignments and also check the correctness of their solutions. Research into the increase in motivation achieved by delivering teaching content through online tests prepared in Testmoz was carried out with 8th-grade students of Ljubo Babić Primary School in Jastrebarsko. The students took the tests in their free time, from home, an unlimited number of times. SPSS was used to process the data obtained through the research instruments. The results showed that students preferred to practise the teaching content, and achieved better educational results in chemistry, when they had access to online tests for repetition and practice, compared with subject content that was checked after repetition and practice in the classical way, i.e., solving assignments in a workbook or writing assignments in worksheets.

Keywords: chemistry class, e-learning, motivation, Testmoz

Procedia PDF Downloads 160
1771 Spatial Mapping of Variations in Groundwater of Taluka Islamkot Thar Using GIS and Field Data

Authors: Imran Aziz Tunio

Abstract:

Islamkot is an underdeveloped sub-district (taluka) in the Tharparkar district of Sindh province, Pakistan, located between latitude 24°25'19.79"N to 24°47'59.92"N and longitude 70°1'13.95"E to 70°32'15.11"E. Islamkot has an arid desert climate, and the region is generally devoid of perennial rivers, canals, and streams. It is highly dependent on rainfall, which is not considered a reliable surface water source, so groundwater has been the only key source of water for many centuries. To assess the groundwater potential, an electrical resistivity survey (ERS) was conducted in Islamkot Taluka. Groundwater investigations comprising 128 vertical electrical soundings (VES) were collected to determine the groundwater potential and obtain layered resistivity parameters qualitatively and quantitatively. A PASI Model 16 GL-N resistivity meter was used with a Schlumberger electrode configuration, with half current electrode spacing (AB/2) ranging from 1.5 to 100 m and potential electrode spacing (MN/2) from 0.5 to 10 m. The data were acquired with a maximum current electrode spacing of 200 m. Data processing for the delineation of dune sand aquifers involved data inversion, and the interpretation of the inversion results was aided by forward modeling. The measured geo-electrical parameters were examined with Interpex IX1D software, and apparent resistivity curves and synthetic layered model parameters were mapped in the ArcGIS environment using the Inverse Distance Weighting (IDW) interpolation technique. Qualitative interpretation of the VES data shows that the number of geo-electrical layers in the area varies from three to four, with different resistivity values detected. Out of 128 VES model curves, 42 are three-layered and 86 are four-layered. The resistivity of the first subsurface layer (loose surface sand) varied from 16.13 Ωm to 3353.3 Ωm and its thickness from 0.046 m to 17.52 m. The resistivity of the second subsurface layer (semi-consolidated sand) varied from 1.10 Ωm to 7442.8 Ωm and its thickness from 0.30 m to 56.27 m. The resistivity of the third subsurface layer (consolidated sand) varied from 0.00001 Ωm to 3190.8 Ωm and its thickness from 3.26 m to 86.66 m. The resistivity of the fourth subsurface layer (silt and clay) varied from 0.0013 Ωm to 16264 Ωm and its thickness from 13.50 m to 87.68 m. The Dar Zarrouk parameters ranged as follows: longitudinal unit conductance S from 0.00024 to 19.91 mho; transverse unit resistance T from 7.34 to 40080.63 Ωm²; longitudinal resistivity RS from 1.22 to 3137.10 Ωm; and transverse resistivity RT from 5.84 to 3138.54 Ωm. The ERS data and Dar Zarrouk parameters were mapped, revealing that the study area has groundwater potential in the subsurface.
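
A short sketch of the standard Dar Zarrouk relations used above, for a layered column with thicknesses h_i (m) and resistivities rho_i (Ωm); the example layer values are hypothetical and not taken from the field data.

```python
def dar_zarrouk(thicknesses, resistivities):
    H = sum(thicknesses)                                          # total thickness (m)
    S = sum(h / r for h, r in zip(thicknesses, resistivities))    # longitudinal conductance (mho)
    T = sum(h * r for h, r in zip(thicknesses, resistivities))    # transverse resistance (Ohm·m^2)
    rho_l = H / S                                                 # longitudinal resistivity RS (Ohm·m)
    rho_t = T / H                                                 # transverse resistivity RT (Ohm·m)
    return S, T, rho_l, rho_t

print(dar_zarrouk([5.0, 20.0, 40.0], [300.0, 50.0, 12.0]))        # hypothetical 3-layer column
```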

Keywords: electrical resistivity survey, GIS & RS, groundwater potential, environmental assessment, VES

Procedia PDF Downloads 110
1770 Automatic Detection of Defects in Ornamental Limestone Using Wavelets

Authors: Maria C. Proença, Marco Aniceto, Pedro N. Santos, José C. Freitas

Abstract:

A methodology based on wavelets is proposed for the automatic location and delimitation of defects in limestone plates. Natural defects include dark-colored spots, crystal zones trapped in the stone, areas of abnormal color contrast, cracks or fracture lines, and fossil patterns. Although some of these may or may not be considered defects depending on the intended use of the plate, the goal is to pair each stone with a map of defects that can be overlaid on a computer display. These layers of defects constitute a database that allows the preliminary selection of matching tiles of a particular variety, with specific dimensions, for a requirement of N square meters, to be done on a desktop computer rather than by a two-hour search in the storage park, with human operators manipulating stone plates as large as 3 m x 2 m and weighing about one ton. Accident risks and work times are reduced, with a consequent increase in productivity. The basis of the algorithm is a wavelet decomposition executed on two instances of the original image, to detect both hypotheses: dark and light defects. The existence and/or size of these defects is the gauge used to classify the quality grade of the stone products. The tuning of parameters possible within the wavelet framework corresponds to different levels of accuracy in the drawing of the contours and in the selection of the defect size, which allows the map of defects to be used to cut a selected stone into tiles with minimum waste, according to the defect dimensions allowed.
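
A minimal PyWavelets sketch of the general idea: decompose the plate image, threshold the detail-coefficient energy to flag local contrast anomalies, and then label anomalies as dark or light defects. The splitting by local intensity (rather than the authors' two-instance scheme), the wavelet choice, and the thresholds are assumptions for illustration.

```python
import numpy as np
import pywt

def defect_maps(gray_image, wavelet="db2", level=2, k=4.0):
    img = gray_image.astype(float)
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    energy = np.zeros_like(img)
    # accumulate detail energy from all levels, upsampled back to image size
    for c_h, c_v, c_d in coeffs[1:]:
        e = c_h**2 + c_v**2 + c_d**2
        ry = int(np.ceil(img.shape[0] / e.shape[0]))
        rx = int(np.ceil(img.shape[1] / e.shape[1]))
        energy += np.kron(e, np.ones((ry, rx)))[:img.shape[0], :img.shape[1]]
    anomalous = energy > energy.mean() + k * energy.std()
    dark = anomalous & (img < np.median(img))    # dark defects (spots, fracture lines)
    light = anomalous & (img >= np.median(img))  # light defects (clear crystal zones)
    return dark, light
```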

Keywords: automatic detection, defects, fracture lines, wavelets

Procedia PDF Downloads 248
1769 Information Requirements for Vessel Traffic Service Operations

Authors: Fan Li, Chun-Hsien Chen, Li Pheng Khoo

Abstract:

Operators of a vessel traffic service (VTS) centre provide three different types of services to vessels: information service, navigational assistance, and traffic organization. To provide these services, operators monitor vessel traffic through a computer interface and provide navigational advice based on information integrated from multiple sources, including the automatic identification system (AIS), the radar system, and the closed-circuit television (CCTV) system. This information is therefore crucial in VTS operation. However, what information the VTS operator actually needs to offer services efficiently and properly is unclear. The aim of this study is to investigate the information requirements for VTS operation. To achieve this aim, field observation was carried out to elicit the information requirements for VTS operation. The study revealed that the most frequent and important tasks were handling arrival vessel reports, potential conflict control, and abeam vessel reports. Current location and vessel name were used in all tasks. Hazardous cargo information was particularly required when operators handled arrival vessel reports. The speed, course, and distance of two or several vessels were only used in potential conflict control. The information requirements identified in this study can be utilized in designing a human-computer interface that takes into consideration what information should be displayed and when, and might further be used to build the foundation of a decision support system for VTS.

Keywords: vessel traffic service, information requirements, hierarchy task analysis, field observation

Procedia PDF Downloads 250
1768 Hybrid Rocket Motor Performance Parameters: Theoretical and Experimental Evaluation

Authors: A. El-S. Makled, M. K. Al-Tamimi

Abstract:

A mathematical model to predict the performance parameters (thrust, chamber pressure, fuel mass flow rate, mixture ratio, and regression rate during firing time) of a hybrid rocket motor (HRM) is evaluated. The internal ballistic (IB) hybrid combustion model assumes that the solid fuel surface regression rate is controlled only by heat transfer (convective and radiative) from the flame zone to the solid fuel burning surface. A laboratory HRM was designed, manufactured, and tested for low-thrust-profile space missions (10-15 N) and for validating the mathematical model (computer program). The polymer materials and gaseous oxidizer selected for this experimental work are polymethyl methacrylate (PMMA) and polyethylene (PE) as solid fuel grains and gaseous oxygen (GO2) as the oxidizer. The variation of the various operational parameters with time was determined systematically and experimentally in firings of up to 20 seconds, and an average combustion efficiency of 95% of theory was achieved, which was the goal of these experiments. The comparison between the recorded firing data and the predicted analytical parameters shows good agreement, with an error that does not exceed 4.5% over the entire firing time. The current mathematical (computer) code can be used as a powerful tool for HRM analytical design.

Keywords: hybrid combustion, internal ballistics, hybrid rocket motor, performance parameters

Procedia PDF Downloads 311
1767 Robot Control by ERPs of Brain Waves

Authors: K. T. Sun, Y. H. Tai, H. W. Yang, H. T. Lin

Abstract:

This paper presents a technique for robot control using event-related potentials (ERPs) of brain waves. Based on the proposed technique, people with severe physical disabilities can freely browse the outside world. A specific ERP component, N2P3, was identified and used to control the movement of a robot and the view of its camera through the designed brain-computer interface (BCI). Users were only required to watch the stimulus of the attended button on the BCI; the evoked potential of the target button, N2P3, had the greatest amplitude among all control buttons. An experimental scene was constructed in which the robot was required to walk to a specific position, move the camera view to read the mission instructions, and then complete the task. Twelve volunteers participated in this experiment, and the results showed that the correct rate of BCI control reached 80%, with an average execution time of 353 seconds for completing the mission. This research makes four main contributions: (1) it identifies an efficient ERP component, N2P3, for BCI control; (2) it embeds the robot's viewpoint image into the user interface for robot control; (3) it designs an experimental scene and conducts the experiment; and (4) it evaluates the performance of the proposed system to assess its practicability.
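
A schematic sketch (not the authors' code) of the selection rule implied above: epoch the EEG around each button's stimulus onsets, average the epochs, and pick the button whose evoked response has the largest amplitude in an N2P3-like window. The sampling rate, time window, baseline handling, and single-channel simplification are assumptions.

```python
import numpy as np

def pick_target(eeg, onsets_per_button, fs=250, window=(0.2, 0.5)):
    """eeg: 1-D array for one channel; onsets_per_button: dict of button name -> sample indices."""
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    scores = {}
    for button, onsets in onsets_per_button.items():
        epochs = np.array([eeg[o + lo:o + hi] for o in onsets if o + hi <= len(eeg)])
        erp = epochs.mean(axis=0) - epochs[:, :1].mean()   # average epochs, crude baseline
        scores[button] = np.ptp(erp)                        # peak-to-peak amplitude in the window
    return max(scores, key=scores.get)                      # button with the largest response
```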

Keywords: severe physical disabilities, robot control, event-related potentials (ERPs), brain-computer interface (BCI), brain waves

Procedia PDF Downloads 369
1766 The Effect of Applying the Electronic Supply System on the Performance of the Supply Chain in Health Organizations

Authors: Sameh S. Namnqani, Yaqoob Y. Abobakar, Ahmed M. Alsewehri, Khaled M. AlQethami

Abstract:

The main objective of this research is to determine the impact of applying an electronic supply system on the performance of the supply department of health organizations. To reach this goal, the study adopted independent variables to measure the dependent variable (performance of the supply department), namely: integration with suppliers, integration with intermediaries and distributors, and knowledge of supply size, inventory, and demand. The study used the descriptive method, aided by a questionnaire distributed to a sample of workers in the Supply Chain Management Department of King Abdullah Medical City. The statistical analysis showed the following. The 70 sample members strongly agree with the (electronic integration with suppliers) axis, with a p-value of 0.001, especially with regard to opening formal and informal communication channels between management and suppliers (mean 4.59) and exchanging information with suppliers with transparency and clarity (mean 4.50). The sample members also agree on the (electronic integration with brokers and distributors) axis, with a p-value of 0.001, represented in the following elements: exchange of information between management, brokers, and distributors with transparency and clarity (mean 4.18), and a close cooperative relationship between management, brokers, and distributors (mean 4.13). The results also indicated that the respondents agreed to some extent on the (knowledge of the size of supply, stock, and demand) axis, with a p-value of 0.001. They further indicated that the respondents strongly agree that there is a relationship between electronic procurement and the performance of the procurement department in health organizations, with a p-value of 0.001, represented in transparency and clarity in dealing with suppliers and intermediaries to prevent fraud and manipulation (mean 4.50) and in reducing the costs of supplying the needs of the health organization (mean 4.50). From these results, the study made several recommendations, the most important of which are: that health organizations work to increase the level of information sharing between themselves and suppliers in order to implement electronic procurement in the supply management of health organizations; that attention be paid to electronic data interchange methods and modern programs that enable supply management to exchange information with brokers and distributors to determine the volume of supply, inventory, and demand; and that scientific supply and storage methods be applied to determine the volume of supply, inventory, and demand. Information technology, for example electronic data interchange techniques and documents, can help in contacting suppliers, brokers, and distributors and in determining the volume of supply, inventory, and demand, which contributes to improving the performance of the supply department in health organizations.

Keywords: healthcare supply chain, performance, electronic system, ERP

Procedia PDF Downloads 136
1765 Automatic Detection of Sugarcane Diseases: A Computer Vision-Based Approach

Authors: Himanshu Sharma, Karthik Kumar, Harish Kumar

Abstract:

A major problem in crop cultivation is the occurrence of multiple crop diseases. During the growth stage, timely identification of crop diseases is paramount to ensure high crop yield, lower production costs, and minimize pesticide usage. In most cases, crop diseases produce observable characteristics and symptoms. Surveyors usually diagnose crop diseases as they walk through the fields. However, surveyor inspections tend to be biased and error-prone due to the monotonous nature of the task and the subjectivity of individuals. In addition, visual inspection of each leaf or plant is costly, time-consuming, and labour-intensive. Furthermore, the plant pathologists and experts who can often identify a disease from its symptoms at an early stage are not readily available in remote regions. Therefore, this study specifically addresses the early detection of leaf scald, red rot, and eyespot diseases in sugarcane plants. The study proposes a computer vision-based approach using a convolutional neural network (CNN) for the automatic identification of crop diseases. To facilitate this, images of sugarcane diseases were first collected from Google, without modifying the scene or background or controlling the illumination, to build the training dataset. The testing dataset was then developed from real-time images collected from sugarcane fields in India. The image dataset was pre-processed for feature extraction and selection. Finally, a CNN-based Visual Geometry Group (VGG) model was trained and tested on these datasets to classify the images into diseased and healthy sugarcane plants, and the model's performance was measured using various parameters, i.e., accuracy, sensitivity, specificity, and F1-score. The promising results of the proposed model lay the groundwork for the automatic early detection of sugarcane disease. The proposed research directly supports an increase in crop yield.
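
A minimal transfer-learning sketch in Keras along the lines of the VGG-based approach described above; the directory layout, image size, number of classes, and training settings are assumptions for illustration, not the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

# Hypothetical folder layout: one sub-directory per class (healthy, leaf scald, red rot, eyespot)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "sugarcane/train", image_size=(224, 224), batch_size=32)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "sugarcane/test", image_size=(224, 224), batch_size=32)
train_ds = train_ds.map(lambda x, y: (preprocess_input(x), y))
test_ds = test_ds.map(lambda x, y: (preprocess_input(x), y))

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # reuse frozen ImageNet features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(4, activation="softmax"),   # assumed four classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=test_ds, epochs=10)
```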

Keywords: automatic classification, computer vision, convolutional neural network, image processing, sugarcane disease, visual geometry group

Procedia PDF Downloads 116
1764 Real-Time Finger Tracking: Evaluating YOLOv8 and MediaPipe for Enhanced HCI

Authors: Zahra Alipour, Amirreza Moheb Afzali

Abstract:

In the field of human-computer interaction (HCI), hand gestures play a crucial role in facilitating communication by expressing emotions and intentions. The precise tracking of the index finger and the estimation of joint positions are essential for developing effective gesture recognition systems. However, various challenges, such as anatomical variations, occlusions, and environmental influences, hinder optimal functionality. This study investigates the performance of the YOLOv8m model for hand detection using the EgoHands dataset, which comprises diverse hand gesture images captured in various environments. Over three training processes, the model demonstrated significant improvements in precision (from 88.8% to 96.1%) and recall (from 83.5% to 93.5%), achieving a mean average precision (mAP) of 97.3% at an IoU threshold of 0.7. We also compared YOLOv8m with MediaPipe and an integrated YOLOv8 + MediaPipe approach. The combined method outperformed the individual models, achieving an accuracy of 99% and a recall of 99%. These findings underscore the benefits of model integration in enhancing gesture recognition accuracy and localization for real-time applications. The results suggest promising avenues for future research in HCI, particularly in augmented reality and assistive technologies, where improved gesture recognition can significantly enhance user experience.
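
A rough sketch of the combined pipeline described above: YOLOv8 localizes the hand, MediaPipe Hands estimates the joints inside that crop, and the index fingertip (landmark 8) is tracked. The model weights, camera setup, and post-processing are assumptions, not the authors' exact configuration.

```python
import cv2
import mediapipe as mp
from ultralytics import YOLO

detector = YOLO("yolov8m.pt")                       # hypothetical hand-detection weights
hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for box in detector(frame, verbose=False)[0].boxes.xyxy.cpu().numpy():
        x1, y1, x2, y2 = box.astype(int)
        crop = cv2.cvtColor(frame[y1:y2, x1:x2], cv2.COLOR_BGR2RGB)
        result = hands.process(crop)
        if result.multi_hand_landmarks:
            tip = result.multi_hand_landmarks[0].landmark[8]   # index fingertip
            px = x1 + int(tip.x * (x2 - x1))
            py = y1 + int(tip.y * (y2 - y1))
            cv2.circle(frame, (px, py), 6, (0, 255, 0), -1)
    cv2.imshow("finger tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:                 # Esc to quit
        break
cap.release()
```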

Keywords: YOLOv8, mediapipe, finger tracking, joint estimation, human-computer interaction (HCI)

Procedia PDF Downloads 5
1763 Phenomenology of Child Labour in Estates, Farms and Plantations in Zimbabwe: A Comparative Analysis of Tanganda and Eastern Highlands Tea Estates

Authors: Chupicai Manuel

Abstract:

Global efforts to end child labour have been increasingly challenged by the dynamics of global capitalism, inequality, and poverty affecting the global south. In the face of rising inequalities, whose origins can be explained through historical and political-economy analysis of relations between poor and rich countries, child labour is also on the rise, particularly in the global south. The socio-economic and political context of Zimbabwe has undergone serious transition from colonial times, through the post-independence era normally referred to as the transition period, up to the present day. These transitions have aided companies and entities in the business and agriculture sectors to exploit child labour, while the country provided conditions that enable child labour owing to the vulnerability of children and the anomic child welfare system that plagues the country. Children from marginalised communities dominated by plantations and farms are affected most. This paper explores the experiences and perceptions of children working in tea estates, plantations, and farms, as well as of adults who formerly worked in these plantations during their childhood and who share their experiences and perceptions of child labour in Zimbabwe. Childhood theories that view children as apprentices, together with a human rights perspective, were employed to interrogate the concepts of childhood, child labour, and poverty alleviation strategies. A phenomenological research design was adopted to describe the experiences of children working in plantations and interpret the meanings they attach to their work and livelihoods. The paper drew on 30 children from two plantations, through semi-structured interviews, and 15 key informant interviews with civil society organisations, the International Labour Organization, adults who formerly worked in the plantations, and plantation personnel. The findings of the study revealed that children work on the farms as an alternative model for survival against economic challenges, while the majority cited that poverty compels them to work so that their fees and food are paid for. Civil society organisations were of the view that child rights are violated and that the welfare system of the country is dysfunctional. The perception of the majority of the children interviewed is that the system on the plantations is better, and this confirmed the socio-constructivist theory that views children as apprentices. The study recommended child-sensitive policies and a welfare regime that protects children from exploitation, together with policing and legal measures that secure child rights.

Keywords: child labour, child rights, phenomenology, poverty reduction

Procedia PDF Downloads 256
1762 An Application of E-Learning Technology for Students with Deafness and Hearing Impairment

Authors: Eyup Bayram Guzel

Abstract:

There has been growing awareness that technology offers unique and promising advantages by providing up-to-date educational materials, promoting teaching and learning, and offering new strategies for building enhanced communication environments for people with disabilities; this study concentrates specifically on students with deafness and hearing impairments. Creating an e-learning environment where teachers and students work in collaboration to develop better educational outcomes is the foremost reason for conducting this research. The study examined the perspectives of special education teachers regarding the application of e-learning software called Multimedia Builder with students with deafness and hearing impairments. Initial and follow-up interviews were conducted with 15 special education teachers within the scope of a qualitative case study. A grounded approach was used to analyse and interpret the data. The results revealed that the application of the Multimedia Builder software was influential in improving reading, sign language, and vocabulary, in developing computer and ICT usage, and in audio-visual learning achievements, to the advantage of students with deafness and hearing impairments. The implications of the study encourage ways of using e-learning tools and strategies to promote unique and comprehensive learning experiences for the targeted students and their teachers.

Keywords: e-learning, special education, deafness and hearing impairment, computer-ICT usage

Procedia PDF Downloads 438
1761 Cryoinjuries in Sperm Cells: Effect of Adaptation of Steps in Cryopreservation Protocol for Boar Semen upon Post-Thaw Sperm Quality

Authors: Aftab Ali

Abstract:

Cryopreservation of semen is one of the key factors for a successful breeding business, along with other factors. To achieve high fertility in boars, one should understand how spermatozoa respond to the different treatments applied during cryopreservation. The present project focuses on cryopreservation and its effects on sperm quality parameters in both boar and bull semen. Semen samples from A, B, C, and D were subjected to different thawing conditions and analysed after the different treatments in the study. Parameters such as sperm cell motility, viability, acrosome integrity, DNA integrity, and phospholipase C zeta were assessed by different established methods. Motility was assessed using computer-assisted sperm analysis, phospholipase C zeta using luminometry, while viability, acrosome integrity, and DNA integrity were analysed using flow cytometry. Thawing conditions were noted to have an effect on sperm quality parameters, with motility being the most critical parameter. The results further indicated that the most critical steps during cryopreservation of boar semen occur when sperm cells are subjected to freezing and thawing. The findings of the present study provide the insight that boar semen cryopreservation is still suboptimal in comparison to bull semen cryopreservation. Thus, there is a need for more research to improve the fertilizing potential of cryopreserved boar semen.

Keywords: cryopreservation, computer-assisted sperm analysis, flow cytometry, luminometry

Procedia PDF Downloads 148
1760 Pedagogical Variation with Computers in Mathematics Classrooms: A Cultural Historical Activity Theory Analysis

Authors: Joanne Hardman

Abstract:

South Africa’s crisis in mathematics attainment is well documented. To meet the need to develop students’ mathematical performance in schools, the government has launched various initiatives using computers to improve mathematical attainment. While it is clear that computers can change pedagogical practices, there is a dearth of qualitative studies indicating exactly how pedagogy is transformed with Information and Communication Technologies (ICTs) in a teaching activity. Consequently, this paper addresses the following question: how, and along which dimensions of an activity, does pedagogy alter with the use of computer drill-and-practice software in four disadvantaged grade 6 mathematics classrooms in the Western Cape province of South Africa? The paper draws on Cultural Historical Activity Theory (CHAT) to develop a view of pedagogy as socially situated. Four ideal pedagogical types are identified: Reinforcement pedagogy, which has the reinforcement of specialised knowledge as its object; Collaborative pedagogy, which has the development of metacognitive engagement with specialised knowledge as its object; Directive pedagogy, which has the development of technical task skills as its object; and finally, Defensive pedagogy, which has student regulation as its object. Face-to-face lessons were characterised by predominantly Reinforcement and Collaborative pedagogy, while most computer lessons were characterised mainly by either Defensive or Directive pedagogy.

Keywords: computers, cultural historical activity theory, mathematics, pedagogy

Procedia PDF Downloads 282
1759 Review and Evaluation of Trending Canonical Correlation Analyses-Based Brain Computer Interface Methods

Authors: Bayar Shahab

Abstract:

The fast development of technology has advanced neuroscience and human interaction with computers, enabling solutions to various problems; the issues of this new era have been, and are being, addressed like at no other time in history. The brain-computer interface (BCI) has opened the door to several new research areas and has been able to provide solutions to critical and important issues, such as supporting a paralyzed patient in interacting with the outside world, controlling a robot arm, playing games in VR with the brain, and driving a wheelchair or even a car; neurotechnology has also enabled the rehabilitation of lost memory, among other applications. This review presents state-of-the-art methods and improvements of canonical correlation analysis (CCA), an SSVEP-based BCI method. These are the methods used to extract EEG signal features, or, in other words, the features of interest in EEG analysis. Each of the methods, from oldest to newest, is discussed, and their advantages and disadvantages are compared. This creates a useful context and helps researchers understand the most state-of-the-art methods available in this field, with their pros and cons, along with their mathematical representations and usage. This work makes a vital contribution to the field. It differs from other similar recently published works by providing the following: (1) it states most of the prominent methods used in this field in a hierarchical way; (2) it explains the pros and cons of each method and their performance; and (3) it presents the gaps that remain at the end of each method, which can open the understanding of, and doors to, new research and/or improvements.
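
A compact sketch of the standard CCA-based SSVEP detector on which the reviewed methods build: correlate a multichannel EEG segment with sine/cosine reference templates at each candidate stimulation frequency (plus harmonics) and choose the frequency with the largest canonical correlation. The sampling rate, harmonic count, and frequency set are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def reference(freq, n_samples, fs, n_harmonics=2):
    t = np.arange(n_samples) / fs
    comps = []
    for h in range(1, n_harmonics + 1):
        comps += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(comps)

def detect_frequency(eeg_segment, candidate_freqs, fs=250):
    """eeg_segment: samples x channels array; returns the detected frequency and all scores."""
    scores = []
    for f in candidate_freqs:
        Y = reference(f, eeg_segment.shape[0], fs)
        cca = CCA(n_components=1).fit(eeg_segment, Y)
        u, v = cca.transform(eeg_segment, Y)
        scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))   # first canonical correlation
    return candidate_freqs[int(np.argmax(scores))], scores
```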

Keywords: BCI, CCA, SSVEP, EEG

Procedia PDF Downloads 145
1758 An International Curriculum Development for Languages and Technology

Authors: Miguel Nino

Abstract:

When considering the challenges of a changing and demanding globalizing world, it is important to reflect on how university students will be prepared for the realities of internationalization, marketization, and intercultural conversation. The present study is an interdisciplinary program designed to respond to the needs of the global community. The proposal bridges the humanities and science through three different fields: languages, graphic design, and computer science, specifically fundamentals of programming such as Python, JavaScript, and software animation. The goal of the four-year program is therefore twofold. First, to enable students for intercultural communication between English and other languages such as Spanish, Mandarin, French, or German. Second, students will acquire knowledge of practical software and relevant employable skills to collaborate on computer-assisted projects that will most probably require an essential programming background in interpreted or compiled languages. In order to be inclusive and constructivist, the cognitive linguistics approach is suggested for the three fields, particularly for languages, which otherwise rely on the traditional method of repetition. This methodology will help students develop their creativity and encourage them to become independent problem-solving individuals, as languages enhance their common ground of interaction for culture and technology. Participants in this course of study will be evaluated on their second language acquisition at the Intermediate-High level. For graphic design and computer science, students will apply their creative digital skills, as well as the critical thinking skills learned from the cognitive linguistics approach, to collaborate on a group project designed to find solutions to media web design problems or marketing experimentation for a company or the community. It is understood that programming knowledge and skills will be necessary to deliver the final product. In conclusion, the program equips students with the linguistic knowledge and skills to be competent in intercultural communication, where English, the lingua franca, remains the medium for marketing and product delivery. In addition to improving their employability, students can expand their knowledge and skills in digital humanities or computational linguistics, or grow their portfolio in advertising and marketing. These students will be the global human capital for the competitive globalizing community.

Keywords: curriculum, international, languages, technology

Procedia PDF Downloads 443
1757 Automatic Identification and Monitoring of Wildlife via Computer Vision and IoT

Authors: Bilal Arshad, Johan Barthelemy, Elliott Pilton, Pascal Perez

Abstract:

Getting reliable, informative, and up-to-date information about the location, mobility, and behavioural patterns of animals will enhance our ability to research and preserve biodiversity. The fusion of infrared sensors and camera traps offers an inexpensive way to collect wildlife data in the form of images. However, extracting useful data from these images, such as the identification and counting of animals, remains a manual, time-consuming, and costly process. In this paper, we demonstrate that such information can be retrieved automatically using state-of-the-art deep learning methods. Another major challenge ecologists face is counting a single animal multiple times because it reappears in other images taken by the same or other camera traps; such re-identification information can nevertheless be extremely useful for tracking wildlife and understanding its behaviour. To tackle the multiple-count problem, we have designed a meshed network of camera traps so that they can share the captured images along with timestamps, cumulative counts, and the dimensions of the animal. The proposed method leverages edge computing to support real-time tracking and monitoring of wildlife. It has been validated in the field and can easily be extended to other applications focusing on wildlife monitoring and management, where the traditional way of monitoring is expensive and time-consuming.
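
As a concrete illustration of the kind of metadata exchange described above, the following minimal Python sketch (not the authors' system) shows a possible sighting record that a camera-trap node could broadcast over the mesh, and a naive rule a neighbouring node might use to avoid counting the same animal twice. All field names, thresholds, and the matching rule are assumptions made for illustration.

# Minimal sketch (not the authors' system) of the metadata a camera-trap node
# might share over a mesh and a naive double-count check on a receiving node.
# Field names, thresholds, and the matching rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Sighting:
    trap_id: str          # identifier of the camera trap that made the detection
    species: str          # label produced by the on-device detector
    timestamp: float      # UNIX time of the detection
    count: int            # animals detected in the image
    body_length_m: float  # estimated dimension of the animal

class TrapNode:
    # Keeps recent sightings shared by neighbouring traps and filters duplicates.
    def __init__(self, trap_id, time_window_s=600.0, size_tolerance_m=0.15):
        self.trap_id = trap_id
        self.time_window_s = time_window_s
        self.size_tolerance_m = size_tolerance_m
        self.recent = []          # sightings received from the mesh
        self.cumulative_count = 0

    def receive(self, sighting):
        # Store a sighting broadcast by another trap.
        self.recent.append(sighting)

    def is_duplicate(self, sighting):
        # Treat a detection as a recount if a neighbour recently reported the
        # same species with a similar estimated body size.
        return any(
            s.species == sighting.species
            and abs(s.timestamp - sighting.timestamp) <= self.time_window_s
            and abs(s.body_length_m - sighting.body_length_m) <= self.size_tolerance_m
            for s in self.recent
        )

    def register_detection(self, sighting):
        # Update the local cumulative count only for non-duplicate detections.
        if not self.is_duplicate(sighting):
            self.cumulative_count += sighting.count
        return self.cumulative_count

# Toy usage: trap B sees the same animal trap A reported two minutes earlier.
node_b = TrapNode("trap-B")
node_b.receive(Sighting("trap-A", "fallow_deer", 1_700_000_000.0, 1, 1.40))
node_b.register_detection(Sighting("trap-B", "fallow_deer", 1_700_000_120.0, 1, 1.45))
print(node_b.cumulative_count)  # 0 -> the second detection is treated as a recount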

Keywords: computer vision, ecology, internet of things, invasive species management, wildlife management

Procedia PDF Downloads 138
1756 Microplastics in the Seine River Catchment: Results and Lessons from a Pluriannual Research Programme

Authors: Bruno Tassin, Robin Treilles, Cleo Stratmann, Minh Trang Nguyen, Sam Azimi, Vincent Rocher, Rachid Dris, Johnny Gasperi

Abstract:

Microplastics (<5 mm) in the environment and in hydrosystems are one of today's major environmental issues. Over the last five years, a research programme was conducted to assess the behaviour of microplastics in the Seine river catchment, in a Man-Land-Sea continuum approach. Results show that microplastic concentration varies at the seasonal scale, but also at much smaller scales, for instance during flood events and with tides in the estuary. Moreover, microplastic sampling and characterization issues emerged throughout this work. The Seine is a 750 km long river flowing through northwestern France. It crosses the Paris megacity (12 million inhabitants) and reaches the English Channel after a 170 km long estuary. The site is particularly relevant for assessing the effect of anthropogenic pollution, as the mean river flow is low (around 350 m³/s) while human presence and activities are very intense. Monthly monitoring of the microplastic concentration over a 19-month period showed significant temporal variations at all sampling stations but no significant upstream-downstream increase, indicating a possible major sink to the sediment. At the scale of a major flood event (winter and spring 2018), the microplastic concentration evolved similarly to the well-known suspended solids concentration, increasing on the rising limb of the flow and decreasing on the falling limb. Assessing the position of the concentration peak relative to the flow peak was unfortunately impossible. In the estuary, concentrations vary over time with tidal movements, and through the water column in relation to salinity and turbidity. Although major gains in knowledge on microplastic dynamics in the Seine river have been obtained over the last years, major gaps remain, mostly concerning the interaction with suspended solids dynamics, the settling processes in the water column, and resuspension by navigation or increased shear stress. Moreover, the development of efficient chemical characterization techniques during the 5-year period of this pluriannual research programme led to improved sampling techniques that give access to smaller microplastics (>10 µm) as well as larger but rarer ones (>500 µm).

Keywords: microplastics, Paris megacity, Seine river, suspended solids

Procedia PDF Downloads 198
1755 Analysis of the Treatment Hemorrhagic Stroke in Multidisciplinary City Hospital №1 Nur-Sultan

Authors: M. G. Talasbayen, N. N. Dyussenbayev, Y. D. Kali, R. A. Zholbarysov, Y. N. Duissenbayev, I. Z. Mammadinova, S. M. Nuradilov

Abstract:

Background. Hemorrhagic stroke is an acute cerebrovascular accident resulting from rupture of a cerebral vessel, or from increased permeability of the vessel wall with imbibition of blood into the brain parenchyma. Arterial hypertension is a common cause of hemorrhagic stroke, and male gender and age over 55 years are risk factors for intracerebral hemorrhage. Treatment of intracerebral hemorrhage is aimed at the primary pathophysiological links: correction of coagulopathy and control of arterial hypertension. Early surgical treatment can limit cerebral compression and prevent the toxic effects of blood on the brain parenchyma. Despite progress in neuroimaging, minimally invasive techniques, and navigation systems, mortality from intracerebral hemorrhage remains high. Materials and methods. The study included 78 patients (62.82% male and 37.18% female) with a verified diagnosis of hemorrhagic stroke in the period from 2019 to 2021. The age of patients ranged from 25 to 80 years; the average age was 54.66±11.9 years. Demographic data, brain CT data (localization and volume of hematomas), treatment methods, and disease outcome were analyzed. Results. The retrospective analysis shows that 78.2% of all patients underwent surgical treatment: decompressive craniectomy in 37.7%, craniotomy with hematoma evacuation in 29.5%, and hematoma drainage in 24.59% of cases. The analysis of the proportion of deaths depending on the volume of intracerebral hemorrhage shows that the number of deaths was higher in the group with a hematoma volume of more than 60 ml. Evaluation of the relationship between time to surgery and mortality demonstrates that the most favorable outcome is observed when surgical treatment takes place between 3 and 24 hours. Mortality depending on age did not differ significantly between age groups. An analysis of the impact of surgery type on mortality reveals that decompressive craniectomy with or without hematoma evacuation led to an unfavorable outcome in 73.9% of cases, while craniotomy with hematoma evacuation and drainage led to mortality in only 28.82% of cases. Conclusion. Despite multimodal approaches, the development of surgical techniques and equipment, and the selection of optimal conservative therapy, the tactics for managing and treating hemorrhagic stroke remain controversial. Nevertheless, our experience shows that surgical intervention within 24 hours of admission, and craniotomy with hematoma evacuation, improve the prognosis of treatment outcomes.
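
For readers interested in reproducing this kind of tabulation, the short Python sketch below (not the authors' analysis code) shows how mortality proportions could be computed by hematoma volume group, time-to-surgery interval, and surgery type. The column names and the example data are hypothetical; only the 60 ml threshold and the 3-24 hour interval follow the abstract.

# Minimal sketch of the tabulations described above (illustrative only).
# Column names and example rows are hypothetical placeholders.
import pandas as pd

# Hypothetical patient table: one row per patient.
patients = pd.DataFrame({
    "hematoma_volume_ml": [45, 72, 88, 30, 65, 55],
    "hours_to_surgery":   [5, 2, 18, 30, 10, 6],
    "surgery_type":       ["craniotomy_evacuation", "decompressive_craniectomy",
                           "decompressive_craniectomy", "hematoma_drainage",
                           "craniotomy_evacuation", "hematoma_drainage"],
    "died":               [0, 1, 1, 0, 0, 0],
})

# Mortality proportion by hematoma volume group (<=60 ml vs >60 ml).
patients["volume_group"] = pd.cut(patients["hematoma_volume_ml"],
                                  bins=[0, 60, float("inf")],
                                  labels=["<=60 ml", ">60 ml"])
print(patients.groupby("volume_group", observed=True)["died"].mean())

# Mortality proportion by time-to-surgery interval (<3 h, 3-24 h, >24 h).
patients["time_group"] = pd.cut(patients["hours_to_surgery"],
                                bins=[0, 3, 24, float("inf")],
                                labels=["<3 h", "3-24 h", ">24 h"])
print(patients.groupby("time_group", observed=True)["died"].mean())

# Mortality proportion by surgery type.
print(patients.groupby("surgery_type")["died"].mean())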

Keywords: hemorrhagic stroke, intracerebral hemorrhage, surgical treatment, stroke mortality

Procedia PDF Downloads 106