Search results for: detecting architectural tactics
677 The Analysis of Deceptive and Truthful Speech: A Computational Linguistics-Based Method
Authors: Seham El Kareh, Miramar Etman
Abstract:
Recently, detecting liars and extracting the features that distinguish them from truth-tellers have been the focus of a wide range of disciplines. To the authors' best knowledge, most of the work has been done on facial expressions and body gestures, while only a few studies have examined the language used by liars and truth-tellers. This paper sheds light on four axes. The first axis copes with building an audio corpus of deceptive and truthful speech for Egyptian Arabic speakers. The second axis focuses on examining the human perception of lies and demonstrating the need for computational linguistic-based methods to extract the features that characterize truthful and deceptive speech. The third axis is concerned with building a linguistic analysis program that can extract from the corpus the inter- and intra-linguistic cues for deceptive and truthful speech. The program built here is based on selected categories from the Linguistic Inquiry and Word Count (LIWC) program. Our results demonstrate that Egyptian Arabic speakers, on the one hand, preferred to use first-person pronouns and the present tense rather than the past tense when lying, and their lies lacked second-person pronouns; on the other hand, when telling the truth, they preferred verbs related to motion and nouns related to time. The results also show that larger data sets are needed to establish the significance of words related to emotions and numbers.
Keywords: Egyptian Arabic corpus, computational analysis, deceptive features, forensic linguistics, human perception, truthful features
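As a rough sketch of the category-counting idea behind LIWC-style analysis (not the authors' program, and with a toy English word list standing in for the Egyptian Arabic lexicon), one can tally the relative frequency of selected word categories per transcript:

```python
from collections import Counter

# Toy category lexicon standing in for the selected LIWC categories;
# the real study used Egyptian Arabic transcripts.
CATEGORIES = {
    "first_person": {"i", "me", "my", "mine", "we", "our"},
    "second_person": {"you", "your", "yours"},
    "motion_verbs": {"go", "went", "run", "move", "walk"},
}

def category_rates(transcript):
    """Fraction of transcript words falling in each category."""
    words = transcript.lower().split()
    counts = Counter()
    for w in words:
        for cat, lexicon in CATEGORIES.items():
            if w in lexicon:
                counts[cat] += 1
    return {cat: counts[cat] / len(words) for cat in CATEGORIES}

print(category_rates("i went home and my brother saw me go"))
```

Comparing such rates between the deceptive and truthful subsets of a corpus is what supports claims like "lies lacked second-person pronouns".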
676 Forensic Methods Used for the Verification of the Authenticity of Prints
Authors: Olivia Rybak-Karkosz
Abstract:
This paper presents the results of scientific research on methods of forging art prints and their elements, such as signatures or provenance, and on the forensic science methods that might be used to verify their authenticity. In recent decades, the art market has seen significant interest in purchasing prints. They are considered an economical alternative to paintings and a considerable investment. However, the authenticity of an art print is difficult to establish, as similar visual effects can be achieved with drawings or xerographic copies; the latter are easy to make with a home printer and are then offered on flea markets or internet auctions as genuine prints. This relative ease of forgery, together with the difficulty of distinguishing printmaking techniques, was the main reason this research was undertaken. The lack of scientific methods dedicated to disclosing such forgeries encouraged the author to verify whether forensic science methods known and used in other fields of expertise could be applied. The research methodology consisted of compiling representative forgery samples collected in selected museums in Poland and a few in Germany and Austria, which allowed the author to present a typology of methods used to forge art prints. Given that banknotes and securities are among the most prominent examples of printed graphic design, it seems appropriate to apply methods of detecting counterfeit currency to print verification. These methods comprise the examination of ink, paper, and watermarks. On prints, signatures and stamp imprints are forged as well, so the examination should be complemented by handwriting examination and forensic sphragistics. The paper concludes with a recommendation to conduct a complex analysis of authenticity with the participation of an art restorer, an art historian, and a forensic expert as head of the team.
Keywords: art forgery, examination of an artwork, handwriting analysis, prints
675 Beam Coding with Orthogonal Complementary Golay Codes for Signal to Noise Ratio Improvement in Ultrasound Mammography
Authors: Y. Kumru, K. Enhos, H. Köymen
Abstract:
In this paper, we report experimental results on using complementary Golay coded signals at 7.5 MHz to detect breast microcalcifications of 50 µm size. Simulations using complementary Golay coded signals agree closely with the experimental results, confirming the improved signal-to-noise ratio of complementary Golay coded signals. To improve the detection of microcalcifications, orthogonal complementary Golay sequences, whose low cross-correlation minimizes interference, are used as coded signals and compared to tone-burst pulses of equal energy in terms of resolution under weak-signal conditions. The measurements are conducted using an experimental ultrasound research scanner, the Digital Phased Array System (DiPhAS) with 256 channels, and a phased array transducer with a 7.5 MHz center frequency; the results obtained through experiments are validated with the Field II simulation software. In addition, to investigate the superiority of coded signals in terms of resolution, a multipurpose tissue-equivalent phantom containing a series of monofilament nylon targets, 240 µm in diameter, and cyst-like objects with an attenuation of 0.5 dB/(MHz·cm) is used in the experiments. We obtained ultrasound images of the monofilament nylon targets for the evaluation of resolution. Simulation and experimental results show that closely positioned small targets can be differentiated with increased success by using coded excitation under very weak signal conditions.
Keywords: coded excitation, complementary Golay codes, DiPhAS, medical ultrasound
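The key property exploited here is that the two autocorrelations of a complementary Golay pair sum to a delta, so pulse compression leaves no range sidelobes. A minimal numpy sketch (illustrative, not the authors' code) using the standard recursive pair construction:

```python
import numpy as np

def golay_pair(n_doublings):
    """Build a complementary Golay pair of length 2**n_doublings
    by the standard recursive (a, b) -> (a|b, a|-b) construction."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_doublings):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(4)                       # length-16 pair
auto_a = np.correlate(a, a, mode="full")   # autocorrelation of code A
auto_b = np.correlate(b, b, mode="full")   # autocorrelation of code B
total = auto_a + auto_b
print(total)  # 2N (= 32) at zero lag, zeros elsewhere: no range sidelobes
```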
674 Local Interpretable Model-agnostic Explanations (LIME) Approach to Email Spam Detection
Authors: Rohini Hariharan, Yazhini R., Blessy Maria Mathew
Abstract:
Detecting email spam is an essential task in the era of digital technology, requiring effective ways of curbing unwanted messages. This paper presents an approach aimed at making email spam categorization algorithms transparent, reliable, and more trustworthy by incorporating Local Interpretable Model-agnostic Explanations (LIME). Our technique provides interpretable explanations for specific email classifications, helping users understand the model's decision-making process. In this study, we developed a complete pipeline that incorporates LIME into the spam classification framework and allows the creation of simplified, interpretable models tailored to individual emails. LIME identifies influential terms, pointing out the key elements that drive classification results and thus reducing the opacity inherent in conventional machine learning models. Additionally, we suggest a visualization scheme for displaying keywords that improves users' understanding of categorization decisions. We test our method on a diverse email dataset and compare its performance with various baseline models, such as Gaussian Naive Bayes, Multinomial Naive Bayes, Bernoulli Naive Bayes, Support Vector Classifier, K-Nearest Neighbors, Decision Tree, and Logistic Regression. Our results show that our model surpasses all the baselines, achieving an accuracy of 96.59% and a precision of 99.12%.
Keywords: text classification, LIME (local interpretable model-agnostic explanations), stemming, tokenization, logistic regression
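As a rough sketch of such a pipeline (assuming the scikit-learn and lime packages are installed; the toy messages and parameter choices are invented for illustration and are not the authors' dataset or code):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy corpus (hypothetical); a real study would use a labeled email dataset.
texts = ["win a free prize now", "meeting agenda attached",
         "cheap loans click here", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["ham", "spam"])
exp = explainer.explain_instance("click here for a free prize",
                                 pipeline.predict_proba, num_features=4)
print(exp.as_list())  # (term, weight) pairs that drove this prediction
```

The per-email (term, weight) list is exactly the kind of output a keyword-visualization scheme can then display to the user.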
673 Analysis of Saudi Breast Cancer Patients’ Primary Tumors Using Array Comparative Genomic Hybridization
Authors: L. M. Al-Harbi, A. M. Shokry, J. S. M. Sabir, A. Chaudhary, J. Manikandan, K. S. Saini
Abstract:
Breast cancer is the second most common cause of cancer death worldwide and the most common malignancy among Saudi females. During breast carcinogenesis, a wide array of cytogenetic changes involving deletions, amplifications, or translocations of part or whole chromosome regions have been observed. Because of the limitations of earlier technologies, newer tools have been developed to scan for changes at the genomic level. Recently, the array comparative genomic hybridization (aCGH) technique has been applied to detect segmental genomic alterations at the molecular level. In this study, aCGH was performed on twenty breast cancer tumors and their matching non-tumor (normal) counterparts using the Agilent 2x400K platform. Several regions were identified as either amplified or deleted in a tumor-specific manner. The most frequent alterations were amplifications of chromosomes 1q, 8q, and 20q; deletions at 16q were also detected. The amplification events at 1q and 8q were further validated by FISH analysis using probes targeting 1q25 and 8q (MYC gene). The copy number changes at these loci can potentially cause a significant change in tumor behavior, as deletions in the tumor suppressor gene E-cadherin (CDH1) as well as amplification of the oncogenes Aurora kinase A (AURKA) and MYC could make these tumors highly metastatic. This study validates the use of aCGH in Saudi breast cancer patients and sets the foundations necessary for larger cohort studies searching for ethnicity-specific biomarkers and gene copy number variations.
Keywords: breast cancer, molecular biology, ecology, environment
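For orientation, the core aCGH computation reduces to per-probe log2 ratios of tumor versus matched-normal intensities, thresholded into gain/loss calls; the sketch below is illustrative only, with invented intensities and a commonly used ±0.3 threshold, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
normal = rng.uniform(500, 1500, size=10)   # hypothetical probe intensities
tumor = normal * np.array([2.1, 1.0, 1.0, 0.45, 1.0,
                           2.0, 1.0, 1.0, 0.5, 1.0])  # injected gains/losses

log2_ratio = np.log2(tumor / normal)
calls = np.where(log2_ratio > 0.3, "gain",
         np.where(log2_ratio < -0.3, "loss", "neutral"))
for i, (lr, call) in enumerate(zip(log2_ratio, calls)):
    print(f"probe {i}: log2 ratio = {lr:+.2f} -> {call}")
```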
672 Quantification Model for Capability Evaluation of Optical-Based In-Situ Monitoring System for Laser Powder Bed Fusion (LPBF) Process
Authors: Song Zhang, Hui Wang, Johannes Henrich Schleifenbaum
Abstract:
Due to the increasing demand for quality assurance and reliability in additive manufacturing, advanced in-situ monitoring systems are required to detect process anomalies as input for further process control. Optical-based monitoring systems, such as CMOS and NIR cameras, have proved to be effective for monitoring geometrical distortion and abnormal thermal distributions. Consequently, many studies and applications focus on the availability of optical-based monitoring systems for detecting various types of defects. However, the capability of such monitoring setups is usually not quantified. In this study, a quantification model for evaluating the capability of monitoring setups for the LPBF machine, based on monitoring data acquired from a designed test artifact, is presented, and the design of the relevant test artifacts is discussed. The monitoring setup is evaluated based on its hardware properties, the location of its integration, and the lighting conditions. The methodology of data processing to quantify the capability for each aspect is discussed. The minimal detectable size of the monitoring setup in the application is estimated by quantifying its resolution and accuracy. The quantification model is validated using a CCD camera-based monitoring system for LPBF machines in the laboratory with different setups. The results show that the model quantifies the monitoring system's performance, which makes it possible to evaluate monitoring systems with the same concept but different setups for the LPBF process, and provides direction for improving the setups.
Keywords: data processing, in-situ monitoring, LPBF process, optical system, quantification model, test artifact
671 Error Detection and Correction for Onboard Satellite Computers Using Hamming Code
Authors: Rafsan Al Mamun, Md. Motaharul Islam, Rabana Tajrin, Nabiha Noor, Shafinaz Qader
Abstract:
In an attempt to enrich the lives of billions of people by providing proper information, security, and a way of communicating with others, the need for efficient and improved satellites is constantly growing. Thus, there is an increasing demand for better error detection and correction (EDAC) schemes capable of protecting the data onboard satellites. This paper is aimed at detecting and correcting such errors using the Hamming code, which uses the concept of parity bits to counter single-bit errors onboard a satellite in Low Earth Orbit. The paper focuses on the study of Low Earth Orbit satellites and the process of generating the Hamming code matrix to be used for EDAC using computer programs. The most effective version generated was Hamming(16, 11, 4), implemented in MATLAB; the paper compares this particular scheme with other EDAC mechanisms, including other versions of Hamming codes and the Cyclic Redundancy Check (CRC), and discusses its limitations. This version of the Hamming code guarantees single-bit error correction as well as double-bit error detection. Furthermore, it has proved to be fast, with a checking time of 5.669 nanoseconds, has a relatively higher code rate and lower bit overhead than the other versions, and can detect a greater percentage of errors per length of code than other EDAC schemes with similar capabilities. In conclusion, with proper implementation of the system, it is quite possible to ensure a relatively uncorrupted satellite storage system.
Keywords: bit-flips, Hamming code, low earth orbit, parity bits, satellite, single event upset
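As a sketch of the scheme described (in Python rather than the authors' MATLAB, and not their code), the following implements Hamming(16,11) SECDED: the 11 data bits go in the non-power-of-two positions 1-15, four parity bits sit at positions 1, 2, 4, and 8, and an extra overall parity bit gives double-error detection.

```python
def hamming_encode(data_bits):
    """Encode 11 data bits into a 16-bit Hamming SECDED codeword.
    Index 0 holds the overall parity bit; indices 1-15 form Hamming(15,11)."""
    assert len(data_bits) == 11
    code = [0] * 16
    bits = iter(data_bits)
    for pos in range(1, 16):
        if pos & (pos - 1):          # not a power of two: data position
            code[pos] = next(bits)
    for p in (1, 2, 4, 8):           # parity bit p covers positions with bit p set
        code[p] = sum(code[pos] for pos in range(1, 16) if pos & p) % 2
    code[0] = sum(code) % 2          # overall parity for double-error detection
    return code

def hamming_decode(code):
    """Correct any single-bit error; raise on a detected double-bit error."""
    syndrome = 0
    for pos in range(1, 16):
        if code[pos]:
            syndrome ^= pos          # syndrome = position of a single error
    overall_ok = sum(code) % 2 == 0
    if syndrome and not overall_ok:  # single-bit error: flip it back
        code[syndrome] ^= 1
    elif syndrome and overall_ok:    # double-bit error: detectable, not correctable
        raise ValueError("double-bit error detected")
    return [code[pos] for pos in range(1, 16) if pos & (pos - 1)]

data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
word = hamming_encode(data)
word[6] ^= 1                         # inject a single bit-flip (e.g., an SEU)
assert hamming_decode(word) == data  # recovered intact
```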
670 A Numerical Computational Method of MRI Static Magnetic Field for Ergonomic Facility Design Guidelines
Authors: Sherine Farrag
Abstract:
Magnetic resonance imaging (MRI) presents safety hazards associated with its general physical environment. The principal hazard of MRI is the presence of static magnetic fields. Proper architectural design of the MRI room ensures the safety of the environment and of healthcare staff. This research paper presents an easy approach for the numerical computation of fringe static magnetic fields. Iso-gauss lines of scanners of different field strengths (0.3, 0.5, 1, and 1.5 Tesla) were mapped, and a polynomial function of the 7th degree was generated and tested. A MATLAB script was successfully applied for MRI SMF mapping. This method is valid for any kind of commercial scanner because it requires only the MR scanner room map with iso-gauss lines. The results help develop guidelines for healthcare architects designing a safer magnetic resonance imaging suite.
Keywords: designing MRI suite, MRI safety, radiology occupational exposure, static magnetic fields
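A Python sketch of the same idea (the paper used a MATLAB script; the sample points below are invented for illustration, where real values would come from the manufacturer's siting map): fit a 7th-degree polynomial to iso-gauss readings and locate the conventional 5-gauss exclusion line used in MRI suite planning.

```python
import numpy as np

# Hypothetical (distance from isocentre [m], fringe field [gauss]) samples
# read off an iso-gauss map of a 1.5 T scanner.
r = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
b = np.array([900, 300, 120, 55, 28, 15, 9, 6, 4.5])

coeffs = np.polyfit(r, b, 7)                 # 7th-degree fit, as in the paper
field = lambda d: np.polyval(coeffs, d)

# Locate the 5-gauss line for room-layout guidance
d = np.linspace(1.0, 5.0, 401)
print(f"5-gauss line at ~{d[np.argmin(np.abs(field(d) - 5.0))]:.1f} m")
```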
669 Transformer Fault Diagnostic Predicting Model Using Support Vector Machine with Gradient Descent Optimization
Authors: R. O. Osaseri, A. R. Usiobaifo
Abstract:
The power transformer, which is responsible for voltage transformation, is of great relevance in the power system, and the oil-immersed transformer is widely used all over the world. Prompt and proper maintenance of the transformer is of utmost importance, and the dissolved gas content in power transformer oil is of enormous importance in detecting incipient faults. Accurate prediction of incipient faults in transformer oil is needed in order to facilitate prompt maintenance, reduce cost, and minimize error. Fault prediction and diagnosis have been the focus of many researchers, and many previous works have reported on the use of artificial intelligence to predict incipient transformer failures. In this study, a machine learning technique employing gradient descent algorithms and a Support Vector Machine (SVM) is used to predict incipient transformer faults. The method focuses on creating a system that improves its performance using previous results and historical data. The system design approach comprises two phases: training and testing. The gradient descent algorithm is trained with a training dataset, while the learned algorithm is applied to a set of new data; these two datasets are used to prove the accuracy of the proposed model. The presented transformer fault diagnostic model based on SVM and gradient descent algorithms shows satisfactory diagnostic capability, with a higher percentage of correctly predicted incipient transformer failures than existing diagnostic methods.
Keywords: diagnostic model, gradient descent, machine learning, support vector machine (SVM), transformer fault
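As a minimal sketch of the approach (not the authors' code): scikit-learn's SGDClassifier with hinge loss is a linear SVM trained by stochastic gradient descent, applied below to hypothetical dissolved-gas-analysis features.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical DGA data: ppm of H2, CH4, C2H2, C2H4, C2H6 per oil sample
X = rng.uniform(0, 500, size=(200, 5))
y = (X[:, 2] > 250).astype(int)  # stand-in rule: high C2H2 -> incipient arcing fault

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      SGDClassifier(loss="hinge"))  # linear SVM via gradient descent
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```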
668 The Home as Memory Palace: Three Case Studies of Artistic Representations of the Relationship between Individual and Collective Memory and the Home
Authors: Laura M. F. Bertens
Abstract:
The houses we inhabit are important containers of memory. As homes, they take on meaning for those who live inside, and memories of family life become intimately tied up with rooms, windows, and gardens. Each new family creates a new layer of meaning, resulting in a palimpsest of family memory. These houses function quite literally as memory palaces, as a walk through a childhood home will show; each room conjures up images of past events. Over time, these personal memories become woven together with the cultural memory of countries and generations. The importance of the home is a central theme in art, and several contemporary artists have a special interest in the relationship between memory and the home. This paper analyses three case studies in order to get a deeper understanding of the ways in which the home functions as, and feels like, a memory palace, both on an individual and on a collective, cultural level. Close reading of the artworks is performed at the theoretical intersection of Art History and Cultural Memory Studies. The first case study concerns works from the exhibition Mnemosyne by the artist duo Anne and Patrick Poirier. These works combine interests in architecture, archaeology, and psychology. Models of cities and fantastical architectural designs resemble physical structures (such as the brain), architectural metaphors used in representing the concept of memory (such as the memory palace), and archaeological remains, essential to our shared cultural memories. Secondly, works by Do Ho Suh will help us understand the relationship between the home and memory on a far more personal level; outlines of rooms from his former homes, made of colourful, transparent fabric and combined into new structures, provide an insight into the way these spaces retain individual memories. The spaces have been emptied out, and only the husks remain. Although the remnants of walls, light switches, doors, electricity outlets, etc. are standard, mass-produced elements found in many homes and devoid of inherent meaning, together they remind us of the emotional significance attached to the muscle memory of spaces we once inhabited. The third case study concerns an exhibition in a house put up for sale on the Dutch real estate website Funda. The house was built in 1933 by a Jewish family fleeing from Germany, and the father and son were later deported and killed. The artists Anne van As and CA Wertheim used the history and memories of the house as a starting point for an exhibition called (T)huis, a combination of the Dutch words for home and house. This case study illustrates the way houses become containers of memories; each new family 'resets' the meaning of a house, but traces of earlier memories remain. The exhibition allows us to explore the transition of individual memories into shared cultural memory, in this case of WWII. Taken together, the analyses provide a deeper understanding of different facets of the relationship between the home and memory, both individual and collective, and of the ways in which art can represent these.
Keywords: Anne and Patrick Poirier, cultural memory, Do Ho Suh, home, memory palace
667 Offline High Voltage Diagnostic Test Findings on 15 MVA Generator of Basochhu Hydropower Plant
Authors: Suprit Pradhan, Tshering Yangzom
Abstract:
Even with the availability of modern online insulation diagnostic technologies such as partial discharge monitoring, measurements like dissipation factor (tan δ), DC high-voltage insulation current, polarization index (PI), and insulation resistance are still widely used as diagnostic tools to assess the condition of stator insulation in hydropower plants. To evaluate the condition of the stator winding insulation in one of the generators that has been in operation since 1999, diagnostic tests were performed on the stator bars of the 15 MVA generators of Basochhu Hydropower Plant. This paper presents a diagnostic study of the data gathered from measurements performed in 2015 and 2016 as part of regular maintenance, since no proper ageing data had been maintained since commissioning. Measurement results of dissipation factor, DC high-potential tests, and polarization index are discussed with regard to their effectiveness in assessing the ageing condition of the stator insulation. After a brief review of the theoretical background, the strengths of each diagnostic method in detecting symptoms of insulation deterioration are identified. The results observed at Basochhu Hydropower Plant lead to the conclusion that polarization index and DC high-voltage insulation current measurements are best suited for detecting humidity and contamination problems, while dissipation factor measurement is a robust indicator of long-term ageing caused by oxidative degradation.
Keywords: dissipation factor (tan δ), polarization index (PI), DC high-voltage insulation current, insulation resistance (IR), tan delta tip-up, dielectric absorption ratio
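For reference, the polarization index and the dielectric absorption ratio are simple quotients of insulation resistance readings taken at fixed times after the DC test voltage is applied; a small sketch with hypothetical readings follows (IEEE 43 commonly treats PI ≥ 2.0 as acceptable for modern insulation classes):

```python
def polarization_index(ir_1min_megohm, ir_10min_megohm):
    """PI = IR at 10 min / IR at 1 min under constant DC test voltage."""
    return ir_10min_megohm / ir_1min_megohm

def dielectric_absorption_ratio(ir_30s_megohm, ir_60s_megohm):
    """DAR = IR at 60 s / IR at 30 s."""
    return ir_60s_megohm / ir_30s_megohm

# Hypothetical readings from a stator winding test
pi = polarization_index(ir_1min_megohm=800, ir_10min_megohm=2400)
print(f"PI = {pi:.1f} ->", "acceptable (>= 2.0)" if pi >= 2.0 else "investigate")
```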
666 Investigating the Neural Heterogeneity of Developmental Dyscalculia
Authors: Fengjuan Wang, Azilawati Jamaludin
Abstract:
Developmental dyscalculia (DD) is defined as a particular learning difficulty with continuous challenges in learning requisite math skills that cannot be explained by intellectual disability or educational deprivation. Recent studies have increasingly recognized that DD is a heterogeneous, rather than monolithic, learning disorder involving not only cognitive and behavioral deficits but also neural dysfunction. In recent years, neuroimaging studies have employed group comparisons to explore the neural underpinnings of DD, which contradicts the heterogeneous nature of DD and may obfuscate critical individual differences. This research aimed to investigate the neural heterogeneity of DD using case studies with functional near-infrared spectroscopy (fNIRS). A total of 54 children aged 6-7 years participated in this study, which comprised two comprehensive cognitive assessments, an 8-minute resting state, and an 8-minute one-digit addition task. Nine children met the criteria for DD, scoring at or below 85 (i.e., the 16th percentile) on the Mathematics or Math Fluency subtest of the Wechsler Individual Achievement Test, Third Edition (WIAT-III) (both subtest scores were 90 or below). The remaining 45 children formed the typically developing (TD) group. Resting-state data and brain activation in the inferior frontal gyrus (IFG), superior frontal gyrus (SFG), and intraparietal sulcus (IPS) were collected for comparison between each case and the TD group. Graph theory was used to analyze the brain network under the resting state. This theory represents the brain network as a set of nodes (brain regions) and edges (pairwise interactions between regions) to reveal the architectural organization of the nervous network. Next, a single-case methodology developed by Crawford et al. in 2010 was used to compare each case's brain network indicators and brain activation against the average data of the 45 TD children. Results showed that three of the nine DD children displayed significant deviations from the TD children's brain indicators. Case 1 had inefficient nodal network properties. Case 2 showed inefficient brain network properties and weaker activation in the IFG and IPS areas. Case 3 displayed inefficient brain network properties with no differences in activation patterns. Overall, the present study was able to distill differences in the architectural organization and brain activation of DD vis-à-vis TD children using fNIRS and single-case methodology. Although DD is regarded as a heterogeneous learning difficulty, it is noted that all three cases showed lower nodal efficiency in the brain network, which may be one of the neural sources of DD. Importantly, although the current 'brain norm' established for the 45 children is tentative, the results provide insights not only for future work on a 'developmental brain norm' with reliable brain indicators but also for the viability of the single-case methodology, which could be used to detect differential brain indicators of DD children for early detection and intervention.
Keywords: brain activation, brain network, case study, developmental dyscalculia, functional near-infrared spectroscopy, graph theory, neural heterogeneity
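A sketch of the single-case comparison idea, using the classical Crawford-Howell t-test (a frequentist relative of the Bayesian methodology the abstract cites; the efficiency values below are invented):

```python
import numpy as np
from scipy import stats

def crawford_howell(case_score, control_scores):
    """Crawford-Howell t-test: does a single case deviate from a small
    control sample? Returns (t, two-tailed p) with df = n - 1."""
    controls = np.asarray(control_scores, dtype=float)
    n = controls.size
    t = (case_score - controls.mean()) / (controls.std(ddof=1) * np.sqrt((n + 1) / n))
    p = 2 * stats.t.sf(abs(t), df=n - 1)
    return t, p

# Hypothetical nodal-efficiency values: one DD case vs. 45 TD controls
rng = np.random.default_rng(1)
td_efficiency = rng.normal(0.55, 0.05, size=45)
t, p = crawford_howell(0.40, td_efficiency)
print(f"t = {t:.2f}, p = {p:.4f}")
```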
665 The Planning Criteria of Block-Unit Redevelopment to Improve Residential Environment: Focused on Redevelopment Project in Seoul
Authors: Hong-Nam Choi, Hyeong-Wook Song, Sungwan Hong, Hong-Kyu Kim
Abstract:
In Korea, the elements that determine the quality of the residential environment are not only diverse but also show wide deviation. However, planners do not consider these elements; instead, they settle for a uniform style of residential environment that focuses on the construction of apartment housing and business-based plans. Recently, block-unit redevelopment has become the standout alternative to standardized redevelopment projects, but construction becomes inefficient because of indefinite planning criteria. This research therefore analyzes and categorizes the development methods and legal grounds of redevelopment project districts, their plan determinants, and the applicable standards. The purpose of this study is to provide a basis for a compatible analysis of the planning standards to come.
Keywords: shape restrictions, improvement of regulation, diversity of residential environment, classification of redevelopment project, planning criteria of redevelopment, special architectural district (SAD)
664 SEM Detection of Folate Receptor in a Murine Breast Cancer Model Using Secondary Antibody-Conjugated, Gold-Coated Magnetite Nanoparticles
Authors: Yasser A. Ahmed, Juleen M Dickson, Evan S. Krystofiak, Julie A. Oliver
Abstract:
Cancer cells urgently need folate to support their rapid division. Folate receptors (FR) are over-expressed on a wide range of tumor cells, including breast cancer cells. FR are distributed over the entire surface of cancer cells but are polarized to the apical surface of normal cells. Targeting cancer cells via specific surface molecules such as folate receptors may be one strategy for killing cancer cells without hurting neighboring normal cells. The aim of the current study was to test a method of SEM detection of FR in a murine breast cancer cell model (4T1 cells) using secondary antibody conjugated to gold or gold-coated magnetite nanoparticles. 4T1 cells were suspended in RPMI medium with FR antibody and incubated with secondary antibody for fluorescence microscopy. The cells were cultured on 30 mm Thermanox coverslips for 18 hours, labeled with FR antibody, then incubated with secondary antibody conjugated to gold or gold-coated magnetite nanoparticles and processed for scanning electron microscopy (SEM) analysis. Fluorescence microscopy showed strong punctate FR expression on the 4T1 cell membrane. With SEM, labeling with gold or gold-coated magnetite conjugates showed a similar pattern. Specific labeling occurred in nanoparticle clusters, which are clearly visualized in backscattered electron images. The 4T1 tumor cell model may be useful for the development of FR-targeted tumor therapy using gold-coated magnetite nanoparticles.
Keywords: cancer cell, nanoparticles, cell culture, SEM
663 Drawing, Design and Building Information Modelling (BIM): Embedding Advanced Digital Tools in the Academy Programs for Building Engineers and Architects
Authors: Vittorio Caffi, Maria Pignataro, Antonio Cosimo Devito, Marco Pesenti
Abstract:
This paper deals with the integration of advanced digital design and modelling tools and methodologies, known as Building Information Modelling (BIM), into the traditional academic educational programs for building engineers and architects. Nowadays, the challenge the academy faces is to present the new tools and their features to students, making sure they acquire the proper skills to leverage the potential these tools offer for the other courses in the curriculum as well. The syllabus presented here refers to the 'Drawing for building engineering', '2D and 3D laboratory', and '3D modelling' curricula of the MSc in Building Engineering at the Politecnico di Milano. Such topics, included in the MSc program from the first year, are fundamental to giving students the instruments to master the complexity of an architectural or building engineering project with digital tools, so as to represent it in its various forms.
Keywords: BIM, BIM curricula, computational design, digital modelling
662 A Physiological Approach for Early Detection of Hemorrhage
Authors: Rabie Fadil, Parshuram Aarotale, Shubha Majumder, Bijay Guargain
Abstract:
Hemorrhage is the loss of blood from the circulatory system and a leading cause of battlefield and postpartum-related deaths. Early detection of hemorrhage remains the most effective strategy to reduce the mortality rate caused by traumatic injuries. In this study, we investigated the physiological changes via non-invasive cardiac signals at rest and under different hemorrhage conditions simulated through graded lower-body negative pressure (LBNP). Simultaneous electrocardiogram (ECG), photoplethysmogram (PPG), blood pressure (BP), impedance cardiogram (ICG), and phonocardiogram (PCG) recordings were acquired from 10 participants (age: 28 ± 6 years, weight: 73 ± 11 kg, height: 172 ± 8 cm). The LBNP protocol consisted of applying -20, -30, -40, -50, and -60 mmHg pressure to the lower half of the body. Beat-to-beat heart rate (HR), systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP) were extracted from the ECG and blood pressure; systolic amplitude (SA), systolic time (ST), diastolic time (DT), and left ventricular ejection time (LVET) were extracted from the PPG during each stage. Preliminary results showed that the application of -40 mmHg, i.e., the moderate stage of simulated hemorrhage, resulted in significant changes in HR (85 ± 4 bpm vs. 68 ± 5 bpm, p < 0.01), ST (191 ± 10 ms vs. 253 ± 31 ms, p < 0.05), LVET (350 ± 14 ms vs. 479 ± 47 ms, p < 0.05), and DT (551 ± 22 ms vs. 683 ± 59 ms, p < 0.05) compared to rest, while no change was observed in SA (p > 0.05) as a consequence of LBNP application. These findings demonstrate the potential of cardiac signals for detecting moderate hemorrhage. In future work, we will analyze all the LBNP stages and investigate the feasibility of other physiological signals to develop a predictive machine learning model for early detection of hemorrhage.
Keywords: blood pressure, hemorrhage, lower-body negative pressure, LBNP, machine learning
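As a small illustration of the beat-to-beat feature extraction described (with hypothetical R-peak times, not study data), instantaneous heart rate follows directly from the RR intervals of the ECG:

```python
import numpy as np

# Hypothetical R-peak times (s) detected from an ECG during one LBNP stage
r_peaks = np.array([0.00, 0.71, 1.43, 2.13, 2.86, 3.57, 4.29])
rr = np.diff(r_peaks)   # beat-to-beat RR intervals (s)
hr = 60.0 / rr          # instantaneous heart rate (bpm)
print(f"HR = {hr.mean():.0f} +/- {hr.std(ddof=1):.1f} bpm")
```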
661 The Role of Intraluminal Endoscopy in the Diagnosis and Treatment of Fluid Collections in Patients with Acute Pancreatitis
Authors: A. Askerov, Y. Teterin, P. Yartcev, S. Novikov
Abstract:
Introduction: Acute pancreatitis (AP) is a socially significant public health problem and continues to be one of the most common causes of hospitalization among patients with gastrointestinal pathology. It is characterized by high mortality rates, which reach 62-65% in infected pancreatic necrosis. Aims & Methods: The study group included 63 patients who underwent transluminal drainage (TLD) of fluid collections (FC). All patients underwent transabdominal ultrasound, computed tomography of the abdominal cavity and retroperitoneal organs, and endoscopic ultrasound (EUS) of the pancreatobiliary zone. EUS was used as the final diagnostic method to determine the characteristics of the FC. The indications for TLD were: a distance between the wall of the hollow organ and the FC of no more than 1 cm, the absence of large vessels (more than 3 mm) along the puncture trajectory, and a size of the formation of more than 5 cm. When a homogeneous cavity with clear, even contours was detected, a plastic stent with rounded ends ('double pigtail') was installed. The indication for the installation of a fully covered self-expanding stent was the detection of a nonhomogeneous anechoic FC with hyperechoic inclusions and cloudy purulent contents. In patients with necrotic forms, after drainage of the purulent cavity, a cystonasal drain with a diameter of 7 Fr was installed in its lumen under X-ray control to sanitize the cavity with a 0.05% aqueous solution of chlorhexidine. Endoscopic necrectomy was performed every 24-48 hours. The plastic stent was removed 6 months, and the fully covered self-expanding stent 1 month, after the patient was discharged from the hospital. Results: Endoscopic TLD was performed in 63 patients. FC corresponding to interstitial edematous pancreatitis were detected in 39 (62%) patients, who underwent TLD with the installation of a plastic stent with rounded ends. In 24 (38%) patients with necrotic forms of FC, a fully covered self-expanding stent was placed. Communication with the ductal system of the pancreas was found in 5 (7.9%) patients, who underwent pancreaticoduodenal stenting. A complicated postoperative course was noted in 4 (6.3%) cases, manifested by bleeding from the zone of pancreatogenic destruction. In 2 (3.1%) cases, this required angiography and endovascular embolization of the gastroduodenal artery; in 1 (1.6%) case, endoscopic hemostasis was performed by filling the cavity with 4 ml of Hemoblock hemostatic solution; and a combination of both methods was used in 1 (1.6%) patient. There was no evidence of recurrent bleeding in these patients. A lethal outcome occurred in 4 patients (6.3%): in 3 (4.7%) patients, the cause of death was multiple organ failure; in 1 (1.6%), severe nosocomial pneumonia that developed on the 32nd day after drainage. Conclusions: 1. EUS is not only the most important method for diagnosing FC in AP but also determines the further tactics of their intraluminal drainage. 2. Endoscopic intraluminal drainage of fluid zones is, in 45.8% of cases, the final minimally invasive method of surgical treatment of large-focal pancreatic necrosis. Disclosure: Nothing to disclose.
Keywords: acute pancreatitis, fluid collection, endoscopy surgery, necrectomy, transluminal drainage
660 Checking Red Blood Cell Concentration of a Blood Sample Using a Photoconductive Antenna
Authors: Ahmed Banda, Alaa Maghrabi, Aiman Fakieh
Abstract:
The terahertz (THz) range lies between 0.1 and 10 THz. THz waves can be generated and detected through different techniques, one of the most familiar being the photoconductive antenna (PCA). Generating THz radiation with a PCA involves applying a femtosecond laser pump and a DC voltage difference. The photocurrent generated at the PCA is affected by different parameters (e.g., dielectric properties, DC voltage difference, and incident laser pump power). THz radiation is used in biomedical applications; however, different biomedical fields need new technologies to meet patients' needs (e.g., for blood-related conditions). In this work, a novel method to check the red blood cell (RBC) concentration of a blood sample using a PCA is presented. RBCs constitute 44% of the total blood volume. RBCs contain hemoglobin, which transfers oxygen from the lungs to the body's organs and then returns to the lungs carrying carbon dioxide, which the body expels through exhalation. The configuration has been simulated and optimized using COMSOL Multiphysics. Varying the RBC concentration changes the dielectric properties of the sample (e.g., the relative permittivity of the RBCs in the blood sample). The effects of four blood samples with different RBC concentrations on the photocurrent value were tested. The photocurrent peak value and the RBC concentration are inversely proportional to each other due to the change in the dielectric properties of the RBCs: the photocurrent peak value dropped from 162.99 nA to 108.66 nA as the RBC concentration rose from 0% to 100%. Optimization of this method could help launch new products for diagnosing blood-related conditions (e.g., anemia and leukemia). The resultant electric field from the DC components cannot be used to count the RBCs of the blood sample.
Keywords: biomedical applications, photoconductive antenna, photocurrent, red blood cells, THz radiation
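Using only the two endpoints reported in the abstract, and assuming an approximately linear response in between (an assumption made purely for illustration), an inverse calibration from photocurrent to RBC concentration can be sketched as follows:

```python
import numpy as np

# Endpoints reported in the abstract: 0% RBC -> 162.99 nA, 100% RBC -> 108.66 nA
conc = np.array([0.0, 100.0])        # RBC concentration (%)
current = np.array([162.99, 108.66]) # photocurrent peak (nA)

slope, intercept = np.polyfit(current, conc, 1)  # concentration from current
estimate = lambda i_na: slope * i_na + intercept
print(f"{estimate(135.0):.0f}% RBC")  # a sample reading of 135 nA -> ~52%
```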
659 16S rRNA-Based Metagenomic Analysis of Palm Sap Samples from Bangladesh
Authors: Ágota Ábrahám, Md Nurul Islam, Karimane Zeghbib, Gábor Kemenesi, Sazeda Akter
Abstract:
Collecting palm sap as a food source is an everyday practice in some parts of the world. However, the consumption of palm juice has been associated with recurring infections and epidemics in parts of Bangladesh. This is attributed to fruit-eating bats and other vertebrates or invertebrates native to the area contaminating the food with their body secretions during the collection process. The frequent intake of palm juice, whether as a processed food product or in its unprocessed form, is common over large areas. The range of pathogens capable of infecting humans through this practice is not yet fully understood. Additionally, the high sugar content of the liquid makes it an ideal culture medium for certain bacteria, which can easily propagate and potentially harm consumers. Rapid diagnostics, especially in remote locations, could mitigate the health risks associated with palm juice consumption. The primary objective of this research is the rapid genomic detection and risk assessment of bacteria that may infect humans through the consumption of palm juice. We utilized state-of-the-art third-generation Nanopore metagenomic sequencing based on 16S rRNA and identified bacteria primarily involved in fermentation processes. The swift metagenomic analysis, coupled with the widespread availability and portability of Nanopore products (including real-time analysis options), proves advantageous for detecting harmful pathogens in food sources without relying on extensive industry resources and testing.
Keywords: raw date palm sap, NGS, metabarcoding, food safety
658 Automatic Detection and Filtering of Negative Emotion-Bearing Contents from Social Media in Amharic Using Sentiment Analysis and Deep Learning Methods
Authors: Derejaw Lake Melie, Alemu Kumlachew Tegegne
Abstract:
The increasing prevalence of social media in Ethiopia has exacerbated societal challenges by fostering the proliferation of negative emotional posts and comments. Illicit use of social media has further deepened divisions among the population. Addressing these issues through manual identification and aggregation of emotions from millions of users for swift decision-making poses significant challenges, particularly given the rapid growth of Amharic-language usage on social platforms. Consequently, there is a critical need to develop an intelligent system capable of automatically detecting and categorizing negative emotional content into social, religious, and political categories while also filtering out toxic online content. This paper leverages sentiment analysis techniques to achieve automatic detection and filtering of negative emotional content from Amharic social media texts, employing a comparative study of deep learning algorithms. The study utilized a dataset of 29,962 comments collected from social media platforms using comment exporter software. Data pre-processing techniques were applied to enhance data quality, followed by the implementation of deep learning methods for training, testing, and evaluation. The results showed that CNN, GRU, LSTM, and Bi-LSTM classification models achieved accuracies of 83%, 50%, 84%, and 86%, respectively. Among these models, Bi-LSTM demonstrated the highest accuracy, at 86%.
Keywords: negative emotion, emotion detection, social media filtering, sentiment analysis, deep learning
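A minimal sketch of the best-performing architecture named in the abstract, a bidirectional LSTM classifier in Keras (hyperparameters and the class set are assumptions, not the authors' configuration):

```python
import tensorflow as tf

# Hypothetical sizes; a real model would be trained on tokenized Amharic comments.
vocab_size, max_len, n_classes = 20000, 100, 4  # e.g., social/religious/political/neutral

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(max_len,)),            # padded token-id sequences
    tf.keras.layers.Embedding(vocab_size, 128),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, validation_split=0.1, epochs=5) on encoded comments
```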
657 Liquid Chromatography Microfluidics for Detection and Quantification of Urine Albumin Using Linear Regression Method
Authors: Patricia B. Cruz, Catrina Jean G. Valenzuela, Analyn N. Yumang
Abstract:
Nearly a hundred per million of the Filipino population are diagnosed with chronic kidney disease (CKD). The early stage of CKD has no symptoms and can only be discovered once the patient undergoes urinalysis. Over the years, different methods have been used for the quantification of urinary albumin, such as immunochemical assays, most of which require large machinery with high maintenance and resource costs, and the dipstick test, which is yet to be proven and is still debated as a reliable method for detecting the early stages of microalbuminuria. This study applies the liquid chromatography concept in a microfluidic instrument, with a biosensor as the means of separation and detection, respectively, and linear regression to quantify human urinary albumin. The researchers' main objective was to create a miniature system that quantifies and detects patients' urinary albumin while reducing the volume used per five test samples. In this study, 30 urine samples of unknown albumin concentration were tested using the VITROS Analyzer and the microfluidic system for comparison. Based on the data from both methods, the actual-versus-predicted regression showed a positive linear relationship with an R² of 0.9995 and a linear equation of y = 1.09x + 0.07, indicating that the predicted and actual values are approximately equal. Furthermore, the microfluidic instrument uses 75% less total volume (sample and reagents combined) than the VITROS Analyzer per five test samples.
Keywords: chronic kidney disease, linear regression, microfluidics, urinary albumin
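The regression check itself is straightforward to reproduce; the sketch below fits reference readings against device readings and reports the slope, intercept, and R² (the paired values are invented, whereas the paper reports y = 1.09x + 0.07 with R² = 0.9995):

```python
import numpy as np

# Hypothetical paired measurements (mg/L): reference analyzer vs. microfluidic
reference = np.array([10.0, 25.0, 40.0, 80.0, 120.0, 200.0])
microfluidic = np.array([11.2, 27.5, 44.1, 87.9, 131.0, 218.5])

slope, intercept = np.polyfit(microfluidic, reference, 1)
predicted = slope * microfluidic + intercept
ss_res = np.sum((reference - predicted) ** 2)
ss_tot = np.sum((reference - reference.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"y = {slope:.2f}x + {intercept:.2f}, R^2 = {r2:.4f}")
```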
656 A Study on Structural Analysis of Out-of-Sequence Thrusts along the Sutlej River Valley (Jhakri-Wangtu Section), Himachal Pradesh Higher Himalaya, India
Authors: Rajkumar Ghosh
Abstract:
The Sutlej River Valley in Himachal Pradesh, India, hosts four out-of-sequence thrusts (OOST) in the Higher Himalaya: the Jhakri Thrust (JT), the Sarahan Thrust (ST), the Chaura Thrust (CT), and the Jeori Dislocation (JD). The study focuses on the rock types of these OOSTs, including ductile sheared gneisses and schists metamorphosed at upper greenschist to amphibolite facies. Microstructural tests reveal a progressive increase in strain approaching the Jhakri thrust zone, with temperatures increasing from 400 to 750 °C. The Chaura Thrust is assumed to be folded with an anticlinorium, with various branches making up the thrust system. Fieldwork and microstructural research have revealed the following: (a) an initial top-to-SW sense of ductile shearing (Chaura Thrust); (b) brittle-ductile extension (Jeori Dislocation); and (c) a uniform top-to-SW sense of brittle shearing (Jhakri Thrust). Samples of Rampur Quartzite from the Rampur Group of the Lesser Himalayan Crystalline and schistose rock from the Jutogh Group of the Greater Himalayan Crystalline were examined. The study emphasizes the value of microscopic research in detecting different types of crenulated schistosity and documenting mylonitized zones. The paper explains the field evidence for the OOSTs and concludes that the Chaura Thrust is not a blind thrust. The paper also describes the box fold and its characteristics in the regional geology of the Himachal Himalaya.
Keywords: out-of-sequence thrust (OOST), Jhakri Thrust (JT), Sarahan Thrust (ST), Chaura Thrust (CT), Jeori Dislocation (JD)
655 New Off-Line SPE-GC-MS/MS Method for Determination of Mineral Oil Saturated Hydrocarbons/Mineral Oil Hydrocarbons in Animal Feed, Foods, Infant Formula and Vegetable Oils
Authors: Ovanes Chakoyan
Abstract:
Mineral oil hydrocarbons (MOH), which consist of mineral oil saturated hydrocarbons (MOSH) and mineral oil aromatic hydrocarbons (MOAH), are present in various products such as vegetable oils, animal feed, foods, and infant formula. Contamination of foods with mineral oil hydrocarbons is of particular concern for MOAH, which exhibit carcinogenic, mutagenic, and hormone-disruptive effects. Identifying the toxic substances among the many thousands that make up mineral oils in food samples is a difficult analytical challenge. A method based on an off-line solid phase extraction approach coupled with gas chromatography-triple quadrupole mass spectrometry (GC-MS/MS) was developed for the determination of MOSH/MOAH in products such as vegetable oils, animal feed, foods, and infant formula. A glass solid phase extraction cartridge was loaded with 7 g of activated silica gel impregnated with 10% silver nitrate for the removal of olefins and lipids. The MOSH and MOAH fractions were eluted with hexane and with hexane:dichloromethane:toluene, respectively. Each eluate was concentrated to 50 µl in toluene and injected in splitless mode into the GC-MS/MS. The accuracy of the method was estimated by measuring the recovery of spiked oil samples at 2.0, 15.0, and 30.0 mg kg⁻¹; recoveries varied from 85 to 105%. The method was applied to different types of samples (sunflower meal, chocolate chips, Santa milk chocolate, biscuits, infant milk, cornflakes, refined sunflower oil, crude sunflower oil), detecting MOSH up to 56 mg/kg and MOAH up to 5 mg/kg. The limit of quantification (LOQ) of the proposed method was estimated at 0.5 mg/kg and 0.3 mg/kg for MOSH and MOAH, respectively.
Keywords: MOSH, MOAH, GC-MS/MS, foods, solid phase extraction
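The recovery figures quoted (85-105%) come from a standard spike-recovery calculation, sketched here with hypothetical found concentrations at the three validation levels:

```python
def recovery_percent(found_mg_kg, spiked_mg_kg, blank_mg_kg=0.0):
    """Spike recovery: (found - blank) / added * 100."""
    return (found_mg_kg - blank_mg_kg) / spiked_mg_kg * 100.0

# Hypothetical spike experiments at the levels used for validation
for spiked, found in [(2.0, 1.8), (15.0, 14.6), (30.0, 31.2)]:
    print(f"spiked {spiked:>4} mg/kg -> recovery {recovery_percent(found, spiked):.0f}%")
```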
654 Anesthetic Considerations for Carotid Endarterectomy: Prospective Study Based on Clinical Trials
Authors: Ahmed Yousef A. Al Sultan
Abstract:
Introduction: This review is based on clinical research studying the changes in middle cerebral artery velocity using transcranial Doppler (TCD) and cerebral oxygen saturation using cerebral oximetry in patients undergoing carotid endarterectomy (CEA) under local anesthesia (LA). Patients with or without neurological symptoms during surgery took part in this study, which used the triple method of cerebral oximetry, transcranial Doppler, and the awake test to detect cerebral ischemic symptoms. Methods: About one hundred patients took part during their CEA surgeries under local anesthesia, assessed with the triple method described above; patients requiring general anesthesia were excluded from the analysis. All data were recorded at eight surgical stages separately. Results: Regional cerebral oxygen saturation (rSO2), middle cerebral artery (MCA) velocity, and the pulsatility index were all significantly decreased during the carotid artery clamping step of the CEA procedures on the targeted carotid side, with the largest changes observed in MCA velocity. Discussion: Cerebral oxygen saturation and MCA velocity decreased significantly during the clamping step of the procedures on the targeted side. The group with neurological symptoms during the procedures showed larger changes in rSO2 and MCA velocity than the group without neurological symptoms. Cerebral rSO2 and MCA velocity increased significantly immediately after de-clamping of the internal carotid artery on the affected side.
Keywords: awake testing, carotid endarterectomy, cerebral oximetry, transcranial Doppler
653 Electronic Device Robustness against Electrostatic Discharges
Authors: Clara Oliver, Oibar Martinez
Abstract:
This paper is intended to reveal the severity of electrostatic discharge (ESD) effects in electronic and optoelectronic devices by performing sensitivity tests based on the Human Body Model (HBM) standard. We explain the HBM standard in detail, together with the typical failure modes associated with electrostatic discharges. In addition, a prototype electrostatic charge generator featuring a compact high-voltage source has been designed, fabricated, and verified for stressing electronic devices. This prototype is inexpensive and enables a battery of pre-compliance tests aimed at detecting unexpected weaknesses to static discharges at the component level. Tests with different devices were performed to illustrate the behavior of the proposed generator: a set of discharges was applied according to the HBM standard to commercially available bipolar transistors, complementary metal-oxide-semiconductor transistors, and light-emitting diodes. It is observed that high current and voltage ratings in electronic devices do not necessarily guarantee that a device will withstand high levels of electrostatic discharge. We have also compared the results of the HBM-based sensitivity tests with a real discharge generated by a human: the charge accumulated in the person is monitored, and a direct discharge against the devices is generated by touching them. Every test was performed under controlled relative humidity conditions. We believe this paper will interest research teams involved in the development of electronic and optoelectronic devices who need to verify the reliability of their devices in terms of robustness to electrostatic discharges.
Keywords: human body model, electrostatic discharge, sensitivity tests, static charge monitoring
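For context (a sketch, not the authors' generator): the standard HBM network discharges a 100 pF capacitor through 1.5 kΩ into the device under test, so an idealized stress waveform and its peak current follow directly:

```python
import numpy as np

# Idealized HBM discharge: 100 pF charged to V, discharged through 1.5 kOhm
# into the device under test (series inductance and the real double-exponential
# rise are neglected in this sketch).
C, R = 100e-12, 1500.0
V = 2000.0                      # 2 kV HBM stress level
tau = R * C                     # 150 ns time constant
t = np.linspace(0, 1e-6, 1000)
i = (V / R) * np.exp(-t / tau)  # discharge current through the DUT
print(f"peak current = {i[0]:.2f} A, tau = {tau*1e9:.0f} ns")
```

At 2 kV this gives a peak of roughly 1.3 A, which is why devices with comfortable DC ratings can still fail an HBM test.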
652 Fake Accounts Detection in Twitter Based on Minimum Weighted Feature Set
Authors: Ahmed ElAzab, Amira M. Idrees, Mahmoud A. Mahmoud, Hesham Hefny
Abstract:
Social networking sites such as Twitter and Facebook attract over 500 million users across the world, and for those users, social life and even practical life have become interrelated with these platforms. Their interaction with social networking has affected their lives forever. Accordingly, social networking sites have become among the main channels responsible for the vast dissemination of different kinds of information during real-time events. This popularity has led to various problems, including the possibility of exposing users to incorrect information through fake accounts, which results in the spread of malicious content during live events. This situation can cause huge damage in the real world to society in general, including citizens, business entities, and others. In this paper, we present a classification method for detecting fake accounts on Twitter. The study determines the minimized set of main factors that influence the detection of fake accounts on Twitter; the determined factors are then applied using different classification techniques, the results of these techniques are compared, and the most accurate algorithm is selected according to the accuracy of the results. The study has been compared with recent research in the same area, and this comparison demonstrates the accuracy of the proposed approach. We claim that this method can be applied continuously on the Twitter social network to automatically detect fake accounts; moreover, it can be applied to other social network sites such as Facebook with minor changes according to the nature of the social network, as discussed in this paper.
Keywords: fake accounts detection, classification algorithms, twitter accounts analysis, features based techniques
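A compact sketch of the feature-ranking idea behind a minimum weighted feature set (synthetic data and feature names; not the authors' dataset or selection procedure):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
feature_names = ["followers", "friends", "statuses", "account_age_days",
                 "default_profile_image", "listed_count"]
# Synthetic stand-in features in [0, 1]; real profiles would supply raw counts.
X = rng.uniform(0, 1, size=(300, len(feature_names)))
y = (0.8 * X[:, 4] + 0.5 * (1 - X[:, 0])
     + rng.normal(0, 0.1, 300) > 0.8).astype(int)  # 1 = fake, 0 = genuine

X_std = StandardScaler().fit_transform(X)
clf = LogisticRegression().fit(X_std, y)
# Rank features by absolute weight and keep a minimal influential subset
ranked = sorted(zip(feature_names, clf.coef_[0]), key=lambda t: -abs(t[1]))
print(ranked[:3])
```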
651 The Effects of Modern Materials on the Moisture Resistance Performance of Architectural Buildings
Authors: Leyli Hashemi Rafsanjani, Hoda Mortazavi Alavi, Amirhossein Habibzadeh
Abstract:
At present, atmospheric and environmental factors inflict massive damage on buildings. To reduce this damage, researchers are paying more attention to the qualitative and quantitative characteristics of building materials. Condensation is one of the problems in contemporary sustecture design: it can cause serious damage to the frontage, interior, and structural elements of buildings. As a result, taking preventative steps to keep condensation from occurring in buildings will help prevent avoidable and costly problems in the future. Hence, the aim of this paper is to answer the question: 'Does the use of advanced materials reduce the condensation that forms on walls?' To answer it, this paper reviewed similar articles and randomly selected 20 buildings from the contemporary architecture of developing countries, built in the recent decade from 2002 to 2012, to find the relation between the use of advanced materials and the level of condensation damage. The analysis shows that using advanced materials leads to less damage.
Keywords: condensation, advanced materials, contemporary sustecture, moisture
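Condensation risk is usually checked against the dew point: a wall surface colder than the dew point of the indoor air will collect condensate. A small sketch using the common Magnus-formula parameterization (illustrative, not from the paper):

```python
import math

def dew_point_c(temp_c, rh_percent):
    """Dew point (deg C) via the Magnus formula with the common
    a = 17.62, b = 243.12 parameterization."""
    a, b = 17.62, 243.12
    gamma = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Interior air at 22 C and 60% RH: any wall surface colder than ~13.9 C
# will collect condensate.
print(f"dew point = {dew_point_c(22.0, 60.0):.1f} C")
```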
650 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation
Authors: Somayeh Komeylian
Abstract:
Direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and separating multiple signal sources. In this scenario, modeling the antenna array output involves numerous parameters, including noise samples, signal waveform, signal directions, the number of signals, and the signal-to-noise ratio (SNR); DoA estimation methods therefore rely heavily on generalization, requiring large numbers of training data sets. Here we present two different optimization models for DoA estimation: (1) a decision directed acyclic graph (DDAG) implementation of the multiclass least-squares support vector machine (LS-SVM), and (2) an optimization method based on a deep neural network (DNN) with radial basis functions (RBF). We have rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for the three classes. However, the accuracy and robustness of DoA estimation remain highly sensitive to technological imperfections of antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling effects, and background radiation, so the method may fail to achieve high precision. This work therefore makes a further contribution by developing the DNN-RBF model for DoA estimation, overcoming the limitations of non-parametric and data-driven methods with respect to array imperfection and generalization. The numerical results of implementing the DNN-RBF model confirm better DoA estimation performance compared with the LS-SVM algorithm. Finally, we evaluate the performance of the two optimization methods using the mean squared error (MSE).
Keywords: DoA estimation, adaptive antenna array, deep neural network, LS-SVM optimization model, radial basis function, MSE
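The MSE criterion used for the comparison is simply the mean squared angular error between true and estimated directions; a toy sketch (all angle values invented):

```python
import numpy as np

def doa_mse(true_deg, est_deg):
    """Mean squared error between true and estimated arrival angles."""
    t, e = np.asarray(true_deg, float), np.asarray(est_deg, float)
    return np.mean((t - e) ** 2)

# Hypothetical estimates from the two models on the same test signals
true_doas = [-30.0, 0.0, 30.0, -30.0, 0.0, 30.0]
lssvm_est = [-28.5, 1.2, 31.0, -31.5, -0.8, 28.9]
dnn_rbf_est = [-29.6, 0.4, 30.3, -30.5, -0.2, 29.8]
print("LS-SVM MSE:", doa_mse(true_doas, lssvm_est))
print("DNN-RBF MSE:", doa_mse(true_doas, dnn_rbf_est))
```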
649 Performance of the SOFA and APACHE II Scoring Systems to Predict the Mortality of ICU Cases
Authors: Yu-Chuan Huang
Abstract:
Introduction: Unplanned transfers to intensive care units carry a higher mortality rate. They also entail a longer length of stay and prevent ICU beds from being used effectively, affecting the immediate medical treatment of critically ill patients and lowering the quality of medical care. Purpose: The purpose of this study was to use the SOFA and APACHE II scores to analyze the mortality of cases transferred from the ED to the ICU, so that appropriate care can be provided as early as possible according to the score. Methods: This study used a descriptive experimental design. The sample size was estimated at 220 to reach a power of 0.8 for detecting a medium effect size of 0.30 at a 0.05 significance level, using G*Power; considering estimated follow-up loss, the required sample size was 242 participants. Data were calculated from the medical records, using the SOFA and APACHE II scores of cases transferred from the ED to the ICU in 2016. Results: A total of 233 participants met the study criteria, and the medical records showed 33 deaths. Age and sex showed no significant association with qSOFA or SOFA, nor did sex with APACHE II (p > 0.05); age correlated with APACHE II in the ED and ICU (r = 0.150 and 0.268, p < 0.001). The scores' associations with mortality risk were: ED qSOFA, r = 0.235 (p < 0.001), exp(B) = 1.685 (p = 0.007); ICU SOFA, r = 0.78 (p < 0.001), exp(B) = 1.205 (p < 0.001); and APACHE II in the ED and ICU, r = 0.253 and 0.286 (p < 0.001), exp(B) = 1.041 and 1.073 (p = 0.017 and 0.001). For SOFA, a cutoff score above 15 points was identified as a predictor of a 95% mortality risk. Conclusions: The SOFA and APACHE II scores were calculated from initial laboratory data in the Emergency Department and during the first 24 hours of ICU admission; both are significantly associated with mortality and strongly predict it. Using these early predictors of morbidity and mortality, we can provide patients with detailed assessment and proper care according to the predicted score, thereby reducing mortality and length of stay.
Keywords: SOFA, APACHE II, mortality, ICU
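To make the reported exp(B) values concrete: each one-point score increase multiplies the odds of death by exp(B), so a k-point increase multiplies the odds by exp(B) raised to the power k. A small sketch using only the coefficients quoted in the abstract:

```python
# Reported odds ratios per one-point score increase (exp(B) from the abstract)
odds_ratio = {"ED qSOFA": 1.685, "ICU SOFA": 1.205,
              "ED APACHE II": 1.041, "ICU APACHE II": 1.073}

# Odds multiplier implied by a 5-point increase: OR**5
for name, or_ in odds_ratio.items():
    print(f"{name}: +5 points multiplies odds of death by {or_ ** 5:.2f}")
```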
648 Design and Modeling of Amphibious Houses for Flood Prone Areas: The Case of Nigeria
Authors: Onyebuchi Mogbo, Abdulsalam Mohammed, Salsabila Wali
Abstract:
This research discusses the design and modeling of an amphibious building: a house capable of floating during a flood event. Over the years, houses have been built to resist flood events, and some have failed. The floating house is designed to work with nature, not against it: in the event of a flood, the house rises with the increasing water level, which protects it from being submerged. For the design and modeling of this house, an estimated cost of ₦250,000 (approximately $700) will be needed. The house is expected to rise when lightweight materials are incorporated in the design and the concrete dock (in the form of a hollow box) carrying the entire house in its hollow space is well designed. When flooding occurs, water fills the concrete dock and the house rises, with vertical guides preventing it from moving side to side or out of its boundary. Architectural and structural designs will be used in this project.
Keywords: amphibious building, flood, housing, design and modelling
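A back-of-the-envelope Archimedes check conveys the design logic: the dock floats once the displaced water weighs as much as the house plus dock, and the resulting draft tells how tall the dock walls must be (all dimensions below are assumptions, not the project's figures):

```python
# Archimedes check for a floating house (a sketch; all values hypothetical).
WATER_DENSITY = 1000.0  # kg/m^3

house_mass = 30_000.0        # kg, lightweight superstructure (assumed)
dock_mass = 15_000.0         # kg, hollow concrete dock (assumed)
dock_footprint = 8.0 * 10.0  # m^2, plan area of the dock

# Floating draft: depth at which displaced water balances the total weight
draft = (house_mass + dock_mass) / (WATER_DENSITY * dock_footprint)
print(f"floating draft = {draft:.2f} m")  # dock walls must be taller than this
```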