Search results for: icon recognition
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1712

1202 Machine Learning Strategies for Data Extraction from Unstructured Documents in Financial Services

Authors: Delphine Vendryes, Dushyanth Sekhar, Baojia Tong, Matthew Theisen, Chester Curme

Abstract:

Much of the data that inform the decisions of governments, corporations and individuals are harvested from unstructured documents. Data extraction is defined here as a process that turns non-machine-readable information into a machine-readable format that can be stored, for instance, in a database. In financial services, introducing more automation into data extraction pipelines is a major challenge. Information sought by financial data consumers is often buried within vast bodies of unstructured documents, which have historically required thorough manual extraction. Automated solutions provide faster access to non-machine-readable datasets, in a context where untimely information quickly becomes irrelevant. Data quality standards cannot be compromised, so automation requires high data integrity. This multifaceted task is broken down into smaller steps: ingestion, table parsing (detection and structure recognition), text analysis (entity detection and disambiguation), schema-based record extraction, and user feedback incorporation. Selected intermediary steps are phrased as machine learning problems. Solutions leveraging cutting-edge approaches from the fields of computer vision (e.g. table detection) and natural language processing (e.g. entity detection and disambiguation) are proposed.
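The staged decomposition described above can be sketched as a pipeline of small, testable functions. This is a minimal illustration only: the regex-based entity detector and the `Record` schema below are invented stand-ins for the computer-vision and NLP models the authors actually propose.

```python
import re
from dataclasses import dataclass

@dataclass
class Record:
    entity: str
    value: float

def ingest(raw):
    # Ingestion: split a raw "document" into candidate text lines.
    return [line.strip() for line in raw.splitlines() if line.strip()]

def detect_entity(line):
    # Entity detection: a toy regex in place of a learned NLP model.
    m = re.match(r"(?P<entity>[A-Za-z ]+):\s*(?P<value>[\d.]+)", line)
    return Record(m.group("entity").strip(), float(m.group("value"))) if m else None

def extract_records(raw):
    # Schema-based record extraction over all detected entities.
    return [r for r in map(detect_entity, ingest(raw)) if r is not None]

doc = "Total revenue: 120.5\nsome narrative text\nNet income: 30.2"
records = extract_records(doc)
print(records)  # two Record objects; the noise line is dropped
```

A production pipeline would swap each stand-in for the corresponding table-parsing or entity-disambiguation model.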

Keywords: computer vision, entity recognition, finance, information retrieval, machine learning, natural language processing

Procedia PDF Downloads 113
1201 Automatic Reporting System for Transcriptome Indel Identification and Annotation Based on Snapshot of Next-Generation Sequencing Reads Alignment

Authors: Shuo Mu, Guangzhi Jiang, Jinsa Chen

Abstract:

Indel analysis in RNA sequencing of clinical samples is easily affected by sequencing experiment errors and software selection. In order to improve the efficiency and accuracy of analysis, we developed an automatic reporting system for indel recognition and annotation based on image snapshots of transcriptome read alignments. This system includes sequence local-assembly and realignment, target point snapshot, and image-based recognition processes. We integrated a high-confidence indel dataset from several known databases as a training set to improve the accuracy of image processing and added a bioinformatic processing module to annotate and filter indel artifacts. The system then automatically generates a report, including data quality levels and image results. Sanger sequencing verification of the reference indel mutations of cell line NA12878 showed that the process can achieve 83% sensitivity and 96% specificity. Analysis of the collected clinical samples showed that the interpretation accuracy of the process was equivalent to that of manual inspection, while the processing efficiency showed a significant improvement. This work shows the feasibility of accurate indel analysis of clinical next-generation sequencing (NGS) transcriptomes. The result may be useful in the future for RNA studies of clinical samples with microsatellite instability in immunotherapy.
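The reported sensitivity and specificity follow the usual confusion-matrix definitions; the sketch below reproduces the arithmetic with invented counts chosen only to match the quoted 83%/96% figures, not taken from the paper.

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).

def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

tp, fn = 83, 17   # hypothetical indel calls vs. Sanger-confirmed indels
tn, fp = 96, 4    # hypothetical negative sites

print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # 83%
print(f"specificity = {specificity(tn, fp):.0%}")  # 96%
```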

Keywords: automatic reporting, indel, next-generation sequencing, NGS, transcriptome

Procedia PDF Downloads 191
1200 The Application of AI in Developing Assistive Technologies for Non-Verbal Individuals with Autism

Authors: Ferah Tesfaye Admasu

Abstract:

Autism Spectrum Disorder (ASD) often presents significant communication challenges, particularly for non-verbal individuals who struggle to express their needs and emotions effectively. Assistive technologies (AT) have emerged as vital tools in enhancing communication abilities for this population. Recent advancements in artificial intelligence (AI) hold the potential to revolutionize the design and functionality of these technologies. This study explores the application of AI in developing intelligent, adaptive, and user-centered assistive technologies for non-verbal individuals with autism. Through a review of current AI-driven tools, including speech-generating devices, predictive text systems, and emotion-recognition software, this research investigates how AI can bridge communication gaps, improve engagement, and support independence. Machine learning algorithms, natural language processing (NLP), and facial recognition technologies are examined as core components in creating more personalized and responsive communication aids. The study also discusses the challenges and ethical considerations involved in deploying AI-based AT, such as data privacy and the risk of over-reliance on technology. Findings suggest that integrating AI into assistive technologies can significantly enhance the quality of life for non-verbal individuals with autism, providing them with greater opportunities for social interaction and participation in daily activities. However, continued research and development are needed to ensure these technologies are accessible, affordable, and culturally sensitive.

Keywords: artificial intelligence, autism spectrum disorder, non-verbal communication, assistive technology, machine learning

Procedia PDF Downloads 19
1199 Ionophore-Based Materials for Selective Optical Sensing of Iron(III)

Authors: Natalia Lukasik, Ewa Wagner-Wysiecka

Abstract:

Development of selective, fast-responding, and economical sensors for the detection and determination of diverse ions is one of the most extensively studied areas due to its importance in clinical, environmental and industrial analysis. Among chemical sensors, ionophore-based optical sensors have gained vast popularity; there, the generated analytical signal is a consequence of the molecular recognition of the ion by the ionophore. The change of color occurring during host-guest interactions allows for quantitative analysis and for 'naked-eye' detection without the need for sophisticated equipment. An example of the application of such sensors is the colorimetric detection of iron(III) cations. Iron, as one of the most significant trace elements, plays roles in many biochemical processes. For these reasons, the development of reliable, fast, and selective methods of iron ion determination is highly demanded. Taking all of the above into account, a chromogenic amide derivative of 3,4-dihydroxybenzoic acid was synthesized, and its ability to recognize iron(III) was tested. To the best of the authors' knowledge (according to Chemical Abstracts), the obtained ligand has not been described in the literature so far. The catechol moiety was introduced into the ligand structure in order to mimic the action of naturally occurring siderophores, iron(III)-selective receptors. The ligand-ion interactions were studied using spectroscopic methods: UV-Vis spectrophotometry and infrared spectroscopy. The spectrophotometric measurements revealed that the amide exhibits affinity to iron(III) in dimethyl sulfoxide and fully aqueous solution, which is manifested by a change of color from yellow to green. Incorporation of the tested amide into a polymeric matrix (cellulose triacetate) ensured effective recognition of iron(III) at pH 3 with a detection limit of 1.58×10⁻⁵ M. For the obtained sensor material, parameters such as linear response range, response time, selectivity, and possibility of regeneration were determined. In order to evaluate the effect of the size of the sensing material on iron(III) detection, nanospheres (in the form of a nanoemulsion) containing the tested amide were also prepared. According to DLS (dynamic light scattering) measurements, the size of the nanospheres is 308.02 ± 0.67 nm. The working parameters of the nanospheres were determined and compared with those of the cellulose triacetate-based material. Additionally, for fast qualitative experiments, test strips were prepared by adsorption of the amide solution on a glass microfiber material. The visual limit of detection of iron(III) at pH 3 by the test strips was estimated at the level of 10⁻⁴ M. In conclusion, the amide derived from 3,4-dihydroxybenzoic acid reported here proved to be an effective candidate for optical sensing of iron(III) in fully aqueous solutions. N. L. kindly acknowledges financial support from the National Science Centre, Poland, grant no. 2017/01/X/ST4/01680. The authors thank Gdansk University of Technology for financial support under grant no. 032406.
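Detection limits like the one reported above are commonly derived from a linear calibration curve. The sketch below, with entirely invented calibration data (it is not the authors' actual calibration), shows the standard recipe: fit absorbance vs. concentration by least squares and take LOD = 3·σ(blank)/slope.

```python
# Least-squares linear fit of a toy absorbance-vs-concentration calibration.

def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

conc = [1e-5, 2e-5, 4e-5, 8e-5]        # mol/L, hypothetical standards
absorb = [0.021, 0.042, 0.079, 0.162]  # hypothetical absorbance readings

slope, intercept = linear_fit(conc, absorb)
sigma_blank = 1.0e-4                   # hypothetical std. dev. of the blank signal
lod = 3 * sigma_blank / slope          # LOD = 3 * sigma / slope
print(f"slope = {slope:.0f} L/mol, LOD = {lod:.2e} M")
```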

Keywords: ion-selective optode, iron(III) recognition, nanospheres, optical sensor

Procedia PDF Downloads 154
1198 The Staphylococcus aureus Exotoxin Recognition Using Nanobiosensor Designed by an Antibody-Attached Nanosilica Method

Authors: Hamed Ahari, Behrouz Akbari Adreghani, Vadood Razavilar, Amirali Anvar, Sima Moradi, Hourieh Shalchi

Abstract:

Considering the ever-increasing population and the industrialization of humankind's way of life, we are no longer able to detect toxins produced in food products using traditional techniques: the isolation time for food products is not cost-effective, and in most cases practical techniques such as bacterial cultivation suffer from operator errors or errors in the mixtures used. Hence, with the advent of nanotechnology, the design of selective and smart sensors is one of the great industrial advances in the quality control of food products; within a few minutes, and with very high precision, they can identify the amount and toxicity of bacteria. Methods and Materials: In this technique, a sensor based on the attachment of a bacterial antibody to nanoparticles was used. As the absorption basis for the recognition of the bacterial toxin, silica nanoparticles (Notrino) with a mean size of 10 nm, in the form of a solid powder, were utilized. The suspension produced from the agent-linked nanosilica, conjugated to the bacterial antibody, was then placed near samples of distilled water contaminated with Staphylococcus aureus toxin at a dilution of 10⁻³, so that if any toxin existed in the sample, a connection between the toxin antigen and the antibody would form. Finally, the light absorption related to the binding of the antigen to the particle-attached antibody was measured by spectrophotometry. The 23S rRNA gene, which is conserved in all Staphylococcus spp., was also used as a control. The accuracy of the test was monitored using a serial dilution (10⁻⁶) of an overnight cell culture of Staphylococcus spp. (OD600: 0.02 ≈ 10⁷ cells), showing that the sensitivity of PCR is 10 bacteria per mL within a few hours. Results: The results indicate that the sensor detects down to a 10⁻⁴ dilution. Additionally, the sensitivity of the sensor was examined over 60 days: it gave confirmatory results through day 56, and its sensitivity decreased thereafter. Conclusions: The advantages of the practical nanobiosensor over conventional methods, such as culture and molecular techniques (e.g., polymerase chain reaction), are its accuracy, sensitivity, and specificity. It also reduces the detection time from hours to about 30 minutes.
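The control arithmetic quoted above (a 10⁻⁶ serial dilution of an overnight culture at OD600 0.02, taken as roughly 10⁷ cells/mL) reduces to a one-line calculation that recovers the stated PCR sensitivity of about 10 bacteria per mL. The figures follow the abstract; the code is only the arithmetic.

```python
stock_cells_per_ml = 1e7   # OD600 0.02 taken as ~10^7 cells/mL, per the abstract
dilution_factor = 1e-6     # serial dilution used for the accuracy check

cells_per_ml = stock_cells_per_ml * dilution_factor
print(f"{cells_per_ml:.0f} cells/mL")  # ~10 cells/mL, the reported PCR sensitivity
```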

Keywords: exotoxin, nanobiosensor, recognition, Staphylococcus aureus

Procedia PDF Downloads 385
1197 [Keynote Talk]: sEMG Interface Design for Locomotion Identification

Authors: Rohit Gupta, Ravinder Agarwal

Abstract:

Surface electromyographic (sEMG) signals have the potential to identify human activities and intention. This potential is further exploited to control artificial limbs using sEMG signals from the residual limbs of amputees. The paper deals with the development of a multichannel, cost-efficient sEMG signal interface for research applications, along with an evaluation of the proposed class-dependent statistical approach to feature selection. The sEMG signal acquisition interface was developed using the ADS1298 from Texas Instruments, a front-end interface integrated circuit for ECG applications. The sEMG signal was recorded from two lower limb muscles for three locomotion modes, namely plane walk (PW), stair ascending (SA), and stair descending (SD). A class-dependent statistical approach is proposed for feature selection, and its performance is compared with 12 pre-existing feature vectors. To make the study more extensive, the performance of five different types of classifiers is compared. The outcome of the current work demonstrates the suitability of the proposed feature selection algorithm for locomotion recognition, as compared to the other existing feature vectors. The SVM classifier outperformed the other compared classifiers, with an average recognition accuracy of 97.40%. Feature vector selection emerges as the most dominant factor affecting classification performance, as it accounts for 51.51% of the total variance in classification accuracy. The results demonstrate the potential of the developed sEMG signal acquisition interface along with the proposed feature selection algorithm.
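For readers unfamiliar with sEMG feature vectors, the sketch below computes three statistical features that are standard in the sEMG literature (mean absolute value, root mean square, and zero-crossing count) over one analysis window. The signal is synthetic, and these particular features are illustrative; they are not claimed to be the paper's selected feature set.

```python
import math

def semg_features(window):
    # Standard per-window sEMG statistics over a raw (signed) signal.
    n = len(window)
    mav = sum(abs(x) for x in window) / n                      # mean absolute value
    rms = math.sqrt(sum(x * x for x in window) / n)            # root mean square
    zc = sum(1 for a, b in zip(window, window[1:]) if a * b < 0)  # zero crossings
    return {"MAV": mav, "RMS": rms, "ZC": zc}

# A synthetic window standing in for one segment of a recorded sEMG channel.
signal = [0.1, -0.2, 0.3, -0.1, 0.4, -0.3, 0.2, -0.2]
features = semg_features(signal)
print(features)
```

A feature-selection stage would then rank such features per class before handing them to a classifier such as an SVM.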

Keywords: classifiers, feature selection, locomotion, sEMG

Procedia PDF Downloads 293
1196 Like a Bridge over Troubled Waters: The Value of Joint Learning Programs in Intergroup Identity-Based Conflict in Israel

Authors: Rachelly Ashwall, Ephraim Tabory

Abstract:

In an attempt to reduce the level of a major identity-based conflict in Israel between Ultra-orthodox and secular Jews, several initiatives in recent years have tried to bring members of the two societies together in facilitated joint discussion forums. Our study analyzes the impact of two types of such programs: joint mediation training classes and confrontation-based learning programs designed to facilitate discussions of controversial issues. These issues include claims about an unequal shouldering of national obligations such as military service, laws requiring public observance of the Sabbath, and discrimination against women, among others. The study examines the factors that enabled the two groups to reduce their social distance, increase their understanding of each other, and develop recognition and tolerance of the other group's particular social identity. The research, conducted over the course of two years, involved observations of the groups' activities, interviews with the participants, and analysis of the social media used by the groups. The findings demonstrate the progression from a mutual initial lack of knowledge about the habits, norms, and attitudes of the out-group to an increasing desire to know, understand and more readily accept the identity of a previously rejected outsider. Participants manifested more respect, concern and even affection for those whose identity had initially led them to reject them out of hand. We discuss the implications for seemingly intractable identity-based conflict in fragile societies.

Keywords: identity-based conflict, intergroup relations, joint mediation learning, out-group recognition, social identity

Procedia PDF Downloads 252
1195 Issues of Accounting of Lease and Revenue according to International Financial Reporting Standards

Authors: Nadezhda Kvatashidze, Elena Kharabadze

Abstract:

It is broadly known that leasing is a flexible means of funding enterprises. Leasing reduces the risks related to the access and possession of assets, as well as to the obtainment of funding. Therefore, it is important to refine lease accounting. The lease accounting regulations under the previously applicable standard (International Accounting Standard 17) made concealment of liabilities possible. As a result, information users received inaccurate and incomplete information and had to resort to an additional assessment of the off-balance-sheet lease liabilities. In order to address the problem, the International Accounting Standards Board decided to change the approach to lease accounting. With the deficiencies of the applicable standard taken into account, the new standard (IFRS 16 'Leases') aims at supplying appropriate and fair lease-related information to the users. Save for certain exclusions, the lessee is obliged to recognize all lease agreements in its financial statements. This approach was determined by the fact that, under a lease agreement, rights and obligations arise in the form of assets and liabilities. Immediately upon conclusion of the lease agreement, the lessee takes the asset into its disposal and assumes the obligation to make the lease payments, thereby meeting the recognition criteria defined by the Conceptual Framework for Financial Reporting. The payments are to be entered into the financial statements. The new lease accounting standard secures the supply of quality, comparable information to financial information users. The International Accounting Standards Board and the US Financial Accounting Standards Board jointly developed IFRS 15 'Revenue from Contracts with Customers'. The standard establishes detailed practical criteria for revenue recognition, such as identification of the performance obligations in the contract, determination of the transaction price and its components (especially variable consideration), and the passage of control over the asset to the customer. IFRS 15 is very similar to the relevant US standards and includes requirements that are more specific and consistent than those of the standards previously in place. The new standard will change recognition terms and techniques in industries such as construction, telecommunications (mobile and cable networks), licensing (media, science, franchising), real property, software, etc.
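The recognition principle described above, a lease liability measured at the present value of the remaining lease payments (with a matching right-of-use asset), can be illustrated numerically. The payment schedule and discount rate below are invented for illustration; IFRS 16 prescribes the principle, not these numbers.

```python
def lease_liability(payments, rate):
    # Present value of year-end payments, discounted at the lease's rate.
    return sum(p / (1 + rate) ** t for t, p in enumerate(payments, start=1))

annual_payment = 10_000.0   # hypothetical fixed annual lease payment
years = 5                   # hypothetical lease term
rate = 0.06                 # hypothetical discount rate implicit in the lease

liability = lease_liability([annual_payment] * years, rate)
print(f"Lease liability at commencement: {liability:,.2f}")
```

Under the former operating-lease treatment this obligation could stay off the balance sheet; under IFRS 16 the present value is recognized on day one.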

Keywords: assessment of the lease assets and liabilities, contractual liability, division of contract, identification of contracts, contract price, lease identification, lease liabilities, off-balance sheet, transaction value

Procedia PDF Downloads 320
1194 An Efficient Aptamer-Based Biosensor Developed via Irreversible Pi-Pi Functionalisation of Graphene/Zinc Oxide Nanocomposite

Authors: Sze Shin Low, Michelle T. T. Tan, Poi Sim Khiew, Hwei-San Loh

Abstract:

An efficient graphene/zinc oxide (PSE-G/ZnO) platform based on pi-pi stacking, non-covalent interactions for the development of an aptamer-based biosensor is presented in this study. As a proof of concept, the DNA recognition capability of the as-developed PSE-G/ZnO-enhanced aptamer-based biosensor was evaluated using Coconut Cadang-cadang viroid disease (CCCVd). The G/ZnO nanocomposite was synthesised via a simple, green and efficient approach: the pristine graphene was produced through a single-step exfoliation of graphite in a sonochemical alcohol-water treatment, while zinc nitrate hexahydrate was mixed with the graphene and subjected to low-temperature hydrothermal growth. The facile, environmentally friendly method provides a safer synthesis procedure by eliminating the need for harsh reducing chemicals and high temperatures. The as-prepared nanocomposite was characterised by X-ray diffractometry (XRD), scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS) to evaluate its crystallinity, morphology and purity. Electrochemical impedance spectroscopy (EIS) was employed for the detection of the CCCVd sequence with the use of potassium ferricyanide (K3[Fe(CN)6]). Recognition of the RNA analytes was achieved via the significant increase in resistivity for double-stranded DNA, as compared to single-stranded DNA. The PSE-G/ZnO-enhanced aptamer-based biosensor exhibited higher sensitivity than the bare biosensor, attributed to the synergistic effect of the high electrical conductivity of graphene and the good electroactive properties of ZnO.
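The EIS readout relies on hybridization raising the charge-transfer resistance at the electrode surface. As a rough illustration of that principle (not the authors' measurement), a simplified Randles equivalent circuit, a solution resistance in series with R_ct in parallel with a double-layer capacitance, shows how a larger R_ct raises the low-frequency impedance. All component values below are invented.

```python
import cmath

def randles_impedance(f_hz, r_s, r_ct, c_dl):
    # Z = R_s + (R_ct || Z_Cdl), the simplest Randles equivalent circuit.
    omega = 2 * cmath.pi * f_hz
    z_c = 1 / (1j * omega * c_dl)
    return r_s + (r_ct * z_c) / (r_ct + z_c)

f = 1.0  # Hz, a low frequency where R_ct dominates the spectrum
z_ss = randles_impedance(f, r_s=100, r_ct=2_000, c_dl=1e-6)  # ssDNA probe (toy)
z_ds = randles_impedance(f, r_s=100, r_ct=8_000, c_dl=1e-6)  # after hybridization (toy)
print(abs(z_ds) > abs(z_ss))  # True: dsDNA gives the larger impedance
```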

Keywords: aptamer-based biosensor, graphene/zinc oxide nanocomposite, green synthesis, screen printed carbon electrode

Procedia PDF Downloads 370
1193 Image Recognition Performance Benchmarking for Edge Computing Using Small Visual Processing Unit

Authors: Kasidis Chomrat, Nopasit Chakpitak, Anukul Tamprasirt, Annop Thananchana

Abstract:

The Internet of Things (IoT) and edge computing have become two of the most discussed innovations for their potential to improve and disrupt traditional business and industry alike. New challenges such as the COVID-19 pandemic have endangered workforces and business processes, and the pandemic's aftermath of a drastically changed business landscape is compounded by the looming threats of a global energy crisis, global warming, and increasingly heated global politics that risk becoming a new Cold War. In this context, emerging technologies such as edge computing and purpose-designed visual processing units present great opportunities for business. The literature reviewed covers how the Internet of Things and this disruptive wave will affect current business, how businesses will need to adapt to changes in the market and the world, and how benchmark testing of newer consumer devices, such as IoT devices equipped with edge computing hardware, can increase efficiency and reduce the risks posed by current and looming crises. Throughout the paper, we explain the technologies that underpin the present landscape and why they will change traditional practice, with brief introductions to cloud computing, edge computing, and the Internet of Things, and how they will lead into the future.

Keywords: internet of things, edge computing, machine learning, pattern recognition, image classification

Procedia PDF Downloads 155
1192 Statistical Feature Extraction Method for Wood Species Recognition System

Authors: Mohd Iz'aan Paiz Bin Zamri, Anis Salwa Mohd Khairuddin, Norrima Mokhtar, Rubiyah Yusof

Abstract:

Effective statistical feature extraction and classification are important in image-based automatic inspection and analysis. An automatic wood species recognition system is designed to perform wood inspection at customs checkpoints to avoid mislabeling of timber, which results in loss of income to the timber industry. The system focuses on analyzing the statistical properties of pores in wood images. This paper proposes a fuzzy-based feature extractor which mimics the experts' knowledge of wood texture to extract the properties of the pore distribution from the wood surface texture. The proposed feature extractor consists of two steps, namely pore extraction and fuzzy pore management. The total number of statistical features extracted from each wood image is 38. A backpropagation neural network is then used to classify the wood species based on the statistical features. A comprehensive set of experiments on a database composed of 5200 macroscopic images from 52 tropical wood species was used to evaluate the performance of the proposed feature extractor. The advantage of the proposed feature extraction technique is that it mimics the experts' interpretation of wood texture, which allows human involvement when analyzing the wood texture. Experimental results show the efficiency of the proposed method.
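The "fuzzy pore management" idea can be sketched with triangular membership functions that grade each detected pore diameter as small, medium or large, mimicking an expert's soft categorization. The breakpoints below are invented; the paper's actual membership design may differ.

```python
def triangular(x, a, b, c):
    # Membership rises linearly a->b, falls b->c, zero outside [a, c].
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def pore_memberships(diameter_um):
    # Hypothetical size categories for pore diameters, in micrometres.
    return {
        "small": triangular(diameter_um, 0, 50, 120),
        "medium": triangular(diameter_um, 80, 150, 220),
        "large": triangular(diameter_um, 180, 300, 400),
    }

print(pore_memberships(100))  # partly "small", partly "medium", not "large"
```

Aggregating such memberships over all detected pores yields soft distribution statistics that can feed a neural-network classifier.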

Keywords: classification, feature extraction, fuzzy, inspection system, image analysis, macroscopic images

Procedia PDF Downloads 426
1191 Supernatural Beliefs Impact Pattern Perception

Authors: Silvia Boschetti, Jakub Binter, Robin Kopecký, Lenka PříPlatová, Jaroslav Flegr

Abstract:

A strict dichotomy once existed between religion and science, but recently cognitive science has focused on the impact of supernatural beliefs on cognitive processes such as pattern recognition. It has been hypothesized that cognitive and perceptual processes have been under evolutionary pressures that amplified the perception of patterns, especially under stressful and harsh conditions. Pattern detection in religious and non-religious individuals after the induction of a negative, anxious mood constitutes a cornerstone of the general role of anxiety as a cognitive bias, leading towards or against the by-product hypothesis, one of the main theories in the evolutionary study of religion. Apophenia (the tendency to perceive connections and meaning in unrelated events) and the perception of visual patterns (pareidolia) are of utmost interest. To capture the impact of culture and upbringing, a comparative study of two European countries, the Czech Republic (low organized-religion participation, high esoteric belief) and Italy (high organized-religion participation, low esoteric belief), is currently in the data collection phase. Outcomes will be presented at the conference. A battery of standardized questionnaires is followed by pattern recognition tasks (the patterns involve color and shape and are of artificial and natural origin), using an experimental method involving controlled, laboratory-induced stress. We hypothesize that we will find a difference between organized religious belief and personal (esoteric) belief that will be alike in both cultural environments.

Keywords: culture, esoteric belief, pattern perception, religiosity

Procedia PDF Downloads 187
1190 Map UI Design of IoT Application Based on Passenger Evacuation Behaviors in Underground Station

Authors: Meng-Cong Zheng

Abstract:

When a public space is in an emergency, quickly establishing spatial cognition and reaching emergency shelter in a closed underground space is an urgent task. This study takes Taipei Station as the research base and aims to apply an Internet of Things (IoT) application to underground evacuation mobility design. The first experiment identified passengers' evacuation behaviors and spatial cognition in underground spaces through wayfinding tasks and thinking aloud, then defined the design conditions of the User Interface (UI) and proposed the UI design. The second experiment evaluated the UI design based on passengers' evacuation behaviors through wayfinding tasks and thinking aloud, as in the first experiment. The first experiment found that the design condition the subjects were most concerned about was the "map": they hoped to learn their position relative to other landmarks from the map and to view the overall route. "Position" needs to be labeled accurately to determine one's location in the underground space. Each step of the escape instructions should be presented clearly in the "navigation bar", and the "message bar" should announce the next or final target exit. In the second experiment with the UI design, we found that the "spatial map", distinguishing between walking and non-walking areas with shades of color, is useful. The addition of 2.5D maps to the UI design increased the user's perception of the space. Amending the color of the corner diagram in the "escape route" also reduced confusion between the symbol and other diagrams. The larger volumes of toilets and elevators can serve as cues for judging users' relative location among the "hardware facilities". The fire extinguisher icon should be highlighted. "Fire point tips" indicating fire with a graphical fireball can convey precise information to the evacuee. However, "compass and return to present location" functions were less used in the underground space.

Keywords: evacuation behaviors, IoT application, map UI design, underground station

Procedia PDF Downloads 207
1189 Just Not Seeing It: Exploring the Relationship between Inattention Blindness and Banner Blindness

Authors: Carie Cunningham, Krsiten Lynch

Abstract:

Despite a viewer's belief that they are paying attention, they often miss much of their surroundings, a phenomenon referred to as inattentional blindness. Inattentional blindness refers to the failure of an individual to orient their attention to a particular item in their visual field; it is well defined in the psychology literature. A similar phenomenon has been evaluated in advertising, where failing to comprehend or remember items in one's field of vision is known as banner blindness. Banner blindness occurs when individuals habitually see banners in specific areas of a webpage and condition themselves to ignore those habitual areas. Another reason individuals avoid these habitual areas (usually the top or sides of a webpage) is a lack of personal relevance or pertinent information. Banner blindness, while a web-based concept, may relate to inattentional blindness. This paper proposes an analysis of the true similarities and differences between these concepts, bridging the two lines of thinking. Forty participants took part in an eye-tracking and post-survey experiment testing attention and memory measures in both a banner-blindness and an inattentional-blindness condition. The two conditions were conducted between subjects in a semi-randomized order. Half of the participants were first told to search through the content ignoring the advertising banners; the other half were first told to search through the content ignoring a distractor icon. These groups were switched after five trials, and then five more trials were completed. A review of the literature found that sustainability communication has many inconsistencies between message production and viewer awareness. For the purpose of this study, advertising materials were used as stimuli. Results suggest that there are gaps between the two concepts and that more research should be done testing these effects in a real-world setting versus an online environment. This contributes to theory by exploring the overlapping concepts of inattentional blindness and banner blindness, and provides the advertising industry with support that viewers can still ignore items in their field of view, even if not consciously, which will impact message development.

Keywords: attention, banner blindness, eye movement, inattention blindness

Procedia PDF Downloads 275
1188 Omni-Modeler: Dynamic Learning for Pedestrian Redetection

Authors: Michael Karnes, Alper Yilmaz

Abstract:

This paper presents the application of the Omni-Modeler to pedestrian redetection. The pedestrian redetection task creates several challenges for deep neural networks (DNNs) due to the variety of pedestrian appearances across camera positions, the variety of environmental conditions, and the specificity required to recognize one pedestrian from another. DNNs require significant training sets and are not easily adapted to changes in class appearances or changes in the set of classes held in their knowledge domain. Pedestrian redetection requires an algorithm that can actively manage its knowledge domain as individuals move in and out of the scene, as well as learn individual appearances from a few frames of a video. The Omni-Modeler is a dynamically learning few-shot visual recognition algorithm developed for tasks with limited training data availability. The Omni-Modeler adapts the knowledge domain of pre-trained deep neural networks to novel concepts with a calculated localized language encoder. The Omni-Modeler knowledge domain is generated by creating a dynamic dictionary of concept definitions, which is directly updatable as new information becomes available. Query images are identified through nearest-neighbor comparison to the learned object definitions. The study presented in this paper evaluates the algorithm's performance in re-identifying individuals as they move through a scene in both single-camera and multi-camera tracking applications. The results demonstrate that the Omni-Modeler shows potential for cross-camera pedestrian redetection and is highly effective for single-camera redetection, with 93% accuracy across 30 individuals using 64 example images per individual.
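The query step described above, nearest-neighbor comparison of a query embedding against a dynamic dictionary of concept definitions, can be sketched minimally. The real Omni-Modeler encoder is replaced here by hand-made toy vectors; names and values are invented.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Dynamic dictionary: entries can be added or replaced as pedestrians enter the scene.
definitions = {
    "pedestrian_A": [0.9, 0.1, 0.0],
    "pedestrian_B": [0.1, 0.8, 0.3],
}

def identify(query):
    # Label a query embedding by its nearest (most similar) stored definition.
    return max(definitions, key=lambda name: cosine(definitions[name], query))

definitions["pedestrian_C"] = [0.0, 0.2, 0.9]  # knowledge domain updated on the fly
print(identify([0.05, 0.25, 0.85]))  # closest to pedestrian_C
```

Deleting a dictionary entry likewise removes an individual from the knowledge domain without any retraining.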

Keywords: dynamic learning, few-shot learning, pedestrian redetection, visual recognition

Procedia PDF Downloads 76
1187 Recognition of Arrest Patients and Application of Basic Life Support by Bystanders in the Field

Authors: Behcet Al, Mehmet Murat Oktay, Suat Zengin, Mustafa Sabak, Cuma Yildirim

Abstract:

Objective: The recognition of arrest patients and the application of basic life support (BLS) by bystanders in the field, along with the activation of emergency services, were evaluated in the present study. Methodology: The study was carried out prospectively by the Emergency Department of the Medicine Faculty of Gaziantep University at 33 emergency health centers in Gaziantep between December 2012 and April 2014. Of 539 arrested patients, 171 were included in the study. Results: Of the 171 patients, 118 (69%) were male and 53 (31%) were female. Before arrest, 32.2% of patients had syncope and 24% had shortness of breath. The majority of arrests occurred at home (61.4%) and in rural areas (11.7%). Calls for help were made by family members in 48.5% of cases, and only 15.2% of calls occurred within the first minute of arrest. BLS was applied by bystanders in 22.2% of cases; 47.4% of bystanders had previously taken a BLS course. The emergency service reached the field in a mean of 8.43 min. On first evaluation by emergency staff, 55% (n=94) of cases were assessed as exitus. The most frequently recorded rhythm was asystole (73.1%). BLS and advanced life support (ALS) were applied in the field in 98.8% and 60% of cases, respectively. Of the cases, 10.5% (n=18) were defibrillated and 45 (26.3%) were endotracheally intubated. The majority (48.5%) of staff who applied BLS and ALS in the field were emergency medical technicians. CPR was performed on 86.5% (n=148) of cases in the ambulance during transport. The mean arrival time to the emergency department (ED) was 9.13 min; on arrival, 15.2% of patients needed defibrillation. In the ED, 91.2% (n=156) of patients died, and 15 (8.8%) were discharged (nine with full recovery, six with impairment). Conclusion: The rate of intervention for arrest patients by bystanders is still low. To obtain a high percentage of survival, BLS training should be extended among the public, especially among caregivers.

Keywords: arrest patients, cardiopulmonary resuscitation, bystanders, chest compressions, prehospital

Procedia PDF Downloads 389
1186 Improvement of Microscopic Detection of Acid-Fast Bacilli for Tuberculosis by Artificial Intelligence-Assisted Microscopic Platform and Medical Image Recognition System

Authors: Hsiao-Chuan Huang, King-Lung Kuo, Mei-Hsin Lo, Hsiao-Yun Chou, Yusen Lin

Abstract:

The most robust and economical method for the laboratory diagnosis of TB is to identify acid-fast bacilli (AFB) under acid-fast staining, despite the method's low sensitivity and labor intensity. Although digital pathology is becoming popular in medicine, an automated microscopic system for microbiology is still not available. A new AI-assisted automated microscopic system, consisting of a microscopic scanner and a recognition program powered by big data and deep learning, may significantly increase the sensitivity of TB smear microscopy. The objective was therefore to evaluate such an automated system for the identification of AFB. A total of 5,930 smears were enrolled for this study. An intelligent microscope system (TB-Scan, Wellgen Medical, Taiwan) was used for microscopic image scanning and AFB detection, and 272 AFB smears were used for transfer learning to increase accuracy. Referee medical technicians served as the gold standard in cases of result discrepancy. On a test set of 1,726 smears, the automated system's accuracy, sensitivity, and specificity were 95.6% (1,650/1,726), 87.7% (57/65), and 95.9% (1,593/1,661), respectively. Compared to culture, the sensitivity of human technicians was only 33.8% (38/142), whereas the automated system achieved 74.6% (106/142), significantly higher; this is the first such automated microscope system for TB smear testing in a controlled trial. This automated system could achieve higher TB smear sensitivity and laboratory efficiency and may complement molecular methods (e.g., GeneXpert) to reduce the total cost of TB control. Furthermore, such an automated system is accessible remotely over the internet and can be deployed in areas with limited medical resources.
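The reported accuracy, sensitivity, and specificity follow from the standard confusion-matrix definitions. As a sanity check, the counts below are reconstructed from the ratios in the abstract (57/65 positives detected, 1,593/1,661 negatives correct); the function name is ours, not the paper's.

```python
def screening_metrics(tp, fn, tn, fp):
    """Standard confusion-matrix metrics for a binary screening test."""
    total = tp + fn + tn + fp
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
    }

# Counts reconstructed from the abstract's ratios:
# 1,650/1,726 correct overall, 57/65 positives, 1,593/1,661 negatives.
m = screening_metrics(tp=57, fn=8, tn=1593, fp=68)
```

Evaluating this reproduces the paper's 95.6% / 87.7% / 95.9% figures.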

Keywords: TB smears, automated microscope, artificial intelligence, medical imaging

Procedia PDF Downloads 229
1185 The Significance of Islamic Concept of Good Faith to Cure Flaws in Public International Law

Authors: M. A. H. Barry

Abstract:

The concepts of good faith (husn al-niyyah) and fair dealing (Nadl) are fundamental guiding elements in all contracts and other agreements under Islamic law. The teachings of the Quran and of Prophet Muhammad (Peace Be upon Him) firmly command people to act in good faith in all dealings; several Quranic verses and Prophetic sayings stress the significance of dealing honestly and fairly in all transactions. Under English law, good faith is not considered a fundamental requirement for the formation of a legal contract. However, the concept of good faith in private contracts is recognized by the civil law system and in Article 7(1) of the Convention on Contracts for the International Sale of Goods (CISG, Vienna Convention, 1980). It took several centuries for the international trading community to recognize the significance of good faith for international sale of goods transactions. Nevertheless, the recognition of good faith in civil law is confined to commercial contracts. Subsequent to the CISG, the concept has made inroads into private international law. There are submissions in favour of applying the good faith concept to public international law, based on tacit recognition by international conventions and international tribunals. However, under public international law the concept of good faith is not recognized as a source of rights or obligations. This weakens the spirit of the concept, particularly when determining international disputes, and creates a fundamental flaw, because the absence of good faith application means that breaches tainted by bad faith are tolerated. The objective of this research is to evaluate, examine, and analyze the application of the concept of good faith in modern laws, and to identify its limitations in comparison with the Islamic concept of good faith.
This paper also identifies the problems and issues connected with the non-application of this concept to public international law. The research consists of three key components: (1) the preliminary inquiry, (2) subject analysis and discovery of research results, and (3) examination of the challenging problems, concluding with proposals. The preliminary inquiry is based on both primary and secondary sources, and the same sources are used for the subject analysis; the research has both inductive and deductive features. The Islamic concept of good faith covers all situations and circumstances where bad faith causes unfairness to the affected parties, especially the weak. Under Islamic law, the concept of good faith is a source of rights and obligations, as Islam prohibits any person from committing wrongful or delinquent acts in any dealing, whether in private or public life. This rule applies not only to individuals but also to institutions, states, and international organizations. This paper explains how unfairness is caused by the non-recognition of the good faith concept as a source of rights or obligations under public international law, and provides legal and non-legal reasons to show why the Islamic formulation is important.

Keywords: good faith, the civil law system, the Islamic concept, public international law

Procedia PDF Downloads 148
1184 Protective Effect of the Histamine H3 Receptor Antagonist DL77 in Behavioral Cognitive Deficits Associated with Schizophrenia

Authors: B. Sadek, N. Khan, D. Łażewska, K. Kieć-Kononowicz

Abstract:

The effects of the non-imidazole histamine H3 receptor (H3R) antagonist DL77 in the passive avoidance paradigm (PAP) and the novel object recognition (NOR) task were investigated in MK801-induced cognitive deficits associated with schizophrenia (CDS) in adult male rats, applying donepezil (DOZ) as a reference drug. The results show that acute systemic administration of DL77 (2.5, 5, and 10 mg/kg, i.p.) significantly improved MK801-induced (0.1 mg/kg, i.p.) memory deficits in PAP. The ameliorating activity of DL77 (5 mg/kg, i.p.) against MK801-induced deficits was partly reversed when rats were pretreated with the centrally acting H2R antagonist zolantidine (ZOL, 10 mg/kg, i.p.) or with the antimuscarinic antagonist scopolamine (SCO, 0.1 mg/kg, i.p.), but not with the CNS-penetrant H1R antagonist pyrilamine (PYR, 10 mg/kg, i.p.). Moreover, the memory-enhancing effect of DL77 (5 mg/kg, i.p.) against MK801-induced memory deficits in PAP was strongly reversed when rats were pretreated with a combination of ZOL (10 mg/kg, i.p.) and SCO (1.0 mg/kg, i.p.). Furthermore, the significant ameliorative effect of DL77 (5 mg/kg, i.p.) on MK801-induced long-term memory (LTM) impairment in the NOR test was comparable to the memory-enhancing effect provided by DOZ, and was abrogated when animals were pretreated with the histamine H3R agonist R-(α)-methylhistamine (RAMH, 10 mg/kg, i.p.). However, DL77 (5 mg/kg, i.p.) failed to provide a procognitive effect on MK801-induced short-term memory (STM) impairment in the NOR test. In addition, DL77 (5 mg/kg) did not alter the anxiety levels or locomotor activity of animals naive to the elevated plus maze (EPM), demonstrating that the improved performances with DL77 (5 mg/kg) in PAP or NOR are unrelated to changes in emotional responding or spontaneous locomotor activity. These results provide evidence for the potential of H3Rs in the treatment of neurodegenerative disorders related to impaired memory function, e.g., CDS.

Keywords: histamine H3 receptor, antagonist, learning, memory impairment, passive avoidance paradigm, novel object recognition

Procedia PDF Downloads 203
1183 The Application of a Neural Network in the Reworking of Accu-Chek to Wrist Bands to Monitor Blood Glucose in the Human Body

Authors: J. K Adedeji, O. H Olowomofe, C. O Alo, S.T Ijatuyi

Abstract:

The issue of high blood sugar levels, which may end in diabetes mellitus, is becoming a rampant cardiovascular disorder in our community. In recent times, a lack of awareness among most people makes this disease a silent killer. The situation calls for urgency, hence the need to design a device that serves as a monitoring tool, such as a wristwatch, to alert those living with high blood glucose to danger ahead of time, as well as to introduce a mechanism for checks and balances. The neural network architecture assumed an 8-15-10 configuration, with eight neurons at the input stage (including a bias), 15 neurons at the hidden layer for processing, and 10 neurons at the output stage indicating likely symptom cases. The inputs are formed using the exclusive OR (XOR), with the expectation of getting an XOR output as the threshold value for diabetic symptom cases. The neural algorithm is coded in Java with 1,000 epoch runs to reduce the error to the barest minimum. The internal circuitry of the device comprises compatible hardware matching the nature of each input neuron. Light-emitting diodes (LEDs) of red, green, and yellow are used as the network's output to indicate pattern recognition for severe cases, pre-hypertensive cases, and normal cases without traces of diabetes mellitus. The research concluded that a neural network is a more efficient Accu-Chek design tool for the proper monitoring of high glucose levels than conventional blood-testing methods.
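The abstract specifies the network only loosely (an 8-15-10 topology with XOR-style thresholding, coded in Java). As a minimal, hedged illustration of the underlying mechanism, weighted sums passed through a threshold across layers, here is a hand-wired 2-2-1 threshold network that computes XOR; the weights and structure are ours for the sketch, not the paper's.

```python
def step(x):
    """Hard-threshold activation."""
    return 1 if x >= 0 else 0

def neuron(inputs, weights, bias):
    """Weighted sum plus bias, passed through the threshold."""
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

def xor_net(x1, x2):
    """2-2-1 feedforward net computing XOR: one hidden unit fires
    for OR, the other for AND, and the output fires for
    OR-and-not-AND, which is exactly XOR."""
    h1 = neuron([x1, x2], [1, 1], -0.5)    # OR
    h2 = neuron([x1, x2], [1, 1], -1.5)    # AND
    return neuron([h1, h2], [1, -1], -0.5)  # h1 AND NOT h2
```

XOR is the classic example of a pattern that a single layer cannot separate, which is why the hidden layer in the 8-15-10 design matters.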

Keywords: Accu-Check, diabetes, neural network, pattern recognition

Procedia PDF Downloads 147
1182 3D Human Face Reconstruction in Unstable Conditions

Authors: Xiaoyuan Suo

Abstract:

3D object reconstruction is a broad research area within the computer vision field, involving many stages and still-open problems. One of the existing challenges lies with micromotion, such as facial expressions on the appearance of a human or animal face. Similar literature in this field focuses on 3D reconstruction under stable conditions, such as an existing image or photos taken in a rather static environment, while the purpose of this work is to discuss a flexible scan system using multiple cameras that can correctly reconstruct 3D stable and moving objects, in particular the human face with expressions. Further, a mathematical model is proposed at the end of this paper to automate the 3D object reconstruction process. The reconstruction process takes several stages. First, a set of simple 2D lines is projected onto the object, yielding a set of uneven curvy lines that represent the 3D numerical data of the surface. The lines and their shapes help to identify the object's 3D construction in pixels. With the two recorded angles and their distance from the camera, a simple mathematical calculation gives the resulting coordinate of each projected line point in an absolute 3D space. This research will benefit many practical areas, including but not limited to biometric identification, authentication, cybersecurity, preservation of cultural heritage, drama acting (especially with rapid and complex facial gestures), and many others. Specifically, this work will (I) provide a brief survey of comparable techniques existing in this field; (II) discuss a set of specialized methodologies or algorithms for effective reconstruction of 3D objects; (III) implement and test the developed methodologies; (IV) verify findings with data collected from experiments; and (V) conclude with lessons learned and final thoughts.
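The "simple mathematical calculation" mentioned above is essentially triangulation. Under assumed conventions that are ours, not necessarily the paper's (two cameras on a common baseline, each recording the viewing angle to the same projected-line point, measured from the baseline), a sketch might look like:

```python
import math

def triangulate(baseline, alpha, beta):
    """Locate a projected-line point in the camera plane from the two
    recorded viewing angles (radians): alpha at camera A, beta at
    camera B, each measured from the baseline joining the cameras.
    Cameras sit at (0, 0) and (baseline, 0)."""
    ta, tb = math.tan(alpha), math.tan(beta)
    depth = baseline * ta * tb / (ta + tb)  # perpendicular distance from baseline
    x = depth / ta                          # offset along the baseline from camera A
    return x, depth
```

For example, with a 2-unit baseline and both angles at 45 degrees, the point lies at (1, 1); repeating this for every sampled point along each projected curvy line yields the surface coordinates in 3D.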

Keywords: 3D photogrammetry, 3D object reconstruction, facial expression recognition, facial recognition

Procedia PDF Downloads 150
1181 Faster Pedestrian Recognition Using Deformable Part Models

Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia

Abstract:

Deformable part models (DPMs) achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation; this approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location; the range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach, there is no need to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and a twofold increase in speed. Furthermore, to reduce misdetections of small pedestrians near the horizon, input images are supersampled near the horizon; supersampling at 1.5 times the original scale yields an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best-performing DPM-based method. By allowing a small loss in precision, computation time can be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
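The frequency-domain convolution mentioned above exploits the convolution theorem: transforming both signals, multiplying their spectra pointwise, and inverse-transforming replaces the sliding dot product. The toy below illustrates the idea in one dimension with a pure-Python radix-2 FFT; a real DPM implementation would convolve 2D HOG feature maps with part filters using an optimized FFT library, which this sketch does not attempt.

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two.
    The inverse variant omits the 1/N scaling (applied by the caller)."""
    n = len(a)
    if n == 1:
        return list(a)
    sign = 1 if invert else -1
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def convolve_fft(f, g):
    """Linear convolution via the convolution theorem:
    zero-pad, FFT both signals, multiply pointwise, inverse-FFT."""
    n = 1
    while n < len(f) + len(g) - 1:
        n *= 2  # pad to a power of two for the radix-2 FFT
    F = fft(list(f) + [0.0] * (n - len(f)))
    G = fft(list(g) + [0.0] * (n - len(g)))
    H = [x * y for x, y in zip(F, G)]
    h = fft(H, invert=True)
    return [round((v / n).real, 10) for v in h[:len(f) + len(g) - 1]]
```

For long filters, the O(n log n) transform beats the O(n·m) sliding product, which is the source of the order-of-magnitude speedup the abstract reports.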

Keywords: autonomous vehicles, deformable part model, dpm, pedestrian detection, real time

Procedia PDF Downloads 281
1180 Non-Invasive Data Extraction from Machine Display Units Using Video Analytics

Authors: Ravneet Kaur, Joydeep Acharya, Sudhanshu Gaur

Abstract:

Artificial Intelligence (AI) has the potential to transform manufacturing by improving shop floor processes such as production, maintenance, and quality. However, industrial datasets are notoriously difficult to extract in a real-time, streaming fashion, thus negating potential AI benefits. A prime example is specialized industrial controllers operated by custom software, which complicates the process of connecting them to an Information Technology (IT) based data acquisition network. Security concerns may also limit direct physical access to these controllers for data acquisition. To connect the Operational Technology (OT) data stored in these controllers to an AI application in a secure, reliable, and available way, we propose a novel Industrial IoT (IIoT) solution in this paper. In this solution, we demonstrate how video cameras can be installed on a factory shop floor to continuously capture images of the controllers' human machine interfaces (HMIs). We propose image pre-processing to segment the HMI into regions of streaming data and regions of fixed meta-data. We then evaluate the performance of multiple Optical Character Recognition (OCR) technologies, such as Tesseract and Google Vision, in recognizing the streaming data, and test them on typical factory HMIs under realistic lighting conditions. Finally, we use the meta-data to match the OCR output with the temporal, domain-dependent context of the data to improve the accuracy of the output. Our IIoT solution enables reliable and efficient data extraction, which will improve the performance of subsequent AI applications.
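The final step, matching OCR output against meta-data context, can be illustrated with a small post-processing sketch. The confusion table, thresholds, and function name here are illustrative assumptions rather than the paper's actual rules: a field's known value range and the previous sample are used to repair common character confusions and reject implausible readings.

```python
# Common OCR character confusions for digital HMI readouts (assumed set).
CONFUSIONS = {"O": "0", "o": "0", "l": "1", "I": "1", "S": "5", "B": "8"}

def clean_reading(raw, lo, hi, prev=None, max_jump=None):
    """Post-process an OCR'd numeric HMI reading using field meta-data:
    substitute common character confusions, then validate against the
    field's plausible range and (optionally) the previous sample."""
    text = "".join(CONFUSIONS.get(ch, ch) for ch in raw.strip())
    try:
        value = float(text)
    except ValueError:
        return None  # unrecoverable OCR garbage
    if not (lo <= value <= hi):
        return None  # outside the sensor's known range
    if prev is not None and max_jump is not None and abs(value - prev) > max_jump:
        return None  # implausible jump between consecutive frames
    return value
```

For example, an OCR output of "12O.5" for a 0-200 gauge would be repaired to 120.5, while a reading that jumps far beyond the expected frame-to-frame change would be discarded.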

Keywords: human machine interface, industrial internet of things, internet of things, optical character recognition, video analytics

Procedia PDF Downloads 109
1179 Theory of the Optimum Signal Approximation Clarifying the Importance in the Recognition of Parallel World and Application to Secure Signal Communication with Feedback

Authors: Takuro Kida, Yuichi Kida

Abstract:

This paper presents the mathematical basis of a new algorithmic trend that treats a historical reason for continuous discrimination in the world, together with its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. Given a matrix operator-filter bank in which the matrix operator-analysis-filter bank H and the matrix operator-sampling-filter bank S are specified, we first introduce a detailed algorithm to derive the optimum matrix operator-synthesis-filter bank Z that simultaneously minimizes all the worst-case measures of the matrix operator-error-signals E(ω) = F(ω) − Y(ω) between the matrix operator-input-signals F(ω) and the matrix operator-output-signals Y(ω) of the filter bank. Feedback is then introduced into the above approximation theory, and it is shown that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge of signal prediction. Secondly, the mathematical concept of category is applied to the above optimum signal approximation, and it is indicated that the category-based approximation theory applies to a set-theoretic consideration of human recognition. Based on this discussion, it is shown why the narrow perception that tends to create isolation shows an apparent advantage in the short term, and why such narrow thinking often becomes intimate with discriminatory action in a human group. Throughout these considerations, it is argued that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception where we share the set of invisible error signals, including the words and the consciousness of both worlds.

Keywords: matrix filterbank, optimum signal approximation, category theory, simultaneous minimization

Procedia PDF Downloads 143
1178 An Introduction to Giulia Annalinda Neglia Viewpoint on Morphology of the Islamic City Using Written Content Analysis Approach

Authors: Mohammad Saber Eslamlou

Abstract:

The morphology of Islamic cities has been extensively studied, and different theories can be found about it; there are many differences in methods of analysis, classification, recognition, and comparative study of urban morphology. The present paper examines previous methods, approaches, and insights, and how Dr. Giulia Annalinda Neglia deals with the analysis of the morphology of Islamic cities. Neglia is an assistant professor at the University of Bari, Italy (UNIBA), who has published numerous papers and books on Islamic cities. I introduce her works in the field of the morphology of Islamic cities, and then present and analyze her thoughts, insights, and research methodologies from a critical perspective. This is qualitative research on her written works, which have been classified into three major categories, the first consisting mainly of her works on the morphology and physical shape of Islamic cities. The review of her works suggests that she has used Muratorian typology in investigating the morphology of Islamic cities. Moreover, the overall structure of the cities under investigation is often described as linear; however, she opposes defining a single framework for the recognition of morphology in Islamic cities. She states that 'to understand the physical complexity and irregularities in Islamic cities, it is necessary to study the urban fabric by the typology method, focusing on transformation processes of the buildings' form and their surrounding open spaces', and she believes that the fabric of each region in the city follows the principles of a specific period or urban pattern, in particular Hellenistic and Roman structures. Furthermore, she believes that it is impossible to understand the morphology of a city without taking into account the obvious and hidden developments associated with it, because the forms of buildings and their surrounding open spaces are the written history of the city.

Keywords: city, Islamic city, Giulia Annalinda Neglia, morphology

Procedia PDF Downloads 97
1177 Tourist’s Perception and Identification of Landscape Elements of Traditional Village

Authors: Mengxin Feng, Feng Xu, Zhiyong Lai

Abstract:

As a typical representative of the countryside, traditional Chinese villages are rich in cultural landscape resources and historical information, but they are still in continuous decline. The problems of people's weak protection awareness and low cultural recognition remain serious, and the protection of cultural heritage is urgent. At the same time, with the rapid development of rural tourism, the cultural value of these villages has been explored and attended to again. From the perspective of tourists, this study explored people's perception and identification of cultural landscape resources against the background of current cultural tourism development. We selected eleven typical landscape elements of Lingshui Village, a traditional village in Beijing, as research objects and conducted a questionnaire survey with two scales, perception and identity, to explore the characteristics of people's perception and identification of landscape elements. We found a strong positive correlation between the perception and identity of each element, and that geographical location influenced visitors' overall perception. The perception dimensions scored highest in location and lowest in history and culture, and the identity dimensions scored highest in meaning and lowest in emotion. We analyzed the impact of visitors' backgrounds on perception and identity characteristics and found that age and education were two important factors. Elderly visitors showed a higher degree of perceived identity, as the familiarity effect increased their attention; highly educated tourists had more stringent criteria for perception and identification. These findings suggest strategies for conserving and optimizing landscape elements in traditional villages to improve the acceptance and recognition of cultural information, which will inject new vitality into the development of traditional villages.

Keywords: traditional village, tourist perception, landscape elements, perception and identity

Procedia PDF Downloads 146
1176 The Ameliorative Effects of the Histamine H3 Receptor Antagonist/Inverse Agonist DL77 on MK801-Induced Memory Deficits in Rats

Authors: B. Sadek, N. Khan, Shreesh K. Ojha, Adel Sadeq, D. Lazewska, K. Kiec-Kononowicz

Abstract:

The involvement of histamine H3 receptors (H3Rs) in memory, and the potential role of H3R antagonists in the pharmacological control of neurodegenerative disorders such as Alzheimer's disease (AD), is well established. Therefore, the memory-enhancing effects of the H3R antagonist DL77 on MK801-induced cognitive deficits were evaluated in the passive avoidance paradigm (PAP) and novel object recognition (NOR) tasks in adult male rats, applying donepezil (DOZ) as a reference drug. Animals pretreated with acute systemic administration of DL77 (2.5, 5, and 10 mg/kg, i.p.) showed significant amelioration of MK801-induced memory deficits in PAP. The ameliorative effect of the most effective dose of DL77 (5 mg/kg, i.p.) was abrogated when animals were co-injected with the H3R agonist R-(α)-methylhistamine (RAMH, 10 mg/kg, i.p.). Moreover, in the NOR paradigm, DL77 (5 mg/kg, i.p.) reversed MK801-induced deficits in long-term memory (LTM); the procognitive effect provided by DL77 was comparable to that of the reference drug DOZ and was likewise reversed when animals were co-injected with RAMH (10 mg/kg, i.p.). However, DL77 (5 mg/kg, i.p.) failed to alter short-term memory (STM) impairment in the NOR test. Furthermore, DL77 (5 mg/kg) failed to induce any alterations of anxiety or locomotor behaviors in animals naive to the elevated plus maze (EPM), indicating that the ameliorative effects observed in the PAP or NOR tests were not associated with alterations in emotion or in the natural locomotion of the tested animals. These results reveal the potential contribution of H3Rs in modulating CNS neurotransmission systems associated with neurodegenerative disorders, e.g., AD.

Keywords: histamine H3 receptor, antagonist, learning and memory, Alzheimer's disease, neurodegeneration, passive avoidance paradigm, novel object recognition, behavioral research

Procedia PDF Downloads 155
1175 Similar Script Character Recognition on Kannada and Telugu

Authors: Gurukiran Veerapur, Nytik Birudavolu, Seetharam U. N., Chandravva Hebbi, R. Praneeth Reddy

Abstract:

This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities between characters. Recognizing the characters requires exhaustive datasets, but only a few are publicly available. As a result, we created a dataset for one language (the source language), trained the model with it, and then tested it with the target language; Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on pictures with noise and varying lighting. A dataset of 45,150 images containing printed Kannada characters was created: the Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations, and manual labelling was employed to ensure the accuracy of the character labels. Deep learning models, namely a convolutional neural network (CNN) and a visual attention network (VAN), were used to experiment with the dataset. A VAN architecture incorporating additional channels for Canny edge features was adopted, as the results obtained with this approach were good. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better with Canny edge features applied than with a model that used only the original grayscale images. When tested on each language separately, the model's accuracy was 80.11% for Telugu characters and 98.01% for Kannada characters. This model, which makes use of cutting-edge machine learning techniques, shows excellent accuracy in identifying and categorizing characters from these scripts.
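The extra channel supplied to the VAN is a Canny edge map. As a simplified, hedged stand-in (full Canny adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top of this), the sketch below computes a Sobel gradient-magnitude map, the first stage of Canny, in pure Python:

```python
def gradient_magnitude(img):
    """Simplified edge map (Sobel gradient magnitude) as a stand-in for
    the Canny features used as an extra input channel; img is a 2D list
    of grayscale values. Border pixels are left at zero."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal Sobel kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical Sobel kernel
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

Stacking such an edge map with the grayscale image gives the network an input channel in which stroke boundaries, the discriminative detail between structurally similar Kannada and Telugu characters, are made explicit.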

Keywords: base characters, modifiers, guninthalu, aksharas, vattakshara, VAN

Procedia PDF Downloads 53
1174 Encoding the Design of the Memorial Park and the Family Network as the Icon of 9/11 in Amy Waldman's the Submission

Authors: Masami Usui

Abstract:

After 9/11, the American literary scene was confronted with new perspectives that enabled both writers and readers to recognize hidden aspects of their political, economic, legal, social, and cultural phenomena. An argument appeared over new and challenging multicultural aspects after 9/11, and this argument is presented through a tension of space related to 9/11. In Amy Waldman's The Submission (2011), designing both the memorial park and the family network has significant meaning in establishing a progress of understanding from multiple perspectives. The most intriguing and controversial topic, racism, is reflected in The Submission, where one young architect's anonymous entry to the competition for the memorial at Ground Zero is nominated, yet he is confronted with strong objections and hostility as soon as he turns out to be a Muslim named Mohammad Khan. This 'Khan' issue, immediately enlarged into a controversial social issue on American soil, causes repeated acts of hostility toward Muslim women by ignorant citizens all over America. His idea for the park is to design a new concept tracing the cultural background of the open space. Against his will, his name is identified as the 'ingredient' of the networking of the resistant community with his supporters; on the other hand, post-9/11 hysteria and victimization are presented in such family associations as the Angry Family Members and the Grieving Family Members. These rapidly expanding networks, whether political or not, constructed by the internet, embody contemporary societal connection and representation. The contemporary quest for the significance of human relationships is recognized as a quest for global peace. Designing both the memorial park and the communication networks strengthens a process of facing shared conflicts and healing the survivors' trauma.
The tension between the idea and networking of the Garden for the memorial site and the collapse of Ground Zero signifies the double mission of the site: to establish a space to ease the wounded and to remember the catastrophe. Reading the design of these icons of 9/11 in The Submission means decoding the myth of globalization and its representations in this century.

Keywords: American literature, cultural studies, globalization, literature of catastrophe

Procedia PDF Downloads 533
1173 An Observation Approach of Reading Order for Single Column and Two Column Layout Template

Authors: In-Tsang Lin, Chiching Wei

Abstract:

Reading order is an important task in many digitization scenarios involving the preservation of the logical structure of a document. A survey of the literature finds that state-of-the-art algorithms fail to produce an accurate reading order for portable document format (PDF) files with rich formatting and diverse layout arrangements. In recent years, most studies on reading-order analysis have targeted the specific problem of associating layout components with logical labels, while less attention has been paid to detecting reading-order relationships between logical components, such as cross-references. Over three years of development, the company Foxit has refined its layout recognition (LR) engine (revision 20601) with the goal of an accurate reading order. The bounding box of each paragraph is obtained correctly by the Foxit LR engine, but the resulting reading order is not always correct for single-column and two-column layout formats, due to table, formula, multiple mini separated bounding box, and footer issues. Thus, an algorithm was developed to improve the accuracy of the reading order based on the Foxit LR structure. In this paper, a creative observation method (here called the MESH method) is provided to open a new avenue in reading-order research. Two important parameters are introduced: NRight, the number of bounding boxes on the right side of the present bounding box, and Nunder, the number of bounding boxes under the present bounding box. The normalized x-value (x divided by the whole width), the normalized y-value (y divided by the whole height), and the x- and y-positions of each bounding box were also taken into consideration.
Initial experimental results for the single-column layout format demonstrate a 19.33% absolute improvement in reading-order accuracy over 7 PDF files (150 pages in total) using our proposed method based on the LR structure, compared with the baseline method using the LR structure in revision 20601, whose reading-order accuracy is 72%. For the two-column layout format, preliminary results demonstrate a 44.44% absolute improvement in reading-order accuracy over 2 PDF files (18 pages in total) compared with the same baseline, whose reading-order accuracy is 0%. So far, the footer issue and part of the multiple mini separated bounding box issue can be solved by the MESH method. However, three issues remain unsolved: the table issue, the formula issue, and randomly placed multiple mini separated bounding boxes. The detection of the table position and the recognition of the table structure are out of scope for this paper and require separate research. Future work will address how to detect the table position on the page and extract the content of the table.
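The abstract does not define NRight and Nunder precisely, so the sketch below encodes one plausible reading that is our assumption, not the paper's specification: for each paragraph bounding box, count the boxes that start to its right with vertical overlap, and the boxes that start below it with horizontal overlap.

```python
def mesh_parameters(boxes):
    """For each bounding box (x0, y0, x1, y1; origin top-left), compute
    (NRight, Nunder): boxes starting to its right that overlap it
    vertically, and boxes starting below it that overlap it
    horizontally. One assumed reading of the MESH observation
    parameters, not the paper's exact definition."""
    params = []
    for i, (x0, y0, x1, y1) in enumerate(boxes):
        n_right = n_under = 0
        for j, (a0, b0, a1, b1) in enumerate(boxes):
            if j == i:
                continue
            if a0 >= x1 and not (b1 <= y0 or b0 >= y1):
                n_right += 1   # to the right, vertically overlapping
            if b0 >= y1 and not (a1 <= x0 or a0 >= x1):
                n_under += 1   # below, horizontally overlapping
        params.append((n_right, n_under))
    return params
```

On a simple two-column page, a top-left paragraph would see one box to its right and one below, while the bottom-right paragraph sees neither, which is the kind of signal that distinguishes single-column from two-column reading order.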

Keywords: document processing, reading order, observation method, layout recognition

Procedia PDF Downloads 181