Search results for: minority forms of information processing
15733 Improved Safety Science: Utilizing a Design Hierarchy
Authors: Ulrica Pettersson
Abstract:
Collection of information on incidents is regularly done through pre-printed incident report forms. These tend to be incomplete and frequently lack essential information. One consequence is that reports with inadequate information, which do not fulfil analysts' requirements, are transferred into the analysis process. To improve an incident reporting form, theory from design science, witness psychology, and interview and questionnaire research has been used. Three experiments have previously been conducted to evaluate the form and have shown significantly improved results. The form has proved to capture knowledge regardless of the incidents' character or context. The aim of this paper is to describe how design science, specifically a design hierarchy, can be used to construct a collection form for improvements in safety science.
Keywords: data collection, design science, incident reports, safety science
Procedia PDF Downloads 223
15732 The Threat Posed by Dominant Languages to Minor Languages or Dialects: The Case of isiZulu and isiBhaca in Umzimkhulu, KwaZulu-Natal
Authors: Yanga Lusanda Praiseworth Majola
Abstract:
The small town of Umzimkhulu is situated in the KwaZulu-Natal province of South Africa and was once part of the Bantustan of Transkei. Citizens of Umzimkhulu are called amaBhaca because they speak isiBhaca, a non-standard language that is mutually intelligible with three standard official languages: isiXhosa, isiZulu, and siSwati. Since Umzimkhulu fell under the Eastern Cape Province prior to 2006, isiXhosa is used for official purposes, particularly in schools, while isiZulu is used in other sectors; this is despite the fact that the majority of Umzimkhulu citizens regard themselves as amaBhaca. This poses a threat to both isiBhaca as a language and the identity of amaBhaca, because Umzimkhulu is situated in KwaZulu-Natal, where isiZulu is the dominant language spoken by the majority in the province. The primary objective of this study is to unveil, using language dominance theory, how dominant languages pose a threat to minority and developing languages or dialects. The study employed a mixed-methods approach. Data was obtained from key community members and leaders who were identified as amaBhaca and who have lived in Umzimkhulu their whole lives. The main findings are that although isiBhaca is classified as a dialect of isiXhosa, linguistically it is closer to isiZulu, and thus isiZulu poses a considerable threat to the existence of isiBhaca, since it becomes easy for amaBhaca to switch from isiBhaca to isiZulu and lose interest in isiBhaca. Respondents revealed that, in their view, isiBhaca is a language of its own, and that the continuous use and empowerment of isiZulu in Umzimkhulu, particularly in professional settings, is detrimental to isiBhaca; this has the potential of endangering the existence of isiBhaca and might lead to its attrition.
Keywords: language dominance, dominant languages, minority languages, language attrition
Procedia PDF Downloads 87
15731 “Octopub”: Geographical Sentiment Analysis Using Named Entity Recognition from Social Networks for Geo-Targeted Billboard Advertising
Authors: Oussama Hafferssas, Hiba Benyahia, Amina Madani, Nassima Zeriri
Abstract:
Although data nowadays takes multiple forms, from text and images to audio and video, text is still the most widely used at the public level. At an academic and research level, and unlike other forms, text can be considered the easiest form to process. Therefore, a branch of data mining research has always developed under its shadow, called "text mining". Its concept is just like data mining's: finding valuable patterns in large collections and tremendous volumes of data, in this case text. Named entity recognition (NER) is one of text mining's disciplines; it aims to extract and classify references such as proper names, locations, expressions of time and dates, organizations, and more in a given text. Our approach, "Octopub", does not aim to find new ways to improve the named entity recognition process; rather, it is about finding a new and smart way to use NER so that we can extract the sentiments of millions of people, using social networks as a limitless information source, with marketing for product promotion as the main domain of application.
Keywords: text mining, named entity recognition (NER), sentiment analysis, social media networks (SN, SMN), business intelligence (BI), marketing
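As an illustration of the idea behind this abstract, the sketch below pairs a toy gazetteer-based entity recognizer with a naive cue-word sentiment score to aggregate sentiment per recognized location. The location names and sentiment lexicons are invented for the example; a real system like Octopub would use a trained NER model and sentiment classifier instead.

```python
import re
from collections import defaultdict

# Hypothetical gazetteer and sentiment lexicons, invented for illustration.
LOCATIONS = {"Algiers", "Oran", "Constantine"}
POSITIVE = {"love", "great", "enjoy"}
NEGATIVE = {"hate", "awful", "avoid"}

def geo_sentiment(posts):
    """Aggregate a naive sentiment score per recognized location."""
    scores = defaultdict(int)
    for post in posts:
        tokens = re.findall(r"\w+", post)
        # "NER" step: keep tokens found in the location gazetteer.
        places = [t for t in tokens if t in LOCATIONS]
        # Sentiment step: count positive minus negative cue words.
        polarity = sum(t.lower() in POSITIVE for t in tokens) - \
                   sum(t.lower() in NEGATIVE for t in tokens)
        for place in places:
            scores[place] += polarity
    return dict(scores)
```

A geo-targeted billboard campaign could then rank locations by aggregated score before selecting placements.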
Procedia PDF Downloads 589
15730 Air–Water Two-Phase Flow Patterns in PEMFC Microchannels
Authors: Ibrahim Rassoul, A. Serir, E-K. Si Ahmed, J. Legrand
Abstract:
The acronym PEM refers to Proton Exchange Membrane or, alternatively, Polymer Electrolyte Membrane. Due to their high efficiency, low operating temperature (30–80 °C), and rapid evolution over the past decade, PEMFCs are increasingly emerging as a viable alternative clean power source for automotive and stationary applications. Before PEMFCs can be employed to power automobiles and homes, several key technical challenges must be properly addressed. One technical challenge is elucidating the mechanisms underlying water transport in and removal from PEMFCs. On one hand, sufficient water is needed in the polymer electrolyte membrane (PEM) to maintain sufficiently high proton conductivity. On the other hand, too much liquid water in the cathode can cause "flooding" (that is, pore space filled with excessive liquid water) and hinder the transport of the oxygen reactant from the gas flow channel (GFC) to the three-phase reaction sites. The experimental transparent fuel cell used in this work was designed to represent the actual full-scale fuel cell geometry. Depending on the operating conditions, a number of flow regimes may appear in the microchannel: droplet flow, blockage by a liquid water bridge/plug (concave and convex forms), slug/plug flow, and film flow. Some of these flow patterns are new, while others have already been observed in PEMFC microchannels. An algorithm in MATLAB was developed to automatically determine the flow structure (e.g. slug, droplet, plug, and film) of detected liquid water in the test microchannels and yield information pertaining to the distribution of water among the different flow structures. A video processing algorithm was developed to automatically detect dynamic and static liquid water present in the gas channels and generate relevant quantitative information. This software gives the user a more precise and systematic way to obtain measurements from images of small objects.
The void fractions are also determined based on image analysis. The aim of this work is to provide a comprehensive characterization of two-phase flow in an operating fuel cell, which can be used towards the optimization of water management, informs design guidelines for gas delivery microchannels for fuel cells, and is essential in the design and control of diverse applications. The approach combines numerical modeling with experimental visualization and measurements.
Keywords: polymer electrolyte fuel cell, air-water two-phase flow, gas diffusion layer, microchannels, advancing contact angle, receding contact angle, void fraction, surface tension, image processing
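The authors' MATLAB image-analysis code is not given here; as a minimal sketch of the void-fraction step, one can binarize a grayscale channel image and take the fraction of pixels on one side of an intensity threshold. The threshold value and the assignment of bright pixels to one phase are assumptions for illustration only.

```python
def void_fraction(image, threshold=128):
    """Estimate the area fraction of one phase in a grayscale channel image:
    the fraction of pixels whose intensity exceeds `threshold`.
    `image` is a list of rows of 0-255 intensities."""
    total = sum(len(row) for row in image)
    bright = sum(1 for row in image for px in row if px > threshold)
    return bright / total
```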
Procedia PDF Downloads 312
15729 Mobile Augmented Reality for Collaboration in Operation
Authors: Chong-Yang Qiao
Abstract:
Mobile augmented reality (MAR) tracks targets in the surroundings and aids operators with interactive visualization of data and procedures, making equipment and systems easier to understand. Operators remotely communicate and coordinate with each other for continuous tasks, with information and data exchanged between the control room and the work site. In routine work, distributed control system (DCS) monitoring and work-site manipulation require operators to interact in real time. The critical question is how to improve the user experience in cooperative work by applying augmented reality in the traditional industrial field. The purpose of this exploratory study is to find a cognitive model for multiple-task performance with MAR. In particular, the focus is on the comparison between different tasks and the environment factors which influence information processing. Three experiments use interface and interaction design, with the content of start-up, maintenance, and stop embedded in the mobile application. With time demands and human errors as evaluation criteria, and through analysis of the mental processes and behavior during the multiple tasks, heuristic evaluation was used to assess the operators' performance under different situation factors and to record the information processing involved in recognition, interpretation, judgment, and reasoning. The research will identify the functional properties of MAR and constrain the development of the cognitive model. Conclusions can be drawn suggesting that MAR is easy to use and useful for operators in remote collaborative work.
Keywords: mobile augmented reality, remote collaboration, user experience, cognition model
Procedia PDF Downloads 197
15728 Economized Sensor Data Processing with Vehicle Platooning
Authors: Henry Hexmoor, Kailash Yelasani
Abstract:
We present vehicular platooning as a special case of a crowd-sensing framework in which sharing sensory information among a crowd is used for its collective benefit. After offering an abstract policy that governs processes involving a vehicular platoon, we review several common scenarios and components surrounding vehicular platooning. We then present a simulated prototype that illustrates the efficiency of road usage and vehicle travel time derived from platooning. We argue that one of the paramount benefits of platooning, overlooked elsewhere, is the substantial computational saving (i.e., economizing benefit) in the acquisition and processing of sensory data among vehicles sharing the road. The most capable vehicle can share data gathered from its sensors with nearby vehicles grouped into a platoon.
Keywords: cloud network, collaboration, internet of things, social network
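The economizing argument can be made concrete with a toy cost model (the 10% sharing overhead below is an assumed figure, not from the paper): only the most capable vehicle runs the full sensing pipeline, and followers pay a small cost to consume the shared data.

```python
def platoon_sensing_cost(num_vehicles, per_vehicle_cost, platooned=True):
    """Crude cost model: without platooning, every vehicle runs its own full
    sensing pipeline; with platooning, only the lead vehicle does, and the
    followers pay a small cost to receive the shared data."""
    share_cost = per_vehicle_cost * 0.1  # assumed cost of consuming shared data
    if platooned:
        return per_vehicle_cost + (num_vehicles - 1) * share_cost
    return num_vehicles * per_vehicle_cost
```

Under these assumptions, a five-vehicle platoon cuts total sensing cost from 500 to 140 units, illustrating the scale of the claimed savings.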
Procedia PDF Downloads 194
15727 A Controlled Natural Language Assisted Approach for the Design and Automated Processing of Service Level Agreements
Authors: Christopher Schwarz, Katrin Riegler, Erwin Zinser
Abstract:
The management of outsourcing relationships between IT service providers and their customers proves to be a critical issue that has to be stipulated by means of Service Level Agreements (SLAs). Since service requirements differ from customer to customer, SLA content and language structures vary largely, standardized SLA templates cannot be used, and automated processing of SLA content is not possible. Hence, SLA management is usually a time-consuming and inefficient manual process. To overcome these challenges, this paper presents an innovative and ITIL V3-conformant approach for automated SLA design and management using controlled natural language in enterprise collaboration portals. The proposed novel concept is based on a self-developed controlled natural language that follows a subject-predicate-object approach to specify well-defined SLA content structures, which act as templates for customized contracts and support automated SLA processing. The derived results eventually enable IT service providers to automate several SLA request, approval, and negotiation processes by means of workflows and business rules within an enterprise collaboration portal. The illustrated prototypical realization gives evidence of the practical relevance in service-oriented scenarios as well as the high flexibility and adaptability of the presented model. Thus, the prototype enables the automated creation of well-defined, customized SLA documents, providing a knowledge representation that is both human-understandable and machine-processable.
Keywords: automated processing, controlled natural language, knowledge representation, information technology outsourcing, service level management
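The paper's controlled natural language itself is not reproduced here, but a minimal sketch of the subject-predicate-object idea might validate each SLA sentence against a small controlled predicate vocabulary and emit a machine-processable triple. The vocabulary and sentence pattern below are illustrative assumptions, not the authors' grammar.

```python
import re

# Illustrative controlled vocabulary: each SLA sentence must match
# "<Subject> <predicate> <object>." with a known predicate.
PREDICATES = {"guarantees", "provides", "excludes"}

def parse_sla_statement(sentence):
    """Parse one controlled-natural-language SLA statement into a triple,
    or raise ValueError if it violates the controlled grammar."""
    m = re.fullmatch(r"(\w+) (\w+) (.+)\.", sentence.strip())
    if not m:
        raise ValueError("not a subject-predicate-object sentence")
    subject, predicate, obj = m.groups()
    if predicate not in PREDICATES:
        raise ValueError(f"predicate {predicate!r} not in controlled vocabulary")
    return {"subject": subject, "predicate": predicate, "object": obj}
```

Because every accepted sentence yields a triple, downstream workflow and business-rule engines can process SLA content without free-text interpretation.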
Procedia PDF Downloads 432
15726 High Level Synthesis of Canny Edge Detection Algorithm on Zynq Platform
Authors: Hanaa M. Abdelgawad, Mona Safar, Ayman M. Wahba
Abstract:
Real-time image and video processing is in demand in many computer vision applications, e.g. video surveillance, traffic management, and medical imaging. Processing these video applications requires high computational power. Therefore, the optimal solution is the collaboration of the CPU and hardware accelerators. In this paper, a Canny edge detection hardware accelerator is proposed. Canny edge detection is one of the common blocks in the pre-processing phase of the image and video processing pipeline. Our presented approach targets offloading the Canny edge detection algorithm from the processing system (PS) to programmable logic (PL), taking advantage of the High Level Synthesis (HLS) tool flow to accelerate the implementation on the Zynq platform. The resulting implementation enables up to a 100x performance improvement through hardware acceleration. The CPU utilization drops and the frame rate jumps to 60 fps for a 1080p full-HD input video stream.
Keywords: high level synthesis, Canny edge detection, hardware accelerators, computer vision
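The HLS implementation targets an FPGA, but the gradient stage at the heart of Canny can be sketched in plain Python. This simplified version computes Sobel gradients and applies a global magnitude threshold; full Canny additionally performs Gaussian smoothing, non-maximum suppression, and hysteresis thresholding, all omitted here.

```python
def sobel_edges(img, threshold=100):
    """Simplified sketch of Canny's gradient stage: Sobel gradients plus a
    global threshold. `img` is a list of rows of grayscale intensities;
    borders are left unmarked."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses at (x, y).
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]) - \
                 (img[y-1][x-1] + 2*img[y][x-1] + img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]) - \
                 (img[y-1][x-1] + 2*img[y-1][x] + img[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges
```

The per-pixel independence of this loop nest is exactly what makes the algorithm a good fit for pipelined hardware generated by HLS.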
Procedia PDF Downloads 478
15725 Audio Information Retrieval in Mobile Environment with Fast Audio Classifier
Authors: Bruno T. Gomes, José A. Menezes, Giordano Cabral
Abstract:
With the popularity of smartphones, mobile apps have emerged to meet diverse needs; however, the resources at their disposal are limited, either by the hardware, due to low computing power, or by the software, which does not have the robustness of the desktop environment. For example, automatic audio classification (AC) tasks, a subarea of musical information retrieval (MIR), require fast processing and a good success rate. However, the mobile platform has limited computing power, and the best AC tools are only available for desktop. To solve these problems, the fast classifier adapts the most widespread MIR technologies to mobile environments, seeking a balance between speed and robustness. In the end, we found that it is possible to enjoy the best of MIR in mobile environments. This paper presents the results obtained and the difficulties encountered.
Keywords: audio classification, audio extraction, mobile environment, musical information retrieval
Procedia PDF Downloads 545
15724 AI and the Future of Misinformation: Opportunities and Challenges
Authors: Noor Azwa Azreen Binti Abd. Aziz, Muhamad Zaim Bin Mohd Rozi
Abstract:
Moving towards the 4th Industrial Revolution, artificial intelligence (AI) is now more popular than ever. This subject gains significance every day and is continually expanding, often merging with other fields. Rather than remaining passive observers, we benefit from understanding modern technology by delving into its inner workings. However, in a world teeming with digital information, the impact of AI on the spread of disinformation has garnered significant attention. The dissemination of inaccurate or misleading information is referred to as misinformation, posing a serious threat to democratic society, public debate, and individual decision-making. This article delves deep into the connection between AI and the dissemination of false information, exploring its potential, risks, and ethical issues as AI technology advances. The rise of AI has ushered in a new era in the dissemination of misinformation, as AI-driven technologies are increasingly responsible for curating, recommending, and amplifying information on online platforms. While AI holds the potential to enhance the detection and mitigation of misinformation through natural language processing and machine learning, it also raises concerns about the amplification and propagation of false information. AI-powered deepfake technology, for instance, can generate hyper-realistic videos and audio recordings, making it increasingly challenging to discern fact from fiction.
Keywords: artificial intelligence, digital information, disinformation, ethical issues, misinformation
Procedia PDF Downloads 91
15723 A Retrospective Study of the Effects of Xenophobia on South Africa-Nigeria Relations
Authors: O. Fayomi, F. Chidozie, C. Ayo
Abstract:
The underlying causes of xenophobia are complex and varied. Xenophobia has to do with contempt for that which is foreign, especially for strangers or for people from different countries or cultures. Unemployment and mounting poverty among South Africans at the bottom of the economic ladder have provoked fears of the competition that better-educated and more experienced migrants can represent. South Africa's long track record of violence as a means of protest, the targeting of foreigners in particular, and the documented tensions over migration policy and the scale of repatriation serve as a good explanation for its xenophobia. It was clear that while most of the attacks were directed against foreign, primarily African, migrants, this was not the rule. Attacks were also noted against Chinese speakers and Pakistani migrants, as well as against South Africans from minority language groups (in the conflict areas). Settlements that have recently experienced 'xenophobic' violence have also been the site of violent and other forms of protest around other issues, most notably service delivery. The failure of government in service delivery has been blamed for this form of xenophobia. Due to the increase in migration, this conflict is certainly not temporary in nature. Xenophobia manifests in different regions and communities with devastating effects on the affected nationals. Nigerians living in South Africa have been objects of severe attacks and assault as a result of this xenophobic attitude. It is against this background that this study seeks to investigate the xenophobic attacks against Nigerians in South Africa. The methodology is basically qualitative, with the use of secondary sources such as books, journals, newspapers, and internet sources.
Keywords: xenophobia, unemployment, poverty, Nigeria, South Africa
Procedia PDF Downloads 472
15722 Parallels between the Glass and Lavender Ceilings
Authors: Paul E. Olsen
Abstract:
Researchers, businesses, and governments study the glass ceiling faced by women and members of minority groups at work, but the experiences of gay men, lesbians, and bisexual men and women with the lavender ceiling have not received similar attention. This qualitative research traces similarities between the lavender ceiling and the glass ceiling. More specifically, it presents a study designed to elucidate the experiences of gay men at work and compare them with those of women and minority group members, as reported in research literature on the glass ceiling. This research asked: 1) What have gay men experienced in the workplace? 2) What experiences have they had with recruitment, mentors, corporate climate, advancement opportunities, performance evaluation, social activities, harassment, and task force and committee assignments? 3) How do these experiences compare with those of women and minorities who have described their experiences with the glass ceiling? Purposeful and convenience sampling were used as participant selection strategies. Participants were diverse in terms of age, education, and industry. Data for this study were collected through semi-structured individual interviews with eight self-identified gay men working in human services, manufacturing, marketing, finance, government, the nonprofit sector, and retail. The gay men in the study described workplace experiences similar to descriptions of the glass ceiling faced by women and minorities. The lavender ceiling parallels the glass ceiling in corporate climates, harassment, mentors, social activities, promotions and performance appraisal, and task force and committee assignments at work. Women and most minorities do not, however, face the disclosure dilemma: Should one reveal his sexual orientation at work?
Keywords: discrimination, diversity, gay and lesbian, human resource
Procedia PDF Downloads 267
15721 Empowering Minority Students Through the use of Critical Educational Technologies: Latinos in the United States
Authors: Oscar Guerra
Abstract:
Educational technologies have great potential as tools for student empowerment, particularly for members of a marginalized population such as immigrant Latino children in the American public education system. It is not merely a matter of access to the necessary technological devices; rather, it is development and implementation under a critical lens that may prompt a positive change.
Keywords: education, critical technologies, minorities, higher education
Procedia PDF Downloads 323
15720 Data Mining Spatial: Unsupervised Classification of Geographic Data
Authors: Chahrazed Zouaoui
Abstract:
In recent years, the volume of geospatial information has been increasing due to the evolution of communication and information technologies. This information is often presented through geographic information systems (GIS) and stored in spatial databases (SDB). Classical data mining has revealed a weakness in knowledge extraction from these enormous amounts of data, due to the particularity of spatial entities, which are characterized by their interdependence (the first law of geography). This gave rise to spatial data mining. Spatial data mining is a process of analyzing geographic data that allows the extraction of knowledge and spatial relationships from geospatial data; among the methods of this process, we distinguish the monothematic and the thematic. Geo-clustering is one of the main tasks of spatial data mining, registered under the monothematic method. It groups similar geo-spatial entities into the same class and assigns more dissimilar ones to different classes; in other words, it maximizes intra-class similarity and minimizes inter-class similarity, taking into account the particularity of geo-spatial data. Two approaches to geo-clustering exist: dynamic processing of the data, which applies algorithms designed for the direct treatment of spatial data, and an approach based on spatial data pre-processing, which applies classic clustering algorithms to pre-processed data (with spatial relationships integrated).
This approach (based on pre-processing) is quite complex in many cases, so the search for approximate solutions involves the use of approximation algorithms, including the algorithms we are interested in: dedicated approaches (partitioning and density-based clustering methods) and the bees algorithm (a biomimetic approach). Our study proposes a design for this problem that uses different algorithms to automatically detect geo-spatial neighborhoods in order to implement geo-clustering by pre-processing, and applies the bees algorithm to this problem for the first time in the geo-spatial field.
Keywords: mining, GIS, geo-clustering, neighborhood
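The bees algorithm itself is not reproduced here; as an illustration of the density-based family of clustering methods the abstract mentions, the following is a minimal DBSCAN-style sketch over 2D coordinates. The `eps` and `min_pts` values are arbitrary choices for the example, not parameters from the study.

```python
from math import hypot

def dbscan(points, eps=1.0, min_pts=3):
    """Minimal density-based clustering (DBSCAN-style) over 2D coordinates;
    returns one label per point, with -1 meaning noise."""
    labels = [None] * len(points)

    def neighbors(i):
        # Indices within eps of point i (including i itself).
        return [j for j, q in enumerate(points)
                if hypot(points[i][0] - q[0], points[i][1] - q[1]) <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # provisionally noise; may become a border point
            continue
        labels[i] = cluster          # i is a core point: start a new cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reachable from a core point: border
                continue
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs = neighbors(j)
            if len(nbrs) >= min_pts:  # j is also core: expand the cluster
                queue.extend(n for n in nbrs if labels[n] is None)
        cluster += 1
    return labels
```

Density-based methods like this are attractive for geographic data because clusters of arbitrary shape emerge from neighborhood relations alone, with isolated entities flagged as noise.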
Procedia PDF Downloads 375
15719 'Pacta Sunt Servanda': Which Form of Contract to Use in the Construction Industry
Authors: Ahmed Stifi, Sascha Gentes
Abstract:
A contract, in its simplest definition, is an agreement involving parties and a number of documents, which may be as small as a marriage contract involving two parties or as large as a contract for the construction and operation of a nuclear power plant involving companies and stakeholders, with hundreds or even thousands of documents. All parties in the construction industry, not only the contract experts, agree that the success of a project is linked primarily to the form of contract regulating the relationship between the stakeholders of the project. It is therefore essential for the construction industry to study, analyze, and improve its contract forms continuously. It should be mentioned, however, that different contract forms are developed to suit the evolution of construction in terms of its machinery, materials, and processes. There exist similarities in some clauses and variations in many of these forms, depending upon the type of project, the kind of client, and, more importantly, the laws and regulations governing the transaction in the country where the project is carried out. This paper discusses the most important forms of construction contracts, starting at the national level with the contract forms used in Germany and moving on to the international level by introducing the FIDIC contracts and their different forms, as well as some newly developed contract forms, namely the integrated form of agreement, the new engineering contract, and the project alliance agreement. The result of the study shows that many of the contracts' paragraphs are similar, and the main difference lies in the approach to the relationship between the parties: is it based on cooperation and mutual trust, or, in some cases, on loading responsibility onto a particular party, which increases the problems and disputes that negatively affect the success of the project?
Thus, the form of the contract plays an essential role in the project management approach, which is ultimately the key factor for the success of the project. We therefore advise using a form of contract that enhances mutual trust between the project parties, supports cooperation between them, distributes responsibility and risks on an equitable basis, and builds on the "win-win" principle. In addition to its conventional role, the contract should integrate all parties into one team to achieve the target value of the project.
Keywords: contract, FIDIC, integrated form of agreement, new engineering contract, project alliance agreement
Procedia PDF Downloads 373
15718 Challenges in Teaching Code of Ethics and Professional Conduct
Authors: Rasika Dayarathna
Abstract:
Computing has reached every corner of our lives in many forms. The Internet, particularly social media, and artificial intelligence are prominent among them. As a result, computing has changed our lives, and it is expected that sweeping changes will take place in the coming years. It has introduced a new set of ethical challenges and amplified existing ones. It is the duty of everyone involved, from conceptualizing, designing, implementing, and deploying to using these systems, to follow generally accepted practices in order to avoid or minimize harm and improve the quality of life. Since computing in its various forms has a significant impact on our lives, various codes of conduct and standards have been introduced. Among many, the ACM (Association for Computing Machinery) Code of Ethics and Professional Conduct is a leading one. It was drafted for everyone, including aspiring computing professionals. However, teaching a code of conduct to aspiring computing professionals is very challenging, since this universal code needs to be taught to young computing professionals in local settings where there are value mismatches and varying degrees of exposure to information systems. This paper discusses the importance of teaching the code, how to overcome the challenges, and suggestions for improving the code to make it more appealing and easier to buy into. It is expected that the improved approach would contribute to improving the quality of life.
Keywords: code of conduct, professionalism, ethics, code of ethics, ethics education, moral development
Procedia PDF Downloads 181
15717 A Conglomerate of Multiple Optical Character Recognition Table Detection and Extraction
Authors: Smita Pallavi, Raj Ratn Pranesh, Sumit Kumar
Abstract:
Information representation as tables is a compact and concise method that eases searching, indexing, and storage requirements. Extracting and cloning tables from parsable documents is easier and widely used; however, industry still faces challenges in detecting and extracting tables from OCR (Optical Character Recognition) documents or images. This paper proposes an algorithm that detects and extracts multiple tables from an OCR document. The algorithm uses a combination of image processing techniques, text recognition, and procedural coding to identify distinct tables in the same image and map the text to the corresponding cell in a dataframe, which can be stored as comma-separated values, a database, Excel, and multiple other usable formats.
Keywords: table extraction, optical character recognition, image processing, text extraction, morphological transformation
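The cell-mapping step can be sketched as follows. Assuming OCR output as word boxes of the form (x, y, text), which is a hypothetical interface and not the paper's exact one, boxes with similar vertical positions are grouped into rows and then ordered by horizontal position, yielding a grid ready for CSV or dataframe export.

```python
def boxes_to_grid(boxes, row_tol=10):
    """Map OCR word boxes (x, y, text) into a row/column grid: group boxes
    whose y coordinates are within `row_tol` into rows, then order each row
    by x. Tuple sort order puts x first, so sorted(row) orders by x."""
    rows = []
    for box in sorted(boxes, key=lambda b: b[1]):   # top-to-bottom
        if rows and abs(rows[-1][0][1] - box[1]) <= row_tol:
            rows[-1].append(box)                    # same row as previous
        else:
            rows.append([box])                      # start a new row
    return [[text for _, _, text in sorted(row)] for row in rows]
```

Each inner list then maps directly onto one dataframe row or one comma-separated line.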
Procedia PDF Downloads 143
15716 Graph-Oriented Summary for Optimized Resource Description Framework Graphs Streams Processing
Authors: Amadou Fall Dia, Maurras Ulbricht Togbe, Aliou Boly, Zakia Kazi Aoul, Elisabeth Metais
Abstract:
Existing RDF (Resource Description Framework) Stream Processing (RSP) systems allow continuous processing of RDF data issued from different application domains such as weather stations measuring phenomena, geolocation, IoT applications, drinking water distribution management, and so on. However, the processing window phase often expires before finishing the entire session, and RSP systems immediately delete data streams after each processed window. Such a mechanism does not allow optimized exploitation of the RDF data streams, as the most relevant and pertinent information in the data is often not used in due time and is almost impossible to exploit for further analyses. It would be better to keep the most informative part of the data within streams while minimizing the memory storage space. In this work, we propose an RDF graph summarization system based on explicitly and implicitly expressed needs, through three main approaches: (1) an approach for user queries (SPARQL) in order to extract their needs and group them into a more global query, (2) an extension of the closeness centrality measure from Social Network Analysis (SNA) to determine the most informative parts of the graph, and (3) an RDF graph summarization technique combining the extracted user query needs and the extended centrality measure. Experiments and evaluations show efficient results in terms of memory storage space and the most expected approximate query results on summarized graphs compared to the source ones.
Keywords: centrality measures, RDF graph summary, RDF graph stream, SPARQL query
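The paper extends closeness centrality; as a starting point, the standard measure on an unweighted graph can be computed with breadth-first search. This is a textbook sketch, not the authors' extension, and it assumes a connected graph represented as an adjacency dict.

```python
from collections import deque

def closeness(graph):
    """Closeness centrality for each node of an unweighted, connected graph
    given as an adjacency dict: (n - 1) / sum of shortest-path distances."""
    scores = {}
    n = len(graph)
    for source in graph:
        # BFS from source to get shortest-path distances to all nodes.
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total = sum(dist.values())
        scores[source] = (n - 1) / total if total else 0.0
    return scores
```

Nodes that are, on average, close to every other node score near 1.0; in the summarization setting, such nodes are candidates for the "most informative" parts of the graph to retain.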
Procedia PDF Downloads 203
15715 Establishment of Precision System for Underground Facilities Based on 3D Absolute Positioning Technology
Authors: Yonggu Jang, Jisong Ryu, Woosik Lee
Abstract:
The study aims to address the limitations of existing underground facility exploration equipment in terms of exploration depth range, relative depth measurement, data processing time, and human-centered interpretation of ground penetrating radar (GPR) images. It proposes the use of 3D absolute positioning technology to develop a precision underground facility exploration system capable of accurately surveying up to a depth of 5 m and measuring the 3D absolute location of underground facilities. The study developed the software and hardware technologies needed to build the system. The software technologies include absolute positioning, ground surface location synchronization for the GPR exploration equipment, AI interpretation of GPR exploration images, and composite data processing based on integrated underground space maps. The hardware systems include a vehicle-type exploration system and a cart-type exploration system. Data was collected using the developed system, the GPR exploration images were analyzed using AI technology, and the three-dimensional location information of the explored underground facilities was compared to the integrated underground space map. The study successfully developed the proposed system, which can accurately survey up to a depth of 5 m.
The system comprises software technologies that build a 3D precise DEM, synchronize the GPR sensor's ground surface 3D location coordinates, automatically analyze and detect underground facility information in GPR exploration images and improve accuracy through comparative analysis of the three-dimensional location information, and hardware systems, including a vehicle-type exploration system and a cart-type exploration system. The study's findings and technological advancements are essential for underground safety management in Korea. The proposed precision exploration system significantly contributes to establishing precise location information of underground facility information, which is crucial for underground safety management and improves the accuracy and efficiency of exploration. The study addressed the limitations of existing equipment in exploring underground facilities, proposed 3D absolute positioning technology-based precision exploration system, developed software and hardware systems for the exploration system, and contributed to underground safety management by providing precise location information. The developed precision underground facility exploration system based on 3D absolute positioning technology has the potential to provide accurate and efficient exploration of underground facilities up to a depth of 5m. The system's technological advancements contribute to the establishment of precise location information of underground facility information, which is essential for underground safety management in Korea.Keywords: 3D absolute positioning, AI interpretation of GPR exploration images, complex data processing, integrated underground space maps, precision exploration system for underground facilities
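The core synchronization idea, combining a surface GNSS fix with a GPR-derived depth, can be sketched as below. This is an illustrative simplification, not the paper's actual pipeline; the function names and the 0.1 m/ns soil wave velocity are assumptions.

```python
def gpr_depth(two_way_time_ns, velocity_m_per_ns=0.1):
    """Depth from GPR two-way travel time.
    0.1 m/ns is a typical dry-soil radar velocity (an assumed default)."""
    return two_way_time_ns * velocity_m_per_ns / 2.0

def absolute_position(surface_e, surface_n, surface_elev, two_way_time_ns):
    """Synchronize a GPR trace with the antenna's surface GNSS fix:
    the target's 3D absolute position is the surface point lowered by the radar depth."""
    depth = gpr_depth(two_way_time_ns)
    return (surface_e, surface_n, surface_elev - depth)

# A 100 ns two-way time corresponds to the system's 5 m maximum survey depth
# under the assumed velocity.
max_depth = gpr_depth(100.0)
```

A buried pipe detected at 60 ns under a surface point at (easting, northing, elevation) = (500000.0, 4100000.0, 52.0) would thus be placed 3 m below the surface.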
Procedia PDF Downloads 62
15714 Managers’ Mobile Information Behavior in an Openness Paradigm Era
Authors: Abd Latif Abdul Rahman, Zuraidah Arif, Muhammad Faizal Iylia, Mohd Ghazali, Asmadi Mohammed Ghazali
Abstract:
Mobile information is a significant access point for human information activities. Theories and models of human information behavior have developed over several decades but have not yet considered the role of the user’s computing device in digital information interactions. This paper reviews the literature that leads to developing a conceptual framework for a study of managers’ mobile information behavior. Based on the literature review, dimensions of mobile information behavior are identified, namely information needs, information access, information retrieval, and information use. The study is significant for understanding the nature of managers’ behavior in searching, retrieving, and using information via mobile devices. Secondly, the study would provide suggestions about the kinds of mobile applications organizations can provide for their staff to improve their services. Keywords: mobile information behavior, information behavior, mobile information, mobile devices
Procedia PDF Downloads 349
15713 Quantum Entangled States and Image Processing
Authors: Sanjay Singh, Sushil Kumar, Rashmi Jain
Abstract:
Quantum computing is a new trend in computational theory, and a quantum mechanical system has several useful properties, such as entanglement. We plan to store data concerning the structure and content of a simple image in a quantum system. Consider an array of n qubits that we propose to use as our memory storage. In recent years, classical processing has been shifting to quantum image processing. Quantum image processing is an elegant approach to overcoming the problems of its classical counterparts. Image storage, retrieval, and processing on quantum machines is an emerging area. Although quantum machines do not yet exist in physical reality, theoretical algorithms developed on the basis of quantum entangled states give new insights into processing classical images in the quantum domain. In the present work, we give a brief overview of how entangled states can be useful for quantum image storage and retrieval. We discuss the properties of tripartite Greenberger-Horne-Zeilinger (GHZ) and W states and their usefulness for storing shapes that consist of three vertices. We also propose techniques to store shapes having more than three vertices. Keywords: Greenberger-Horne-Zeilinger, image storage and retrieval, quantum entanglement, W states
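The two tripartite states discussed above can be written down concretely as amplitude vectors over the eight three-qubit basis states. The vertex encoding at the end is a hypothetical illustration in the spirit of the abstract, not the authors' actual scheme.

```python
import math

# Three-qubit basis states are indexed 0..7 (binary 000..111).
def ghz():
    """GHZ state (|000> + |111>) / sqrt(2)."""
    amp = [0.0] * 8
    amp[0b000] = amp[0b111] = 1 / math.sqrt(2)
    return amp

def w():
    """W state (|001> + |010> + |100>) / sqrt(3)."""
    amp = [0.0] * 8
    for idx in (0b001, 0b010, 0b100):
        amp[idx] = 1 / math.sqrt(3)
    return amp

def norm(state):
    """Squared norm; 1.0 for any properly normalized state."""
    return sum(a * a for a in state)

# Toy encoding (an assumption for illustration): each nonzero basis amplitude of the
# W state marks one stored vertex of a three-vertex shape.
vertices = [i for i, a in enumerate(w()) if a > 0]
```

Both states are normalized, and the W state's three nonzero amplitudes sit at basis indices 1, 2, and 4, which the toy encoding reads as three vertex labels.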
Procedia PDF Downloads 306
15712 Regulation on the Protection of Personal Data Versus Quality Data Assurance in the Healthcare System Case Report
Authors: Elizabeta Krstić Vukelja
Abstract:
Digitization of personal data is a consequence of the development of information and communication technologies that create a new work environment with many advantages and challenges, but also potential threats to privacy and personal data protection. Regulation (EU) 2016/679 of the European Parliament and of the Council establishes the law and obligations that should address the issues of personal data protection and information security. The existence of the Regulation leads to the conclusion that national legislation concerning the virtual environment, the protection of the rights of EU citizens, and the processing of their personal data is insufficiently effective. In the health system, special emphasis is placed on the processing of special categories of personal data, such as health data. The healthcare industry is recognized as a particularly sensitive area in which a large amount of medical data is processed, whose digitization enables quick access and quick identification of the health insured. The protection of the individual requires quality IT solutions that guarantee the technical protection of special categories of data. However, the real problems are of a technical and human nature, together with the spatial limitations of the Regulation's application. Some conclusions will be drawn by analyzing the implementation of the basic principles of the Regulation in the Croatian health care system and comparing it with similar activities in other EU member states. Keywords: regulation, healthcare system, personal data protection, quality data assurance
Procedia PDF Downloads 38
15711 Vibroacoustic Modulation with Chirp Signal
Authors: Dong Liu
Abstract:
By sending a high-frequency probe wave and a low-frequency pump wave into a specimen, the vibroacoustic modulation (VAM) method evaluates a defect's severity according to the modulation index of the received signal. Many studies have experimentally proved the significant sensitivity of the modulation index to tiny contact-type defects. However, it has also been found that the modulation index is highly affected by the frequency of the probe or pump waves. Therefore, the chirp signal has been introduced to the VAM method, since it can assess multiple frequencies in a relatively short time duration, enhancing the robustness of the VAM method. Consequently, the signal processing method needs to be modified accordingly. Various studies have utilized different algorithms, or combinations of algorithms, for processing the VAM signal under chirp excitation. These signal processing methods were compared and used for processing VAM signals acquired from steel samples. Keywords: vibroacoustic modulation, nonlinear acoustic modulation, nonlinear acoustic NDT&E, signal processing, structural health monitoring
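The modulation index computation at the heart of VAM can be sketched as below: a pump-modulated probe tone produces sidebands at probe ± pump frequency, and the index is the sideband-to-carrier amplitude ratio. For clarity the sketch uses a single fixed probe frequency rather than a chirp (a chirp would repeat this measurement across a sweep of probe frequencies); the sampling parameters and the injected modulation depth are assumptions.

```python
import cmath
import math

FS = 10_000            # sampling rate in Hz (assumed)
N = FS                 # one second of data -> exact 1 Hz frequency bins
F_PROBE, F_PUMP = 1_000.0, 50.0
M_TRUE = 0.2           # modulation depth injected by the simulated defect

# Pump-modulated probe tone: a crude stand-in for a breathing-crack response.
x = [(1 + M_TRUE * math.cos(2 * math.pi * F_PUMP * n / FS))
     * math.cos(2 * math.pi * F_PROBE * n / FS) for n in range(N)]

def amplitude(sig, f):
    """Single-bin DFT amplitude at frequency f (exact when f lands on a bin)."""
    acc = sum(s * cmath.exp(-2j * math.pi * f * n / FS) for n, s in enumerate(sig))
    return 2 * abs(acc) / len(sig)

carrier = amplitude(x, F_PROBE)
sidebands = amplitude(x, F_PROBE - F_PUMP) + amplitude(x, F_PROBE + F_PUMP)
mod_index = sidebands / carrier   # recovers approximately M_TRUE
```

Because the probe, pump, and sideband frequencies all fall on exact DFT bins here, the recovered index matches the injected depth of 0.2 almost exactly.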
Procedia PDF Downloads 99
15710 Colorful Ethnoreligious Map of Iraq and the Current Situation of Minorities in the Country
Authors: Meszár Tárik
Abstract:
The aim of the study is to introduce the minority groups living in Iraq and to shed light on their current situation. The Middle East is a rather heterogeneous region in ethnic terms. It includes many ethnic, national, religious, linguistic, or ethnoreligious groups. The relationship between majority and minority is the main cause of various conflicts in the region. It seems that most of the post-Ottoman states have not yet developed a unified national identity capable of integrating their multi-ethnic societies. The issue of minorities living in the Middle East is highly politicized and controversial, as the various Arab states consider the treatment of minorities their internal affair, do not recognize discrimination, or even deny the existence of any kind of minorities on their territory. This attitude of the Middle Eastern states may also be due to the fact that the minority issue can be abused and can serve as a reference point for the intervention policies of Western countries at any time. Methodologically, the challenges of these groups are perceived through the manifestos of prominent individuals and organizations belonging to minorities. The basic aim is to present the minorities' own history in dealing with the issue. The study also introduces the different ethnic and religious minorities in Iraq and analyzes their situation during the operation of the terrorist organization "Islamic State" and in the aftermath. It is clear that the situation of these communities deteriorated significantly with the advance of ISIS, but it is also clear that even after the expulsion of the militant group, we cannot necessarily report an improvement in this area, especially in terms of the ability of minorities to assert their interests and ensure their physical security. The emergence of armed militias involved in the expulsion of ISIS sometimes has extremely negative effects on them.
Until the interests of non-Muslims are adequately represented at the local level and in the legislature, most experts and advocates believe that little will change in their situation. When conflicts flare, many Iraqi citizens leave Iraq, but because of the poor public security situation (threats from terrorist organizations, interventions by other countries), emigration causes serious problems not only outside the country's borders but also within the country. Another ominous implication for minorities is that their communities are very slow, if ever, to return to their homes after fleeing their own settlements. An important finding of the study is that this phenomenon is changing the face of traditional Iraqi settlements and threatens to plunge groups that have lived there for thousands of years into the abyss of history. Therefore, we not only present the current situation of minorities living in Iraq but also discuss their future prospects. Keywords: Middle East, Iraq, Islamic State, minorities
Procedia PDF Downloads 86
15709 Evaluating Effectiveness of Training and Development Corporate Programs: The Russian Agribusiness Context
Authors: Ekaterina Tikhonova
Abstract:
This research aims to evaluate the effectiveness of T&D (training and development) using the example of two T&D programs for executive top management run in 2012 and 2015-2016 at Komos Group. The study researches the effectiveness of two similar corporate T&D programs (within one company) in two periods (2012, 2015-2016) by evaluating them with Kirkpatrick's four-level model of T&D program evaluation and by calculating ROI as a measurement instrument using Phillips' formula. The research investigates the correlation of two figures: the calculated ROI and its percentage rating on Wagle's implementation scale. The study includes an assessment of 360-degree feedback (Kirkpatrick's model) and Phillips' ROI methodology, which provides a step-by-step process for collecting, summarizing, and processing the data. The data is collected from the company's accounting records, the HR budgets, MCFO, and the company's annual reports for the research periods. All analyzed data and reports are organized and presented in the form of tables, charts, and graphs. The paper also gives a brief description of some constraints of the research. After the ROI calculation, the study reveals that ROI falls within the average implementation range (65% to 75%) on Wagle's scale, which can be considered a positive outcome. The paper also gives recommendations on how to use ROI in practice and describes the main benefits of ROI implementation. Keywords: ROI, organizational performance, efficacy of T&D program, employee performance
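The ROI calculation referenced above follows Phillips' standard formula: ROI (%) equals net program benefits divided by program costs, times 100. The sketch below pairs it with a reading of the 65-75% "average implementation" band mentioned in the abstract; the band thresholds and the `wagle_band` helper are assumptions for illustration, since Wagle's full scale is not given in the source.

```python
def phillips_roi(net_benefits, costs):
    """Phillips' ROI formula: ROI (%) = (net program benefits / program costs) * 100."""
    return net_benefits / costs * 100

def wagle_band(roi_percent):
    """Hypothetical banding based on the abstract's 65-75% 'average implementation'
    range (assumed thresholds, not the published scale)."""
    if roi_percent < 65:
        return "below average"
    if roi_percent <= 75:
        return "average"
    return "above average"

# Example: a program costing 100,000 that yields 70,000 in net benefits.
roi = phillips_roi(70_000, 100_000)   # 70.0 percent
band = wagle_band(roi)                # "average"
```

With these numbers the program lands in the same 65-75% band the study reports, which the authors treat as a positive outcome.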
Procedia PDF Downloads 250
15708 Motion Estimator Architecture with Optimized Number of Processing Elements for High Efficiency Video Coding
Authors: Seongsoo Lee
Abstract:
Motion estimation occupies the heaviest computation in HEVC (High Efficiency Video Coding). Many fast algorithms, such as TZS (test zone search), have been proposed to reduce this computation. Still, the huge computational load of motion estimation is a critical issue in the implementation of an HEVC video codec. In this paper, a motion estimator architecture with an optimized number of PEs (processing elements) is presented by exploiting early termination. It also reduces hardware size by exploiting parallel processing. The presented motion estimator architecture has 8 PEs and can efficiently perform TZS with very high PE utilization. Keywords: motion estimation, test zone search, high efficiency video coding, processing element, optimization
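The early-termination idea can be illustrated with a toy SAD-based block search: once a candidate motion vector reaches a low-enough cost, the remaining candidates are skipped, which is what lets a hardware design get by with fewer PEs. This is a simplified stand-in for TZS (which uses diamond/raster patterns), with hypothetical frame data.

```python
def sad(cur, ref, bx, by, mvx, mvy, bsize=4):
    """Sum of absolute differences between a current block and a displaced reference block."""
    total = 0
    for y in range(bsize):
        for x in range(bsize):
            total += abs(cur[by + y][bx + x] - ref[by + y + mvy][bx + x + mvx])
    return total

def search_with_early_termination(cur, ref, bx, by, search_range=2, early_stop=0):
    """Toy full search with early termination: return as soon as the cost drops
    to `early_stop`, so idle PEs could be reassigned (simplified TZS behavior)."""
    best = (None, float("inf"))
    for mvy in range(-search_range, search_range + 1):
        for mvx in range(-search_range, search_range + 1):
            cost = sad(cur, ref, bx, by, mvx, mvy)
            if cost < best[1]:
                best = ((mvx, mvy), cost)
            if best[1] <= early_stop:
                return best   # early termination: skip remaining candidates
    return best

# Hypothetical 10x10 frames: the current frame is the reference shifted left by one pixel,
# so the true motion vector for any interior block is (1, 0).
ref = [[x * 10 + y for x in range(10)] for y in range(10)]
cur = [[ref[y][x + 1] if x < 9 else 0 for x in range(10)] for y in range(10)]
best_mv, best_cost = search_with_early_termination(cur, ref, 4, 4)
```

The search finds the zero-cost vector (1, 0) and stops immediately instead of scanning the rest of the window.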
Procedia PDF Downloads 363
15707 Impact of Mass Customization for 3D Geographic Information Systems under Turbulent Environments
Authors: Abdo Shabah
Abstract:
Mass customization aims to produce customized goods (allowing economies of scope) at lower cost (to achieve economies of scale) using multiple strategies (modularization and postponement). Through a simulation experiment on organizations under a turbulent environment, we compare standardization with mass customization of services and assess the impact of different forms of mass customization (early and late postponement) on performance, quality, and consumer satisfaction in the use of a modular dynamic 3D Geographic Information System. Our hypothesis is that mass customization performs better and achieves better quality in a turbulent environment than standardization, but only when using early postponement strategies. Using a mixed-methods study, we try to confirm our hypothesis. Keywords: mass customization, postponement, experiment, performance, quality, satisfaction, 3D GIS
Procedia PDF Downloads 453
15706 Adaptive Data Approximations Codec (ADAC) for AI/ML-based Cyber-Physical Systems
Authors: Yong-Kyu Jung
Abstract:
The fast growth in information technology has led to demands to access and process data. CPSs (cyber-physical systems) heavily depend on the timing of hardware/software operations and communication over the network (i.e., real-time/parallel operations in CPSs, e.g., autonomous vehicles). Data processing is an important means to overcome the issues confronting data management by reducing the gap between technological growth on one side and data complexity and channel bandwidth on the other. An adaptive perpetual data approximation method is introduced to manage the actual entropy of the digital spectrum. An ADAC, implemented as an accelerator and/or as apps for servers and smart-connected devices, adaptively rescales digital content (by 62.8% on average) and reduces data processing/access time and energy as well as encryption/decryption overheads in AI/ML applications (e.g., facial ID/recognition). Keywords: adaptive codec, AI, ML, HPC, cyber-physical, cybersecurity
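The entropy-driven adaptation idea can be sketched as: estimate the Shannon entropy of a data block and approximate low-entropy (predictable) blocks more coarsely. This is an illustrative sketch of the general technique, not the actual ADAC design; the step-size mapping is an assumption.

```python
import math
from collections import Counter

def entropy_bits(data):
    """Shannon entropy (bits per symbol) of a byte sequence."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def adaptive_rescale(data, max_entropy=8.0):
    """Illustrative adaptive approximation (not the paper's codec): low-entropy
    blocks are quantized with a coarser step, trading precision for
    storage/bandwidth on predictable content. Returns (approximated, step)."""
    h = entropy_bits(data)
    # Hypothetical mapping: step grows as entropy falls below the 8-bit maximum.
    step = max(1, round(2 ** (max_entropy - h) / 32))
    return bytes((b // step) * step for b in data), step

# A constant (zero-entropy) block gets a coarse step; a maximally varied block
# passes through unchanged.
_, coarse_step = adaptive_rescale(bytes([7]) * 100)
passthrough, fine_step = adaptive_rescale(bytes(range(256)))
```

The design choice here, deriving the quantization step from measured entropy, mirrors the abstract's goal of managing "the actual entropy of the digital spectrum" while leaving high-entropy content intact.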
Procedia PDF Downloads 78
15705 The Need for Automation in the Domestic Food Processing Sector and its Impact
Authors: Shantam Gupta
Abstract:
The objective of this study is to address the critical need for automation in the domestic food processing sector and to study its impact. Food is one of the most basic physiological needs, essential for the survival of a living being. Some organisms have the capacity to prepare their own food (like most plants) and hence are designated primary food producers; those who depend on these primary producers for food form the class of primary consumers (herbivores). Organisms relying on the primary consumers are the secondary consumers (carnivores). There is a third class of consumers, called tertiary or apex consumers, that feed on both primary and secondary consumers. Humans are among the apex predators and are generally at the top of the food chain. Yet a closer look at the food habits of the modern human, i.e., Homo sapiens, reveals that humans depend on other individuals for preparing their food. The old notion of eating raw food is long gone, and food processing has become deeply entrenched in the lives of modern humans. This has led to an increased dependence on other individuals for 'processing' food before it can actually be consumed, and thus to a further shift in humans' place in the consumer classification of the food chain. The effects of these shifts are systematically investigated in this paper. The processing of food has a direct impact on the economy of the individual (consumer), and most individuals depend on other processing individuals for the preparation of their food. This dependency establishes a vital link in the food web which, when altered, can adversely affect the food web and have dire consequences for the health of the individual. This study investigates the challenges arising from this dependency and the impact of food processing on the economy of the individual.
A comparison of industrial food processing and processing on domestic platforms (households and restaurants) has been made to give an idea of the present state of automation in the food processing sector. A great deal of time and energy is also consumed in processing food at home for consumption, and the high frequency of meals (more than twice a day) makes it even more laborious. Through this study, a pressing need for the development of an automatic cooking machine is proposed, with the mission of reducing the interdependency and human effort required for the preparation of food (by automating the food preparation process) and making individuals more self-reliant. The impact of developing this product is also discussed in depth. Assumption used: the individuals who process food also consume the food that they produce (they are also termed 'independent' or 'self-reliant' modern human beings). Keywords: automation, food processing, impact on economy, processing individual
Procedia PDF Downloads 470
15704 Towards Logical Inference for the Arabic Question-Answering
Authors: Wided Bakari, Patrice Bellot, Omar Trigui, Mahmoud Neji
Abstract:
This article constitutes an opening toward modeling and analyzing Arabic texts in the context of a question-answering system, going beyond traditional approaches centered on morphosyntactic analysis. We present a new approach that analyzes a text in order to extract correct answers and then transforms them into logical predicates. In addition, we represent different levels of information within a text in order to answer a question and choose one answer among several proposed. To do so, we transform both the question and the text into logical forms and then try to recognize all entailments between them. The result of recognizing the entailment is a set of text sentences that can entail the user's question. Our work is now concentrated on an implementation step in order to develop an Arabic question-answering system using techniques for recognizing textual entailment. In this context, the extraction of text features (keywords, named entities, and the relationships that link them) is considered the first step in our text modeling process. The second is the use of textual entailment techniques relying on the notions of inference and logical representation to extract candidate answers. The last step is the extraction and selection of the desired answer. Keywords: NLP, Arabic language, question-answering, recognizing textual entailment, logical forms
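The transformation into logical forms and the entailment check can be sketched in miniature: extracted (entity, relation, entity) features become predicate strings, and a text entails a question's logical form when every question predicate appears among the text's predicates. This is a naive stand-in for the authors' inference machinery, with hypothetical transliterated facts.

```python
# Hypothetical logical forms: each sentence or question becomes a set of
# predicate(arg1,arg2) strings after feature extraction (entities + relations).
def to_predicates(triples):
    return {f"{rel}({a},{b})" for a, rel, b in triples}

text_facts = to_predicates([
    ("IbnBattuta", "born_in", "Tangier"),
    ("IbnBattuta", "profession", "explorer"),
])
question = to_predicates([("IbnBattuta", "born_in", "Tangier")])

def entails(facts, hypothesis):
    """Naive entailment: the text entails the question's logical form when every
    hypothesis predicate appears among the text's predicates (set inclusion)."""
    return hypothesis <= facts

answer_supported = entails(text_facts, question)
```

A candidate answer whose logical form is not covered by the text's predicates (say, a different birthplace) would be rejected at this step, leaving only supported answers for final selection.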
Procedia PDF Downloads 342