Search results for: key information documents
11372 Analysis of State Documents on Environmental Awareness Aspects in Kazakhstan
Authors: Y. A. Kumar
Abstract:
Environmental awareness issues in Kazakhstan are among the most underexamined topics, both in the public community and in state rhetoric. In the context of official state documents, so far only two have been introduced in the country, both in 2021: the Environmental Code and the national program Zhasyl Kazakhstan. While the Environmental Code was introduced to modernize, frame, and enlist the main legislative aspects of the various sectors of environmental law in Kazakhstan, the Zhasyl Kazakhstan program has been implemented as a state program that addresses, through numerous environmental projects, issues ranging from air pollution to waste management, as well as aspects related to ecological education and low environmental awareness. In this regard, the main goal of this paper is to critically analyze the main content of both documents, with a particular focus on sections related to environmental awareness-raising. To that end, the paper applies a subjective content analysis in order to identify insights on regulatory legal aspects, future research streams, and improved legislative frameworks in the context of environmental awareness. In addition, five open-ended questions were sent to the Ministry of Ecology, Geology and Natural Resources to obtain primary data on the state's view of previous, current, and future aspects of environmental awareness issues in the country.
Keywords: Kazakhstan, environmental awareness, environmental code, Zhasyl Kazakhstan, content analysis
Procedia PDF Downloads 95
11371 A Transformer-Based Question Answering Framework for Software Contract Risk Assessment
Authors: Qisheng Hu, Jianglei Han, Yue Yang, My Hoa Ha
Abstract:
When a company is considering purchasing software for commercial use, contract risk assessment is critical to identify risks and mitigate the potential adverse business impact, e.g., security, financial, and regulatory risks. Contract risk assessment requires reviewers with specialized knowledge and time to evaluate the legal documents manually. Specifically, validating contracts for a software vendor requires the following steps: manual screening, interpreting legal documents, and extracting risk-prone segments. To automate the process, we propose a framework to assist legal contract risk identification, leveraging pre-trained deep learning models and natural language processing techniques. Given a set of pre-defined risk evaluation problems, our framework utilizes pre-trained transformer-based question-answering models to identify risk-prone sections in a contract. The question-answering model encodes the concatenated question-contract text and predicts the start and end positions for clause extraction. Due to the limited labelled data available for training, we leveraged transfer learning by fine-tuning the models on the CUAD dataset. On a dataset comprising 287 contract documents and 2,000 labelled samples, our best model achieved an F1 score of 0.687.
Keywords: contract risk assessment, NLP, transfer learning, question answering
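A minimal sketch of the extractive question-answering step described above, using the Hugging Face transformers library. The checkpoint name, sample clause, and sample question are illustrative stand-ins, not the authors' CUAD-fine-tuned model.

```python
from transformers import pipeline

# Illustrative public checkpoint; the paper fine-tunes its own models on CUAD.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

contract_text = (
    "The Vendor shall not be liable for any indirect or consequential "
    "damages arising out of the use of the Software."
)

# A pre-defined risk evaluation problem phrased as a question.
result = qa(
    question="What limitations of liability does the contract impose?",
    context=contract_text,
)

# The model predicts start/end positions of the risk-prone clause.
print(result["answer"], result["start"], result["end"], result["score"])
```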
Procedia PDF Downloads 130
11370 Methodology of Automation and Supervisory Control and Data Acquisition for Restructuring Industrial Systems
Authors: Lakhoua Najeh
Abstract:
Introduction: In most situations, an existing industrial system, conditioned by its history, culture, and context, faces difficulty when it must restructure itself in an organizational and technological environment in perpetual evolution. This is why any restructuring operation first requires a diagnosis based on a functional analysis. After a presentation of the functionality of a supervisory system for complex processes, we present the concepts of industrial automation and supervisory control and data acquisition (SCADA). Methods: This global analysis exploits the various available documents on the one hand and, on the other hand, takes into consideration testimonies gathered through investigations, interviews, and collective workshops; it also builds on observations made through site visits and specific operations. The exploitation of this diagnosis then enables us to elaborate the restructuring project. Starting from system analysis for the restructuring of industrial systems, and after a technical diagnosis based on visits, an analysis of the various technical and management documents, and targeted interviews, a focus detailing the various levels of analysis was carried out according to a general methodology. Results: The methodology adopted to contribute to the restructuring of industrial systems, participative and systemic in character and drawing on a broad consultation of human as well as documentary resources, led to the proposal of various innovative actions. These actions fall within a TQM approach requiring the quantification of applicable parameters and a treatment that valorises information. The new management environment will enable us to institute an information and communication system with the possibility of migration toward an ERP system. Conclusion: Technological advancements in process monitoring, control, and industrial automation over the past decades have contributed greatly to improving the productivity of virtually all industrial systems throughout the world. This paper seeks to identify the principal characteristics of process monitoring, control, and industrial automation in order to provide tools that help in the decision-making process.
Keywords: automation, supervision, SCADA, TQM
Procedia PDF Downloads 179
11369 Improving the Performance of Requisition Document Online System for Royal Thai Army by Using Time Series Model
Authors: D. Prangchumpol
Abstract:
This research presents a method for forecasting requisition document demands for military units by using exponential smoothing methods to analyze the data. The data used in the forecast are actual requisition document records of The Adjutant General Department. The results of the forecasting model show that the Holt–Winters trend and seasonality method with α=0.1, β=0, γ=0 is appropriate and matches the requisition of documents. In addition, the researcher has developed a requisition online system to improve the performance of requisition document handling for The Adjutant General Department, while also ensuring that the operation can be checked.
Keywords: requisition, holt–winters, time series, royal thai army
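A minimal sketch of fitting Holt-Winters with the stated smoothing constants (α=0.1, β=0, γ=0) using statsmodels. The monthly demand series below is synthetic, standing in for the Adjutant General Department's actual requisition data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly requisition counts (illustrative only).
idx = pd.date_range("2015-01-01", periods=48, freq="MS")
demand = pd.Series(
    100 + np.arange(48) + 10 * np.sin(np.arange(48) * 2 * np.pi / 12),
    index=idx,
)

model = ExponentialSmoothing(demand, trend="add", seasonal="add",
                             seasonal_periods=12)
# Fix the smoothing constants at the values the paper reports.
fit = model.fit(smoothing_level=0.1, smoothing_trend=0.0,
                smoothing_seasonal=0.0, optimized=False)

print(fit.forecast(6))  # demand forecast for the next six months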
Procedia PDF Downloads 308
11368 Early Childhood Education: Teachers Ability to Assess
Authors: Ade Dwi Utami
Abstract:
Pedagogic competence is the basic competence teachers need to perform their tasks as educators. The ability to assess has become one of the demands within teachers' pedagogic competence. Teachers' ability to assess is related to curriculum instructions and applications. This research is aimed at obtaining data concerning teachers' ability to assess, comprising understanding assessment; determining assessment type, tools, and procedure; conducting the assessment process; and using assessment result information. It uses a mixed-method explanatory technique in which qualitative data is used to verify the quantitative data obtained through a survey. The quantitative data were collected by test, whereas the qualitative data were collected by observation, interview, and documentation. The analyzed data were then processed through a proportion study technique and categorized into high, medium, and low. The result of the research shows that teachers' ability to assess can be grouped into three levels: 2% high, 4% medium, and 94% low. The data show that teachers' ability to assess is still relatively low. Teachers lack knowledge and comprehension in assessment application. This statement is verified by the qualitative data, which show that teachers did not state which aspect was assessed in learning, did not record children's behavior, and did not use the resulting data as a consideration in designing a program. Teachers have assessment documents, yet these only serve as a means of completing administrative requirements for the certification program. Thus, assessment documents were not used on the basis of acquired knowledge. This condition should prompt teacher education institutions and the government to improve teachers' pedagogic competence, including the ability to assess.
Keywords: assessment, early childhood education, pedagogic competence, teachers
Procedia PDF Downloads 246
11367 Slovenian Spatial Legislation over Time and Its Issues
Authors: Andreja Benko
Abstract:
This article presents a short overview of the architects' profession over time, with an outline of the work of architectural theoreticians. It then describes the former affiliations of Slovenia, as well as the spatial planning documents that were in use until Slovenia joined Yugoslavia (the last part in 1919). This legislation from the former Austro-Hungarian monarchy remained valid almost until 1950, and in some parts of Yugoslavia even longer. Some currently valid Slovenian spatial documents are then discussed and compared with German legislation. The number of architects and spatial planners in Slovenia is analysed, as well as their number per region. On that basis, figures from the Statistical Office of Slovenia on the number of buildings constructed between 2007 and 2012 are given, and the collapse of the major construction companies in Slovenia and its consequences are described. Finally, the article outlines morality and ethics in spatial interventions, the lack of an architectural law in Slovenia, and the problem of minimal collaboration between the Ministry of Infrastructure and Spatial Planning and the profession.
Keywords: architect, history, legislation, Slovenia
Procedia PDF Downloads 360
11366 Biofilm Text Classifiers Developed Using Natural Language Processing and Unsupervised Learning Approach
Authors: Kanika Gupta, Ashok Kumar
Abstract:
Biofilms are dense, highly hydrated cell clusters that are irreversibly attached to a substratum, to an interface, or to each other, and are embedded in a self-produced gelatinous matrix composed of extracellular polymeric substances. Research in the biofilm field has become very significant, as biofilms show high mechanical resilience and resistance to antibiotic treatment and constitute a significant problem in both healthcare and other industries related to microorganisms. The massive information, both stated and hidden, in the biofilm literature is growing exponentially; it is therefore not possible for researchers and practitioners to manually extract and relate information from the different written resources. The current work therefore proposes and discusses the use of text mining techniques for the extraction of information from a biofilm literature corpus containing 34,306 documents. It is very difficult and expensive to obtain annotated material for biomedical literature, as the literature is unstructured, i.e., free text. We therefore considered an unsupervised approach, where no annotated training data is necessary, and used it to develop a system that classifies text on the basis of growth and development, drug effects, radiation effects, and the classification and physiology of biofilms. For this, a two-step structure was used: the first step extracts keywords from the biofilm literature using a metathesaurus and standard natural language processing tools such as Rapid Miner_v5.3, and the second step discovers relations between the genes extracted from the whole set of biofilm literature using pubmed.mineR_v1.0.11. We applied this unsupervised approach, the machine learning task of inferring a function to describe hidden structure from 'unlabeled' data, to the extracted datasets to develop classifiers, using the WinPython-64 bit_v3.5.4.0Qt5 and R studio_v0.99.467 packages, which automatically classify the text using the mentioned sets. The developed classifiers were tested on a large dataset of biofilm literature, which showed that the proposed unsupervised approach is promising as well as suited for semi-automatic labeling of the extracted relations. The entire information was stored in a relational database hosted locally on the server. The generated biofilm vocabulary and gene relations will be significant for researchers dealing with biofilm research, making their search easy and efficient, as the keywords and genes can be directly mapped to the documents used for database development.
Keywords: biofilms literature, classifiers development, text mining, unsupervised learning approach, unstructured data, relational database
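The paper's pipeline uses Rapid Miner and pubmed.mineR; the sketch below only illustrates the general unsupervised idea (TF-IDF vectors plus k-means clustering) on a toy corpus, with one cluster per category named in the abstract.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy stand-ins for biofilm abstracts (the real corpus has 34,306 documents).
docs = [
    "Biofilm growth and development on catheter surfaces",
    "Effect of antibiotic drugs on biofilm viability",
    "Gamma radiation effects on biofilm matrix integrity",
    "Physiology and classification of Pseudomonas biofilms",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)

# One cluster per target category: growth and development, drug effects,
# radiation effects, classification/physiology.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(vectors)
print(km.labels_)  # unsupervised cluster assignment per document
```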
Procedia PDF Downloads 172
11365 Evaluation of Environmental, Social, and Governance Factors by U.S. Tolling Authorities in Bond Issuance Disclosures
Authors: Nicolas D. Norboge
Abstract:
Purchasers of municipal bonds in primary and secondary markets are increasingly expecting issuers to disclose environmental, social, and governance (ESG) factors in issuance and continuing disclosure documents. U.S. tolling authorities are slowly catching up with other transportation sectors, such as public transit, in integrating ESG factors into their bond disclosure documents. A systematic mixed-methods evaluation of publicly available bond disclosure documents from 2010-2022 suggests that only a small number of U.S. tolling authorities disclosed all ESG factors; however, the pace accelerated significantly from 2020-2022. Because many tolling authorities have a direct financial stake in the growth of passenger vehicle miles traveled on their toll facilities, and in turn the burning of more climate-warming fossil fuels, one crucial question that remains is how bond purchasers will view increased ESG transparency. Recent moves by large institutional investors, credit rating agencies, and regulators suggest an expectation of ESG disclosure is a trend likely to endure. This research suggests tolling authorities will need to proactively consider these emerging trends and carefully adapt their disclosure practices where possible. Building on these findings, this research also provides a basic sketch framework for how issuers can responsibly position themselves within the changing global municipal debt marketplace.
Keywords: debt policy, ESG, municipal bonds, public-private partnerships, public tolling authorities, transportation finance and policy
Procedia PDF Downloads 180
11364 Secure Text Steganography for Microsoft Word Document
Authors: Khan Farhan Rafat, M. Junaid Hussain
Abstract:
Seamless modification of an entity for the purpose of hiding a message of significance inside its substance, in a manner that the embedding remains oblivious to an observer, is known as steganography. Together with today's pervasive computing systems, steganography has developed into a science that offers an assortment of techniques for covert communication across the globe, which, however, require critical appraisal from a security-breach standpoint. Microsoft Word is among the most widely used word processing software, distributed as part of the Microsoft Office suite. With a user-friendly graphical interface and rich text editing and formatting features, the documents produced through this software are also well suited for stealth communication. This research aims not only to epitomize the fundamental concepts of steganography but also to expound on the utilization of a Microsoft Word document as a carrier for covert message exchange. The effort is to examine contemporary message hiding schemes from a security perspective, so as to present the exploratory findings and suggest enhancements that may serve as a wellspring of information to encourage such futuristic research endeavors.
Keywords: hiding information in plain sight, stealth communication, oblivious information exchange, conceal, steganography
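For illustration, one generic text-steganography scheme of the kind surveyed here (not one of the specific schemes the paper reviews) encodes bits as zero-width Unicode characters, which Word preserves invisibly inside document text:

```python
ZWSP, ZWNJ = "\u200b", "\u200c"  # zero-width space = bit 0, non-joiner = bit 1

def embed(cover: str, secret: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in secret)
    # Invisible payload appended to the visible cover text.
    return cover + "".join(ZWNJ if b == "1" else ZWSP for b in bits)

def extract(stego: str) -> bytes:
    bits = "".join("1" if ch == ZWNJ else "0"
                   for ch in stego if ch in (ZWSP, ZWNJ))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

stego = embed("Quarterly report attached for review.", b"meet at 9")
assert extract(stego) == b"meet at 9"
```

Schemes like this are exactly what a security appraisal must consider: the payload is trivial to detect once an observer scans for zero-width characters.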
Procedia PDF Downloads 243
11363 Quantitative Method of Measurement for the Rights and Obligations of Contracting Parties in Standard Forms of Contract in Malaysia: A Case Study
Authors: Sim Nee Ting, Lan Eng Ng
Abstract:
Standard forms of contract in Malaysia are pre-written, printed contractual documents drafted by recognised authoritative bodies in order to describe the rights and obligations of the contracting parties in construction projects in Malaysia. Studies and form revisions are usually conducted in a relatively ad hoc and qualitative manner, yet the search for an ideal contractual document remains. It is not clear how such qualitative findings can help improve and redraft contractual documents. This study aims to quantitatively and systematically analyse and evaluate the rights and obligations of the contracting parties as stated in the standard forms of contract. The Institution of Engineers Malaysia (IEM) published a new standard form of contract in 2012 with a total of 63 clauses, but the improvements and changes in the newly revised form are yet to be analysed. The IEM form is used as the case study for this research. Every clause in the form was interpreted and analysed with respect to the parties involved, including the contractor, the engineer, and the employer. Modified from the Matrix Method and the Likert Scale, the analysis was conducted on a scale from 0 to 1 with five ratings, namely 'Very Unbalance', 'Unbalance', 'Balance', 'Good Balance', and 'Very Good Balance'. It is hoped that this quantitative method of form study can be used for future form revisions and the drafting of any new forms, so as to reduce subjectivity in studies of standard forms of contract.
Keywords: contracting parties, Malaysia, obligations, quantitative measurement, rights, standard form of contract
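A small sketch of the rating scheme: the abstract gives the 0-1 scale and the five labels but not the cut-offs, so the equal-width bins below are an illustrative assumption.

```python
def balance_rating(score: float) -> str:
    """Map a 0-1 clause score to one of the paper's five ratings.
    The equal-width cut-offs are an assumption for illustration."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must lie in [0, 1]")
    labels = ["Very Unbalance", "Unbalance", "Balance",
              "Good Balance", "Very Good Balance"]
    return labels[min(int(score * 5), 4)]

print(balance_rating(0.55))  # -> "Balance"
```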
Procedia PDF Downloads 266
11362 Multi-source Question Answering Framework Using Transformers for Attribute Extraction
Authors: Prashanth Pillai, Purnaprajna Mangsuli
Abstract:
Oil exploration and production companies invest considerable time and effort to extract essential well attributes (such as well status, surface and target coordinates, wellbore depths, and event timelines) from unstructured data sources like technical reports, which are often non-standardized, multimodal, and highly domain-specific in nature. It is also important to consider the context when extracting attribute values from reports that contain information on multiple wells/wellbores. Moreover, semantically similar information may often be depicted in different data syntax representations across multiple pages and document sources. We propose a hierarchical multi-source fact extraction workflow based on a deep learning framework to extract essential well attributes at scale. An information retrieval module based on the transformer architecture is used to rank relevant pages in a document source, utilizing page image embeddings and semantic text embeddings. A question answering framework utilizing the LayoutLM transformer is then used to extract attribute-value pairs, incorporating the text semantics and layout information from the top relevant pages in a document. To better handle context while dealing with multi-well reports, we incorporate a dynamic query generation module to resolve ambiguities. The attribute information extracted from various pages and documents is standardized to a common representation using a parser module to facilitate information comparison and aggregation. Finally, we use a probabilistic approach to fuse information extracted from multiple sources into a coherent well record. The applicability of the proposed approach and its performance were studied on several real-life well technical reports.
Keywords: natural language processing, deep learning, transformers, information retrieval
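A minimal sketch of the semantic text-embedding half of the page-ranking module, using sentence-transformers; the paper additionally combines page-image embeddings and feeds the top pages to LayoutLM, both omitted here, and the checkpoint and sample texts are assumptions.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative checkpoint

query = "What is the target coordinate of wellbore A-12?"
pages = [
    "Wellbore A-12 target coordinates: 58.43N, 2.15E at 3,450 m TVD.",
    "Daily drilling summary: mud weight adjusted, no incidents.",
    "Well status report: A-12 suspended pending workover.",
]

# Rank pages by cosine similarity between query and page embeddings.
scores = util.cos_sim(model.encode(query), model.encode(pages))[0]
ranked = sorted(zip(pages, scores.tolist()), key=lambda p: -p[1])
print(ranked[0])  # top-ranked page would be passed on to the QA model
```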
Procedia PDF Downloads 193
11361 Using Eye-Tracking Technology to Understand Consumers' Comprehension of Multimedia Health Information
Authors: Samiullah Paracha, Sania Jehanzeb, M. H. Gharanai, A. R. Ahmadi, H.Sokout, Toshiro Takahara
Abstract:
The purpose of this study is to examine how health consumers utilize pictures when developing an understanding of multimedia health documents, and whether attentional processes, measured by eye-tracking, relate to differences in health-related cognitive resources and passage comprehension. To investigate these issues, we will present health-related text-picture passages to elders and collect eye movement data to measure readers' looking behaviors.
Keywords: multimedia, eye-tracking, consumer health informatics, human-computer interaction
Procedia PDF Downloads 341
11360 Human Capital Discourse and Higher Education Policy
Authors: Tien-Hui Chiang
Abstract:
Human capital discourse encourages many countries to expand the capacity of higher education institutions (HEIs). Along with this expansion, the higher education system is redefined as a free market and in turn is privatized and commercialized. However, the state's role in education is to balance social justice and capital accumulation. This role is further regulated by a specific form of neoliberalism constituted by social contexts. These correlations call for exploring the influence of human capital discourse on interwoven issues, such as the state's role in education, higher education policy, and employability. Method: According to the perspective of neoliberal governmentality, answers to the research questions above are likely to be embedded within discourses in documents related to higher education policies. Consequently, this study adopts a qualitative approach, analyzing official documents including government reports, official statistics, circulars, and official statements. Documents were collected and subjected to content analysis, with a particular focus on the period from 2005 to 2021. The technique of content analysis was applied to decode keywords and core concepts in these documents. Findings: Neoliberalism is exerted through human capital discourse in China, particularly in the changes in higher education policies moving from quantitative expansion to quality control via employment or employability. Such changes highlight that the principle of 'n'eoliberalism is more suitable for illustrating the practice of free market logic in different social contexts. The modifications of neoliberalism adopted by the Chinese government reflect that the state's mission is to secure social security and the common good, so that public managerialism, in the form of programs for employment, internship, and entrepreneurship, is adopted in the name of the public interest and the collective mission. Public managerialism is now targeted not only at social institutions but at the population more generally, incarnated here by college graduates. Its practice is not only to renovate organizational cultures but to activate people's commitment to national development.
Keywords: employability, higher education expansion, neoliberalism, human capital discourse
Procedia PDF Downloads 79
11359 A Temporal QoS Ontology For ERTMS/ETCS
Authors: Marc Sango, Olimpia Hoinaru, Christophe Gransart, Laurence Duchien
Abstract:
Ontologies offer a means for representing and sharing information in many domains, particularly complex ones. For example, they can be used for representing and sharing the information of a System Requirement Specification (SRS) of a complex system, such as the SRS of ERTMS/ETCS written in natural language. Since this is a real-time and critical system, generic ontologies, such as OWL ontologies and generic ERTMS ontologies, provide minimal support for modeling the temporal information omnipresent in these SRS documents. To support the modeling of temporal information, one of the challenges is to enable the representation of dynamic features evolving in time within a generic ontology with minimal redesign. Separating temporal information from other information can help to predict system runtime operation and to properly design and implement it. In addition, it is helpful to provide reasoning and querying techniques over the temporal information represented in the ontology in order to detect potential temporal inconsistencies. Indeed, a user operation, such as adding a new constraint to existing planning constraints, can cause temporal inconsistencies, which can lead to system failures. To address this challenge, we propose a lightweight 3-layer temporal Quality of Service (QoS) ontology for representing, reasoning over, and querying temporal and non-temporal information in a complex domain ontology. Representing QoS entities in separate layers clarifies the distinction between the non-QoS entities and the QoS entities in an ontology. The upper, generic layer of the proposed ontology provides an intuitive knowledge of domain components, especially ERTMS/ETCS components. Separating the intermediate QoS layer from the lower QoS layer allows us to focus on specific QoS characteristics, such as temporal or integrity characteristics. In this paper, we focus on temporal information that can be used to predict system runtime operation. To evaluate our approach, an example of the proposed domain ontology for the handover operation is given, as well as a reasoning rule over temporal relations in this domain-specific ontology.
Keywords: system requirement specification, ERTMS/ETCS, temporal ontologies, domain ontologies
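To give a flavour of what querying for a temporal inconsistency can look like, here is a hedged sketch using rdflib and SPARQL; the file name, namespace, and property names are hypothetical, not the vocabulary of the ontology proposed in the paper.

```python
from rdflib import Graph

g = Graph()
g.parse("ertms_qos_ontology.owl", format="xml")  # hypothetical file

# Flag pairs of operations with contradictory temporal constraints,
# e.g. each declared to precede the other (property names assumed).
query = """
PREFIX qos: <http://example.org/ertms/qos#>
SELECT ?a ?b WHERE {
    ?a qos:precedes ?b .
    ?b qos:precedes ?a .
}
"""
for a, b in g.query(query):
    print(f"Temporal inconsistency between {a} and {b}")
```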
Procedia PDF Downloads 422
11358 Building Climate Resilience in the Health Sector in Developing Countries: Experience from Tanzania
Authors: Hussein Lujuo Mohamed
Abstract:
Introduction: Public health has always been influenced by climate and weather. Changes in climate and climate variability, particularly changes in weather extremes, affect the environment that provides people with clean air, food, water, shelter, and security. Tanzania is no exception to the threats of climate change. The health sector is most affected due to the emergence and proliferation of infectious diseases, which affect the health of the population and thus the achievement of the Sustainable Development Goals. Methodology: A desk review of documented issues pertaining to climate change and health in Tanzania was done using the Google search engine. Keywords included climate change, link, health, and climate initiatives. In cases where information was not available, documents from the Ministry of Health, the Vice President's Office (Environment), Local Government Authorities, the Ministry of Water, WHO, and research and training institutions were reviewed. The reviewed documents from these institutions include policy brief papers, fieldwork activity reports, training manuals, and guidelines. Results: Six main climate resilience activities were identified in Tanzania. These were the development and implementation of climate-resilient water safety plan guidelines for both rural and urban water authorities; capacity building of rural and urban water authorities on the implementation of climate-resilient water safety plans; and capacity strengthening of local environmental health practitioners on mainstreaming climate change and health into comprehensive council health plans. Others were vulnerability and adaptation assessment for the health sector, mainstreaming climate change in the National Health Policy, and development of a risk communication strategy on climate. In addition, information, education, and communication materials on climate change were developed to sensitize and create awareness among communities on climate change issues and their effects on public health. Conclusion: Proper implementation of these interventions will help the country become resilient to many impacts of climate change in the health sector and become a good example for other least developed countries.
Keywords: climate, change, Tanzania, health
Procedia PDF Downloads 121
11357 Using Optical Character Recognition to Manage the Unstructured Disaster Data into Smart Disaster Management System
Authors: Dong Seop Lee, Byung Sik Kim
Abstract:
In the 4th Industrial Revolution, various intelligent technologies have been developed in many fields. These artificial intelligence technologies are applied in various services, including disaster management. Disaster information management does not just support disaster work; it is also the foundation of smart disaster management, which draws on historical disaster information using artificial intelligence technology. Disaster information is one of the important elements of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle from occurrence through progress, response, and planning. However, information about status control, response, recovery from natural and social disaster events, etc. is mainly managed in structured and unstructured reports, which exist as handouts or hard copies. Such unstructured data is often lost or destroyed due to inefficient management, so managing unstructured data is necessary for disaster information. In this paper, an Optical Character Recognition (OCR) approach is used to convert handouts, hard copies, images, and reports, whether printed or generated by scanners, into electronic documents. The converted disaster data is then organized into the disaster code system as disaster information and stored in the disaster database system. Gathering and creating disaster information from unstructured data based on OCR is an important element of smart disaster management. In this work, Korean character recognition was improved to over a 90% recognition rate by using an upgraded OCR; because the recognition rate depends on the fonts, size, and special symbols of the characters, we improved it through a machine learning algorithm. The converted structured data is managed in a standardized disaster information form connected to the disaster code system, which covers the storage and retrieval of structured information across the entire disaster cycle, such as historical disaster progress, damages, response, and recovery. The expected outcome of this research is its application to smart disaster management and decision making by combining artificial intelligence technologies with historical big data.
Keywords: disaster information management, unstructured data, optical character recognition, machine learning
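A minimal sketch of the OCR conversion step using pytesseract; the paper uses an upgraded recognizer improved via machine learning, so stock Tesseract and the file name here are illustrative stand-ins.

```python
from PIL import Image
import pytesseract

# Requires the Tesseract binary plus the Korean language pack (kor).
# File name is illustrative; the input would be a scanned disaster report.
image = Image.open("disaster_report_scan.png")
text = pytesseract.image_to_string(image, lang="kor")

# The recognized text would then be mapped into the disaster code
# system and stored in the disaster database.
print(text)
```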
Procedia PDF Downloads 130
11356 Heritage Management Planning, Stakeholders and Legal Problematic: The Case of the Archeological Site of Jarash in Jordan
Authors: Abdelkader Ababneh
Abstract:
Heritage management planning is increasingly important throughout the international context, particularly in developing countries. Jordan has important and unique heritage resources, owing not only to its natural topography and climate but also to its history and ancient sites. A high number of these archaeological sites are in a very good state of preservation. Most natural sites and resources are privately managed, while archaeological heritage sites are publicly managed within national legal texts, with some reference to international legal documents. This study examines the development of cultural heritage management in Jarash and questions whether this heritage has been managed in an appropriate manner. The purpose of this paper is to define and review the stakeholders in charge of the management of the archaeological site of Jarash, as well as the legal texts, laws, and documents adopted in the site's management. Relations and coordination between stakeholders, and the challenges of the planning process, are also the focus of this paper. A review of pertinent academic and technical studies, reports, and project literature on heritage management planning in general, and on the site of Jarash in particular, coupled with a field study of the site, served as the information base for the study. The current context of actors, the legislative framework, and planning policies and initiatives for the site of Jarash reveal important and continuing challenges for managing the site. Recommendations suggest reviewing and restructuring the entity responsible for the site's management. It is also recommended to review the applied policies and redevelop the legislative framework.
Keywords: heritage management, stakeholders, legal protection, Jarash
Procedia PDF Downloads 379
11355 DURAFILE: A Collaborative Tool for Preserving Digital Media Files
Authors: Santiago Macho, Miquel Montaner, Raivo Ruusalepp, Ferran Candela, Xavier Tarres, Rando Rostok
Abstract:
During our lives, we generate a lot of personal information, such as photos, music, text documents, and videos, that links us with our past. This data, which used to be tangible, is now digital information stored in our computers, which implies a software dependence to make it accessible in the future. Technology, however, constantly evolves and goes through regular shifts, quickly rendering various file formats obsolete. The need for accessing data in the future affects not only personal users but also organizations. In a digital environment, a reliable preservation plan and the ability to adapt to fast-changing technology are essential for maintaining data collections in the long term. We present in this paper the European FP7 project called DURAFILE, which provides the technology to preserve media files for personal users and organizations while maintaining their quality.
Keywords: artificial intelligence, digital preservation, social search, digital preservation plans
Procedia PDF Downloads 445
11354 ArcGIS as a Tool for Infrastructure Documentation and Asset Management: Establishing a GIS for Computer Network Documentation
Authors: John Segars
Abstract:
Built out of a real-world need for better, more detailed asset and infrastructure documentation, this project lays out the case for using the database functionality of ArcGIS as a tool to track and maintain infrastructure location, status, maintenance, and serviceability. Workflows and processes that may be applied to an organization's infrastructure needs will be presented and detailed, allowing it to make use of the robust tools that surround the ArcGIS platform. The end result is a value-added information system framework with a geographic component, e.g., the spatial location of various IT assets, and a detailed set of records that not only documents location but also captures the maintenance history for assets, along with photographs and documentation of these assets as attachments to the numerous feature class items. In addition to the asset location and documentation benefits, staff will be able to log into the devices and pull SNMP (Simple Network Management Protocol) query information from within the user interface. The entire collection of information may be displayed in ArcGIS, via a JavaScript-based web application, or via queries to the back-end database. The project is applicable to all organizations that maintain an IT infrastructure, but it specifically targets post-secondary educational institutions, where access to ESRI resources is generally already available in-house.
Keywords: ESRI, GIS, infrastructure, network documentation, PostgreSQL
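A hedged sketch of how SNMP data could be pulled to populate a GIS record; it shells out to net-snmp's snmpget CLI rather than using ESRI tooling, and the host address and community string are placeholders.

```python
import subprocess

def snmp_sysdescr(host: str, community: str = "public") -> str:
    """Query a device's sysDescr via net-snmp's snmpget CLI."""
    result = subprocess.run(
        ["snmpget", "-v2c", "-c", community, host, "SNMPv2-MIB::sysDescr.0"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# The returned value could be written into the feature class record
# (attribute table) for this switch or access point.
print(snmp_sysdescr("192.0.2.10"))
```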
Procedia PDF Downloads 181
11353 The Challenges of Implementing Building Information Modeling in Small-Medium Enterprises Architecture Firms in Indonesia
Authors: Furry A. Wilis, Dewi Larasati, Suhendri
Abstract:
Around 96% of architecture firms in Indonesia are classified as small-medium enterprises (SMEs). This number shows that SME firms have an important role in the architecture, engineering, and construction (AEC) industry in Indonesia. Some of them are still using a conventional (2D-based) system to arrange construction project documents. This system is fragmented and not fully well-coordinated, and so causes many changes over the whole project cycle. Building information modeling (BIM), a newly developing system in the Indonesian construction industry, is assumed to decrease changes in the project. But BIM has not been fully implemented in the Indonesian AEC industry, especially in SME architecture firms. This article identifies the challenges of implementing BIM in SME architecture firms in Indonesia. Quantitative-explorative research with a questionnaire was chosen to achieve the goal of this article. The scarcity of skilled BIM users, low demand from clients, high investment cost, and the unwillingness of firms to switch to BIM were found to be the main challenges.
Keywords: architecture consultants, BIM, SME, Indonesia
Procedia PDF Downloads 343
11352 A Cross-Sectional Study Assessing Communication Practices among Doctors at a University Hospital in Pakistan
Authors: Muhammad Waqas Baqai, Noman Shahzad, Rehman Alvi
Abstract:
Communication among health caregivers is the essence of quality patient care, and any compromise results in errors and inefficiency, leading to adverse outcomes. The use of smartphones among health professionals has increased tremendously. Almost every health professional carries one, and the majority of them use a third-party communication application called WhatsApp for work-related communication. It gives instant access to the person responsible for any particular query and therefore helps in efficient and timely decision making. It is also an easy way of sharing medical documents and multimedia and provides a platform for consensual decision making through group discussions. However, clinical communication through WhatsApp has some demerits too, including a reduction in verbal communication, worsening professional relations, unprofessional behavior, risk of confidentiality breach, and threats from cyber-attacks. On the other hand, the traditional pager device used in many health care systems is a unidirectional communication tool that lacks the ability to convey any information other than the number to which the receiver has to respond. Our study focused on these two widely used modalities of communication among doctors of the largest tertiary care center of Pakistan, i.e., The Aga Khan University Hospital. Our aim was to note which modality is considered better and poses fewer threats to medical data. Approval from the ethical review committee of the institute was obtained prior to this study. We submitted an online survey form to all the interns and residents working at our institute and collected their responses over a month. 162 submissions were recorded and analyzed using descriptive statistics. Only 20% of them were comfortable with using pagers exclusively, 52% with WhatsApp, and 28% with both. 65% think that WhatsApp is quicker and more time-saving than the pager. 54% consider WhatsApp a source of nuisance from work-related notifications in their off-work hours. 60% think that they are more likely to miss information through the pager system because of its unidirectional nature. Almost all (96%) of the residents and interns found WhatsApp useful for saving information for future reference. For urgent issues, the majority (70%) preferred the pager over WhatsApp, and the pager was also considered more valid in terms of hospital policies and legal issues. Among the major advantages of WhatsApp they listed were easy mass communication, sharing of clinical pictures, universal access, and no need to carry an additional device. However, the major drawback of using WhatsApp for clinical communication that everyone shared was the threat to patients' confidentiality, as clinicians usually share pictures of wounds, clinical documents, etc. Lastly, we asked whether they think there is a need for a separate application dedicated to instant clinical communication only, and 90% responded positively. Therefore, we concluded that both modalities have their merits and demerits, but the greatest drawbacks of WhatsApp are the risk of breaching patients' confidentiality and off-work disturbance. Hence, we recommend a more secure, institute-run application for all intra-hospital communication, where clinicians can share documents, pictures, etc. easily in a controlled environment.
Keywords: WhatsApp, pager, clinical communication, confidentiality
Procedia PDF Downloads 147
11351 Leveraging Natural Language Processing for Legal Artificial Intelligence: A Longformer Approach for Taiwanese Legal Cases
Abstract:
Legal artificial intelligence (LegalAI) has seen increasing applications within legal systems, propelled by advancements in natural language processing (NLP). Compared with general documents, legal case documents are typically long text sequences with intrinsic logical structures, and most existing language models have difficulty understanding the long-distance dependencies between different structures. Another unique challenge is that, while the Judiciary of Taiwan has released legal judgments from various levels of courts over the years, there remains a significant obstacle in the lack of labeled datasets. This deficiency makes it difficult to train models with strong generalization capabilities, as well as to accurately evaluate model performance. To date, models in Taiwan have yet to be specifically trained on judgment data. Given these challenges, this research proposes a Longformer-based pre-trained language model explicitly devised for retrieving similar judgments in Taiwanese legal documents. The model is trained on a self-constructed dataset, which this research has independently labeled to measure judgment similarities, thereby addressing the void left by the lack of an existing labeled dataset for Taiwanese judgments. This research adopts strategies such as early stopping and gradient clipping to prevent overfitting and manage gradient explosion, respectively, thereby enhancing the model's performance. The model is evaluated using both the dataset and the Average Entropy of Offense-charged Clustering (AEOC) metric, which utilizes the notion of similar case scenarios within the same type of legal cases. Our experimental results illustrate the model's significant advancements in handling similarity comparisons within extensive legal judgments. By enabling more efficient retrieval and analysis of legal case documents, our model holds the potential to facilitate legal research, aid legal decision-making, and contribute to the further development of LegalAI in Taiwan.
Keywords: legal artificial intelligence, computation and language, language model, Taiwanese legal cases
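A minimal sketch of scoring judgment similarity with Longformer embeddings via Hugging Face transformers; the paper pre-trains its own model on Taiwanese judgments, so the stock English checkpoint and first-token pooling below are assumptions for illustration.

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "allenai/longformer-base-4096"  # stand-in for the paper's model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def embed(judgment: str) -> torch.Tensor:
    inputs = tokenizer(judgment, truncation=True, max_length=4096,
                       return_tensors="pt")
    with torch.no_grad():
        output = model(**inputs)
    return output.last_hidden_state[:, 0]  # first-token pooling (assumed)

sim = torch.nn.functional.cosine_similarity(embed("judgment text A"),
                                            embed("judgment text B"))
print(float(sim))  # higher = more similar judgments
```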
Procedia PDF Downloads 73
11350 Determine the Optimal Path of Content Adaptation Services with Max Heap Tree
Authors: Shilan Rahmani Azr, Siavash Emtiyaz
Abstract:
Recent developments in computing and communication technologies have made mobile access to information much easier. Users can access information in different places using a variety of devices with widely varying capabilities. Meanwhile, the format and details of electronic documents are changing every day. In these cases, a mismatch is created between the content and the client's capabilities. Recently, service-oriented content adaptation has been developed, in which the adaptation tasks are delegated to extended services. In this method, the main problem is choosing the most appropriate service among the accessible, distributed services. In this paper, a method for determining the optimal path to the best services, based on quality control parameters and user preferences, is proposed using a max heap tree. The efficiency of this method, in contrast to previous content adaptation methods, lies in determining the optimal path through the best services, which is measured. The results show the advantages and progress of this method in comparison with the others.
Keywords: service-oriented content adaptation, QoS, max heap tree, web services
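A minimal sketch of the max-heap selection step in Python. Python's heapq module provides a min-heap, so scores are negated to obtain max-heap ordering; the service names and the assumption that each score already combines QoS parameters with user preferences (e.g., as a weighted sum) are illustrative.

```python
import heapq

# (QoS score, service name); scores assumed to already combine quality
# control parameters with user preferences, e.g. via a weighted sum.
services = [(0.72, "resize-service"), (0.91, "transcode-service"),
            (0.64, "summarise-service"), (0.85, "compress-service")]

# heapq is a min-heap, so negate scores to get max-heap order.
heap = [(-score, name) for score, name in services]
heapq.heapify(heap)

# Pop services in descending score order to build the optimal path.
while heap:
    neg_score, name = heapq.heappop(heap)
    print(name, -neg_score)
```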
Procedia PDF Downloads 260
11349 Information Literacy: Concept and Importance
Authors: Gaurav Kumar
Abstract:
An information literate person is one who uses information effectively in all its forms. When presented with questions or problems, an information literate person knows what information to look for, how to search efficiently, and how to access relevant sources. In addition, an information literate person has the ability to evaluate and select appropriate information sources and to use the information effectively and ethically to answer questions or solve problems. Information literacy has become an important element in higher education, and the information literacy movement has internationally recognized standards and learning outcomes. The step-by-step process of achieving information literacy is particularly crucial in an era when knowledge can be disseminated through a variety of media. What is the relationship between information literacy as we define it in higher education and information literacy among non-academic populations? What forces will change how we think about the definition of information literacy in the future, and how will we apply the definition in all environments?
Keywords: information literacy, human beings, visual media, computer networks
Procedia PDF Downloads 340
11348 The Outcome of the Discontinuation of Cheques on Bank Reconciliation
Authors: Estelle Abrahams, Tania Pretorius
Abstract:
A joint media statement by the South African Reserve Bank, the Banking Association of South Africa, the Financial Sector Conduct Authority, and the Payments Association of South Africa was recently published, stating that the receipt or acceptance of cheques would effectively terminate on 31 December 2020. All stakeholders were urged to cease accepting or issuing cheques as a payment method. The purpose of the study is to examine the effect that the discontinuation of cheques has on bank reconciliations for the subject economic and management sciences. A literature study was performed to gain insight into the bank reconciliation process and to draw conclusions on the outcome of the discontinuation of cheques on the bank reconciliation. The study found that the teaching of the bank reconciliation process will change to introduce new replacement source documents for digital payments, and that this impacts the teaching of reconciling differences.
Keywords: bank reconciliation, internal control, accounting education, source documents
Procedia PDF Downloads 112
11347 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing
Authors: Rowan P. Martnishn
Abstract:
During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables: historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours' worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to both clean and format all the data. First, a basic spelling and grammar check was applied, along with a Python script for normalized formatting which used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, in which words frequently mis-transcribed during transcription were recorded and replaced throughout all other documents. Then, to remove all banter and side comments, the transcripts were split into paragraphs (separated by a change in speaker), and all paragraphs with fewer than 300 characters were removed. Secondly, a keyword extractor, a form of natural language processing in which significant words in a document are selected, was run on each paragraph of every interview. Every proper noun was put into a data structure corresponding to its respective interview. From there, a Bidirectional and Auto-Regressive Transformer (B.A.R.T.) summary model was applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the information to review had been reduced from about 60 hours' worth of data to 20. The data was further processed through light manual observation: any summaries that fit the criteria of the proposed deliverable were selected, along with their locations within the document. This narrowed the data down to about 5 hours' worth of processing. The qualitative researchers were then able to find 8 more connections in addition to our previous 4, exceeding our minimum quota of 3 to satisfy the grant. Major findings of the study and subsequent curation of this methodology raised a conceptual point crucial to working with qualitative data of this magnitude. In the use of artificial intelligence, there is a general trade-off in a model between breadth of knowledge and specificity. If the model has too much knowledge, the user risks leaving out important data (too general). If the tool is too specific, it has not seen enough data to be useful. Thus, this methodology proposes a solution to this trade-off. The data is never altered beyond grammatical and spelling checks. Instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. Secondly, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can review the raw data instead of highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding
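A minimal sketch of the filter-then-summarize step using a public BART checkpoint via Hugging Face transformers; the 300-character filter comes from the methodology above, while the sample paragraphs, keyword set, and checkpoint are illustrative assumptions.

```python
from transformers import pipeline

# Public BART checkpoint as a stand-in for the paper's summary model.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

paragraphs = [
    "Short banter.",  # dropped by the < 300-character filter below
    "In the early days of computer art, the artist wrote plotter programs "
    "that translated hand-drawn gestures into machine instructions, and the "
    "same interpolation routines later reappeared in font-rendering "
    "engines, animation tooling, and eventually in the graphics pipelines "
    "of modern interactive software used across the industry today.",
]

keywords = {"plotter", "font-rendering"}  # proper nouns from the extractor

for para in (p for p in paragraphs if len(p) >= 300):
    if any(k in para for k in keywords):
        summary = summarizer(para, max_length=60, min_length=15,
                             do_sample=False)[0]["summary_text"]
        print(summary)
```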
Procedia PDF Downloads 31
11346 Literature Review on Text Comparison Techniques: Analysis of Text Extraction, Main Comparison and Visual Representation Tools
Authors: Andriana Mkrtchyan, Vahe Khlghatyan
Abstract:
The choice of a profession is one of the most important decisions people make in their lives. With the development of modern science, technology, and all the spheres of the modern world, more and more professions are arising, complicating the process of choosing even further. Hence, there is a need for a guiding platform to help people choose a profession and the right career path based on their interests, skills, and personality. This review aims at analyzing existing methods of comparing PDF-format documents and suggests implementing a 3-stage approach for the comparison: 1. text extraction from PDF-format documents, 2. comparison of the extracted text via NLP algorithms, and 3. representation of the comparison using a special shape and color psychology methodology.
Keywords: color psychology, data acquisition/extraction, data augmentation, disambiguation, natural language processing, outlier detection, semantic similarity, text-mining, user evaluation, visual search
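A minimal sketch of the first two stages (extraction and comparison), assuming pdfplumber for text extraction and TF-IDF cosine similarity as one possible NLP comparison; the file names are placeholders and the visual representation stage is omitted.

```python
import pdfplumber
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extract_text(path: str) -> str:
    """Stage 1: pull plain text out of a PDF document."""
    with pdfplumber.open(path) as pdf:
        return "\n".join(page.extract_text() or "" for page in pdf.pages)

# Placeholder file names for two profession-profile documents.
texts = [extract_text("profile_a.pdf"), extract_text("profile_b.pdf")]

# Stage 2: one possible NLP comparison, TF-IDF cosine similarity.
vectors = TfidfVectorizer(stop_words="english").fit_transform(texts)
score = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"Similarity: {score:.2f}")  # would feed the visual stage
```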
Procedia PDF Downloads 79
11345 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System
Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee
Abstract:
This work demonstrates a web crawler-based, generalized, end-to-end open domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analyzing the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage ranking process, using a model trained on the 500K queries of the MS-Marco dataset, to extract the most relevant text passages and shorten the lengthy documents. Further, a QA model is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For the evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence, correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Question (GNQ) dataset, which was collected and made available in the year 2016. Use of any such dataset proves to be inefficient with respect to questions that have time-varying answers. For illustration, suppose the query is 'Where will the next Olympics be held?'. The gold answer for this query as given in the GNQ dataset is 'Tokyo'. Since the dataset was collected in 2016, and the next Olympics after 2016 were held in Tokyo in 2020, this answer was absolutely correct at the time. But if the same question is asked in 2022, then the answer is 'Paris, 2024'. Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted, using an analysis-based approach, from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the potential to develop into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation
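A minimal sketch of the document-collection step with requests and Beautiful Soup; the URL list and the value of K are placeholders, and the downstream passage ranker and QA model are omitted.

```python
import requests
from bs4 import BeautifulSoup

K = 5  # number of documents to keep; tuned via the time/accuracy trade-off

def fetch_passages(url: str) -> list[str]:
    """Scrape one page and return its paragraph texts."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [p.get_text(" ", strip=True) for p in soup.find_all("p")]

# Placeholder URLs; in the full system these would come from a search step.
urls = ["https://example.org/doc1", "https://example.org/doc2"]
documents = [fetch_passages(u) for u in urls[:K]]

# documents would next feed the MS-Marco-trained passage ranker and QA model.
print(sum(len(d) for d in documents), "passages collected")
```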
Procedia PDF Downloads 101
11344 Education Quality Development for Excellence Performance with Higher Education by Using COBIT 5
Authors: Kemkanit Sanyanunthana
Abstract:
The purpose of this research is to study the information technology management systems that support education at five private universities in Thailand, taken as case studies that have been developing their quality and standards of management and education through the provision of information technology services in pursuit of performance excellence. The underlying concept is to connect information technology with a suitable system, created by information technology administrators, that can be used throughout the organization to help attain the utmost benefit from all resources. Hence, the researcher, as a person who has been performing these duties within higher education, was interested in conducting this research by selecting the Control Objectives for Information and Related Technology 5 (COBIT 5) alongside the Malcolm Baldrige National Quality Award (MBNQA) of America, the national award that applies the concept of Total Quality Management (TQM) to organizational evaluation. Such an evaluation, called the Education Criteria for Performance Excellence (EdPEx), frames this study's comparison of education quality development for performance excellence using COBIT 5, in terms of information technology, in order to study the problems and obstacles of the investigation process for an information technology system. COBIT 5 is considered an instrument to drive organizations toward performance excellence in information technology, and a model for evaluating and analyzing processes so that they accord with the universities' strategic plans for information technology. This research is conducted as descriptive and survey research based on the case studies. Data collection was carried out using questionnaires administered to the administrators working in the information technology field, with research documents related to change management as the main study material. The research concludes that performance based on the APO (Align, Plan and Organise) domain processes of the COBIT 5 framework, which emphasizes concordant governance and management of the organizations' strategic plans, could reach only 95%. This might be because of certain constraints, such as organizational cultures. The researcher has therefore studied and analyzed the management of information technology in universities as a whole, under their organizational structures, in order to reach performance in accordance with the overall APO domain, which would allow the determined strategic plans to develop on the basis of information technology performance excellence, and to apply the risk management system at the organizational level to every performance process, improving the work effectiveness of information technology resource management so as to reach the utmost benefits.
Keywords: COBIT 5, APO, EdPEx Criteria, MBNQA
Procedia PDF Downloads 326
11343 Interoperability Standard for Data Exchange in Educational Documents in Professional and Technological Education: A Comparative Study and Feasibility Analysis for the Brazilian Context
Authors: Giovana Nunes Inocêncio
Abstract:
Professional and technological education (EPT) plays a pivotal role in equipping students for specialized careers, and it is imperative to establish a framework for efficient data exchange among educational institutions. The primary focus of this article is to address the pressing need for document interoperability within the context of EPT. The challenges, motivations, and benefits of implementing interoperability standards for digital educational documents are thoroughly explored. These documents include EPT completion certificates, academic records, and curricula. The intersection of IT governance and interoperability standards holds the key to transforming the landscape of technical education in Brazil. IT governance provides the strategic framework for effective data management, aligning with educational objectives, ensuring compliance, and managing risks. By adopting interoperability standards, the technical education sector in Brazil can facilitate data exchange, enhance data security, and promote international recognition of qualifications. The use of the XML (Extensible Markup Language) standard further strengthens the foundation for structured data exchange, fostering efficient communication, standardization of curricula, and enhancement of educational materials. IT governance, interoperability standards, and data management play a critical role in driving the quality, efficiency, and security of technical education. The adoption of these standards fosters transparency, stakeholder coordination, and regulatory compliance, ultimately empowering the technical education sector to meet the dynamic demands of the 21st century.
Keywords: interoperability, education, standards, governance
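A minimal sketch of what an XML-serialized EPT completion certificate might look like, built with Python's ElementTree; the element names, namespace, and sample values are hypothetical, since the article proposes rather than defines a schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical element names; the article argues for a standard but the
# schema itself is not specified in the abstract.
cert = ET.Element("CompletionCertificate",
                  {"xmlns": "http://example.org/ept/v1"})
ET.SubElement(cert, "Student").text = "Maria da Silva"
ET.SubElement(cert, "Institution").text = "Instituto Federal (placeholder)"
ET.SubElement(cert, "Course").text = "Técnico em Informática"
ET.SubElement(cert, "CompletionDate").text = "2024-12-15"

# Serialized form that two institutions could exchange and validate.
print(ET.tostring(cert, encoding="unicode"))
```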
Procedia PDF Downloads 71