Search results for: Document similarity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 627

57 Facilitation of Digital Culture and Creativity through an Ideation Strategy: A Case Study with an Incumbent Automotive Manufacturer

Authors: K. Ö. Kartal, L. Maul, M. Hägele

Abstract:

With the development of new technologies come additional opportunities for founding companies and creating new markets. Barriers to entry are lowered, and technology makes old business models obsolete. Incumbent companies have to adapt to this quickly changing environment: they have to start the process of digital maturation and be able to respond quickly to new and drastic changes that might arise. One of the biggest barriers organizations face in doing so is their culture. This paper shows the core elements of a corporate culture that supports the process of digital maturation in incumbent organizations. Furthermore, it explores how ideation and innovation can be used in a strategy to facilitate those core elements of culture that promote digital maturity. Focus areas are identified for the design of ideation strategies, with the aim of making the facilitation process more effective from the short to the long term. To this end, an in-depth case study was conducted, with data collected from interviews, observation, document review and surveys. The findings indicate that digital maturity is connected to cultural shift, and 11 relevant elements of digital culture are identified which have to be considered. Based on these 11 core elements, five focus areas are identified that need to be regarded in the design of a strategy that uses ideation and innovation to facilitate the cultural shift: focus topics, rewards and communication, structure and frequency, regions, and new online formats.

Keywords: Digital transformation, innovation management, ideation strategy, creativity culture, change.

56 Empirical Evidence on Equity Valuation of Thai Firms

Authors: Somchai Supattarakul, Anya Khanthavit

Abstract:

This study aims at providing empirical evidence on a comparison of two equity valuation models: (1) the dividend discount model (DDM) and (2) the residual income model (RIM), in estimating equity values of Thai firms during 1995-2004. Results suggest that DDM and RIM underestimate equity values of Thai firms and that RIM outperforms DDM in predicting cross-sectional stock prices. Results on regression of cross-sectional stock prices on the decomposed DDM and RIM equity values indicate that book value of equity provides the greatest incremental explanatory power, relative to other components in DDM and RIM terminal values, suggesting that book value distortions resulting from accounting procedures and choices are less severe than forecast and measurement errors in discount rates and growth rates. We also document that the incremental explanatory power of book value of equity during 1998-2004, representing the information environment under Thai Accounting Standards reformed after the 1997 economic crisis to conform to International Accounting Standards, is significantly greater than that during 1995-1996, representing the information environment under the pre-reformed Thai Accounting Standards. This implies that the book value distortions are less severe under the 1997 Reformed Thai Accounting Standards than the pre-reformed Thai Accounting Standards.
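
For reference, the two valuation models compared can be written in their standard textbook forms; the abstract does not specify the authors' exact terminal-value treatment, so the terminal terms below are generic:

```latex
% D_t: dividends, NI_t: net income, B_t: book value of equity,
% r: cost of equity, TV_T: terminal value at horizon T, E_0: expectation.
V_0^{\mathrm{DDM}} = \sum_{t=1}^{T} \frac{E_0[D_t]}{(1+r)^{t}} + \frac{TV_T}{(1+r)^{T}},
\qquad
V_0^{\mathrm{RIM}} = B_0 + \sum_{t=1}^{T} \frac{E_0[NI_t - r\,B_{t-1}]}{(1+r)^{t}} + \frac{TV_T}{(1+r)^{T}}
```

The RIM decomposition makes the role of book value B_0 explicit, which is why its incremental explanatory power can be isolated in the cross-sectional regressions described above.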

Keywords: Dividend Discount Model, Equity Valuation Model, Residual Income Model, Thai Stock Market.

55 Q-Map: Clinical Concept Mining from Clinical Documents

Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala

Abstract:

Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis and image analytics. Most of the data in the field are well-structured and available in numerical or categorical formats which can be used for experiments directly. But on the opposite end of the spectrum, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature: discharge summaries, clinical notes and procedural notes, which are written in narrative form and have neither a relational model nor any standard grammatical structure. An important step in utilizing these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance with MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
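
The string matching idea can be pictured with a small sketch: an index from curated phrases to concept identifiers, scanned over the n-grams of a note, preferring the longest match. The phrases and identifiers below are invented placeholders, not actual UMLS content, and this is our illustration rather than Q-Map's implementation:

```python
import re

# Invented phrase-to-ID dictionary standing in for a curated knowledge source.
CONCEPTS = {"myocardial infarction": "C0027051",
            "diabetes mellitus": "C0011849",
            "chest pain": "C0008031"}
MAX_LEN = max(len(p.split()) for p in CONCEPTS)

def mine(note):
    """Scan note tokens, matching the longest known phrase at each position."""
    tokens = re.findall(r"[a-z]+", note.lower())
    hits = []
    for i in range(len(tokens)):
        for n in range(MAX_LEN, 0, -1):          # prefer the longest match
            phrase = " ".join(tokens[i:i + n])
            if phrase in CONCEPTS:
                hits.append((phrase, CONCEPTS[phrase]))
                break
    return hits

print(mine("Pt admitted with chest pain, hx of diabetes mellitus."))
```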

Keywords: Information retrieval (IR), unified medical language system (UMLS), syntax-based analysis, natural language processing (NLP), medical informatics.

54 Environmental Decision Making Model for Assessing On-Site Performances of Building Subcontractors

Authors: Buket Metin

Abstract:

Buildings cause a variety of loads on the environment due to activities performed at each stage of the building life cycle. Construction is the first stage that affects both the natural and built environments at different steps of the process, which can be defined as transportation of materials within the construction site, formation and preparation of materials on-site and the application of materials to realize the building subsystems. All of these steps require the use of technology, which varies based on the facilities that contractors and subcontractors have. Hence, environmental consequences of the construction process should be tackled by focusing on construction technology options used in every step of the process. This paper presents an environmental decision-making model for assessing on-site performances of subcontractors based on the construction technology options which they can supply. First, construction technologies, which constitute information, tools and methods, are classified. Then, environmental performance criteria are set forth related to resource consumption, ecosystem quality, and human health issues. Finally, the model is developed based on the relationships between the construction technology components and the environmental performance criteria. The Fuzzy Analytical Hierarchy Process (FAHP) method is used for weighting the environmental performance criteria according to environmental priorities of decision-maker(s), while the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method is used for ranking on-site environmental performances of subcontractors using quantitative data related to the construction technology components. Thus, the model aims to provide an insight to decision-maker(s) about the environmental consequences of the construction process and to provide an opportunity to improve the overall environmental performance of construction sites.
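
To make the ranking step concrete, here is a minimal TOPSIS sketch in Python. The criteria weights are assumed to come from the FAHP step (not shown), and the scores, weights and benefit/cost designations below are illustrative stand-ins:

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank alternatives by closeness to the ideal solution.
    scores: (m alternatives x n criteria); benefit[j] is True if higher is better."""
    norm = scores / np.linalg.norm(scores, axis=0)   # vector normalization
    v = norm * weights                               # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                   # closeness coefficient

scores = np.array([[7.0, 120.0, 3.0],    # subcontractor A
                   [5.0,  80.0, 4.0],    # subcontractor B
                   [9.0, 150.0, 2.0]])   # subcontractor C
weights = np.array([0.5, 0.3, 0.2])      # e.g., FAHP-derived priorities
benefit = np.array([True, False, True])  # criterion 2 is a consumption (cost) type
cc = topsis(scores, weights, benefit)
print("ranking (best first):", np.argsort(cc)[::-1])
```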

Keywords: Construction process, construction technology, decision making, environmental performance, subcontractors.

53 Localizing and Recognizing Integral Pitches of Cheque Document Images

Authors: Bremananth R., Veerabadran C. S., Andy W. H. Khong

Abstract:

Automatic reading of handwritten cheques is a computationally complex process, and it plays an important role in financial risk management. Machine vision and learning provide a viable solution to this problem. Research effort has mostly been focused on recognizing diverse pitches of cheques and demand drafts with an identical outline. However, most of these methods employ template matching to localize the pitches, and such schemes could potentially fail when applied to the different types of outlines maintained by banks. In this paper, the so-called outline problem is resolved by a cheque information tree (CIT), which generalizes the localizing method to extract active regions of entities. In addition, a weight-based density plot (WBDP) is computed to isolate text entities and read complete pitches. Recognition is based on texture features using neural classifiers. The legal amount is subsequently recognized by both texture and perceptual features. A post-processing phase is invoked to detect incorrect readings with a Type-2 grammar implemented on a Turing machine. The performance of the proposed system was evaluated using cheques and demand drafts from 22 different banks. The test data consist of 1540 cheque leaves obtained from 10 different account holders at each bank. Results show that this approach can easily be deployed without significant design amendments.
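
The Type-2 (context-free) post-processing idea can be illustrated with a toy validator that checks whether a recognized legal-amount phrase is well-formed; this is our sketch of the concept, not the paper's grammar:

```python
# Toy grammar: amount := segment ["thousand" [segment]]
#              segment := [unit "hundred"] (teen | tens [unit] | unit)
UNITS = "one two three four five six seven eight nine".split()
TEENS = ("ten eleven twelve thirteen fourteen fifteen "
         "sixteen seventeen eighteen nineteen").split()
TENS = "twenty thirty forty fifty sixty seventy eighty ninety".split()

def parse_below_thousand(toks, i):
    """Consume one segment starting at toks[i]; return new index, or None."""
    start = i
    if i + 1 < len(toks) and toks[i] in UNITS and toks[i + 1] == "hundred":
        i += 2
    if i < len(toks) and toks[i] in TEENS:
        i += 1
    elif i < len(toks) and toks[i] in TENS:
        i += 1
        if i < len(toks) and toks[i] in UNITS:
            i += 1
    elif i < len(toks) and toks[i] in UNITS:
        i += 1
    return i if i > start else None

def is_valid_amount(phrase):
    toks = phrase.lower().replace("-", " ").split()
    i = parse_below_thousand(toks, 0)
    if i is None:
        return False
    if i < len(toks) and toks[i] == "thousand":
        j = parse_below_thousand(toks, i + 1)
        i = j if j is not None else i + 1
    return i == len(toks)     # reject if unparsed tokens remain

print(is_valid_amount("two hundred forty five thousand six hundred ten"))  # True
print(is_valid_amount("hundred two"))                                       # False
```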

Keywords: Cheque reading, Connectivity checking, Text localization, Texture analysis, Turing machine, Signature verification.

52 Analysis of Developments in the Understanding of In-Service Training in Turkish Public Administration: Personnel Management to Human Resource Management

Authors: Sema Müge Özdemiray

Abstract:

In line with the new public management approach, to provide the effective and efficient services necessary to achieve the social goals of public institutions, employees must have the knowledge and skills required by the age. In conjunction with the transition from personnel management to human resources management, there has been a change in the understanding of in-service training: the understanding of "required in-service training" has given way to an understanding of "continuous in-service training". However, in terms of in-service training, Turkey appears to have trouble adapting to this change. The main purpose of this study is first to create a conceptual framework of in-service training and subsequently to determine, analyze and discuss the developments and problems faced by in-service training in Turkey in the transition from personnel management to human resources management. In accordance with this purpose, the necessary data were collected using qualitative approaches: observation and document analysis were used, and content analysis was performed on the data gathered. The results of this study, based on data such as the number of institutions requesting in-service training, the budget allocated to in-service training and the number of people participating in such training, indicate that the transition from personnel management to human resources management has not led to a paradigm shift in Turkey's understanding of in-service training, although such training is compulsory for public institutions in accordance with Turkish law. In-service training in Turkish public administration is still not implemented effectively and is seen as a social activity for employees and a formality for institutions.

Keywords: Human resources management, in-service training, personnel management, public institutions.

51 A Fuzzy MCDM Approach for Health-Care Waste Management

Authors: Mehtap Dursun, E. Ertugrul Karsak, Melis Almula Karadayi

Abstract:

The management of health-care wastes is one of the most important problems in Istanbul, a city with more than 12 million inhabitants, as it is in most developing countries. Negligence in the appropriate treatment and final disposal of health-care wastes can lead to adverse impacts on public health and the environment. This paper employs a fuzzy multi-criteria group decision making approach, based on the principles of fusion of fuzzy information, the 2-tuple linguistic representation model, and the technique for order preference by similarity to ideal solution (TOPSIS), to evaluate health-care waste (HCW) treatment alternatives for Istanbul. The evaluation criteria are determined employing the nominal group technique (NGT), a method of systematically developing a consensus of group opinion. The employed method can manage information assessed using multi-granularity linguistic information in a decision making problem with multiple information sources. The decision making framework employs the ordered weighted averaging (OWA) operator, which encompasses several operators, as the aggregation operator, since it can implement different aggregation rules by changing the order weights. The aggregation process is based on the unification of information by means of fuzzy sets on a basic linguistic term set (BLTS). The unified information is then transformed into linguistic 2-tuples in a way that rectifies the loss of information found in other fuzzy linguistic approaches.
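
As an illustration of the aggregation step only (the full 2-tuple linguistic pipeline is more involved), an OWA operator sorts the assessments and weights them by rank, so different order-weight vectors implement different aggregation rules:

```python
import numpy as np

def owa(values, order_weights):
    """Ordered weighted averaging: sort descending, then weight by position."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(order_weights, dtype=float)
    assert np.isclose(w.sum(), 1.0), "order weights must sum to 1"
    return float(v @ w)

# Plain average behaviour:
print(owa([0.8, 0.3, 0.6], [1/3, 1/3, 1/3]))
# An optimistic rule emphasising the best assessments:
print(owa([0.8, 0.3, 0.6], [0.6, 0.3, 0.1]))
```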

Keywords: Group decision making, health care waste management, multi-criteria decision making, OWA, TOPSIS, 2-tuple linguistic representation.

50 Formal Thai National Costume in the Reign of King Bhumibol Adulyadej

Authors: Chanoknart Mayusoh

Abstract:

The research on the formal Thai national costume in the reign of King Bhumibol Adulyadej is applied research that aimed to gather accurate knowledge concerning the Thai national costume in the reign of King Rama IX, to study the origin of all costumes in this reign, and to study their styles, the materials used, and the occasions on which they are worn. The research methodology involved collecting qualitative data through observation, documents, and photographs from key informants on costume in the reign of King Rama IX and from others related to this field.

The formal Thai national costume of the reign of King Bhumibol Adulyadej originated from the visit of His Majesty the King to Europe and America in 1960. Since Thailand had no traditional national costume, Her Majesty the Queen initiated the idea to create formal Thai national costumes. In 1964, Her Majesty the Queen selected 8 styles of formal Thai national costume. Later, Her Majesty the Queen conferred another 3 formal Thai national costumes for men. There are 8 styles of formal Thai national costume for women: Thai Ruean Ton, Thai Chit Lada, Thai Amarin, Thai Borom Phiman, Thai Siwalai, Thai Chakkri, Thai Dusit, and Thai Chakkraphat. There are 3 styles for men: short-sleeve shirt, long-sleeve shirt, and long-sleeve shirt with breechcloth. The costume is widely used in formal ceremonies such as greeting ceremonies for official foreign visitors, wedding ceremonies, and other auspicious ceremonies. Nowadays, the costumes are often used as bridal gowns as well. The formal Thai national costume is valuable art that shows Thai identity and should be preserved for the next generation.

Keywords: The formal Thai national costume for women, the formal Thai national costume for men, His Majesty King Bhumibol Adulyadej (King Rama IX), Her Majesty Queen Sirikit.

49 Comparative Parametric and Emission Characteristics of Single Cylinder Spark Ignition Engine Using Gasoline, Ethanol, and H₂O as Micro Emulsion Fuels

Authors: Ufaith Qadri, M Marouf Wani

Abstract:

In this paper, the performance and emission characteristics of a single cylinder spark ignition engine have been investigated. The research is based on the application of micro emulsions as fuel in a gasoline engine. We have analyzed many micro emulsion compositions in various proportions for predicting the performance of the spark ignition engine. This technology of fuel modification is emerging rapidly, and much research is ongoing on micro emulsion fuels in compression ignition engines, but micro emulsion fuels are rarely used in gasoline engines; their use in a spark ignition engine is virtually unexplored. So, our main goal is to study the performance and emission characteristics of micro emulsions as fuel in spark ignition engines and to find which composition is most efficient. In this research, various micro emulsion fuels of differing composition were used, and their performance and emission characteristics were predicted in the AVL Boost software. Conventional gasoline at 90%, 85% and 80% was blended with ethanol as a co-surfactant in different proportions, and water was used as an additive to produce crystal-clear, thermodynamically stable micro emulsion fuels. Comparing engine performance, the power was similar for the micro emulsion fuels and conventional gasoline. On the other hand, torque and BMEP increased for all the micro emulsion fuels. The micro emulsion fuels showed higher thermal efficiency and lower specific fuel consumption than gasoline for all compositions. Carbon monoxide and hydrocarbon emissions were also measured; these decreased for all micro emulsion compositions, making the micro emulsions efficient both in terms of performance and of emission characteristics.

Keywords: AVL Boost, emissions, micro emulsion, performance, SI engine.

48 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things

Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker

Abstract:

Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that often can be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes the data are conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) in representing and managing uncertainty and conflict to achieve fast change detection and deal effectively with complementary hypotheses. Specifically, Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors whose values need to be combined, so that computational efficiency could be improved. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
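
A minimal end-to-end sketch of the scheme, under assumed Gaussian pre- and post-change models and illustrative settings (the abstract does not give the authors' exact mass construction, so the mapping from KL distances to masses below is our assumption):

```python
import numpy as np

def kl_gauss(m0, s0, m1, s1):
    """KL divergence between 1-D Gaussians: N(m0, s0^2) || N(m1, s1^2)."""
    return np.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

def sensor_masses(window, pre=(0.0, 1.0), post=(1.0, 1.0)):
    """Map KL distances to the pre/post models into masses over
    {pre}, {post} and the ignorance set {pre, post}."""
    m, s = np.mean(window), np.std(window) + 1e-9
    sim = np.exp(-np.array([kl_gauss(m, s, *pre), kl_gauss(m, s, *post)]))
    conflict_free = sim / (sim.sum() + 1.0)        # reserve mass for ignorance
    return {"pre": conflict_free[0], "post": conflict_free[1],
            "both": 1.0 - conflict_free.sum()}

def combine(m1, m2):
    """Dempster's rule on the two-hypothesis frame {pre, post}."""
    k = m1["pre"] * m2["post"] + m1["post"] * m2["pre"]   # conflict mass
    pre = (m1["pre"] * m2["pre"] + m1["pre"] * m2["both"]
           + m1["both"] * m2["pre"]) / (1 - k)
    post = (m1["post"] * m2["post"] + m1["post"] * m2["both"]
            + m1["both"] * m2["post"]) / (1 - k)
    return {"pre": pre, "post": post, "both": 1.0 - pre - post}

def pignistic(m):
    """Split the ignorance mass evenly between the singletons."""
    return m["pre"] + m["both"] / 2, m["post"] + m["both"] / 2

# CUSUM on the log-ratio of pignistic probabilities over sliding windows.
rng = np.random.default_rng(0)
streams = [np.concatenate([rng.normal(0, 1, 300), rng.normal(1, 1, 100)])
           for _ in range(3)]                      # three sensors, change at t=300
g, h = 0.0, 5.0                                    # CUSUM statistic and threshold
for t in range(20, 400):
    masses = [sensor_masses(s[t - 20:t]) for s in streams]
    m = masses[0]
    for m_next in masses[1:]:
        m = combine(m, m_next)
    p_pre, p_post = pignistic(m)
    g = max(0.0, g + np.log(p_post / p_pre))
    if g > h:
        print("change declared at t =", t)
        break
```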

Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data.

47 Iris Recognition Based On the Low Order Norms of Gradient Components

Authors: Iman A. Saad, Loay E. George

Abstract:

The iris pattern is an important biological feature of the human body, and iris recognition has become a very hot topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient and fast method is introduced to extract a set of discriminatory features using a first-order gradient operator applied to grayscale images. The gradient-based features are robust, to a certain extent, against variations in the contrast or brightness of iris image samples; such variations occur mostly due to lighting differences and camera changes. First, the iris region is located; it is then remapped to a rectangular area of size 360x60 pixels. A new method is also proposed for detecting eyelash and eyelid points; it relies on statistical image analysis to mark eyelash and eyelid pixels as noise points. In order to cover the localization (variation) of features, the rectangular iris image is partitioned into N overlapping sub-images (blocks); from each block, a set of average directional gradient density values is calculated and used as a texture feature vector. The gradient operators are applied along the horizontal, vertical and diagonal directions, and the low order norms of the gradient components are used to establish the feature vector. A Euclidean distance based classifier was used as the matching metric for determining the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database; the attained recognition accuracy reached 99.92%.
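
A compact sketch of the feature-extraction and matching idea, with block overlap omitted for brevity and random arrays standing in for normalized iris images:

```python
import numpy as np

def gradient_features(iris_rect, blocks=(6, 36)):
    """Average absolute gradient densities along the horizontal, vertical
    and two diagonal directions, computed per block."""
    gy, gx = np.gradient(iris_rect.astype(float))
    gd1 = (gx + gy) / np.sqrt(2)       # 45-degree component
    gd2 = (gx - gy) / np.sqrt(2)       # 135-degree component
    h, w = iris_rect.shape
    bh, bw = h // blocks[0], w // blocks[1]
    feats = []
    for r in range(blocks[0]):
        for c in range(blocks[1]):
            sl = np.s_[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            feats += [np.mean(np.abs(g[sl])) for g in (gx, gy, gd1, gd2)]
    return np.array(feats)

def match(probe, templates):
    """Nearest template under the Euclidean metric."""
    d = np.linalg.norm(templates - probe, axis=1)
    return int(np.argmin(d)), float(d.min())

# Usage with random stand-ins for a 60x360 normalized iris and 10 templates:
rng = np.random.default_rng(0)
probe = gradient_features(rng.random((60, 360)))
templates = np.stack([gradient_features(rng.random((60, 360))) for _ in range(10)])
print(match(probe, templates))
```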

Keywords: Iris recognition, contrast stretching, gradient features, texture features, Euclidean metric.

46 Multi-Scale Gabor Feature Based Eye Localization

Authors: Sanghoon Kim, Sun-Tae Chung, Souhwan Jung, Dusik Oh, Jaemin Kim, Seongwon Cho

Abstract:

Eye localization is necessary for face recognition and related application areas. Most eye localization algorithms reported so far still need improvement in precision and computational time for successful applications. In this paper, we propose an eye localization method based on multi-scale Gabor feature vectors that is more robust with respect to initial points. Eye localization based on Gabor feature vectors first constructs an Eye Model Bunch for each eye (left or right), consisting of n Gabor jets and the average eye coordinates obtained from n model face images. It then localizes eyes in an incoming face image by exploiting the fact that the true eye coordinates are most likely very close to the position whose Gabor jet has the best similarity with a Gabor jet in the Eye Model Bunch. Similar ideas have already been proposed, for example in EBGM (Elastic Bunch Graph Matching). However, the method used in EBGM is known not to be robust with respect to initial values and may need an extensive search range to achieve the required performance, and extensive search ranges cause a much greater computational burden. In this paper, we propose a multi-scale approach with only slightly increased computational burden: one first localizes eyes based on Gabor feature vectors in a coarse face image obtained by down-sampling the original face image, and then localizes eyes in the original-resolution face image using the eye coordinates found in the coarse-scale image as initial points. Several experiments and comparisons with other eye localization methods reported in other papers show the efficiency of our proposed method.
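
The coarse-to-fine idea can be sketched as follows; the filter-bank parameters and search radii are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def gabor_jet(img, y, x, n_orient=4, n_scale=3, size=16):
    """A 'jet': magnitudes of a small Gabor filter bank applied at (y, x)."""
    half = size // 2
    patch = img[y - half:y + half, x - half:x + half].astype(float)
    yy, xx = np.mgrid[-half:half, -half:half]
    jet = []
    for s in range(n_scale):
        freq, sigma = 0.25 / 2**s, 2.0 * 2**s
        env = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        for o in range(n_orient):
            th = np.pi * o / n_orient
            carrier = np.exp(2j * np.pi * freq * (xx * np.cos(th) + yy * np.sin(th)))
            jet.append(abs(np.sum(patch * env * carrier)))
    return np.array(jet)

def jet_similarity(j1, j2):
    """Normalized dot product of jet magnitudes."""
    return float(j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-12))

def localize(img, model_jets, start, radius):
    """Best-matching point against the model bunch within `radius` of `start`."""
    best, best_pt = -1.0, start
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = start[0] + dy, start[1] + dx
            s = max(jet_similarity(gabor_jet(img, y, x), mj) for mj in model_jets)
            if s > best:
                best, best_pt = s, (y, x)
    return best_pt

# Coarse-to-fine: a wide search on a 2x down-sampled image, then a narrow
# refinement at full resolution seeded by the up-scaled coarse estimate.
rng = np.random.default_rng(0)
face = rng.random((128, 128))
model_jets = [gabor_jet(face, 40, 44)]            # stand-in for n model jets
coarse = face[::2, ::2]
cy, cx = localize(coarse, [gabor_jet(coarse, 20, 22)], (22, 24), radius=6)
fy, fx = localize(face, model_jets, (2 * cy, 2 * cx), radius=2)
print((fy, fx))
```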

Keywords: Eye Localization, Gabor features, Multi-scale, Gabor wavelets.

45 Data Compression in Ultrasonic Network Communication via Sparse Signal Processing

Authors: Beata Zima, Octavio A. Márquez Reyes, Masoud Mohammadgholiha, Jochen Moll, Luca De Marchi

Abstract:

This document presents an approach to using compressed sensing for signal encoding and information transfer within a guided wave sensor network composed of specially designed frequency steerable acoustic transducers (FSATs). Wave propagation in a damaged plate was simulated using the commercial FEM-based software COMSOL. Guided waves were excited by means of FSATs, characterized by the special shape of their electrodes, and modeled using PIC255 piezoelectric material. The special shape of the FSAT allows wave energy to be focused in a certain direction according to the frequency components of its actuation signal, which makes a larger monitored area available. The process begins when an FSAT detects and records a reflection from damage in the structure; this signal is then encoded and prepared for transmission using a combined approach based on compressed sensing matching pursuit and quadrature amplitude modulation (QAM). Once the signal is encoded in binary, the information is transmitted between the nodes in the network. The message reaches the last node, where it is finally decoded and processed to be used for damage detection and localization purposes. The main aim of the investigation is to determine the location of detected damage using the reconstructed signals. The study demonstrates that the special steerable capabilities of FSATs not only facilitate the detection of damage but also permit transmitting the damage information to a chosen area in a specific direction of the investigated structure.
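
A toy sketch of the encoding chain: orthogonal matching pursuit (one common matching-pursuit variant) produces a sparse code, whose quantized coefficients are bit-packed and mapped to 16-QAM symbols. The dictionary, sparsity level and quantizer are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: k-sparse code of y in dictionary D."""
    resid, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ resid))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        resid = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def to_qam16(bits):
    """Map groups of 4 bits to 16-QAM constellation points (Gray coding omitted)."""
    levels = np.array([-3, -1, 1, 3], dtype=float)
    sym = []
    for i in range(0, len(bits) - 3, 4):
        b = bits[i:i + 4]
        sym.append(levels[2 * b[0] + b[1]] + 1j * levels[2 * b[2] + b[3]])
    return np.array(sym) / np.sqrt(10)             # unit average power

# Toy usage: sparse-encode a signal, quantize coefficients to bits, modulate.
rng = np.random.default_rng(1)
D = rng.standard_normal((128, 256))
D /= np.linalg.norm(D, axis=0)
y = D[:, [3, 40, 77]] @ np.array([1.0, -0.5, 0.8])
x = omp(D, y, k=3)                                  # sparse code
q = np.clip((x[np.nonzero(x)] + 2) / 4 * 255, 0, 255).astype(np.uint8)
bits = np.unpackbits(q)                             # 8-bit quantization
symbols = to_qam16(bits)                            # ready for transmission
print(len(symbols), "QAM symbols")
```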

Keywords: Data compression, ultrasonic communication, guided waves, FEM analysis.

44 Disparities versus Similarities: WHO GPPQCL and ISO/IEC 17025:2017 International Standards for Quality Management Systems in Pharmaceutical Laboratories

Authors: M. A. Okezue, K. L. Clase, S. R. Byrn, P. Shivanand

Abstract:

Medicines regulatory authorities expect pharmaceutical companies and contract research organizations to seek ways to certify that their laboratory control measurements are reliable. Establishing and maintaining laboratory quality standards are essential in ensuring the accuracy of test results. 'ISO/IEC 17025:2017' and the 'WHO Good Practices for Pharmaceutical Quality Control Laboratories (GPPQCL)' are two quality standards commonly employed in developing laboratory quality systems. A review was conducted on the two standards to elaborate on areas of convergence and divergence. The goal was to understand how differences in each standard's requirements may influence laboratories' choices as to which document is easier to adopt for quality systems. A qualitative review method compared similar items in the two standards while mapping out areas where there were specific differences in the requirements of the two documents. The review also provided a detailed description of the clauses and parts covering management and technical requirements in these laboratory standards. The review showed that both documents share requirements for over ten critical areas covering objectives, infrastructure, management systems, and laboratory processes. There were, however, differences in expectations: GPPQCL emphasizes system procedures for planning and future budgets that will ensure continuity, whereas ISO 17025 is more focused on a risk management approach to establishing laboratory quality systems. Elements in the two documents form common standard requirements to assure the validity of laboratory test results and promote mutual recognition. The ISO standard currently has more global patronage than GPPQCL.

Keywords: ISO/IEC 17025:2017, laboratory standards, quality control, WHO GPPQCL.

43 Computational Methods in Official Statistics with an Example on Calculating and Predicting Diabetes Mellitus [DM] Prevalence in Different Age Groups within Australia in Future Years, in Light of the Aging Population

Authors: D. Hilton

Abstract:

An analysis of the Australian Diabetes Screening Study estimated undiagnosed diabetes mellitus [DM] prevalence in a high-risk, general-practice-based cohort. DM prevalence varied from 9.4% to 18.1% depending upon the diagnostic criteria utilised, with age being a highly significant risk factor. Utilising the gold standard oral glucose tolerance test, the prevalence of DM was 22-23% in those aged >= 70 years and <15% in those aged 40-59 years. Opportunistic screening in Australian general practice can potentially identify many persons with undiagnosed type 2 DM. An Australian Bureau of Statistics document published three years ago reported the highest rate of DM in men aged 65-74 years [19%], whereas the rate for women was highest in those over 75 years [13%]. The Australian Bureau of Statistics reported in 2007 that 13% of the population was over 65 years of age, and projected that this proportion will increase to 23-25% by 2056, with a further projected increase to 25-28% by 2101; this information has to be factored into the equation when age-related diabetes prevalence predictions are calculated. This 10-15 percentage point increase in the proportion of elderly persons within the population has dramatic implications for the estimated number of elderly persons with DM in these age groups. Computational methods are applied to the age-related demographic changes reported in these official statistical documents, producing estimates for 2056 and 2101 for different age groups. This has relevance for future diabetes prevalence rates and shows that, along with many countries worldwide, Australia is facing an increasing pandemic. In contrast, Japan is expected to see a decrease in the number of persons with diabetes over the next twenty years.
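
The core arithmetic is simple to illustrate. With hypothetical round numbers (not the paper's data), holding prevalence fixed and scaling only the 65+ population share:

```python
# A worked illustration of how a rising share of over-65s scales the expected
# number of people with DM; all figures below are assumed round numbers.
population = 25_000_000                      # assumed total population
prevalence_over65 = 0.16                     # assumed DM prevalence among 65+
share_over65 = {"2007": 0.13, "2056": 0.24, "2101": 0.27}

for year, share in share_over65.items():
    cases = population * share * prevalence_over65
    print(f"{year}: {share:.0%} aged 65+ -> ~{cases / 1e6:.2f} M cases in that group")
# Holding prevalence and total population fixed, the 13% -> 24% shift alone
# nearly doubles the expected number of older people living with DM.
```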

Keywords: Epidemiological methods, aging, prevalence.

42 The Shifting Urban Role of Buildings’ Facades: A Diachronic Analysis of El Korba

Authors: Virginia Bassily, Sherif Goubran

Abstract:

In heritage conservation and revival, much of the focus is placed on the techniques and methods to preserve, restore, and revive heritage structures and locations. However, more attention needs to be drawn to how deterioration happens and to its effect on an area's character and socio-economic status. To this end, this research examines the decline, and its effects, of the El Korba area in Heliopolis, Cairo, Egypt. El Korba was designed with a unique architectural character to stimulate social and economic life. However, the area has been on a path of physical deterioration that is corroding the social life on its streets. This research applies a diachronic analysis to Ibrahim El-Lakkani Boulevard in El Korba, based on a previously developed framework that connects buildings' architectural features to the degree of social interaction in the street, to document the changes that the buildings' deterioration could have caused. Architectural features of the street level in both the original state (1906) and the current state (2021) are broken down and categorized into the framework's six parameters. We find that the parameters that have decreased over the years and caused the deterioration are complexity and architectural character, permeability, territoriality and personalization, and physical comfort. Based on these findings, revival projects can focus on physical parameters that create synergistic benefits by preserving and renewing heritage locations and revitalizing their socio-economic potential.

Keywords: Architectural character, heritage building conservation, enclosure, ground-floor use, El Korba, visual and physical permeability, personalization, physical comfort, social life, territoriality.

41 Information Retrieval: Improving Question Answering Systems by Query Reformulation and Answer Validation

Authors: Mohammad Reza Kangavari, Samira Ghandchi, Manak Golpour

Abstract:

Question answering (QA) aims at retrieving precise information from a large collection of documents. Most question answering systems are composed of three main modules: question processing, document processing and answer processing. The question processing module plays an important role in QA systems, reformulating questions. Answer processing, moreover, is an emerging topic in QA systems, where systems are often required to rank and validate candidate answers. These techniques, aiming at finding short and precise answers, are often based on semantic relations and co-occurrence of keywords. This paper discusses a new model for question answering which improves the two main modules, question processing and answer processing, both of which affect the evaluation of the system's operation. Two important components form the basis of question processing. The first is question classification, which specifies the types of the question and the answer. The second is reformulation, which converts the user's question into a question understandable by the QA system in a specific domain. The objective of an answer validation task is then to judge the correctness of an answer returned by a QA system according to the text snippet given to support it. For validating answers, we apply candidate answer filtering and candidate answer ranking, with a final validation step by user voting. The paper also describes the new architecture of the question and answer processing modules, with the modeling, implementation and evaluation of the system. The system differs from most question answering systems in its answer validation model, which makes it better suited to finding exact answers. Evaluation of the model on a total of 50 asked questions shows a 92% improvement in the system's decisions.
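
A minimal sketch of snippet-based answer validation, ranking candidates by question-term coverage and answer containment; the scoring weights and stop list are illustrative choices of ours, not the authors' system:

```python
import re

STOP = {"the", "a", "an", "of", "in", "is", "was", "what", "who", "when", "where"}

def keywords(text):
    """Lowercased content words of a text."""
    return {w for w in re.findall(r"[a-z0-9]+", text.lower())} - STOP

def score(question, candidate, snippet):
    """A candidate is supported if its snippet covers the question terms
    and actually contains the candidate answer string."""
    q, s = keywords(question), keywords(snippet)
    coverage = len(q & s) / max(len(q), 1)
    contains = 1.0 if candidate.lower() in snippet.lower() else 0.0
    return 0.7 * coverage + 0.3 * contains

cands = [("1969", "Apollo 11 landed on the Moon in 1969."),
         ("1972", "Apollo 17 was the final Moon landing, in 1972.")]
q = "When did Apollo 11 land on the Moon?"
print(max(cands, key=lambda c: score(q, c[0], c[1])))
```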

Keywords: Answer processing, answer validation, classification, question answering, query reformulation.

40 The Effects of an Immigration Policy on the Economic Integration of Migrants and on Natives’ Attitudes: The Case of Syrian Refugees in Turkey

Authors: S. Zeynep Siretioglu Girgin, Gizem Turna Cebeci

Abstract:

Turkey’s immigration policy is a controversial issue considering its legal, economic, social, political and human rights dimensions. Formulation of an immigration policy goes hand in hand with political processes, in which natives’ attitudes play a significant role. On the other hand, as was the case in Turkey, radical changes made in immigration policy, or policies lacking transparency, may cause severe reactions in the host society. This discussion paper aims to analyze qualitatively the effects of the existing ‘open door’ immigration policy on the economic integration of Syrian refugees in Turkey and on the native population’s perception of refugees. For the analysis, semi-structured in-depth interviews and focus group interviews were conducted. After the introduction, a literature review is provided, followed by theoretical background on the explanation of natives’ attitudes towards immigrants. In the next section, a qualitative analysis of natives’ attitudes towards Syrian refugees is presented under the subtopics of (i) awareness, general opinions and expectations, (ii) the open-door policy and management of the migration process, (iii) perception of positive and negative impacts of immigration, (iv) economic integration, and (v) cultural similarity. Results indicate that natives concurrently have social, economic and security concerns regarding refugees, with difficulties regarding security and the economic integration of refugees standing out. Socio-economic characteristics of the respondents, such as educational level and employment status, are not sufficient to explain overall attitudes towards refugees, although they can explain the respondents’ awareness and the priority of the concerns felt.

Keywords: Economic integration, immigration policy, integration policies, migrants, natives’ attitudes, perception, Syrian refugees, Turkey.

39 Slaughter and Carcass Characterization, and Sensory Qualities of Native, Pure, and Upgraded Breeds of Goat Raised in the Philippines

Authors: Jonathan N. Nayga, Emelita B. Valdez, Mila R. Andres, Beulah B. Estrada, Emelina A. Lopez, Rogelio B. Tamayo, Aubrey Joy M. Balbin

Abstract:

Goat production is one of the activities included in integrated farming in the Philippines. Goats are raised for their meat, and regardless of breed the animal is slaughtered for this purpose. In order to document the carcass yield of different goats slaughtered, five (5) different breeds, namely purebred Boer, purebred Anglo-Nubian, crossbred Boer, crossbred Anglo-Nubian and Philippine Native goats, were used in the study. Data on slaughter parameters, carcass characteristics, and sensory evaluation were gathered and analyzed using a Completely Randomized Design (CRD) at the 5% level of significance, and the results on carcass conformation were assessed descriptively. Results showed that slaughter data such as slaughter/live weight, hot and chilled carcass weights, dressing percentage and percentage drip loss were significantly different (P<0.05) among breeds. On carcass and meat characteristics, purebred and upgraded Boer were found to be moderately muscular, while the Native goat was rated as thin muscular. The color of the carcass also revealed that purebred and crossbred Boer were described as dark red, while the Native goat was noted to be slightly pale. On sensory evaluation, the results indicated no significant difference (P>0.05) among the breeds evaluated. It is therefore concluded that purebred goats have heavier carcasses, while both purebred and upgraded Boer are rated moderately muscular. It is further confirmed that, regardless of breed, goats have the same sensory characteristics. Thus, it is recommended to slaughter heavier goats to obtain more carcasses with better conformation and quality.

Keywords: Carcass quality, goat, sensory evaluation, slaughter.

38 Capacity Building for Hazmat Transport Emergency Preparedness: 'Hotspot Impact Zone' Mapping from Flammable and Toxic Releases

Authors: U. K. Chakrabarti, Jigisha Parikh

Abstract:

Hazardous material transportation by road is coupled with an inherent risk of accidents causing loss of lives, grievous injuries, property losses and environmental damage. The most common type of hazmat road accident is the release of hazardous substances (78%), followed by fires (28%), explosions (14%) and vapour/gas clouds (6%). The paper initially discusses the probable 'impact zones' likely to be caused by one flammable (LPG) and one toxic (ethylene oxide) chemical being transported through a sizable segment of a State Highway connecting three notified industrial zones in Surat district in Western India, housing 26 MAH industrial units. Three 'hotspots' were identified along the highway segment depending on the particular chemical traffic and the population distribution within 500 meters on either side. Thermal radiation and explosion overpressure have been calculated for LPG and ethylene oxide BLEVE scenarios, along with a toxic release scenario for ethylene oxide. In addition, dispersion calculations for an ethylene oxide toxic release have been made for each hotspot location, and the impact zones have been mapped for the LOC concentrations. Subsequently, the maximum initial isolation and protective zones were calculated based on the ERPG-3 and ERPG-2 values of ethylene oxide respectively, estimated for the worst-case scenario under worst weather conditions. The data analysis will be helpful to the local administration in capacity building with respect to rescue/evacuation and medical preparedness, and will provide quantitative inputs to augment the District Offsite Emergency Plan document.

Keywords: Hotspot, ethylene oxide, LPG, MAH (Major Accident Hazard).

37 A Multivariate Statistical Approach for Water Quality Assessment of River Hindon, India

Authors: Nida Rizvi, Deeksha Katyal, Varun Joshi

Abstract:

River Hindon is an important river catering to the demands of the highly populated rural and industrial clusters of western Uttar Pradesh, India. The water quality of river Hindon is deteriorating at an alarming rate due to various industrial, municipal and agricultural activities. The present study aimed at identifying the pollution sources and quantifying the degree to which these sources are responsible for the deteriorating water quality of the river. Various water quality parameters were assessed: pH, temperature, electrical conductivity, total dissolved solids, total hardness, calcium, chloride, nitrate, sulphate, biological oxygen demand, chemical oxygen demand, and total alkalinity. Water quality data obtained from eight study sites over one year were subjected to two multivariate techniques, namely principal component analysis and cluster analysis. Principal component analysis was applied with the aim of finding out the spatial variability and identifying the sources responsible for the water quality of the river. Three varifactors were obtained after varimax rotation of the initial principal components. Cluster analysis was carried out to classify sampling stations of a certain similarity, grouping the eight sites into two clusters. The study reveals that anthropogenic influences (municipal, industrial, waste water and agricultural runoff) were the major source of river water pollution. Thus, this study illustrates the utility of multivariate statistical techniques for the analysis and elucidation of multifaceted data sets, the recognition of pollution sources/factors, and the understanding of temporal/spatial variations in water quality for effective river water quality management.
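
The two techniques can be sketched with scikit-learn; the data here are random stand-ins for the (samples x parameters) water quality matrix, and the sampling layout is an assumption for illustration:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
X = rng.standard_normal((96, 12))        # 8 sites x 12 months, 12 parameters
Xz = StandardScaler().fit_transform(X)   # z-score before PCA/FA, as is usual

# Varimax-rotated factors ("varifactors"): scikit-learn's FactorAnalysis
# supports rotation="varimax" (scikit-learn >= 0.24).
fa = FactorAnalysis(n_components=3, rotation="varimax").fit(Xz)
loadings = fa.components_.T              # parameters x varifactors
print("strongest parameter per varifactor:", np.abs(loadings).argmax(axis=0))

# Cluster the 8 sites on their mean parameter profiles.
site_means = Xz.reshape(8, 12, 12).mean(axis=1)
labels = AgglomerativeClustering(n_clusters=2).fit_predict(site_means)
print("site cluster labels:", labels)
```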

Keywords: Cluster analysis, multivariate statistical technique, river Hindon, water quality.

36 Semantic Enhanced Social Media Sentiments for Stock Market Prediction

Authors: K. Nirmala Devi, V. Murali Bhaskaran

Abstract:

Traditional document representation for classification follows the Bag of Words (BoW) approach to represent term weights. The conventional method uses the Vector Space Model (VSM) to exploit the statistical information of terms in the documents, but it fails to capture the semantic information as well as the order of the terms. The phrase-based approach preserves the order of the terms present in the documents but not the semantics behind the words. Therefore, a semantic concept based approach is used in this paper to enhance the semantics by incorporating ontology information. A novel method is proposed to forecast the intraday directional movement of stock market prices based on sentiments from Twitter and Money Control news articles. Stock market forecasting is a very difficult and highly complicated task because it is affected by many factors, such as economic conditions, political events and investor sentiment. Stock market series are generally dynamic, nonparametric, noisy and chaotic by nature. Sentiment analysis, together with the wisdom of crowds, can automatically compute the collective intelligence regarding future performance in many areas, such as the stock market, box office sales and election outcomes. The proposed method utilizes collective sentiments for the stock market to predict directional movements in stock prices. Using the Granger causality test, we find that the collective sentiments in the above social media have strong predictive power for up/down movements in stock prices.
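
The Granger causality test itself is straightforward to run with statsmodels; the series below are synthetic, with movements constructed to lag sentiment by one day, and stand in for the paper's real sentiment and price data:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
sentiment = rng.standard_normal(250)                   # daily sentiment scores
# Synthetic up/down movements that lag sentiment by one day, plus noise:
movement = np.sign(np.roll(sentiment, 1) + 0.5 * rng.standard_normal(250))

# Column order matters: the test asks whether column 2 Granger-causes column 1.
data = np.column_stack([movement, sentiment])
res = grangercausalitytests(data, maxlag=3, verbose=False)
for lag, (tests, _) in res.items():
    print(lag, "ssr F-test p-value:", round(tests["ssr_ftest"][1], 4))
```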

Keywords: Bag of Words, Collective Sentiments, Ontology, Semantic relations, Sentiments, Social media, Stock Prediction, Twitter, Vector Space Model, wisdom of crowds.

35 Digital Content Strategy: Detailed Review of the Key Content Components

Authors: Oksana Razina, Shakeel Ahmad, Jessie Qun Ren, Olufemi Isiaq

Abstract:

The modern life of businesses is categorically reliant on their established position online, where digital (and particularly website) content plays a significant role as the first point of information. Digital content, therefore, becomes essential, from making the first impression through to the building and development of client relationships. Despite a number of valuable papers suggesting a strategic approach when dealing with digital data, other sources often do not view or accept the approach to digital content as a holistic or continuous process; associations are frequently made with merely a one-off marketing campaign or similar. The challenge lies in establishing an agreed definition for the notion of Digital Content Strategy (DCS), which currently does not exist, as it is viewed from an excessive number of angles. A strategic approach to content, nonetheless, is required, both practically and contextually. We therefore aimed to identify the key content components comprising a DCS, to ensure all the aspects were covered and strategically applied, from the company's understanding of the content value to the ability to accommodate flexibility of content and advances in technology. This conceptual project evaluated existing literature on the topic of DCS and related aspects, using the PRISMA Systematic Review Method, Document Analysis, Inclusion and Exclusion Criteria, Scoping Review, the Snowballing Technique and Thematic Analysis. The data were collected from academic and statistical sources, and government and relevant trade publications. Based on the suggestions from academic and trade sources related to the issues discussed, we revealed the key actions for content creation and attempted to define the notion of DCS. The major finding of the study is a set of Key Content Components of DCS that can be considered for implementation in a business retail setting.

Keywords: Digital content strategy, digital marketing strategy, key content components, websites.

33 Analysis of Noise Level Effects on Signal-Averaged Electrocardiograms

Authors: Chun-Cheng Lin

Abstract:

The noise level has critical effects on the diagnostic performance of the signal-averaged electrocardiogram (SAECG), because the true starting and end points of the QRS complex can be masked by residual noise and are sensitive to the noise level. Several studies and commercial machines have used a fixed number of heart beats (typically between 200 and 600) or a predefined noise level (typically between 0.3 and 1.0 μV) in each of the X, Y and Z leads to perform SAECG analysis. However, the different criteria and methods used to perform SAECG cause discrepancies in the noise levels among study subjects. According to the recommendations of the 1991 ESC, AHA and ACC Task Force consensus document for the use of SAECG, the determinations of onset and offset are closely related to the mean and standard deviation of the noise sample. Hence, this study performs SAECG using consistent root-mean-square (RMS) noise levels among study subjects and analyzes the noise level effects on SAECG. It also evaluates the differences between normal subjects and chronic renal failure (CRF) patients in the time-domain SAECG parameters. The study subjects comprised 50 normal Taiwanese subjects and 20 CRF patients. During signal-averaged processing, different RMS noise levels were set to evaluate their effects on three time-domain parameters: (1) filtered total QRS duration (fQRSD), (2) RMS voltage of the last 40 ms of the QRS (RMS40), and (3) duration of the low amplitude signals below 40 μV (LAS40). The results demonstrated that reducing the RMS noise level increases fQRSD and LAS40 and decreases RMS40, and can further increase the differences in fQRSD and RMS40 between normal subjects and CRF patients. The SAECG may also become abnormal due to the reduction of the RMS noise level. In conclusion, it is essential to establish diagnostic criteria for SAECG using consistent RMS noise levels to reduce noise level effects.
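
A minimal sketch of the three time-domain parameters, assuming a filtered, signal-averaged QRS in microvolts sampled at 1 kHz with known onset and offset (the toy waveform and thresholds follow the definitions above, not the authors' data):

```python
import numpy as np

def time_domain_params(qrs, fs=1000):
    """fQRSD (ms), RMS40 (uV) and LAS40 (ms) of a filtered averaged QRS in uV."""
    fqrsd = len(qrs) / fs * 1000.0                 # filtered QRS duration
    last40 = qrs[-int(0.040 * fs):]                # terminal 40 ms
    rms40 = float(np.sqrt(np.mean(last40**2)))
    below = np.abs(qrs) < 40.0                     # samples under 40 uV
    n = 0                                          # terminal run of low amplitude
    for v in below[::-1]:
        if not v:
            break
        n += 1
    las40 = n / fs * 1000.0
    return fqrsd, rms40, las40

# Toy averaged QRS: 120 ms long, with a decaying terminal amplitude.
t = np.linspace(0, 0.12, 120)
qrs = 800 * np.exp(-40 * t) * np.sin(2 * np.pi * 25 * t)
print(time_domain_params(qrs))
```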

Keywords: Signal-averaged electrocardiogram, ventricular late potentials, chronic renal failure, noise level effects.

32 Sedimentary Response to Coastal Defense Works in São Vicente Bay, São Paulo

Authors: L. C. Ansanelli, P. Alfredini

Abstract:

The article presents an evaluation of the effectiveness of two groins located at Gonzaguinha and Milionários Beaches, situated on the southeast coast of Brazil. The effectiveness of these coastal defense structures is evaluated in terms of sedimentary dynamics, one of the most important environmental processes to be assessed in coastal engineering studies. The applied method is based on the Delft3D numerical model system: the Delft3D-WAVE module was used for wave modelling, Delft3D-FLOW for hydrodynamic modelling and Delft3D-SED for sediment transport modelling. The models were calibrated so that the simulations adequately represent the region studied, with improvements to the model elements evaluated through statistical comparisons of similarity between the results and the wave, current and tide data recorded in the study area. An analysis of the maximum wave heights was carried out to select the months with the highest accumulated energy, and these conditions were implemented in the engineering scenarios. The engineering studies were performed for two scenarios: 1) numerical simulation of the area considering only the two existing groins; 2) the conception of breakwaters coupled to the ends of the existing groins, resulting in two 'T'-shaped structures. The sediment model showed that, for the simulated period, the area is affected by erosive processes and that the existing groins have little effectiveness in defending the coast in question. The implemented T structures showed some effectiveness in protecting the beaches against erosion and enabled the recovery of the portion of Milionários Beach directly covered by them. To complement this study, it is suggested that further engineering scenarios be conceived that might recover other areas of the studied region.

Keywords: Coastal engineering, coastal erosion, Sao Vicente Bay, Delft3D, coastal engineering works.

31 A Secure Auditing Framework for Load Balancing in Cloud Environment

Authors: R. Geetha, T. Padmavathy

Abstract:

Security audit is an important aspect to be considered by cloud service customers. It is basically a certification process to audit the controls that deliver the security requirements. Security audits are conducted by trained and qualified staff belonging to an independent auditing organization, and must be carried out against a standard of security controls. Proper checks must be made that the cloud user has adequate reporting and logging facilities within the customer's system, ensuring an appropriate business and operational flow of data through the cloud service. We propose a cloud-based secure auditing framework, which enables a trusted authority to securely store secret data with semi-trusted cloud service providers and selectively share that data with a wide range of data recipients, reducing the key management complexity for data owners and recipients. Unlike previous cloud-based data frameworks, data owners upload their secret data into the cloud using both static and dynamic auditing schemes. A further refinement is that, if a data recipient needs to download an individual record, the recipient sends the request to the authority, since the data owner holds the access control. If the owner must share the original record with the data recipient, the recipient's request is acknowledged. Once the acknowledgement for the record is complete, the recipient downloads the original record, and the transfer and download times are logged with dates and monitored by the auditor. In addition to the deduplication concept, a reduced cloud memory footprint using dynamic document distribution is also proposed.
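
As an illustration of two of the ingredients, deduplicated storage and audited, owner-approved downloads, here is a toy content-addressed store with an audit log; this is our sketch, not the proposed framework:

```python
import hashlib
from datetime import datetime, timezone

store, audit_log = {}, []

def _log(event, digest):
    """Record every event with a UTC timestamp for the auditor."""
    audit_log.append((event, digest, datetime.now(timezone.utc).isoformat()))

def upload(data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    if digest not in store:            # deduplication: identical files stored once
        store[digest] = data
    _log("upload", digest)
    return digest

def download(digest: str, approved: bool) -> bytes:
    if not approved:                   # the owner/authority holds access control
        _log("denied", digest)
        raise PermissionError("request not acknowledged by the owner")
    _log("download", digest)
    return store[digest]

ref = upload(b"patient record 001")
upload(b"patient record 001")          # deduplicated, but still logged
print(download(ref, approved=True))
print(audit_log)
```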

Keywords: Cloud computing, cloud storage auditing, data integrity, key exposure.

30 DNA Polymorphism Studies of β-Lactoglobulin Gene in Saudi Goats

Authors: Amr A. El Hanafy, Muhammad Qureshi, Jamal Sabir, Mohamed Mutawakil, Mohamed M. Ahmed, Hassan El Ashmaoui, Hassan Ramadan, Mohamed Abou-Alsoud, Mahmoud Abdel Sadek

Abstract:

Domestic goats (Capra hircus) are an extremely diverse species and a principal animal genetic resource of the developing world. They provide a persistent supply of meat, milk, fibre, and skin, and are considered important revenue generators in small pastoral environments. This study aimed to fingerprint the β-LG gene at the PCR-RFLP level in native Saudi goat breeds (Ardi, Habsi and Harri) in an attempt to obtain a preliminary picture of β-LG genotypic patterns in Saudi breeds as compared to foreign breeds such as Indian and Egyptian goats. Phylogenetic analysis was also performed to investigate evolutionary trends and similarities between the caprine β-LG gene and those of other domestic species, viz. cow, buffalo and sheep. Blood samples were collected from 300 animals (100 for each breed) and genomic DNA was extracted. A fragment of the β-LG gene (427 bp) was amplified using specific primers. Subsequent digestion with the SacII restriction endonuclease revealed two alleles (A and B) and three different banding patterns or genotypes, i.e., AA, AB and BB. The statistical analysis showed a general trend that the β-LG AA genotype was associated with higher milk yield than the β-LG AB and β-LG BB genotypes. Nucleotide sequencing of the selected β-LG fragments was performed, and the sequences were submitted to GenBank NCBI (Accession No. KJ544248, KJ588275, KJ588276, KJ783455, KJ783456 and KJ874959). Phylogenetic analysis on the basis of the nucleotide sequences of native Saudi goats indicated evolutionary similarity with the GenBank reference sequences of goat, Bubalus bubalis and Bos taurus. However, sheep, the most closely related species from the evolutionary point of view, was located some distance away.
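
For readers unfamiliar with the genotyping arithmetic, allele frequencies follow from genotype counts by direct gene counting; the counts below are hypothetical, as the abstract does not report them:

```python
# Allele frequencies from PCR-RFLP genotype counts (hypothetical numbers for
# one breed of N = 100 animals; each AA animal carries two A alleles, etc.).
n_AA, n_AB, n_BB = 46, 38, 16
n = n_AA + n_AB + n_BB
freq_A = (2 * n_AA + n_AB) / (2 * n)
freq_B = (2 * n_BB + n_AB) / (2 * n)
print(f"A: {freq_A:.2f}, B: {freq_B:.2f}")   # A: 0.65, B: 0.35
```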

Keywords: β-Lactoglobulin, Saudi goats, PCR-RFLP, Phylogenetic analysis.

29 Suitability of Entry into the Euro Area: An Excursion in Selected Economies

Authors: Luděk Benada, Jindřiška Šedová

Abstract:

The current situation in the eurozone raises a number of topics for discussion and may help in finding an answer to the question of whether a common currency is a more suitable means of coping with the impact of the financial crisis, or whether national currencies are better suited to this. The economic situation in the EU is now considerably volatile and, due to problems with the fulfilment of the Maastricht convergence criteria, it is now being considered whether, in their further development, new member states will decide to distance themselves from the euro or will, in an attempt to overcome the crisis, speed up its adoption. The Czech Republic is one country with little interest in adopting the euro, on the grounds that a better alternative for dealing with the crisis is an independent monetary policy and its ability to respond flexibly to the economic situation not only in Europe, but around the world. One attribute of the crisis in the Czech Republic, and of its mitigation, is the freely floating exchange rate of the national currency. It is not only the Czech Republic that is attempting to alleviate the impact of the crisis: new EU member countries also face fresh questions to which theory has yet to provide wholly satisfactory answers. These questions undoubtedly include the problem of inflation targeting and the choice of appropriate instruments for achieving financial stability. The difficulty lies in the fact that these objectives may be contradictory and may require more than one means of achieving them. In this respect, we may assume that membership of the eurozone might not in itself mitigate the development of the recession or protect the nation from future crises. We are of the opinion that the decisive factors in the development of any economy will continue to be domestic economic policy and the operability of market economic mechanisms. We attempt to document this using selected countries as examples: the Czech Republic, Poland, Hungary, and Slovakia.

Keywords: Currency exchange rate, Maastricht convergence criteria, monetary union, public finances.

28 Parameters Influencing Human-Machine Interaction in Hospitals

Authors: Hind Bouami, Patrick Millot

Abstract:

Handling the complexity of life-critical systems requires appropriate technology and the right human agent functions, such as knowledge, experience, and competence in problem prevention and solving. Human agents are involved in the management and control of human-machine systems' performance. Documenting human agents' situation awareness is crucial to support human-machine designers' decision-making. Knowledge about the risks, critical parameters and factors that can impact and threaten automation systems' performance should be collected using preventive and retrospective approaches. This paper aims to document operators' situation awareness through the analysis of automated organizations' feedback. The analysis of feedback from automated hospital pharmacies helps identify and control the critical parameters influencing human-machine interaction, in order to enhance the system's performance and security. Our human-machine system evaluation approach was deployed in the Macon hospital center's pharmacy, which has been equipped with automated drug dispensing systems since 2015. The automation specifications relate to technical aspects, human-machine interaction, and human aspects. The evaluation of drug delivery automation performance in the Macon hospital center has shown that the performance of the automated activity depends both on the performance of the automated solution chosen and on the control of systemic factors. In fact, 80.95% of the automation specifications related to the chosen Sinteco automated solution are met. The chosen automated solution accounts for 28.38% of the automation specifications' performance in the Macon hospital center. The remaining systemic parameters involved in the automation specifications' performance still need to be controlled.

Keywords: Life-critical systems, situation awareness, human-machine interaction, decision-making.
