Search results for: information value method

25029 Digital Watermarking Using Fractional Transform and (k,n) Halftone Visual Cryptography (HVC)

Authors: R. Rama Kishore, Sunesh Malik

Abstract:

The growing use of the internet for different purposes in recent times poses a great threat to the copyright protection of digital images, and digital watermarking is an effective remedy for this problem. This paper presents a detailed review of different watermarking techniques and the latest trends in the field, categorized as spatial and transform domain, blind and non-blind methods, visible and non-visible techniques, etc. It also discusses the different optimization techniques used in watermarking to improve the robustness and imperceptibility of a method, and the measures used to evaluate the performance of a watermarking algorithm. Finally, this paper proposes a watermarking algorithm using (k,n) shares of halftone visual cryptography (HVC) instead of (2,2) share cryptography; (k,n) share visual cryptography improves the security of the watermark. As halftoning is a reprographic technique, it helps in improving the visual quality of the watermark image. The proposed method uses a fractional transform to improve the robustness of the copyright protection.
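As a concrete illustration of the halftoning step the abstract relies on, the sketch below converts a grayscale watermark into a binary dot pattern with Floyd-Steinberg error diffusion; the (k,n) share generation and fractional-transform embedding of the proposed method are not reproduced here, and the gradient input is a stand-in.

```python
import numpy as np

def floyd_steinberg_halftone(gray):
    """Binarize a grayscale image (floats in [0, 1]) by error diffusion."""
    img = gray.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y, x] = new
            err = old - new
            # Push the quantization error onto unvisited neighbours.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img.astype(np.uint8)

# Toy watermark: a gradient becomes a dot pattern suitable for share generation.
watermark = np.tile(np.linspace(0, 1, 64), (64, 1))
halftoned = floyd_steinberg_halftone(watermark)
```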

Keywords: digital watermarking, fractional transform, halftone, visual cryptography

Procedia PDF Downloads 355
25028 Holy Quran’s Hermeneutics from Self-Referentiality to the Quran by Quran’s Interpretation

Authors: Mohammad Ba’azm

Abstract:

The self-referentiality method, as the missing link of the Qur'an-by-Qur'an interpretation, applies precisely at the level of Quranic vocabulary, but once it enters the domain of verses, chapters, and the whole Qur'an, its defect becomes apparent. Self-referentiality cannot show the clear concept of the Quranic scriptures, unlike the Qur'an-by-Qur'an interpretation method, which guides us to comprehension and exact hermeneutics. The Qur'an-by-Qur'an interpretation is a solid way of comprehending the verses of the Qur'an and does not use external resources to provide implications and meanings; it has both theoretical and practical supports. The theoretical supports are based on the principles and modalities that validate the legitimacy of the interpretive method discussed, and the practical supports relate to the practice of the religious elite. The combination of these two methods yields an exact understanding of the Qur'an at the level of Quranic verses, chapters, and the whole Qur'an. By examining the word 'book' in the Qur'an, this study shows the difference between the two methods and the necessity of joining them in order to attain a desirable level of comprehension of the Qur'an's meaning. We argue that, given the many aspects of the meaning of Quranic words, no word can be said to have a single exact meaning.

Keywords: Qur’an’s hermeneutic, self-referentiality, The Qur’an by Qur’an’s Interpretation, polysemy

Procedia PDF Downloads 188
25027 An Automatic Bayesian Classification System for File Format Selection

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach for the classification of unstructured format descriptions for the identification of file formats. The main contribution of this work is the employment of data mining techniques to support file format selection using just the unstructured text description that comprises the most important format features for a particular organisation. The file format identification method then employs a file format classifier and associated configurations to support digital preservation experts with an estimation of the required file format. Our goal is to make use of a format specification knowledge base aggregated from different Web sources in order to select a file format for a particular institution. Using the naive Bayes method, the decision support system recommends a file format to an expert for his institution. The proposed methods facilitate the selection of file formats and improve the quality of the digital preservation process. The presented approach is meant to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and specifications of file formats. To facilitate decision making, the aggregated information about the file formats is presented as a file format vocabulary that comprises the most common terms characteristic of all researched formats. The goal is to suggest a particular file format based on this vocabulary for analysis by an expert. A sample file format calculation and the calculation results, including probabilities, are presented in the evaluation section.
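A minimal sketch of the naive Bayes recommendation idea described above, using scikit-learn; the format descriptions and labels are hypothetical stand-ins for the aggregated Web-source knowledge base.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: unstructured format descriptions -> format labels.
descriptions = [
    "lossless raster image, wide tool support, no compression artifacts",
    "lossy raster image, small files, artifacts at high compression",
    "page-oriented document, embeds fonts, fixed layout for archiving",
    "plain text, structured markup, human readable, schema validation",
]
labels = ["TIFF", "JPEG", "PDF/A", "XML"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(descriptions, labels)

# An expert's free-text requirement is mapped to the most probable format.
query = ["archival document with fixed layout and embedded fonts"]
print(model.predict(query))          # e.g. ['PDF/A']
print(model.predict_proba(query))    # class probabilities for the recommendation
```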

Keywords: data mining, digital libraries, digital preservation, file format

Procedia PDF Downloads 499
25026 A Comparative Study between FEM and Meshless Methods

Authors: Jay N. Vyas, Sachin Daxini

Abstract:

Numerical simulation techniques are now widely used in product development and testing instead of expensive, time-consuming and sometimes dangerous laboratory experiments. Numerous numerical methods are available for simulating physical problems in different engineering fields. Grid-based methods, like the Finite Element Method, are extensively used for various kinds of static, dynamic, structural and non-structural analysis during the product development phase. Drawbacks of grid-based methods, such as discontinuous secondary field variables and difficulties with fracture mechanics and large deformation problems, led to the development of a relatively new class of numerical simulation techniques in the last few years, popularly known as Meshless or Meshfree Methods. Meshless Methods are expected to be more adaptive and flexible than the Finite Element Method because domain discretization in Meshless Methods requires only nodes. The present paper introduces Meshless Methods and differentiates them from the Finite Element Method in terms of the following aspects: shape functions used, role of the weight function, techniques to impose essential boundary conditions, integration techniques for discrete system equations, convergence rate, accuracy of solution, and computational effort. Capabilities, benefits and limitations of Meshless Methods are discussed at the end of the paper.
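To make the contrast concrete, here is a minimal 1D finite element assembly for -u'' = 1 with linear shape functions; the comment at the boundary-condition step marks the point where grid-based and meshless methods diverge most sharply. This is a generic textbook sketch, not code from the paper.

```python
import numpy as np

# -u''(x) = f(x) on [0, 1], u(0) = u(1) = 0, linear "hat" shape functions.
n_el = 8                      # number of elements
nodes = np.linspace(0.0, 1.0, n_el + 1)
h = np.diff(nodes)

K = np.zeros((n_el + 1, n_el + 1))   # global stiffness matrix
F = np.zeros(n_el + 1)               # global load vector
f = lambda x: 1.0                    # unit source term

for e in range(n_el):
    ke = (1.0 / h[e]) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element stiffness
    fe = f(nodes[e:e + 2].mean()) * h[e] / 2.0 * np.ones(2)    # lumped load
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += ke
    F[idx] += fe

# Impose essential (Dirichlet) boundary conditions directly - straightforward
# with FEM's interpolating shape functions, but a known difficulty for many
# meshless approximants, which generally lack the Kronecker-delta property.
u = np.zeros(n_el + 1)
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])   # exact solution: x(1-x)/2
```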

Keywords: numerical simulation, Grid-based methods, Finite Element Method, Meshless Methods

Procedia PDF Downloads 389
25025 Comparative Morphometric Analysis of Ambardi and Mangari Watersheds of Kadvi and Kasari River Sub-Basins in Kolhapur District, Maharashtra, India: Using Geographical Information System (GIS)

Authors: Chandrakant Gurav, Md. Babar

Abstract:

In the present study, an attempt is made at a comparative morphometric analysis of the Ambardi and Mangari watersheds of the Kadvi and Kasari river sub-basins, Kolhapur District, Maharashtra, India, using Geographical Information System (GIS) techniques. GIS is a computer-assisted information system used to store, analyze and display spatial data. Both watersheds originate from the Masai plateau of the Jotiba-Panhala hill range in Panhala Taluka of Kolhapur district. The Ambardi watershed covers an area of 42.31 sq. km and occurs on the northern hill slope, whereas the Mangari watershed covers 54.63 sq. km and occurs on the southern hill slope. Geologically, the entire study area is covered by the Deccan Basaltic Province (DBP) of late Cretaceous to early Eocene age; laterites of late Pleistocene age also occur on the tops of the hills. The objective of the present study is to compare the morphometric parameters of watersheds occurring on opposite slopes of the hill. Morphometric analysis indicates that the Ambardi watershed is a 4th-order stream basin and the Mangari watershed a 5th-order one. The average bifurcation ratios of the two watersheds are 5.4 and 4.0, showing that in both watersheds the streams flow over lithology of a homogeneous nature and that there is no structural control on the development of the watersheds. The drainage densities of the Ambardi and Mangari watersheds are 3.45 km/km² and 3.81 km/km², respectively, and the stream frequencies are 4.51 streams/km² and 5.97 streams/km²; the high drainage density and high stream frequency are governed by the steep slope and the low infiltration rate of the area for groundwater recharge. The textural ratios of the two watersheds are 6.6 km⁻¹ and 9.6 km⁻¹, indicating that the drainage texture is fine to very fine. The form factor, circularity ratio and elongation ratio of the Ambardi and Mangari watersheds show that both are elongated in shape. The basin relief of the Ambardi watershed is 447 m, while that of Mangari is 456 m. The relief ratio of Ambardi is 0.0428 and that of Mangari is 0.040. The ruggedness numbers of Ambardi and Mangari are 1.542 and 1.737, respectively; these high values indicate that both the relief and the drainage density are high.
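The indices quoted above follow standard Horton/Schumm definitions; a small sketch, with the total stream length and stream count back-derived from the reported ratios rather than taken from the paper:

```python
def drainage_density(total_stream_length_km, area_km2):
    """Dd = Lu / A (Horton, 1932), in km per km^2."""
    return total_stream_length_km / area_km2

def stream_frequency(n_streams, area_km2):
    """Fs = Nu / A (Horton, 1932), in streams per km^2."""
    return n_streams / area_km2

def ruggedness_number(basin_relief_m, dd_km_per_km2):
    """Rn = H * Dd, with relief converted to km so Rn is dimensionless."""
    return (basin_relief_m / 1000.0) * dd_km_per_km2

# Ambardi values from the abstract; total stream length (~146 km) and stream
# count (~191) are back-derived from the reported Dd and Fs, not reported data.
dd = drainage_density(145.97, 42.31)
print(round(dd, 2))                           # ~3.45 km/km^2
print(round(stream_frequency(191, 42.31), 2)) # ~4.51 streams/km^2
print(round(ruggedness_number(447, dd), 3))   # ~1.542, matching the abstract
```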

Keywords: Ambardi, Deccan basalt, GIS, morphometry, Mangari, watershed

Procedia PDF Downloads 301
25024 Synthetic Classicism: A Machine Learning Approach to the Recognition and Design of Circular Pavilions

Authors: Federico Garrido, Mostafa El Hayani, Ahmed Shams

Abstract:

The exploration of the potential of artificial intelligence (AI) in architecture is still embryonic; however, its latent capacity to change design disciplines is significant. 'Synthetic Classicism' is a research project that questions the underlying aspects of classically organized architecture, not just in aesthetic terms but also from a geometrical and morphological point of view, intending to generate new architectural information using historical examples as source material. The main aim of this paper is to explore the uses of artificial intelligence and machine learning algorithms in architectural design while creating a coherent narrative to be contained within a design process. The purpose is twofold: on one hand, to develop and train machine learning algorithms to produce architectural information on small pavilions, and on the other, to synthesize new information from previous architectural drawings. These algorithms 'interpret' graphical information from each pavilion and then generate new information from it. Once the algorithms are trained, the procedure is the following: starting from a line profile, a synthetic 'front view' of a pavilion is generated; using it as source material, an isometric view is created from it; and finally, a top view is produced. Thanks to GAN algorithms, it is also possible to generate front and isometric views without any graphical input. The final intention of the research is to produce isometric views out of historical information, such as the pavilions of Sebastiano Serlio, James Gibbs, or John Soane. The idea is to create and interpret new information not just in terms of historical reconstruction but also to explore AI as a novel tool in the narrative of a creative design process. This research also challenges the association of algorithmic design with efficiency or fitness, embracing instead the possibility of a creative collaboration between artificial intelligence and a human designer. Hence the double feature of this research, both analytical and creative: first synthesizing images based on a given dataset, and then generating new architectural information from historical references. We find that the possibility of creatively understanding and manipulating historic (and synthetic) information will be a key feature of future innovative design processes. Finally, the main question we propose is whether an AI could be used not just to create an original and innovative group of simple buildings but also to foster a novel architectural sensibility grounded in the specificities of the architectural dataset, whether historic, human-made or synthetic.
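A compact sketch of the kind of paired image-to-image translation step described (front view to isometric view), in PyTorch, with random tensors standing in for drawing pairs; the project's actual network architecture and training data are not specified in the abstract, so everything below is illustrative.

```python
import torch
import torch.nn as nn

# Tiny encoder-decoder generator: front-view drawing -> isometric-view drawing.
gen = nn.Sequential(
    nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),
)
# Discriminator judging (input, output) pairs, pix2pix-style.
disc = nn.Sequential(
    nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1),
)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)

front = torch.randn(8, 1, 64, 64)    # stand-ins for front-view drawings
iso_gt = torch.randn(8, 1, 64, 64)   # stand-ins for paired isometric views

# One adversarial training step: discriminator, then generator.
fake = gen(front)
d_real = disc(torch.cat([front, iso_gt], dim=1))
d_fake = disc(torch.cat([front, fake.detach()], dim=1))
d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

d_fake = disc(torch.cat([front, fake], dim=1))
g_loss = bce(d_fake, torch.ones_like(d_fake)) + 100 * l1(fake, iso_gt)
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```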

Keywords: architecture, central pavilions, classicism, machine learning

Procedia PDF Downloads 140
25023 Extended Kalman Filter and Markov Chain Monte Carlo Method for Uncertainty Estimation: Application to X-Ray Fluorescence Machine Calibration and Metal Testing

Authors: S. Bouhouche, R. Drai, J. Bast

Abstract:

This paper is concerned with a method for evaluating the uncertainty of steel sample content determined by the X-ray fluorescence method. The considered method of analysis is a comparative technique based on X-ray fluorescence; the calibration step assumes an adequate chemical composition of the analyzed metallic sample. This work proposes a new combined approach using the Kalman filter and Markov chain Monte Carlo (MCMC) for the uncertainty estimation of steel content analysis. The Kalman filter algorithm is extended to the model identification of the chemical analysis process using the main factors affecting the analysis results; in this case, the estimated states are reduced to the model parameters. MCMC is a stochastic method that computes the statistical properties of the considered states, such as the probability distribution function (PDF), according to the initial state and the target distribution, using a Monte Carlo simulation algorithm. The conventional approach is based on linear correlation; the uncertainty budget is established for the steel Mn (wt%), Cr (wt%), Ni (wt%) and Mo (wt%) content, respectively. A comparative study between the conventional procedure and the proposed method is given. This kind of approach is applied to construct an accurate computing procedure for uncertainty measurement.
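A toy sketch of the MCMC half of the approach: a random-walk Metropolis sampler over the parameters of a linear calibration model, yielding a posterior spread that plays the role of the uncertainty estimate. The calibration model, priors and data below are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy calibration: intensity = a * content + b + noise; infer (a, b) from
# certified reference samples, then summarize the posterior by MCMC.
true_a, true_b, sigma = 2.0, 0.5, 0.05
content = np.linspace(0.1, 1.5, 12)                  # e.g. Mn wt% of standards
intensity = true_a * content + true_b + rng.normal(0, sigma, content.size)

def log_post(theta):
    a, b = theta
    resid = intensity - (a * content + b)
    return -0.5 * np.sum(resid**2) / sigma**2        # flat priors assumed

samples, theta = [], np.array([1.0, 0.0])
lp = log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, 0.02, 2)            # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:         # Metropolis acceptance
        theta, lp = prop, lp_prop
    samples.append(theta)

samples = np.array(samples[5000:])                   # drop burn-in
print(samples.mean(axis=0), samples.std(axis=0))     # estimates + uncertainty
```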

Keywords: Kalman filter, Markov chain Monte Carlo, x-ray fluorescence calibration and testing, steel content measurement, uncertainty measurement

Procedia PDF Downloads 283
25022 Implementation of Cloud Customer Relationship Management in Banking Sector: Strategies, Benefits and Challenges

Authors: Ngoc Dang Khoa Nguyen, Imran Ali

Abstract:

Cloud customer relationship management (CRM) has emerged as an innovative tool to augment customer satisfaction and the performance of banking systems. Cloud CRM allows banks to collect, analyze and utilize customer-associated information and update their systems, thereby offering superior customer service. Cloud technologies have invaluable potential to ensure innovative customer experiences, successful collaboration, enhanced speed to market and IT effectiveness. As such, many leading banks have been attracted to such innovative and customer-driven solutions to revolutionize their existing business models. Many Chief Information Officers (CIOs) have already implemented cloud CRM or are in the process of implementing it. However, many organizations are still reluctant to take such an initiative due to the lack of information on the factors influencing its implementation. This paper, therefore, aims to delve into the strategies, benefits and challenges intertwined in the implementation of cloud CRM in the banking sector and to provide reliable solutions.

Keywords: banking sector, cloud computing, cloud CRM, strategy

Procedia PDF Downloads 166
25021 The Solution of Nonlinear Partial Differential Equation for The Phenomenon of Instability in Homogeneous Porous Media by Homotopy Analysis Method

Authors: Kajal K. Patel, M. N. Mehta, T. R. Singh

Abstract:

When water is injected into an oil formation area during the secondary oil recovery process, instability occurs near the common interface due to the viscosity difference between the injected water and the native oil. The governing equation gives rise to a nonlinear partial differential equation, and its solution has been obtained by the homotopy analysis method with an appropriate guess value of the solution together with some conditions and standard relations. The solution gives the average cross-sectional area occupied by the schematic fingers during the occurrence of the instability phenomenon. The numerical and graphical presentation has been developed using Maple software.

Keywords: capillary pressure, homotopy analysis method, instability phenomenon, viscosity

Procedia PDF Downloads 496
25020 Numerical Solutions of an Option Pricing Rainfall Derivatives Model

Authors: Clarinda Vitorino Nhangumbe, Ercília Sousa

Abstract:

Weather derivatives are financial products used to cover non-catastrophic weather events, with a weather index as the underlying asset. The rainfall derivative pricing model is built on the assumption that rainfall dynamics follow an Ornstein-Uhlenbeck process, and the partial differential equation approach is used to derive a two-dimensional, time-dependent convection-diffusion partial differential equation whose spatial variables are the rainfall index and the rainfall depth. To compute approximate solutions of the partial differential equation, appropriate boundary conditions are suggested, and an explicit numerical method is proposed in order to deal efficiently with the different choices of the coefficients involved in the equation. Being an explicit numerical method, it is conditionally stable; the stability region of the numerical method and the order of convergence are therefore discussed. The model is tested on real precipitation data.
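A minimal sketch of an explicit scheme of this family, reduced to one spatial dimension for brevity: forward-time, centered-space (FTCS) for a model convection-diffusion equation, with the time step chosen to respect the usual conditional-stability limits. Coefficients and initial data are illustrative, not the paper's.

```python
import numpy as np

# Explicit FTCS scheme for a model 1-D convection-diffusion equation
#   u_t + a u_x = D u_xx  on (0, 1), with homogeneous Dirichlet boundaries.
a, D = 1.0, 0.01
nx, nt = 101, 200
dx = 1.0 / (nx - 1)
dt = 0.4 * min(dx * dx / (2 * D), dx / abs(a))   # respect both stability limits

x = np.linspace(0.0, 1.0, nx)
u = np.exp(-200 * (x - 0.3) ** 2)                # initial "pulse"

for _ in range(nt):
    un = u.copy()
    u[1:-1] = (un[1:-1]
               - a * dt / (2 * dx) * (un[2:] - un[:-2])               # convection
               + D * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))  # diffusion
    u[0] = u[-1] = 0.0                           # boundary conditions
```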

Keywords: finite differences method, Ornstein-Uhlenbeck process, partial differential equations approach, rainfall derivatives

Procedia PDF Downloads 105
25019 Earphone Style Wearable Device for Automatic Guidance Service with Position Sensing

Authors: Dawei Cai

Abstract:

This paper describes the design of an earphone-style wearable device that can provide an automatic guidance service for visitors. With both position information and orientation information, obtained from NFC and a geomagnetic sensor respectively, a high-level automatic guidance service can be realized. To realize the service, we developed an algorithm for position detection using the packets from NFC tags, and an algorithm to calculate the device orientation based on data from the MEMS acceleration and geomagnetic sensors. If a visitor wants an explanation of the exhibit in front of him, all he has to do is move to the object and stand still for a moment. The identification program automatically recognizes this status based on the information from the NFC and MEMS sensors and starts playing the explanation content for the exhibit. This service should be useful for improving the understanding of exhibition items and should provide a more satisfactory visiting experience with less burden.
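A sketch of the orientation half of the service: a tilt-compensated compass heading from accelerometer and magnetometer readings, under one common axis convention (sign conventions vary across devices, so treat this as illustrative).

```python
import math

def heading_deg(ax, ay, az, mx, my, mz):
    """Tilt-compensated compass heading (degrees) from accelerometer (g) and
    magnetometer (uT) readings in the device frame."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # Rotate the magnetic vector into the horizontal plane.
    mxh = mx * math.cos(pitch) + mz * math.sin(pitch)
    myh = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    return (math.degrees(math.atan2(-myh, mxh)) + 360.0) % 360.0

# Device lying flat, magnetic north along +x: heading ~ 0 degrees.
print(heading_deg(0.0, 0.0, 1.0, 30.0, 0.0, -40.0))
```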

Keywords: wearable device, MEMS sensor, ubiquitous computing, NFC

Procedia PDF Downloads 239
25018 Development of a Psychometric Testing Instrument Using Algorithms and Combinatorics to Yield Coupled Parameters and Multiple Geometric Arrays in Large Information Grids

Authors: Laith F. Gulli, Nicole M. Mallory

Abstract:

The undertaking to develop a psychometric instrument is monumental. Understanding the relationships between variables and events is important in the structural and exploratory design of psychometric instruments. Considering this, we describe a method used to group, pair and combine multiple Philosophical Assumption statements that assisted in the development of a 13-item psychometric screening instrument. We abbreviated our Philosophical Assumptions (PAs) and added parameters, which were then condensed and mathematically modeled in a specific process. This model produced clusters of combinatorics, which were utilized in design and development for 1) information retrieval and categorization, 2) item development, and 3) estimation of interactions among variables and the likelihood of events. The psychometric screening instrument measured Knowledge, Assessment (education) and Beliefs (KAB) of New Addictions Research (NAR), which we called KABNAR. We obtained an overall internal consistency for the seven Likert belief items, as measured by a Cronbach's α of .81, in the final study of 40 clinicians, calculated with SPSS 14.0.1 for Windows. We constructed the instrument to begin with demographic items (degree/addictions certifications) for identification of the target populations that practiced within Outpatient Substance Abuse Counseling (OSAC) settings. We then devised education items, belief items (seven items) and a modifiable "barrier from learning" item that consisted of six "choose any" choices. We also conceptualized a close relationship between identifying the various degrees and certifications held by Outpatient Substance Abuse Therapists (OSATs) (the demographics domain) and all aspects of their education related to EB-NAR (past and present education and desired future training). We placed a descriptive (PA)1tx in both the demographic and education domains to trace the relationships of therapist education within these two domains. The two perception domains, B1/b1 and B2/b2, represented different but interrelated perceptions from the therapist's perspective: the belief items measured therapist perceptions concerning EB-NAR and therapist perceptions of using EB-NAR at the beginning of outpatient addictions counseling. The PAs were written in simple words, descriptively accurate and concise. We then devised a list of parameters, matched them appropriately to each PA, and devised descriptive parametric PAs in a domain-categorized information grid. The descriptive parametric PAs were reduced to simple mathematical symbols, which made it easy to assemble the parametric PAs into algorithms, combinatorics and clusters to develop larger information grids. Using matching combinatorics, we took the paired demographic and education domains with a subscript of 1 and matched them to the column of each B domain with subscript 1. This algorithmic matching formed larger information grids with organized clusters in columns and rows. We repeated the process using different demographic, education and belief domains and devised multiple information grids with different parametric clusters and geometric arrays. We found benefit in combining clusters by different geometric arrays, which enabled us to trace parametric variables and concepts. We were able to understand potential differences between dependent and independent variables and to trace relationships of maximum likelihood.
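For the reliability figure quoted above, a short sketch of how Cronbach's α is computed from an items matrix; the 40x7 response data below are simulated stand-ins, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of 40 clinicians to seven 5-point belief items.
rng = np.random.default_rng(1)
base = rng.integers(1, 6, size=(40, 1))                       # shared attitude
data = np.clip(base + rng.integers(-1, 2, size=(40, 7)), 1, 5)
print(round(cronbach_alpha(data), 2))
```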

Keywords: psychometric, parametric, domains, grids, therapists

Procedia PDF Downloads 278
25017 Error Amount in Viscoelasticity Analysis Depending on Time Step Size and Method used in ANSYS

Authors: A. Fettahoglu

Abstract:

The theory of viscoelasticity is used by many researchers to represent the behavior of many materials, such as pavements on roads or bridges. Several studies have used analytical methods and rheology to predict the material behavior of simple models. Today, more complex engineering structures are analyzed using the Finite Element Method, in which material behavior is embedded by means of three-dimensional viscoelastic material laws. As a result, structures of unusual geometry and domain, like the pavements of bridges, can be analyzed by means of the Finite Element Method and three-dimensional viscoelastic equations. In the scope of this study, the rheological models embedded in ANSYS, namely generalized Maxwell elements and Prony series, which are the two representations used by ANSYS for viscoelastic material behavior, are presented explicitly. Subsequently, a practical problem with an analytical solution given in the literature is used to verify the applicability of the viscoelasticity tool embedded in ANSYS. Finally, the amount of error in the ANSYS results is compared with the analytical results to indicate the influence of the method used and of the time step size.
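The Prony series representation mentioned above is a simple sum of decaying exponentials; a sketch with hypothetical coefficients:

```python
import numpy as np

def relaxation_modulus(t, g_inf, g_i, tau_i):
    """Generalized Maxwell / Prony series: G(t) = G_inf + sum_i G_i exp(-t/tau_i)."""
    t = np.asarray(t, dtype=float)
    return g_inf + sum(g * np.exp(-t / tau) for g, tau in zip(g_i, tau_i))

# Hypothetical two-term Prony fit (moduli in MPa, relaxation times in s).
t = np.logspace(-2, 4, 200)
G = relaxation_modulus(t, g_inf=5.0, g_i=[40.0, 15.0], tau_i=[1.0, 100.0])
# Larger time steps undersample the fast tau_i terms, which is one source of
# the time-step-dependent error the paper quantifies.
```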

Keywords: generalized Maxwell model, finite element method, Prony series, time step size, viscoelasticity

Procedia PDF Downloads 369
25016 Accessible Mobile Augmented Reality App for Art Social Learning Based on Technology Acceptance Model

Authors: Covadonga Rodrigo, Felipe Alvarez Arrieta, Ana Garcia Serrano

Abstract:

Mobile augmented reality technologies have become very popular in recent years in the educational field. Researchers have studied how these technologies improve student engagement and the understanding of the learning process, but few studies have addressed the accessibility of these new technologies as applied to the digital humanities. The goal of our research is to develop an accessible mobile application with embedded augmented reality, in which the main characters of the artwork and gamification events are accompanied by multi-sensorial activities. The mobile app conducts a learning itinerary around the artistic work, driving the user experience in and out of the museum. The learning design follows the inquiry-based methodology, and social learning is conducted through interaction with social networks. The software application is being designed with a user-centered approach, following the universal design for learning (UDL) principles to assure the best level of accessibility for all. The mobile augmented reality application starts by recognizing a marker on a masterpiece of a museum using the camera of the mobile device. The augmented reality information (history, author, 3D images, audio, quizzes) is shown through virtual main characters that come out of the artwork. To comply with the UDL principles, we use a version of the technology acceptance model (TAM) to study ease of use and perceived usefulness, extended by the authors with specific indicators for measuring accessibility issues. Following a rapid prototyping method of development, the first app has recently been produced, fulfilling the EN 301549 standard and the W3C accessibility guidelines for mobile development. A TAM-based web questionnaire with 214 participants with different kinds of disabilities was previously conducted to gather information and feedback on user preferences regarding the artistic works of the Museo del Prado, the level of acceptance of technology innovations, and the ease of use of mobile elements. Preliminary results show that people with disabilities felt very comfortable while using mobile apps and an internet connection. The augmented reality elements seem to offer an added value that is highly engaging and motivating for students.

Keywords: H.5.1 (multimedia information systems), artificial, augmented and virtual realities, evaluation/methodology

Procedia PDF Downloads 135
25015 Reduction of False Positives in Head-Shoulder Detection Based on Multi-Part Color Segmentation

Authors: Lae-Jeong Park

Abstract:

The paper presents a method that utilizes figure-ground color segmentation to extract an effective global feature for false positive reduction in head-shoulder detection. Conventional detectors, which rely on local features such as HOG for real-time operation, suffer from false positives. The color cue in an input image provides salient information on a global characteristic that is necessary to alleviate the false positives of local-feature-based detectors. An effective approach that uses figure-ground color segmentation has previously been presented in an effort to reduce false positives in object detection. In this paper, an extended version of that approach is presented that adopts separate multipart foregrounds instead of a single prior foreground and performs the figure-ground color segmentation with each of the foregrounds. The multipart foregrounds include the parts of the head-shoulder shape and additional auxiliary foregrounds optimized by a search algorithm. A classifier is constructed with a feature that consists of the set of the multiple resulting segmentations. Experimental results show that the presented method can reject more false positives than the single-prior-shape-based classifier as well as detectors with local features. The improvement is possible because the presented approach can reduce the false positives that have the same colors in the head and shoulder foregrounds.
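For context on the local-feature baseline being augmented, a minimal HOG-plus-linear-SVM sketch; the random arrays stand in for labelled head-shoulder and background crops, and the paper's global color-segmentation feature is only indicated in the comments.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

# Stand-in data: 64x64 grayscale crops; real use would load labelled
# head-shoulder windows (positives) and background windows (negatives).
rng = np.random.default_rng(0)
positives = rng.random((20, 64, 64))
negatives = rng.random((20, 64, 64))

def hog_feature(img):
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

X = np.array([hog_feature(im) for im in np.concatenate([positives, negatives])])
y = np.array([1] * 20 + [0] * 20)

clf = LinearSVC().fit(X, y)   # the local-feature detector the paper augments
# The paper's extension appends a global feature built from multipart
# figure-ground color segmentations to X before training, so that windows
# with head/shoulder regions of identical color are rejected.
```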

Keywords: pedestrian detection, color segmentation, false positive, feature extraction

Procedia PDF Downloads 281
25014 The Forensic Analysis of Engravers' Handwriting

Authors: Olivia Rybak-Karkosz

Abstract:

The purpose of this paper is to present the results of scientific research using forensic handwriting analysis. The research was conducted to verify the stability and lability of engravers' handwriting and to check whether engravers transfer traits of their handwriting to the plates and other surfaces they work. The methodology consisted of collecting representative samples of engravers' signatures written on paper with a ballpoint pen and signatures engraved on other surfaces. The forensic handwriting analysis was conducted using the graphic-comparative method (graphic method), and all traits were analysed. The paper concludes with a statement of the similarities and differences between the samples.

Keywords: artist’s signatures, engraving, forensic handwriting analysis, graphic-comparative method

Procedia PDF Downloads 102
25013 Geo Spatial Database for Railway Assets Management

Authors: Muhammad Umar

Abstract:

Safety and asset management are considered the backbone of every department. GIS has become very important for railways to manage assets and security through digital maps and web-based GIS maps, providing a complete framework for the management of assets. Pakistan Railways is the most common and safest mode of traveling in Pakistan, but the ever-increasing demand for transporting huge amounts of information generated from various sources, which must be accurate, creates problems for passengers and the administration and causes financial and time losses. GIS solves this problem through digital maps and a database: it provides real-time spatial and statistical analysis that helps communicate and exchange information with users in a sophisticated way. A GIS-based web system allows different end users to run queries simultaneously, as required. The GIS system described here provides an organization with a complete monitoring, safety and decision system for tracks, stations and junctions, which can further be used for the analysis of different areas, i.e., analysis of tracks, junctions and stations in case of reconstruction, rescue in rail accidents, and natural disasters. This research work helps to reduce financial losses and human mistakes and provides a complete security and management system for assets.

Keywords: Geographical Information System (GIS) for assets management, geo spatial database, railway assets management, Pakistan

Procedia PDF Downloads 491
25012 An Investigation on Opportunities and Obstacles on Implementation of Building Information Modelling for Pre-fabrication in Small and Medium Sized Construction Companies in Germany: A Practical Approach

Authors: Nijanthan Mohan, Rolf Gross, Fabian Theis

Abstract:

The conventional method used in the construction industry often results in significant rework, since most decisions are taken on site under the pressure of project deadlines and with improper information flow, which results in ineffective coordination. However, today's architecture, engineering, and construction (AEC) stakeholders demand faster and more accurate deliverables, efficient buildings, and smart processes, which turns out to be a tall order. Hence, the building information modelling (BIM) concept was developed as a solution to fulfill the above-mentioned necessities. Even though BIM is successfully implemented in most of the world, it is still in its early stages in Germany, since stakeholders are sceptical of its reliability and efficiency. Due to the huge capital requirement, small and medium-sized construction companies are still reluctant to implement the BIM workflow in their projects. The purpose of this paper is to analyse the opportunities and obstacles of implementing BIM for prefabrication. Among all the advantages of BIM, prefabrication is chosen for this paper because it plays a vital role in the time as well as the cost factors of a construction project. The positive impact of prefabrication can be explicitly observed by the project stakeholders and participants, which helps break through the scepticism among small-scale construction companies. The analysis consists of the development of a process workflow for implementing prefabrication in building construction, followed by a practical approach executed through two case studies. The first case study represents on-site prefabrication, and the second was done for off-site prefabrication. It was planned in such a way that the first case study gives the workers at the site first-hand experience with the BIM model, so that they can make full use of the created BIM model, which is a better representation than the traditional 2D plan. The main aim of the first case study is to build confidence in the implementation of BIM models, which was then reinforced by the execution of off-site prefabrication in the second case study. Based on the case studies, a cost and time analysis was made, and it is inferred that the implementation of BIM for prefabrication can reduce construction time and ensure minimal or no waste, better accuracy, and less problem-solving at the construction site. It is also observed that this process requires more planning time and better communication and coordination between different disciplines such as mechanical, electrical, plumbing, and architecture, which was the major obstacle to successful implementation. This paper was written from the perspective of small and medium-sized mechanical contracting companies for the private building sector in Germany.

Keywords: building information modelling, construction wastes, pre-fabrication, small and medium sized company

Procedia PDF Downloads 113
25011 Distributed Listening in Intensive Care: Nurses’ Collective Alarm Responses Unravelled through Auditory Spatiotemporal Trajectories

Authors: Michael Sonne Kristensen, Frank Loesche, James Foster, Elif Ozcan, Judy Edworthy

Abstract:

Auditory alarms play an integral role in intensive care nurses' daily work. Most medical devices in the intensive care unit (ICU) are designed to produce alarm sounds in order to make nurses aware of immediate or prospective safety risks. The utilisation of sound as a carrier of crucial patient information is highly dependent on nurses' presence, both physically and mentally. For ICU nurses, especially those who work with stationary alarm devices at the patient bed space, it is a challenge to display 'appropriate' alarm responses at all times, as they have to navigate with great flexibility in a complex work environment. While being primarily responsible for a small number of allocated patients, they are often required to engage with other nurses' patients, relatives, and colleagues at different locations inside and outside the unit. This work explores the social strategies used by a team of nurses to comprehend and react to the information conveyed by the alarms in the ICU. Two main research questions guide the study: To what extent do alarms from a patient bed space reach the relevant responsible nurse by direct auditory exposure? By which means do responsible nurses get informed about their patients' alarms when not directly exposed to the alarms? A comprehensive video-ethnographic field study was carried out to capture and evaluate alarm-related events in an ICU. The study involved close collaboration with four nurses who wore eye-level cameras and ear-level binaural audio recorders during several work shifts. At all times, the entire unit was monitored by multiple video and audio recorders. From a data set of hundreds of hours of recorded material, information about the nurses' location, social interaction, and alarm exposure at any point in time was coded in a multi-channel replay interface. The data show that responsible nurses' direct exposure to and awareness of the alarms of their allocated patients vary significantly depending on workload, social relationships, and the location of the patient's bed space. Distributed listening is deliberately employed by the nursing team as a social strategy to respond adequately to alarms, but the patterns of information flow prompted by alarm-related events are not uniform. Auditory Spatiotemporal Trajectory (AST) is proposed as a methodological label to designate the integration of temporal, spatial and auditory load information. As a mixed-method metric, it provides tangible evidence of how nurses' individual alarm-related experiences differ from one another and from stationary points in the ICU. Furthermore, it is used to demonstrate how alarm-related information reaches the individual nurse through principles of social and distributed cognition, and how that information relates to the actual alarm event. Thereby it bridges a long-standing gap in the literature on medical alarm utilisation between, on the one hand, initiatives to measure objective data of the medical sound environment without consideration for any human experience and, on the other hand, initiatives to study subjective experiences of the medical sound environment without detailed evidence of the objective characteristics of the environment.

Keywords: auditory spatiotemporal trajectory, medical alarms, social cognition, video-ethnography

Procedia PDF Downloads 190
25010 A Geometric Based Hybrid Approach for Facial Feature Localization

Authors: Priya Saha, Sourav Dey Roy Jr., Debotosh Bhattacharjee, Mita Nasipuri, Barin Kumar De, Mrinal Kanti Bhowmik

Abstract:

Biometric face recognition technology (FRT) has gained a lot of attention due to its extensive variety of applications in both security and non-security perspectives. It has emerged as a secure solution for the identification and verification of personal identity. Although other biometric methods like fingerprint scans and iris scans are available, FRT stands out as an efficient technology for its user-friendliness and contact-free operation. Accurate facial feature localization plays an important role in many facial analysis applications, including biometrics and emotion recognition, but certain factors make facial feature localization a challenging task. On the human face, expressions arise from the subtle movements of facial muscles and are influenced by internal emotional states. These non-rigid facial movements cause noticeable alterations in the locations of facial landmarks and their usual shapes, sometimes creating occlusions in facial feature areas that make face recognition a difficult problem. The paper proposes a new hybrid technique for automatic landmark detection in both neutral and expressive frontal and near-frontal face images. The method uses thresholding, sequential searching and other image processing techniques to locate the landmark points on the face. Also, Graphical User Interface (GUI) based software is designed that can automatically detect 16 landmark points around the eyes, nose and mouth that are most affected by changes in the facial muscles. The proposed system has been tested on the widely used JAFFE and Cohn-Kanade databases, as well as on the DeitY-TU face database, which was created in the Biometrics Laboratory of Tripura University under a research project funded by the Department of Electronics & Information Technology, Govt. of India. The performance of the proposed method has been evaluated in terms of error measure and accuracy. The method has a detection rate of 98.82% on the JAFFE database, 91.27% on the Cohn-Kanade database and 93.05% on the DeitY-TU database. We also present a comparative study of our proposed method against techniques developed by other researchers. Future work will bring emotion-oriented systems into focus through AU detection based on the located features.
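A toy sketch of the thresholding-and-search idea on a synthetic face crop (Otsu thresholding, contour centroids as eye candidates); the actual 16-point detector is more elaborate, and the geometry below is fabricated for illustration.

```python
import cv2
import numpy as np

# Synthetic stand-in for a detected face crop: bright skin-like background
# with two dark elliptical "eye" regions in the upper half.
face = np.full((120, 120), 200, dtype=np.uint8)
cv2.ellipse(face, (35, 45), (12, 6), 0, 0, 360, 40, -1)
cv2.ellipse(face, (85, 45), (12, 6), 0, 0, 360, 40, -1)

upper = face[:60, :]   # eyes lie in the upper half of a face crop
_, binary = cv2.threshold(upper, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

candidates = []
for c in sorted(contours, key=cv2.contourArea, reverse=True)[:2]:
    M = cv2.moments(c)
    if M["m00"] > 0:
        candidates.append((int(M["m10"] / M["m00"]),   # centroid x
                           int(M["m01"] / M["m00"])))  # centroid y

print(sorted(candidates))   # centroids near (35, 45) and (85, 45)
# A sequential search over geometric constraints (left/right symmetry,
# eye-to-eye distance) would then select the final landmark pair.
```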

Keywords: biometrics, face recognition, facial landmarks, image processing

Procedia PDF Downloads 412
25009 Impact of Depreciation Technique on Taxable Income and Financial Performance of Quoted Consumer Goods Company in Nigeria

Authors: Ibrahim Ali, Adamu Danlami Ahmed

Abstract:

This study examines the impact of depreciation on the taxable income and financial performance of consumer goods companies quoted on the Nigerian Stock Exchange. The study adopts an ex-post facto research design, and data were collected from a secondary source. The findings of the study suggest that the method of depreciation adopted in an organization influences its taxable profit. Depreciation techniques can be depressive, accelerated or linear. It is recommended that consumer goods companies review their method of depreciation to make sure an appropriate method is adopted; this will go a long way toward revitalizing their taxable profit.
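A small sketch contrasting linear (straight-line) and accelerated (declining-balance) schedules on hypothetical figures, showing how the accelerated method defers taxable profit to later years:

```python
def straight_line(cost, salvage, life):
    """Linear method: equal charge each year."""
    return [(cost - salvage) / life] * life

def declining_balance(cost, salvage, life, factor=2.0):
    """Accelerated method: larger early charges, so lower early taxable profit."""
    charges, book = [], cost
    for _ in range(life):
        charge = min(book * factor / life, book - salvage)  # never below salvage
        charges.append(charge)
        book -= charge
    return charges

cost, salvage, life, profit_before_dep = 1_000_000, 100_000, 5, 400_000
for name, sched in [("linear", straight_line(cost, salvage, life)),
                    ("accelerated", declining_balance(cost, salvage, life))]:
    taxable = [profit_before_dep - d for d in sched]
    print(name, [round(t) for t in taxable])
```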

Keywords: accelerated, linear, depressive, depreciation

Procedia PDF Downloads 285
25008 Stability-Indicating High-Performance Thin-Layer Chromatography Method for Estimation of Naftopidil

Authors: P. S. Jain, K. D. Bobade, S. J. Surana

Abstract:

A simple, selective, precise and stability-indicating high-performance thin-layer chromatographic method for the analysis of naftopidil, both in bulk and in pharmaceutical formulation, has been developed and validated. The method employed HPTLC aluminium plates precoated with silica gel as the stationary phase. The solvent system consisted of hexane:ethyl acetate:glacial acetic acid (4:4:2 v/v). The system was found to give a compact spot for naftopidil (Rf value of 0.43 ± 0.02). Densitometric analysis of naftopidil was carried out in absorbance mode at 253 nm. The linear regression analysis data for the calibration plots showed a good linear relationship, with r² = 0.999 ± 0.0001 with respect to peak area, in the concentration range of 200-1200 ng per spot. The method was validated for precision, recovery and robustness. The limits of detection and quantification were 20.35 and 61.68 ng per spot, respectively. Naftopidil was subjected to acid and alkali hydrolysis, oxidation and thermal degradation, and was found to degrade under all of these conditions, indicating that the drug is susceptible to acid, base, oxidation and heat. The degradation product was well resolved from the pure drug, with a significantly different Rf value. Statistical analysis proves that the method is repeatable, selective and accurate for the estimation of the investigated drug. The proposed HPTLC method can be applied for the identification and quantitative determination of naftopidil in bulk drug and pharmaceutical formulation.
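The validation figures above follow the usual ICH Q2(R1) computations; a sketch with hypothetical peak areas over the validated range:

```python
import numpy as np

# Calibration data in the validated range (ng per spot vs. peak area);
# the areas below are hypothetical stand-ins for the densitometric readings.
conc = np.array([200, 400, 600, 800, 1000, 1200], dtype=float)
area = np.array([1510, 3010, 4480, 6020, 7490, 9010], dtype=float)

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
residual_sd = np.sqrt(np.sum((area - pred) ** 2) / (len(conc) - 2))

# ICH Q2(R1) formulas based on residual SD of the regression and the slope.
lod = 3.3 * residual_sd / slope
loq = 10.0 * residual_sd / slope
r2 = np.corrcoef(conc, area)[0, 1] ** 2
print(f"r^2={r2:.4f}  LOD={lod:.1f} ng/spot  LOQ={loq:.1f} ng/spot")
```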

Keywords: naftopidil, HPTLC, validation, stability, degradation

Procedia PDF Downloads 400
25007 Vehicle Gearbox Fault Diagnosis Based on Cepstrum Analysis

Authors: Mohamed El Morsy, Gabriela Achtenová

Abstract:

Research on the damage of gears and gear pairs using vibration signals remains very attractive because vibration signals from a gear pair are complex in nature and not easy to interpret. Predicting gear pair defects by analyzing changes in the vibration signals of gear pairs in operation is a very reliable method. Therefore, a suitable vibration signal processing technique is necessary to extract defect information that is generally obscured by noise from the dynamic factors of other gear pairs. This article presents the value of cepstrum analysis in vehicle gearbox fault diagnosis. The cepstrum represents the overall power content of a whole family of harmonics and sidebands, even when more than one family of sidebands is present at the same time. The concepts behind the measurement and analysis involved in using the technique are briefly outlined. Cepstrum analysis is used for the detection of an artificial pitting defect in a vehicle gearbox loaded with different speeds and torques. The test stand is equipped with three dynamometers: the input dynamometer serves as the internal combustion engine, and the output dynamometers introduce the load on the flanges of the output joint shafts. The pitting defect is manufactured on the tooth side of a gear of the fifth speed on the secondary shaft. A method for the fault diagnosis of gear faults based on the order cepstrum is also presented, and the procedure is illustrated with experimental vibration data from the vehicle gearbox. The results show the effectiveness of cepstrum analysis in the detection and diagnosis of the gear condition.
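The core computation is compact: the real cepstrum is the inverse FFT of the log magnitude spectrum, and a sideband family collapses to a peak at the quefrency of the modulating shaft period. A synthetic sketch (parameters illustrative, not the test stand's):

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log magnitude spectrum."""
    spectrum = np.fft.fft(x)
    return np.real(np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)))

# Synthetic gear-mesh signal: carrier modulated by a shaft-speed sideband family.
fs = 10_000                          # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
f_mesh, f_shaft = 900.0, 30.0        # gear-mesh and shaft frequencies
x = (1 + 0.5 * np.cos(2 * np.pi * f_shaft * t)) * np.sin(2 * np.pi * f_mesh * t)

c = real_cepstrum(x)
quefrency = np.arange(len(c)) / fs
# A defect-induced sideband family shows up as a peak near 1/f_shaft ~ 33 ms.
lo, hi = int(0.02 * fs), int(0.05 * fs)
peak = quefrency[np.argmax(c[lo:hi]) + lo]
print(f"dominant quefrency ~ {peak * 1000:.1f} ms")
```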

Keywords: cepstrum analysis, fault diagnosis, gearbox, vibration signals

Procedia PDF Downloads 379
25006 Optimizing Communications Overhead in Heterogeneous Distributed Data Streams

Authors: Rashi Bhalla, Russel Pears, M. Asif Naeem

Abstract:

In this 'Information Explosion Era', analyzing data (a critical commodity) and mining knowledge from vertically distributed data streams incur huge communication costs. However, efforts to decrease communication in the distributed environment have an adverse influence on classification accuracy; therefore, a research challenge lies in maintaining a balance between transmission cost and accuracy. This paper proposes a method based on Bayesian inference to reduce the communication volume in a heterogeneous distributed environment while retaining prediction accuracy. Our experimental evaluation reveals that a significant reduction in communication can be achieved across a diverse range of dataset types.
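A toy illustration of one transmit-only-when-undecided reading of this idea (our sketch, not the paper's exact protocol): the coordinator classifies with its own feature and requests the remote node's vertical partition only when its posterior is indecisive.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_lik(x, means):
    return -0.5 * (x - means) ** 2   # unit-variance Gaussian, per class

m1 = np.array([0.0, 2.0])            # coordinator's feature: moderately informative
m2 = np.array([0.0, 3.0])            # remote node's feature: strongly informative

sent = correct = 0
for _ in range(5000):
    cls = rng.integers(0, 2)
    x1, x2 = rng.normal(m1[cls]), rng.normal(m2[cls])
    log_post = np.log(0.5) + log_lik(x1, m1)          # local evidence only
    p = np.exp(log_post - log_post.max()); p /= p.sum()
    if p.max() < 0.9:                                 # undecided: request x2
        log_post += log_lik(x2, m2)
        sent += 1
    correct += int(np.argmax(log_post) == cls)

print(f"transmissions: {sent}/5000, accuracy: {correct/5000:.2%}")
```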

Keywords: big data, Bayesian inference, distributed data stream mining, heterogeneous distributed data

Procedia PDF Downloads 161
25005 Calibration of Discrete Element Method Parameters for Modelling DRI Pellets Flow

Authors: A. Hossein Madadi-Najafabadi, Masoud Nasiri

Abstract:

The discrete element method (DEM) is a powerful technique for the numerical modeling of the flow of granular materials such as direct reduced iron (DRI). It enables the study of processes and equipment related to the production and handling of the material. However, the characteristics and properties of the granules have to be adjusted precisely to achieve reliable results in a DEM simulation. The main properties for a DEM simulation are size distribution, density, Young's modulus, Poisson's ratio and the contact coefficients of restitution, rolling friction and sliding friction. In the present paper, these properties are determined for the DEM simulation of DRI pellets. A reliable DEM simulation would contribute to optimizing the handling system of DRI in an iron-making plant. Among the mentioned properties, Young's modulus is the most important parameter, and it is usually hard to obtain for particulate solids. Here, a special method is utilized to determine this parameter precisely for DRI.
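One of the contact coefficients above, restitution, is commonly calibrated from a simple drop test; a sketch with hypothetical heights:

```python
import math

def restitution_from_drop_test(drop_height_m, rebound_height_m):
    """Coefficient of restitution from a particle drop test:
    e = v_out / v_in = sqrt(h_rebound / h_drop)."""
    return math.sqrt(rebound_height_m / drop_height_m)

# Hypothetical DRI pellet drop test onto a steel plate.
e = restitution_from_drop_test(0.50, 0.08)
print(f"restitution coefficient ~ {e:.2f}")   # ~0.40, a plausible DEM input
```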

Keywords: discrete element method, direct reduced iron, simulation parameters, granular material

Procedia PDF Downloads 180
25004 Developing Digital Twins of Steel Hull Processes

Authors: V. Ložar, N. Hadžić, T. Opetuk, R. Keser

Abstract:

The development of digital twins strongly depends on efficient algorithms and their capability to mirror real-life processes. Nowadays, such efforts are required to establish the factories of the future, which face the new demands of custom-made production. Ship hull processes face these challenges too; therefore, it is important to implement design and evaluation approaches based on production system engineering. In this study, the recently developed finite state method is employed to describe the steel hull process as a platform for the implementation of digital twinning technology. The application is justified by comparing the finite state method with the analytical approach. The method is employed to rebuild a model of a real shipyard ship hull process using a combination of serial and splitting lines. The key performance indicators, such as the production rate, work in process, and the probabilities of starvation and blockage, are calculated and compared to the corresponding results obtained through a simulation approach using the software tool Enterprise Dynamics. This study confirms that the finite state method is a suitable tool for digital twinning applications. The conclusion highlights the advantages and disadvantages of the methods employed in this context.
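For intuition about the KPIs being compared, a Monte Carlo sketch of a two-machine, one-buffer serial line under a Bernoulli reliability model; an analytical method such as the finite state method evaluates the same quantities without simulation. Parameters are illustrative, not shipyard data.

```python
import random

random.seed(0)

p1, p2 = 0.9, 0.85        # probability each machine is up in a cycle
N = 3                     # buffer capacity
buffer = 0
made = wip_sum = starved = blocked = 0
cycles = 100_000

for _ in range(cycles):
    up1, up2 = random.random() < p1, random.random() < p2
    # Machine 2 consumes from the buffer; machine 1 feeds it.
    if up2 and buffer > 0:
        buffer -= 1
        made += 1
    elif up2:
        starved += 1          # M2 is up but has no part to work on
    if up1 and buffer < N:
        buffer += 1
    elif up1:
        blocked += 1          # M1 is up but the buffer is full
    wip_sum += buffer

print(f"production rate      {made / cycles:.3f} parts/cycle")
print(f"work in process      {wip_sum / cycles:.2f}")
print(f"P(starvation of M2)  {starved / cycles:.3f}")
print(f"P(blockage of M1)    {blocked / cycles:.3f}")
```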

Keywords: digital twin, finite state method, production system engineering, shipyard

Procedia PDF Downloads 99
25003 Development and Validation Method for Quantitative Determination of Rifampicin in Human Plasma and Its Application in Bioequivalence Test

Authors: Endang Lukitaningsih, Fathul Jannah, Arief R. Hakim, Ratna D. Puspita, Zullies Ikawati

Abstract:

Rifampicin is a semisynthetic antibiotic derivative of rifamycin B produced by Streptomyces mediterranei. Rifampicin (RIF) has been used worldwide as a first-line drug prescribed throughout tuberculosis therapy. This study aims to develop and validate an HPLC method coupled with UV detection for the determination of rifampicin in spiked human plasma, and to apply it in a bioequivalence study. The chromatographic separation was achieved on an RP-C18 column (LaChrom Hitachi, 250 x 4.6 mm, 5 μm), utilizing a mobile phase of phosphate buffer/acetonitrile (55:45, v/v, pH 6.8 ± 0.1) at a flow rate of 1.5 mL/min. Detection was carried out at 337 nm using a spectrophotometer. The developed method was statistically validated for linearity, accuracy, limit of detection, limit of quantitation, precision and specificity. The specificity of the method was ascertained by comparing chromatograms of blank plasma and plasma containing rifampicin; the matrix and rifampicin were well separated. The limit of detection and limit of quantification were 0.7 µg/mL and 2.3 µg/mL, respectively. The regression curve of the standard was linear (r > 0.999) over the concentration range of 20.0-100.0 µg/mL. The mean recovery of the method was 96.68 ± 8.06%. Both intraday and interday precision data showed reproducibility (R.S.D. 2.98% and 1.13%, respectively). Therefore, the method can be used for the routine analysis of rifampicin in human plasma and in bioequivalence studies. The validated method was successfully applied in a pharmacokinetic and bioequivalence study of a rifampicin tablet in a limited number of subjects (under Ethical Clearance No. KE/FK/6201/EC/2015). The mean values of Cmax, Tmax, AUC(0-24) and AUC(0-∞) for the test formulation of rifampicin were 5.81 ± 0.88 µg/mL, 1.25 hours, 29.16 ± 4.05 µg/mL·h and 29.41 ± 4.07 µg/mL·h, respectively. For the reference formulation, the values were 5.04 ± 0.54 µg/mL, 1.31 hours, 27.20 ± 3.98 µg/mL·h and 27.49 ± 4.01 µg/mL·h. From the bioequivalence study, the 90% CIs for the test/reference formulation ratios of the logarithmically transformed Cmax and AUC(0-24) were 97.96-129.48% and 99.13-120.02%, respectively. According to the bioequivalence test guidelines of the European Commission-European Medicines Agency, it can be concluded that the test formulation of rifampicin is bioequivalent to the reference formulation.
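A sketch of the non-compartmental quantities reported above (Cmax, Tmax, trapezoidal AUC) on hypothetical single-subject profiles; the actual study computes the 90% CI of the geometric mean ratio across subjects, with an 80-125% acceptance range.

```python
import numpy as np

# Hypothetical single-subject plasma profiles (µg/mL) for test (T) and
# reference (R) tablets; times in hours after dosing.
t = np.array([0, 0.5, 1, 1.5, 2, 4, 8, 12, 24], dtype=float)
c_T = np.array([0, 2.1, 4.9, 5.8, 5.1, 3.0, 1.2, 0.5, 0.05])
c_R = np.array([0, 1.8, 4.2, 5.0, 4.6, 2.8, 1.1, 0.4, 0.04])

def nca(times_h, conc):
    """Basic non-compartmental metrics: Cmax, Tmax, AUC(0-t) by trapezoids."""
    return conc.max(), times_h[conc.argmax()], np.trapz(conc, times_h)

cmax_T, tmax_T, auc_T = nca(t, c_T)
cmax_R, tmax_R, auc_R = nca(t, c_R)

# With a single subject we can only report point ratios, not the 90% CI.
print(f"Cmax T/R = {100 * cmax_T / cmax_R:.1f}%  "
      f"AUC T/R = {100 * auc_T / auc_R:.1f}%")
```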

Keywords: validation, HPLC, plasma, bioequivalence

Procedia PDF Downloads 290
25002 Meeting User’s Information Need: A Study on the Acceptance of Mobile Library Service at UGM Library

Authors: M. Fikriansyah Wicaksono, Rafael Arief Budiman, M. Very Setiawan

Abstract:

Currently, a wide range of innovative mobile library (M-Library) services is provided for library users. The M-Library service is an innovation that aims to bring the collections of the library to users, who nowadays use their smartphones very often. With M-Library services, it is expected that users can fulfill their information needs more conveniently and practically. This study aims to find out how users use the M-Library services provided by the UGM library. The study applied a quantitative approach to investigate how the M-Library application is used. The Technology Acceptance Model (TAM) is applied to perform the analysis in terms of perceived usefulness, perceived ease of use, attitude towards behavior, behavioral intention and actual system usage. The results show that, overall, users found the M-Library application useful for meeting their information needs, for example by facilitating access to e-resources, searching UGM library collections, booking collections online, and receiving reminders for returning books.

Keywords: m-library, mobile library services, technology acceptance, library of UGM

Procedia PDF Downloads 229
25001 Use of Cobalt Graphene in Place of Platinum in Catalytic Converter

Authors: V. Srinivasan, S. M. Sriram Nandan

Abstract:

Today, the most important problem faced by mankind is pollution, which is increasing at a very high rate. It affects the ecosystem of the environment and also contributes to the greenhouse effect. Exhaust gases from automobiles are a major cause of this pollution, and the number of automobiles has grown so large that it has raised the pollution of our world to an alarming rate. There are two methods of controlling pollution, namely the pre-pollution control method and the post-pollution control method; this paper is based on controlling emissions by the post-pollution control method. The ratio of the surface area of nanoparticles to their volume is inversely proportional to the radius of the nanoparticles, so decreasing the radius increases this ratio, resulting in an increased rate of reaction and thus a decreased concentration of pollutants. To achieve this objective, the use of a cobalt-graphene catalyst is proposed, mainly to avoid the cost of expensive platinum. The proposed catalyst also has a longer life than platinum-based catalysts.
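The inverse relationship claimed above follows directly from sphere geometry; for an idealized spherical nanoparticle of radius r:

```latex
\frac{SA}{V} \,=\, \frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}} \,=\, \frac{3}{r}
```

so halving the particle radius doubles the catalytically active surface available per unit volume of catalyst.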

Keywords: automobile emissions, catalytic converter, cobalt-graphene, replacement of platinum

Procedia PDF Downloads 389
25000 Perceptual Image Coding by Exploiting Internal Generative Mechanism

Authors: Kuo-Cheng Liu

Abstract:

In perceptual image coding, the objective is to shape the coding distortion such that its amplitude does not exceed the error visibility threshold, or to remove perceptually redundant signals from the image. While most research focuses on color image coding, the perceptual quantizers developed for luminance signals are often applied directly to chrominance signals, making such color image compression methods inefficient. In this paper, the internal generative mechanism is integrated into the design of a color image compression method. The internal generative mechanism working model, based on structure-based spatial masking, is used to assess subjective distortion visibility thresholds that are more visually consistent with human perception. An estimation method for the structure-based distortion visibility thresholds of the color components is further presented, in a locally adaptive way, to design the quantization process in the wavelet color image compression scheme. Since the lowest-subband coefficient matrix of an image in the wavelet domain preserves the local property of the image in the spatial domain, the error visibility threshold inherent in each coefficient of the lowest subband of each color component is estimated using the proposed spatial error visibility threshold assessment. The threshold inherent in each coefficient of the other subbands of each color component is then estimated in a locally adaptive fashion based on the distortion energy allocation. Because the error visibility thresholds are estimated using predicted and reconstructed signals of the color image, the coding scheme incorporating the locally adaptive perceptual color quantizer does not require side information. Experimental results show that the entropies of the three color components obtained using the proposed IGM-based color image compression scheme are lower than those obtained using an existing color image compression method at perceptually lossless visual quality.
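A minimal sketch of threshold-driven quantization in the wavelet domain using PyWavelets; the per-subband thresholds below are single hypothetical values, whereas the paper estimates a visibility threshold per coefficient, adapted to local structure.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
img = rng.random((64, 64))                  # stand-in for one color component

# Multilevel 2-D wavelet decomposition; the lowest subband keeps the local
# spatial structure used to estimate per-coefficient visibility thresholds.
coeffs = pywt.wavedec2(img, 'db2', level=3)

def quantize(band, threshold):
    """Uniform quantization with the step tied to a visibility threshold."""
    return threshold * np.round(band / threshold)

coeffs[0] = quantize(coeffs[0], 0.05)       # lowest subband: finer step
coeffs[1:] = [tuple(quantize(b, 0.10) for b in level) for level in coeffs[1:]]

recon = pywt.waverec2(coeffs, 'db2')
print(f"max abs error: {np.abs(recon[:64, :64] - img).max():.3f}")
```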

Keywords: internal generative mechanism, structure-based spatial masking, visibility threshold, wavelet domain

Procedia PDF Downloads 248