Search results for: Standard
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1677

267 Legal Doctrine on Rylands v. Fletcher: One More Time on the Feasibility of a General Clause of Strict Liability in the UK

Authors: Maria Lubomira Kubica

Abstract:

The paper traces the birth and evolution of the British precedent Rylands v. Fletcher which, once adopted on the other side of the ocean (in the United States), gave rise to a general clause of liability for abnormally dangerous activities, recognized by §20 of the American Restatement of the Law Third, Liability for Physical and Emotional Harm. The main goal of the paper is to analyze the development of the legal doctrine and of the case law subsequent to the precedent, together with the attempts of the British judiciary to leap from the traditional rule contained in Rylands v. Fletcher to a general clause similar to that introduced in the United States and, recently, also at the European level. As is well known, within the scope of tort law two different initiatives compete with the aim of harmonizing European laws: the European Group on Tort Law, with its Principles of European Tort Law (hereinafter PETL), whose Article 5:101 sets forth a general clause of strict liability for abnormally dangerous activities, and the Study Group on a European Civil Code, with its Common Frame of Reference (CFR), which instead promotes an ad hoc model listing determined cases of strict liability. The very narrow scope of application of Article 5:101 PETL, restricted to abnormally dangerous activities, stands in opposition to the very broad spectrum of strict liability cases governed by the CFR. The former is a perfect example of a general clause that offers a minimum, basic standard, possibly acceptable also in those countries in which, as in the United Kingdom, this regime of liability is completely marginalized.

Keywords: Abnormally dangerous activities, general clause, Rylands v. Fletcher, strict liability.

266 4D Modelling of Low Visibility Underwater Archaeological Excavations Using Multi-Source Photogrammetry in the Bulgarian Black Sea

Authors: Rodrigo Pacheco-Ruiz, Jonathan Adams, Felix Pedrotti

Abstract:

This paper introduces underwater photogrammetric survey under challenging conditions as the main tool to enhance and enrich the documentation of archaeological excavation through the creation of 4D models. Photogrammetry has been attempted on underwater archaeological sites since at least the 1970s, and today the production of traditional 3D models is becoming common practice within the discipline. Underwater photogrammetry is more often implemented to record exposed underwater archaeological remains and less so as a dynamic interpretative tool. It therefore tends to be applied in bright environments where underwater visibility exceeds 1 m, which limits its use on the many submerged archaeological sites found in more turbid conditions. Recent years have seen significant development of better digital photographic sensors and improved optical technology, ideal for darker environments. Such developments, in tandem with powerful computing systems for processing, have allowed this research to use underwater photogrammetry as a standard recording and interpretative tool. Using multi-source photogrammetry (five GoPro Hero5 Black cameras), this paper presents the accumulation of daily (4D) underwater surveys carried out at the Early Bronze Age (3300 BC) to Late Ottoman (17th century AD) archaeological site of Ropotamo in the Bulgarian Black Sea under challenging conditions (< 0.5 m visibility). It proves that underwater photogrammetry can and should be used as one of the main recording methods, even in low light and poor underwater conditions, as a way to better understand the complexity of the underwater archaeological record.

Keywords: 4D modelling, Black Sea, maritime archaeology, underwater photogrammetry, Bronze Age, low visibility.

265 Evaluation of Non-Staggered Body-Fitted Grid Based Solution Method in Application to Supercritical Fluid Flows

Authors: Suresh Sahu, Abhijeet M. Vaidya, Naresh K. Maheshwari

Abstract:

Efforts to understand the heat transfer behavior of supercritical water in supercritical water-cooled reactors (SCWRs) are ongoing worldwide to help meet future energy demand. The higher thermal efficiency of these reactors compared with conventional nuclear reactors is one of the driving forces attracting the attention of nuclear scientists. In this work, a solution procedure is described for solving supercritical fluid flow problems in complex geometries. The procedure is based on a non-staggered grid. All governing equations are discretized by the finite volume method (FVM) in a curvilinear coordinate system: convective terms are discretized by a first-order upwind scheme, and a central difference approximation is used for the diffusive parts. A k-ε turbulence model with standard wall functions is employed, and the SIMPLE solution procedure is implemented for the curvilinear coordinate system. Based on this solution method, a 3-D Computational Fluid Dynamics (CFD) code has been developed. In order to demonstrate the capability of this CFD code in supercritical fluid flows, heat transfer to supercritical water in circular tubes is considered as a test problem, and results obtained by the code are compared with experimental results reported in the literature.
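
As a concrete illustration of the discretization choices named above, the sketch below applies the same ingredients (FVM, first-order upwind convection, central-difference diffusion) to a 1-D steady convection-diffusion model problem. It is a minimal sketch with illustrative flux and diffusivity values, not the authors' 3-D curvilinear code.

```python
import numpy as np

# 1-D steady convection-diffusion on a uniform FVM grid:
# first-order upwind for convection, central difference for diffusion.
n, L = 50, 1.0              # control volumes, domain length
dx = L / n
F = 1.0                     # convective mass flux rho*u (illustrative, > 0)
D = 0.02 / dx               # diffusive conductance Gamma/dx (illustrative)
aW = D + max(F, 0.0)        # upwind west coefficient
aE = D + max(-F, 0.0)       # upwind east coefficient
aP = aW + aE

A = np.zeros((n, n)); b = np.zeros(n)
for i in range(n):
    A[i, i] = aP
    if i > 0: A[i, i - 1] = -aW
    else:     b[i] += aW * 1.0      # Dirichlet phi = 1 at the inlet
    if i < n - 1: A[i, i + 1] = -aE
    else:         b[i] += aE * 0.0  # Dirichlet phi = 0 at the outlet
phi = np.linalg.solve(A, b)         # cell-centred solution
```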

Keywords: Curvilinear coordinate, body-fitted mesh, momentum interpolation, non-staggered grid, supercritical fluids.

264 Information Dissemination System (IDS)-Based E-Learning in Iranian Agriculture (Perceptions of Iranian Extension Agents)

Authors: A. R. Ommani, M. Chizari

Abstract:

The purpose of the study reported here was to design an Information Dissemination System (IDS) based on e-learning for Iranian agriculture. A questionnaire was developed for designing the Information Dissemination System and distributed to 96 extension agents who work for the Management of Extension and Farming System of Khuzestan province, Iran. The data collected were analyzed using the Statistical Package for the Social Sciences (SPSS); appropriate descriptive statistics (frequencies, percentages, means, and standard deviations) were used. The study found a significant relationship between age, IT skill and knowledge, years of extension work, extent of information-seeking motivation, level of job satisfaction, and level of education on the one hand, and the use of information technology by extension agents on the other. According to the extension agents, five factors were ranked as the most essential for designing an IDS based on e-learning in Iranian agriculture: 1) establishing communication between farmers, coordinators (extension agents), agricultural experts, research centers, and the community through information technology; 2) making that communication mutual; 3) basing the information on farmers' needs; 4) using the Internet as a facility to transfer advanced agricultural information to the farming community; and 5) recognizing that farmers may be illiterate or speak only a local language and are not expected to use the system directly. Knowledge produced by agricultural scientists must be transformed into a computer-understandable representation. To design the Information Dissemination System, electronic communication in the agricultural society and rural areas must be developed, and this communication must be mutual among all actors.
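
To make the reported analysis concrete, here is a minimal sketch of descriptive statistics and a significance test of the kind the abstract describes. The column names and values are hypothetical, not the study's dataset.

```python
import pandas as pd
from scipy import stats

# Hypothetical survey records for two of the studied variables.
df = pd.DataFrame({
    "age": [34, 41, 29, 50, 45],
    "it_use_score": [4.2, 3.1, 4.8, 2.5, 3.0],
})
print(df.describe())                           # means and standard deviations
print(df["age"].value_counts(normalize=True))  # frequencies as proportions
r, p = stats.pearsonr(df["age"], df["it_use_score"])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")     # significance of relationship
```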

Keywords: E-learning, information dissemination system, information technology.

263 A Review on Recycled Use of Solid Wastes in Building Materials

Authors: Oriyomi M. Okeyinka, David A. Oloke, Jamal M. Khatib

Abstract:

Large quantities of solid waste are generated worldwide from household, industrial, commercial, and construction-demolition activities, leading to environmental concerns. Utilization of these wastes in making building construction materials can reduce the magnitude of the associated problems: when waste products are used in place of conventional materials, natural resources and energy are preserved, and expensive and/or potentially harmful waste disposal is avoided. Recycling, regarded as the third most preferred waste disposal option and offering numerous environmental benefits, stands as a viable option to offset the environmental impact of the construction industry. This paper reviews the results of laboratory tests and important research findings on the potential of using these wastes in building construction materials, with a focus on sustainable development. Several research gaps have also been identified: the need to develop standard mix designs for solid-waste-based building materials; the need to develop energy-efficient methods of processing solid waste for use in concrete; the need to study the actual behavior and performance of such building materials in practical applications; and the limited real-life application of such materials. Research is proposed to develop an environmentally friendly, lightweight building block from recycled waste paper, without the use of cement, with properties suitable for use as a walling unit. The proposed research will combine laboratory experimentation and modeling to address the identified research gaps.

Keywords: Recycling, solid waste, construction, building materials.

262 A Proposal for a Secure and Interoperable Data Framework for Energy Digitalization

Authors: Hebberly Ahatlan

Abstract:

The process of digitizing energy systems involves transforming traditional energy infrastructure into interconnected, data-driven systems that enhance efficiency, sustainability, and responsiveness. As smart grids become increasingly integral to the efficient distribution and management of electricity from both fossil and renewable energy sources, the energy industry faces strategic challenges associated with digitalization and interoperability — particularly in the context of modern energy business models, such as virtual power plants (VPPs). The critical challenge in modern smart grids is to seamlessly integrate diverse technologies and systems, including virtualization, grid computing and service-oriented architecture (SOA), across the entire energy ecosystem. Achieving this requires addressing issues like semantic interoperability, Information Technology (IT) and Operational Technology (OT) convergence, and digital asset scalability, all while ensuring security and risk management. This paper proposes a four-layer digitalization framework to tackle these challenges, encompassing persistent data protection, trusted key management, secure messaging, and authentication of IoT resources. Data assets generated through this framework enable AI systems to derive insights for improving smart grid operations, security, and revenue generation. Furthermore, this paper also proposes a Trusted Energy Interoperability Alliance as a universal guiding standard in the development of this digitalization framework to support more dynamic and interoperable energy markets.
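
To make the secure-messaging and IoT-authentication layers concrete, here is a minimal sketch assuming a hypothetical per-device key store: each device signs its telemetry with an HMAC so the platform can authenticate the sender and detect tampering. All names are illustrative; a real deployment would obtain keys from the framework's trusted key-management layer.

```python
import hmac, hashlib, json, time

# Hypothetical key store standing in for the trusted key-management layer.
DEVICE_KEYS = {"meter-001": b"per-device-secret-from-kms"}

def sign_message(device_id: str, payload: dict) -> dict:
    body = json.dumps({"device": device_id, "ts": time.time(), **payload},
                      sort_keys=True)
    tag = hmac.new(DEVICE_KEYS[device_id], body.encode(), hashlib.sha256)
    return {"body": body, "mac": tag.hexdigest()}

def verify_message(msg: dict) -> bool:
    device_id = json.loads(msg["body"])["device"]
    expected = hmac.new(DEVICE_KEYS[device_id], msg["body"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["mac"])  # constant-time check

msg = sign_message("meter-001", {"kwh": 12.7})
assert verify_message(msg)
```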

Keywords: Digitalization, IT/OT convergence, semantic interoperability, TEIA alliance, VPP.

261 Initiative Strategies on How to Increase the Value Added of the Recycling Business

Authors: Yananda Siraphatthada

Abstract:

The current study is the continuation of a previous study on the value added of recycling business management. Its aims were to 1) explore conditions for increasing the value added of the Thai recycling business, and 2) examine the implementation of the three-stage plan (short, medium, and long term) suggested by the former study to increase the value added of the recycling business, as an immediate mechanism to accelerate government operation. Quantitative and qualitative methods were utilized in this research. The qualitative research consisted of in-depth interviews and focus group discussions; responses were obtained from owners of waste separation plants and recycle shops, as well as officers in relevant governmental agencies, selected via quota sampling, and the data were analyzed via content analysis. The sample used for the quantitative method consisted of 1,274 licensed recycling operators in eight provinces, selected via stratified random sampling; the data were analyzed via descriptive statistics: frequency, percentage, mean, and standard deviation. The study recommended a three-stage plan (short, medium, and long term) that includes the development of logistics, the provision of quality markets/plants, the amendment of recycling rules/regulations, the restructuring of the recycling business, the establishment of a green-purchasing recycling center, support for the campaigns run by the International Green Purchasing Network (IGPN), and conferences/workshops as a public forum to share insights among experts and concerned people.

Keywords: Strategies, Value Added, Recycle Business.

260 Indian License Plate Detection and Recognition Using Morphological Operation and Template Matching

Authors: W. Devapriya, C. Nelson Kennedy Babu, T. Srihari

Abstract:

Automatic license plate recognition (ALPR) is a technology that recognizes the registration plate (number plate or license plate) of a vehicle. In this paper, Indian vehicle number plates are extracted and their characters predicted in an efficient manner. ALPR involves four major stages: i) pre-processing, ii) license plate location identification, iii) individual character segmentation, and iv) character recognition. The opening phase, pre-processing, removes noise and enhances the quality of the image using morphological operations and image subtraction. The second and most challenging phase ascertains the location of the license plate using Canny edge detection, dilation, and erosion. In the third phase, individual characters are isolated by a connected component approach (CCA), and in the final phase each segmented character is recognized using cross-correlation template matching, a scheme specifically appropriate for fixed formats. Major applications of ALPR include toll collection, border control, parking, stolen car detection, law enforcement, access control, and traffic control. A database of 500 car images taken under dissimilar lighting conditions was used; the efficiency of the system is 97%. Our future focus is Indian vehicle license plate validation (whether the license plate of a vehicle conforms to the Road Transport and Highways standard).
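
A minimal OpenCV sketch of this four-stage pipeline follows. The thresholds, kernel sizes, iteration counts, and file names are illustrative assumptions, not the tuned values behind the reported 97% figure.

```python
import cv2
import numpy as np

img = cv2.imread("car.jpg", cv2.IMREAD_GRAYSCALE)

# 1) Pre-processing: morphological opening + image subtraction to flatten
#    the background and suppress noise.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
enhanced = cv2.subtract(img, opened)

# 2) Plate localization: Canny edges, then dilation/erosion to merge the
#    dense character edges into one candidate plate blob.
edges = cv2.Canny(enhanced, 100, 200)
blob = cv2.erode(cv2.dilate(edges, None, iterations=3), None, iterations=2)
contours, _ = cv2.findContours(blob, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
plate = img[y:y + h, x:x + w]

# 3) Character segmentation: connected component analysis on the
#    binarized plate region.
_, binary = cv2.threshold(plate, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)

# 4) Recognition: normalized cross-correlation against stored templates.
def recognize(char_img, templates):  # templates: dict glyph -> image
    scores = {g: cv2.matchTemplate(char_img, t, cv2.TM_CCOEFF_NORMED).max()
              for g, t in templates.items()}
    return max(scores, key=scores.get)
```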

Keywords: Automatic license plate recognition, character recognition, number plate recognition, template matching, morphological operation, Canny edge detection.

259 Implementation of a Paraconsistent-Fuzzy Digital PID Controller in a Level Control Process

Authors: H. M. Côrtes, J. I. Da Silva Filho, M. F. Blos, B. S. Zanon

Abstract:

In modern society, rising quality requirements in industrial production demand new techniques of control and machinery automation. In this context, this work presents the implementation of a Paraconsistent-Fuzzy Digital PID controller. The controller is based on the treatment of inconsistencies in both Paraconsistent Logic and Fuzzy Logic. Paraconsistent analysis is performed on the signals applied to the system inputs using concepts from the Paraconsistent Annotated Logic with annotation of two values (PAL2v). The signals resulting from the paraconsistent analysis are two values, Dc (Degree of Certainty) and Dct (Degree of Contradiction), which are then treated according to Fuzzy Logic theory; the resulting output of the logic actions is a single value, the crisp value, which is used to control the dynamic system. The application of the proposed model is demonstrated through an example. Initially, the Paraconsistent-Fuzzy Digital PID controller was built and tested in an isolated MATLAB environment and compared to the equivalent Digital PID function of this software for a standard step excitation. A level control plant was then modeled so that the controller could be executed on a physical model, making the tests closer to actual operating conditions. For this, the control parameters (proportional, integral, and derivative) were determined for the configuration of the conventional Digital PID controller and of the Paraconsistent-Fuzzy Digital PID, and the control loops were assembled in MATLAB with the respective transfer function of the plant. Finally, the results of the comparison of the level control process between the Paraconsistent-Fuzzy Digital PID controller and the conventional Digital PID controller are presented.
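
The PAL2v step can be made concrete in a few lines. In PAL2v, an annotation of two values (favorable evidence μ, unfavorable evidence λ) yields the Degree of Certainty Dc = μ − λ and the Degree of Contradiction Dct = μ + λ − 1. The sketch below shows that computation plus a schematic fuzzy reduction to a crisp value; the reduction is a hypothetical placeholder, not the authors' rule base.

```python
def pal2v(mu: float, lam: float) -> tuple[float, float]:
    dc = mu - lam         # Degree of Certainty,     Dc  in [-1, 1]
    dct = mu + lam - 1.0  # Degree of Contradiction, Dct in [-1, 1]
    return dc, dct

def crisp_output(dc: float, dct: float) -> float:
    # Hypothetical fuzzy treatment: trust Dc less as contradiction grows.
    return dc * (1.0 - abs(dct))

dc, dct = pal2v(mu=0.8, lam=0.3)  # e.g. sensor evidence for/against
u = crisp_output(dc, dct)         # single crisp value driving the PID
```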

Keywords: Fuzzy logic, paraconsistent annotated logic, level control, digital PID.

258 An Approach to Secure Mobile Agent Communication in Multi-Agent Systems

Authors: Olumide Simeon Ogunnusi, Shukor Abd Razak, Michael Kolade Adu

Abstract:

An inter-agent communication manager facilitates communication among mobile agents via a message-passing mechanism. To date, all Foundation for Intelligent Physical Agents (FIPA)-compliant agent systems are capable of exchanging messages following the standard format for sending and receiving messages. Previous works tend to secure the messages exchanged among a community of collaborative agents commissioned to perform specific tasks by using cryptosystems; however, that approach is characterized by computational complexity due to the encryption and decryption processes required at the two ends. The approach to secure agent communication proposed here allows only agents created by the host agent server to communicate via the agent communication channel provided by the host agent platform; these agents are assumed to be harmless. Therefore, to secure the communication of legitimate agents from intrusion by external agents, a two-phase policy enforcement system was developed: the first phase constrains an external agent to run only on the network server, while the second phase confines the activities of the external agent to its execution environment. To implement the proposed policy, a controller agent is charged with screening any external agent entering the local area network and preventing it from migrating to the agent execution host where the legitimate agents run. On arrival of the external agent at the host network server, an introspector agent monitors and restrains its activities. This approach secures legitimate agent communication from man-in-the-middle and replay attacks.

Keywords: Agent communication, introspective agent, isolation of agent, policy enforcement system.

257 Numerical Modeling of Determination of in situ Rock Mass Deformation Modulus Using the Plate Load Test

Authors: A. Khodabakhshi, A. Mortazavi

Abstract:

Accurate determination of the rock mass deformation modulus, an important design parameter, is one of the most controversial issues in many engineering projects. A 3D numerical model of the standard plate load test (PLT), built with the FLAC3D code, was used to investigate the mechanism governing the test process. The study had five objectives. The first was to employ 3D modeling in the interpretation of PLTs conducted at the Bazoft dam site, Iran. The second was to investigate the effect of the depth at which displacements are measured below the loading plates on the calculated moduli: the magnitude of the rock mass deformation modulus calculated from a PLT depends on the anchor depth, and in practice this may cause errors in the selection of a realistic deformation modulus for the rock mass. The third was to investigate the effect of the loading plate diameter on the calculated modulus. A further objective was to compare the moduli calculated from the ISRM formula, from the numerical modeling, and from the actual PLT carried out at the right abutment of the Bazoft dam site. Finally, the effect of plastic strains on the calculated moduli in each loading-unloading cycle was investigated for three loading plates. The geometry, material properties, and boundary conditions of the constructed 3D model were selected based on the in-situ conditions of the PLT at the Bazoft dam site. Good agreement was achieved between the numerical model results and the field test results.
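
For reference, here is a sketch of the classical elastic relation commonly used to back-calculate a modulus from plate load data (the rigid circular plate on an elastic half-space). This is a generic textbook formula with illustrative numbers, not necessarily the exact ISRM expression or anchor-depth correction used in the paper.

```python
def deformation_modulus(P: float, a: float, delta: float,
                        nu: float = 0.25) -> float:
    """Rigid circular plate on an elastic half-space:
        E = P * (1 - nu^2) / (2 * a * delta)
    P     total plate load [N]
    a     plate radius [m]
    delta measured plate settlement [m]
    nu    Poisson's ratio of the rock mass (assumed)
    """
    return P * (1.0 - nu ** 2) / (2.0 * a * delta)

# Illustrative numbers only: 2 MN on a 0.5 m-radius plate settling 1.2 mm.
E = deformation_modulus(P=2.0e6, a=0.5, delta=1.2e-3)
print(f"E = {E / 1e9:.1f} GPa")
```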

Keywords: Deformation modulus, numerical model, plate loading test, rock mass.

256 Interoperable CNC System for Turning Operations

Authors: Yusri Yusof, Stephen Newman, Aydin Nassehi, Keith Case

Abstract:

The changing economic climate has made global manufacturing a growing reality over the last decade, forcing companies from east and west and all over the world to collaborate beyond geographic boundaries in the design, manufacture, and assembly of products. The ISO 10303 and ISO 14649 standards (STEP and STEP-NC) have been developed to introduce interoperability into manufacturing enterprises so as to meet the challenge of responding to production on demand. This paper describes and illustrates a STEP-compliant CAD/CAPP/CAM system for the manufacture of rotational parts on CNC turning centers. The information models that support the proposed system, together with the data models defined in the ISO 14649 standard and used to create the NC programs, are also described. A structured view of a STEP-compliant CAD/CAPP/CAM system framework supporting the next generation of intelligent CNC controllers for turn/mill component manufacture is provided. Finally, a proposed computational environment for a STEP-NC compliant system for turning operations (SCSTO) is described. SCSTO is the experimental part of the research; it is supported by the specification of information models and was constructed using a structured methodology and object-oriented methods. SCSTO was developed to generate a Part 21 file based on machining features, supporting the interactive generation of process plans utilizing feature extraction. A case study component has been developed to prove the concept of using the milling and turning parts of ISO 14649 to provide a turn-mill CAD/CAPP/CAM environment.
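
For orientation, ISO 10303-21 ("Part 21") files are plain-text exchange files with a HEADER section and a DATA section of numbered entity instances. The sketch below writes a schematic skeleton of that container format only; the entity names and attributes are illustrative stand-ins (several referenced instances, e.g. #3 and #5-#8, are elided), not the exact ISO 14649 schema population that SCSTO generates.

```python
# Write a schematic ISO 10303-21 skeleton to disk.
part21 = """ISO-10303-21;
HEADER;
FILE_DESCRIPTION((''),'2;1');
FILE_NAME('turned_part.stp','2024-01-01T00:00:00',(''),(''),'','','');
FILE_SCHEMA(('MACHINING_SCHEMA'));
ENDSEC;
DATA;
#1=PROJECT('TURNED PART',#2,(#3));
#2=WORKPLAN('MAIN WORKPLAN',(#4),$,#5);
#4=MACHINING_WORKINGSTEP('ROUGH TURNING',#6,#7,#8);
ENDSEC;
END-ISO-10303-21;
"""
with open("turned_part.stp", "w", encoding="ascii") as fh:
    fh.write(part21)
```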

Keywords:

255 A Perceptually Optimized Wavelet Embedded Zero Tree Image Coder

Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf

Abstract:

In this paper, we propose a Perceptually Optimized Embedded ZeroTree Image Coder (POEZIC) that applies perceptual weighting to wavelet transform coefficients prior to SPIHT encoding in order to reach a targeted bit rate with improved perceptual quality with respect to the coding quality obtained using the SPIHT algorithm alone. The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS) and plays an important role in our POEZIC quality assessment. Our POEZIC coder is based on a vision model that incorporates various masking effects of HVS perception; the coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) luminance masking and contrast masking, 2) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting, and 3) the wavelet error sensitivity (WES), used to reduce perceptual quantization errors. The new perceptually optimized codec has the same complexity as the original SPIHT technique; nevertheless, the experimental results show that our coder achieves very good performance in terms of quality measurement.
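
A minimal sketch of the core idea, perceptual subband weighting before zerotree coding, follows. The per-level weights are illustrative placeholders, not the CSF/WES values derived in the paper, and 'bior4.4' is used as a stand-in for the 9/7 filter bank.

```python
import numpy as np
import pywt

img = np.random.rand(256, 256)                 # stand-in for a test image
coeffs = pywt.wavedec2(img, "bior4.4", level=3)

csf_weight = {0: 0.6, 1: 1.0, 2: 1.3, 3: 1.1}  # illustrative per-level weights
weighted = [coeffs[0] * csf_weight[0]]         # approximation subband
for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    w = csf_weight[lvl]
    weighted.append((cH * w, cV * w, cD * w))
# 'weighted' would then be passed to the SPIHT bit-plane encoder (not shown).
```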

Keywords: DWT, linear-phase 9/7 filter, wavelet error sensitivity (WES), CSF implementation approaches, just noticeable difference (JND), luminance masking, contrast masking, standard SPIHT, objective quality measure, probability score (PS).

254 Hybrid Equity Warrants Pricing Formulation under Stochastic Dynamics

Authors: Teh Raihana Nazirah Roslan, Siti Zulaiha Ibrahim, Sharmila Karim

Abstract:

A warrant is a financial contract that confers the right, but not the obligation, to buy or sell a security at a certain price before expiration. The standard procedure of valuing equity warrants with call option pricing models such as the Black-Scholes model has been shown to contain many flaws, such as the assumptions of constant interest rate and constant volatility, and existing alternative models tend to focus on demonstrating pricing techniques rather than on empirical testing. A mathematical model for pricing and analyzing equity warrants that comprises stochastic interest rates and stochastic volatility is therefore essential to incorporate the dynamic relationships between the identified variables and to reflect the real market. Here, the aim is to develop dynamic pricing formulations for hybrid equity warrants by incorporating stochastic interest rates from the Cox-Ingersoll-Ross (CIR) model along with stochastic volatility from the Heston model. The development of the model involves the derivation of the stochastic differential equations that govern the model dynamics. The resulting equations, which involve a Cauchy problem and heat equations, are then solved using partial differential equation approaches. The analytical pricing formulas obtained in this study comply with the form of the analytical expressions embedded in the Black-Scholes model and other existing pricing models for equity warrants, which facilitates the use of the proposed formula for comparison purposes and further empirical study.
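
As a rough illustration of the hybrid dynamics the paper builds on, here is a Monte Carlo sketch of a Heston variance process combined with a CIR short rate, Euler-discretized and used to price a call-type warrant payoff. All parameter values are illustrative, and the paper itself derives closed-form formulas rather than simulating.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 252, 50_000
dt = T / n_steps
S = np.full(n_paths, 100.0)   # underlying price
v = np.full(n_paths, 0.04)    # Heston variance
r = np.full(n_paths, 0.03)    # CIR short rate
kappa_v, theta_v, xi, rho = 2.0, 0.04, 0.3, -0.7  # Heston parameters (assumed)
kappa_r, theta_r, sigma_r = 1.5, 0.03, 0.1        # CIR parameters (assumed)
discount = np.zeros(n_paths)

for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(n_paths)
    z3 = rng.standard_normal(n_paths)
    discount += r * dt
    S *= np.exp((r - 0.5 * v) * dt + np.sqrt(v * dt) * z1)
    v = np.abs(v + kappa_v * (theta_v - v) * dt + xi * np.sqrt(v * dt) * z2)
    r = np.abs(r + kappa_r * (theta_r - r) * dt + sigma_r * np.sqrt(r * dt) * z3)

K = 100.0  # strike of the warrant (illustrative)
price = np.mean(np.exp(-discount) * np.maximum(S - K, 0.0))
```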

Keywords: Cox-Ingersoll-Ross model, equity warrants, Heston model, hybrid models, stochastic.

253 Quality Management in Spice Paprika Production as a Synergy of Internal and External Quality Measures

Authors: É. Kónya, E. Szabó, I. Bata-Vidács, T. Deák, M. Ottucsák, N. Adányi, A. Székács

Abstract:

Spice paprika is a major spice commodity in the European Union (EU), produced locally and imported from non-EU countries, and it has been reported not only for chemical and microbiological contamination but also for fraud. The effective interaction between producers' quality management practices and governmental and EU activities is described using the example of spice paprika production and control in Hungary, a leading producer and per-capita consumer of spice paprika in Europe. To demonstrate the importance of various contamination factors in the Hungarian production and EU trade of spice paprika, several food safety aspects of this commodity are presented. Alerts concerning spice paprika in the Rapid Alert System for Food and Feed (RASFF) of the EU between 2005 and 2013, as well as Hungarian state inspection results from 2004, are discussed, and quality non-compliance claims regarding spice paprika among EU member states are summarized by means of network analysis. Quality assurance measures established along the spice paprika production technology chain at the leading Hungarian spice paprika manufacturer, Kalocsai Fűszerpaprika Zrt., are surveyed, with the main critical control points identified. The structure and operation of the Hungarian state food safety inspection system are described. The concerted performance of the latter two quality management systems illustrates the effective interaction between internal (manufacturer) and external (state) quality control measures.

Keywords: Spice paprika, quality control, reporting mechanisms, RASFF, vulnerable points, HACCP, BRC Global Standard.

252 Performance Analysis of Reconstruction Algorithms in Diffuse Optical Tomography

Authors: K. Uma Maheswari, S. Sathiyamoorthy, G. Lakshmi

Abstract:

Diffuse optical tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for earlier detection of carcinoma cells in brain tissue. It is a form of optical tomography that reconstructs an image of human soft tissue using near-infrared light, and it comprises two steps, called the forward model and the inverse model. The forward model describes light propagation in a biological medium; the inverse model uses the scattered light to recover the optical parameters of the tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so accurate analysis of this modality is very complicated. To overcome this problem, optical properties of the soft tissue such as the absorption coefficient, the scattering coefficient, and the optical flux are processed by the standard regularization technique, Levenberg-Marquardt regularization. Reconstruction algorithms, namely the Split Bregman method and gradient projection for sparse reconstruction (GPSR), are used to reconstruct the image of human soft tissue for tumour detection. Of these algorithms, the Split Bregman method provides better performance than GPSR. The parameters signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), relative error (RE), and CPU time for reconstructing the images are analyzed to compare performance.
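
A minimal sketch of one Levenberg-Marquardt update, the regularized step the abstract applies to the ill-posed inverse problem, follows: given a forward model f(x) and measurements y, solve (JᵀJ + λI)·dx = Jᵀ(y − f(x)). The toy forward model below is illustrative, not a photon-transport model.

```python
import numpy as np

def lm_step(f, jac, x, y, lam):
    J = jac(x)                          # Jacobian of the forward model at x
    r = y - f(x)                        # residual between data and prediction
    A = J.T @ J + lam * np.eye(x.size)  # damped (regularized) normal matrix
    return x + np.linalg.solve(A, J.T @ r)

f = lambda x: np.array([x[0] ** 2, x[0] * x[1], x[1] ** 2])
jac = lambda x: np.array([[2 * x[0], 0], [x[1], x[0]], [0, 2 * x[1]]])
y = f(np.array([1.5, -0.5]))            # synthetic "measurements"
x = np.array([1.0, 1.0])                # initial guess
for _ in range(20):
    x = lm_step(f, jac, x, y, lam=1e-2)
```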

Keywords: Diffuse optical tomography, ill-posedness, Levenberg Marquardt method, Split Bregman, the Gradient projection for sparse reconstruction.

251 Diagnostic Contribution of the MMSE-2:EV in the Detection and Monitoring of Cognitive Impairment: Case Studies

Authors: Cornelia-Eugenia Munteanu

Abstract:

The goal of this paper is to present the diagnostic contribution that the screening instrument Mini-Mental State Examination-2: Expanded Version (MMSE-2:EV) brings to detecting cognitive impairment and to monitoring the progress of degenerative disorders. The diagnostic significance is underlined by the interpretation of MMSE-2:EV scores obtained from test applications to patients with mild and major neurocognitive disorders. The cases were selected from current practice in order to cover a broad and significant neurocognitive pathology: mild cognitive impairment, Alzheimer's disease, vascular dementia, mixed dementia, Parkinson's disease, and conversion of mild cognitive impairment into Alzheimer's disease. The MMSE-2:EV version was applied one month after the initial assessment, three months after the first reevaluation, and then every six months, alternating the blue and red forms. Correlated with age and educational level, the raw scores were converted into T scores, and then, with the mean and the standard deviation, the z scores were calculated. The differences in raw scores between evaluations were analyzed for statistical significance in order to establish the progression of the disease over time. The results indicated that the psychodiagnostic approach of evaluating cognitive impairment with the MMSE-2:EV is safe and that the application interval is optimal. In clinical settings with a large flux of patients, the MMSE-2:EV is a safe and fast psychodiagnostic solution: clinicians can make objective decisions, and for patients it does not demand much time or energy, does not bother them, and does not force them to travel frequently.
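
For readers unfamiliar with the score conversions mentioned above, here is a minimal sketch of the standard z/T arithmetic. The normative mean and standard deviation used are illustrative placeholders, not the MMSE-2 norms.

```python
def z_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    return (raw - norm_mean) / norm_sd  # standardized deviation score

def t_score(z: float) -> float:
    return 50.0 + 10.0 * z              # T scale: mean 50, SD 10

z = z_score(raw=24, norm_mean=27.5, norm_sd=2.1)  # hypothetical norms
print(f"z = {z:.2f}, T = {t_score(z):.1f}")       # z = -1.67, T = 33.3
```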

Keywords: MMSE-2, dementia, cognitive impairment, neuropsychology.

250 Electricity Load Modeling: An Application to Italian Market

Authors: Giovanni Masala, Stefania Marica

Abstract:

Forecasting electricity load plays a crucial role in decision making and planning for economic purposes. Moreover, in the light of the recent privatization and deregulation of the power industry, forecasting future electricity load has turned out to be a very challenging problem. Empirical data about electricity load highlight a clear seasonal behavior (higher load during the winter season), which is partly due to climatic effects. We also emphasize the presence of load periodicity on a weekly basis (electricity load is usually lower on weekends and holidays) and on a daily basis (electricity load is clearly influenced by the hour of the day). Finally, a long-term trend may depend on the general economic situation (for example, industrial production affects electricity load). All these features must be captured by the model. The purpose of this paper is therefore to build an hourly electricity load model. The deterministic component of the model requires non-linear regression and Fourier series, while we investigate the stochastic component through econometric tools. The calibration of the model's parameters is performed using data from the Italian market over a six-year period (2007-2012). We then perform a Monte Carlo simulation in order to compare the simulated data with the real data (both in-sample and out-of-sample). The reliability of the model is confirmed by standard tests, which highlight a good fit of the simulated values.
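
A minimal sketch of the deterministic component follows: a trend plus Fourier terms for the yearly, weekly, and daily periodicities, fitted to hourly load by least squares. The load series is a synthetic placeholder; the residuals would then be handed to the econometric stage (e.g., an ARMA-GARCH process, as the keywords suggest).

```python
import numpy as np

hours = np.arange(2 * 8760)  # two years of hourly time stamps (illustrative)
load = 30 + 5 * np.sin(2 * np.pi * hours / 8760) + np.random.randn(hours.size)

def fourier(t, period, k):
    """sin/cos harmonics of a given period, orders 1..k."""
    return [f(2 * np.pi * n * t / period) for n in range(1, k + 1)
            for f in (np.sin, np.cos)]

X = np.column_stack([np.ones_like(hours), hours]   # level + long-term trend
                    + fourier(hours, 8760, 2)      # yearly harmonics
                    + fourier(hours, 168, 2)       # weekly harmonics
                    + fourier(hours, 24, 3))       # daily harmonics
beta, *_ = np.linalg.lstsq(X, load, rcond=None)
residuals = load - X @ beta  # stochastic component, modeled econometrically
```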

Keywords: ARMA-GARCH process, electricity load, fitting tests, Fourier series, Monte Carlo simulation, non-linear regression.

249 A Study on Cement-Based Composite Containing Polypropylene Fibers and Finely Ground Glass Exposed to Elevated Temperatures

Authors: O. Alidoust, I. Sadrinejad, M. A. Ahmadi

Abstract:

High strength concrete has been used in situations where it may be exposed to elevated temperatures, and numerous authors have shown the significant contribution of polypropylene fibers to the spalling resistance of high strength concrete. When a cement-based composite reinforced with polypropylene fibers is heated to about 170 °C, the fibers readily melt and volatilize, creating additional porosity and small channels in the matrix that weaken the structure and lower the strength. This investigation examines the mechanical properties of mortar incorporating polypropylene fibers exposed to high temperature, together with the effects of different pozzolans on the strength behaviour of the samples at elevated temperature. To this end, specimens were produced by partial replacement of cement with finely ground glass, silica fume, and rice husk ash as highly reactive pozzolans; the replacement level was 10% by weight of cement, chosen to reveal the effects of pozzolans as a partial cement replacement on the mechanical properties of the mortars. Mixtures with 0%, 0.5%, 1%, and 1.5% polypropylene fibers were cast and tested for compressive and flexural strength in accordance with the relevant ASTM standards. The specimens were then heated to 300 and 600 °C, respectively, and the mechanical properties of the heated samples were tested. The mechanical tests showed a significant reduction in compressive strength, which may be due to the melting of the polypropylene fibers; the pozzolans, in turn, improved the mechanical properties of the samples.

Keywords: Mechanical properties, compressive strength, Flexural strength, pozzolanic behavior.

248 Influence of Sr(BO2)2 Doping on Superconducting Properties of (Bi,Pb)-2223 Phase

Authors: N. G. Margiani, I. G. Kvartskhava, G. A. Mumladze, Z. A. Adamia

Abstract:

Chemical doping with different elements and compounds in various amounts represents the most suitable approach to improving the superconducting properties of bismuth-based superconductors for technological applications. In this paper, the influence of a partial substitution of Sr(BO2)2 for SrO on the phase formation kinetics and transport properties of the (Bi,Pb)-2223 HTS has been studied for the first time. Samples with the nominal composition Bi1.7Pb0.3Sr2-xCa2Cu3Oy[Sr(BO2)2]x, x=0, 0.0375, 0.075, 0.15, 0.25, were prepared by standard solid-state processing. The appropriate mixtures were calcined at 845 °C for 40 h, and the resulting materials were pressed into pellets and annealed at 837 °C for 30 h in air. The superconducting properties of the undoped (reference) and Sr(BO2)2-doped (Bi,Pb)-2223 compounds were investigated through X-ray diffraction (XRD), resistivity (ρ), and transport critical current density (Jc) measurements, and the changes in surface morphology were examined by scanning electron microscopy (SEM). The XRD and Jc studies showed that low-level Sr(BO2)2 doping (x=0.0375-0.075) at the Sr site promotes the formation of the high-Tc phase and enhances the current-carrying capacity of the (Bi,Pb)-2223 HTS. The doped sample with x=0.0375 performed best among the prepared samples: the estimated volume fraction of the (Bi,Pb)-2223 phase increased from ~25% for the reference specimen to ~70%, and a strong increase in the self-field Jc was observed (Jc=340 A/cm2, compared with Jc=110 A/cm2 for the undoped sample). The pronounced enhancement of the superconducting properties of the (Bi,Pb)-2223 superconductor can be attributed to the acceleration of high-Tc phase formation as well as to the improvement of inter-grain connectivity produced by small amounts of the Sr(BO2)2 dopant.

Keywords: Bismuth-based superconductor, critical current density, phase formation, Sr(BO2)2 doping.

247 Fast Factored DCT-LMS Speech Enhancement for Performance Enhancement of Digital Hearing Aid

Authors: Sunitha S. L., V. Udayashankara

Abstract:

Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially for patients with sensorineural loss. Several investigations of speech intelligibility have demonstrated that sensorineural loss patients need a 5-15 dB higher SNR than normal-hearing subjects. This paper describes a Discrete Cosine Transform power-normalized Least Mean Square (DCT-LMS) algorithm to improve the SNR and the convergence rate of the LMS filter for sensorineural loss patients. Since it requires only real arithmetic, it converges faster than the time-domain LMS, and the transformation improves the eigenvalue distribution of the input autocorrelation matrix of the LMS filter. The DCT has good orthonormality, separability, and energy compaction properties; although it does not separate frequencies, it is a powerful signal decorrelator, and as a real-valued transform it can be used effectively in real-time operation. The advantages of DCT-LMS over the standard LMS algorithm are shown via SNR and eigenvalue ratio computations. Exploiting the symmetry of the basis functions, the DCT transform matrix [AN] can be factored into a series of ±1 butterflies and rotation angles; this factorization yields one of the fastest DCT implementations. Different factorizations are possible; this work uses the fast factored DCT algorithm developed by Chen and colleagues. The computer simulation results show the superior convergence characteristics of the proposed algorithm: the SNR improves by at least 10 dB for input SNRs at or below 0 dB, with faster convergence speed and better time and frequency characteristics.
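
To make the transform-domain adaptation concrete, here is a minimal sketch of a DCT power-normalized LMS update of the kind the abstract describes: transform the input tap vector, normalize each DCT bin by its running power estimate, and adapt per-bin weights. The filter length, step size, and signals are illustrative, and the fast factored DCT is replaced here by a library call.

```python
import numpy as np
from scipy.fft import dct

N, mu, eps, beta = 16, 0.05, 1e-6, 0.95
w = np.zeros(N)   # adaptive weights in the DCT domain
p = np.ones(N)    # running power estimate per DCT bin

def dct_lms_step(x_taps, d):
    """x_taps: latest N input samples; d: desired sample."""
    global w, p
    u = dct(x_taps, norm="ortho")      # decorrelating transform
    p = beta * p + (1 - beta) * u**2   # per-bin power tracking
    y = w @ u                          # filter output
    e = d - y                          # error drives adaptation
    w += mu * e * u / (p + eps)        # power-normalized LMS update
    return y, e
```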

Keywords: Hearing Impairment, DCT Adaptive filter, Sensorineural loss patients, Convergence rate.

246 Determining the Maximum Lateral Displacement Due to Severe Earthquakes without Using Nonlinear Analysis

Authors: Mussa Mahmoudi

Abstract:

For seismic design, it is important to estimate the maximum lateral displacement (inelastic displacement) of structures due to severe earthquakes, for several reasons. Seismic design provisions estimate the maximum roof and storey drifts occurring in major earthquakes by amplifying the drifts obtained from elastic analysis under the seismic design load with a coefficient named the "displacement amplification factor", which is greater than one. This coefficient depends on various parameters, such as the ductility and overstrength factors. The present research evaluates the value of the displacement amplification factor in seismic design codes and then proposes a value for estimating the maximum lateral structural displacement due to severe earthquakes without using non-linear analysis. Since seismic codes relate the displacement amplification to the "force reduction factor", this relation has been adopted in the current study, and two methodologies are applied to evaluate the displacement amplification factor and its relation to the force reduction factor. In the first methodology, which applies to all structures, the ratio of the displacement amplification and force reduction factors is determined directly, whereas in the second methodology, applicable only to R/C moment-resisting frames, the ratio is obtained by calculating both factors separately. The results of the two methodologies agree and estimate the ratio of the two factors at 1 to 1.2. The results indicate that this ratio differs from those proposed by seismic provisions such as NEHRP, IBC, and the Iranian seismic code (Standard No. 2800).
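
A short worked example of the amplification step discussed above may help; the numbers are illustrative, not taken from the paper.

```python
# Estimate the inelastic drift from an elastic analysis under reduced loads.
R = 8.0          # force reduction factor (assumed)
ratio = 1.2      # Cd/R, at the upper end of the 1.0-1.2 range reported
Cd = ratio * R   # implied displacement amplification factor

delta_design = 0.015          # storey drift from elastic analysis under
                              # reduced (design) seismic loads [m]
delta_max = Cd * delta_design # estimated maximum inelastic drift [m]
print(f"Cd = {Cd:.1f}, estimated maximum drift = {delta_max * 1000:.0f} mm")
```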

Keywords: Displacement amplification factor, Ductility factor, Force reduction factor, Maximum lateral displacement.

245 A Process of Forming a Single Competitive Factor in the Digital Camera Industry

Authors: Kiyohiro Yamazaki

Abstract:

This paper considers the process by which a single competitive factor forms in the digital camera industry, from the viewpoint of the product platform. To make product development easier and to increase product introduction rates, companies concentrate development efforts on improving and strengthening certain product attributes, and in that process a product platform is formed continuously. The formation of such a product platform raises the product development efficiency of individual companies but, as a trade-off, causes the unification of competitive factors across the whole industry. This research analyzes product specification data collected from the web pages of digital camera companies. Specifically, all product specification data released in Japan from 1995 to 2003 were collected, the composition of image sensors and optical lenses was analyzed, product platforms shared by multiple products were identified, and their application was discussed. As a result, this research found that product platforms were born in the development of standard products for the major market segments. Every major company built product platforms for image sensors and optical lenses, and competitive factors were thereby unified across the entire industry. In other words, platform formation brought product development efficiency to individual firms, but it also caused the industry's competitive factors to become unified.

Keywords: Digital camera industry, product evolution trajectory, product platform, unification of competitive factors.

244 Endeavor in Management Process by Executive Dashboards: The Case of the Financial Directorship in Brazilian Navy

Authors: R. S. Quintal, J. L. Tesch Santos, M. D. Davis, E. C. de Santana, M. de F. Bandeira dos Santos

Abstract:

The objective is to identify the contributions from the introduction of a computerized system within the Accounting Department of the Brazilian Navy Financial Directorship and its possible effects on the budgetary and financial management of the Brazilian Navy. The relevance lies in the fact that the management process is responsible for the continuous improvement of organizational performance through higher levels of quality in its activities; improvements in organizational processes have direct effects on cost, quality, reliability, flexibility, and speed. The method of this research is the case study; this choice met, among other demands, the need for greater flexibility to study processes related to a computerized system. The sources of evidence were literature, documents, and direct observation, the latter carried out by monitoring the implementation of the computerized system in the Division of Management Analysis. The main findings point to the fact that the computerized system may contribute significantly to the standardization of information. There was improvement of internal processes in the Division of Management Analysis, which made possible the consolidation of standard management and performance analyses that contribute to global homogeneity in the treatment of information essential to decision making. This study has the limitation that its results apply exclusively to the case studied and cannot be generalized to other organs of government.

Keywords: Process Management, Management Control, Business Intelligence.

243 Development of a Standardization Methodology Assessing the Comfort Performance for Hanok

Authors: Mi-Hyang Lee, Seung-Hoon Han

Abstract:

Korean traditional residences were built over thousands of years with deep design concern for social, cultural, and environmental values, but their meaning is vanishing under today's different lifestyles. It is necessary, therefore, to grasp the meaning of the Korean traditional building called Hanok and to help Korean people understand its real advantages. The purpose of this study is to propose a standardization methodology for evaluating the comfort features of Korean traditional houses. This paper also seeks to build an official standard evaluation system and to integrate the aesthetic and psychological values induced by Hanok. The comfort performance values are divided into two large categories, physical and psychological, and fourteen methods have been defined as Korean Standards (KS). For this research, field survey data from representative Hanok types were collected for each method. The study also contains a qualitative in-depth analysis of the Hanok comfort index by professionals using the AHP (Analytic Hierarchy Process) and examines the effect of the methods. As a result, this paper identifies which methods provide trustworthy outcomes and shows how to evaluate strengths in the spatial comfort of Hanok using the suggested procedures for the spatial configuration of traditional dwellings. The study finally proposes an integrated standardization methodology for assessing the comfort performance of Korean traditional residences, which is expected to support the evaluation of residents and of interior environmental conditions, especially in structures built of wood, such as Hanok.
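
A minimal sketch of the AHP weighting step mentioned above follows: from a pairwise comparison matrix of comfort criteria, derive priority weights as the principal eigenvector and check consistency. The 3x3 matrix and criteria names are illustrative, not the study's fourteen methods.

```python
import numpy as np

criteria = ["thermal", "acoustic", "daylight"]
A = np.array([[1.0, 3.0, 5.0],   # pairwise judgments on Saaty's 1-9 scale
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)          # principal eigenvalue index
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                         # normalized priority weights

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1) # consistency index
CR = CI / 0.58                       # random index RI = 0.58 for n = 3
print(dict(zip(criteria, w.round(3))), f"CR = {CR:.3f}")
```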

Keywords: Hanok, comfort performance, human condition, analytical hierarchy process.

242 Effect of Jatropha curcas Leaf Extract on Castor Oil Induced Diarrhea in Albino Rats

Authors: Fatima U. Maigari, Musa Halilu, M. Maryam Umar, Rabiu Zainab

Abstract:

Plants are used as therapeutic agents in many parts of the world. Medicinal plants are mostly used in developing countries due to cultural acceptability, belief, or lack of easy access to primary health care services. Jatropha curcas is a plant from the Euphorbiaceae family that is widely used in Northern Nigeria as an anti-diarrheal agent. This study was conducted to determine the anti-diarrheal effect of the leaf extract on castor oil-induced diarrhea in albino rats. The leaves of J. curcas were collected from Balanga Local Government Area in Gombe State, north-eastern Nigeria, owing to their ready availability there. The leaves were air-dried at room temperature and ground to powder. Phytochemical screening was done, and different concentrations of the extract were prepared and administered to the different categories of experimental animals. Aqueous leaf extract of Jatropha curcas at doses of 200 mg/kg and 400 mg/kg was found to reduce the mean stool score compared to control rats; however, the maximum reduction was achieved with the standard drug loperamide (5 mg/kg). Treatment with 200 mg/kg of the extract did not produce a significant decrease in stool fluid content, but the decrease was significant in rats treated with 400 mg/kg at 2 hours (0.05±0.02) and 4 hours (0.01±0.01). The significant reduction of diarrhea in the experimental animals indicates that the extract possesses some anti-diarrheal activity.

Keywords: Anti-diarrhea, Diarrhea, Jatropha curcas, Loperamide.

241 Co-payment Strategies for Chronic Medications: A Qualitative and Comparative Analysis at European Level

Authors: Pedro M. Abreu, Bruno R. Mendes

Abstract:

The management of pharmacotherapy and the process of dispensing medicines are becoming critical in clinical pharmacy due to the increasing incidence and prevalence of chronic diseases, the complexity and customization of therapeutic regimens, the introduction of innovative and more expensive medicines, the unbalanced relation between expenditure and revenue, and the lack of rationalization associated with medication use. For these reasons, co-payments emerged in Europe in the 1970s and have been applied in healthcare over the past few years. Co-payments lead to a rationing and rationalization of users' access to healthcare services and products and, simultaneously, to a qualification and improvement of those services and products for the end-user. This analysis of co-payment strategies in general, and hospital practices in particular, covered all European regions and identified four reference countries that apply this tool repeatedly and with different approaches. The structure, content, and adaptation of European co-payments were analyzed through 7 qualitative attributes and 19 performance indicators, and the results were expressed in a scorecard. The analysis allows the conclusion that the German models (total scores of 68.2% and 63.6% for the two elected co-payments) can achieve more compliance and effectiveness, the English models (total score of 50%) can be more accessible, and the French models (total score of 50%) can be more adequate to the socio-economic and legal framework. Other European models did not show the same quality and/or performance, and so were not taken as a standard for the future design of co-payment strategies. In this sense, co-payments can be seen not only as a strategy to moderate the consumption of healthcare products and services, but especially as a strategy to improve them and to increase the value that the end-user assigns to these services and products, such as medicines.

Keywords: Clinical pharmacy, co-payments, healthcare, medicines.

240 The Mass Attenuation Coefficients, Effective Atomic Cross Sections, Effective Atomic Numbers and Electron Densities of Some Halides

Authors: Shivalinge Gowda

Abstract:

The total mass attenuation coefficients μ/ρ of some halides, namely NaCl, KCl, CuCl, NaBr, KBr, RbCl, AgCl, NaI, KI, AgBr, CsI, HgCl2, CdI2, and HgI2, were determined at photon energies of 279.2, 320.07, 514.0, 661.6, 1115.5, 1173.2, and 1332.5 keV in a well-collimated, narrow-beam, good-geometry set-up using a high-resolution hyper-pure germanium detector. The mass attenuation coefficients and the effective atomic cross sections are found to be in good agreement with the XCOM values. From these mass attenuation coefficients, the effective atomic cross sections σa of the compounds were determined, and the σa data so obtained were then used to compute the effective atomic numbers Zeff. For this, the interpolation of the total attenuation cross sections of photons of energy E in elements of atomic number Z was performed using logarithmic regression analysis of the data measured by the authors and reported earlier for the above energies, along with XCOM data for standard energies. The best-fit coefficients in the photon energy ranges 250-350 keV, 350-500 keV, 500-700 keV, 700-1000 keV, and 1000-1500 keV obtained by a piecewise interpolation method were then used to find the Zeff of the compounds from the effective atomic cross section σa, using the relation obtained by the piecewise interpolation method. With these Zeff values, the electron densities Nel of the halides were also determined. The present Zeff and Nel values of the halides are found to be in good agreement with the values calculated from XCOM data and with other available published values.
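
A minimal sketch of the interpolation idea described above follows: fit log(σ) against log(Z) for elements at a fixed photon energy, then invert the fit to read off the effective atomic number of a compound from its measured effective atomic cross section. The elemental cross sections below are illustrative placeholders, not XCOM data.

```python
import numpy as np

Z = np.array([11, 17, 19, 29, 35, 47, 53, 55, 80])  # elemental Z values
sigma = 0.8 * Z ** 1.05                              # placeholder sigma(Z)

a, b = np.polyfit(np.log(Z), np.log(sigma), 1)       # log s = a*log Z + b

def z_eff(sigma_a: float) -> float:
    """Invert the regression: Z_eff = exp((log sigma_a - b) / a)."""
    return float(np.exp((np.log(sigma_a) - b) / a))

print(z_eff(sigma_a=25.0))  # Z_eff for a compound's measured sigma_a
```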

Keywords: Mass attenuation coefficient, atomic cross-section, effective atomic number, electron density.

239 Digital Automatic Gain Control Integrated on WLAN Platform

Authors: Emilija Miletic, Milos Krstic, Maxim Piz, Michael Methfessel

Abstract:

In this work we present a solution for DAGC (Digital Automatic Gain Control) in WLAN receivers compliant with the IEEE 802.11a/g standards. Those standards define communication in the 5/2.4 GHz bands using the Orthogonal Frequency Division Multiplexing (OFDM) modulation scheme. The WLAN transceiver we used enables gain control over a Low Noise Amplifier (LNA) and a Variable Gain Amplifier (VGA). Control over those signals is performed in our digital baseband processor using a dedicated hardware block, the DAGC. The DAGC automatically controls the VGA and LNA in order to achieve a better signal-to-noise ratio, decrease the frame error rate (FER), and hold the average power of the baseband signal close to a desired set point. The DAGC function in the baseband processor proceeds in a few steps: measuring the power levels of baseband samples of the RF signal, accumulating the differences between the measured power level and the actual gain setting, adjusting a gain factor from the accumulation, and applying the adjusted gain factor to the baseband values. Based on measurements of the RSSI signal's dependence on input power, we concluded that this digital AGC can be implemented by a simple linearization of the RSSI. This solution is simple yet effective, and it reduces the complexity and power consumption of the DAGC. The DAGC was implemented and tested both in FPGA and in ASIC as part of our WLAN baseband processor. Finally, we integrated the circuit in a compact WLAN PCMCIA board based on MAC and baseband ASIC chips of our own design.
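
The control loop outlined above can be sketched in a few lines: estimate the baseband power, accumulate the error against a set point, and convert the accumulated error into a gain adjustment. The loop gain, set point, and signal are illustrative; a real design would map the gain word onto discrete LNA/VGA steps via the linearized RSSI.

```python
import numpy as np

SETPOINT_DB = -12.0  # desired average baseband power [dBFS] (assumed)
loop_gain = 0.1
gain_db = 0.0        # current digital gain word [dB]
acc = 0.0            # set-point error accumulator

def agc_step(samples: np.ndarray) -> float:
    global gain_db, acc
    power_db = 10 * np.log10(np.mean(samples ** 2) + 1e-12)
    acc += SETPOINT_DB - power_db   # accumulate set-point error
    gain_db = loop_gain * acc       # adjusted gain factor
    return gain_db

rng = np.random.default_rng(1)
for _ in range(100):
    block = 0.05 * rng.standard_normal(64)      # stand-in baseband block
    g = agc_step(block * 10 ** (gain_db / 20))  # measure after applied gain
```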

Keywords: WLAN, AGC, RSSI, baseband processor

238 An Extensible Software Infrastructure for Computer Aided Custom Monitoring of Patients in Smart Homes

Authors: Ritwik Dutta, Marilyn Wolf

Abstract:

This paper describes the tradeoffs in, and the from-scratch design of, a self-contained, easy-to-use health dashboard software system that provides customizable data tracking for patients in smart homes. The system is made up of different software modules and comprises a front-end and a back-end component. Built with HTML, CSS, and JavaScript, the front-end allows adding users, logging into the system, selecting metrics, and specifying health goals. The back-end consists of a NoSQL Mongo database, a Python script, and a SimpleHTTPServer written in Python. The database stores user profiles and health data in JSON format. The Python script uses the PyMongo driver library to query the database and displays formatted data as a daily snapshot of user health metrics against target goals. Any number of standard and custom metrics can be added to the system, and the corresponding health data can be fed automatically, via sensor APIs, or manually, as text or picture data files. A real-time METAR request API permits correlating weather data with patient health, and an advanced query system allows trend analysis of selected health metrics over custom time intervals. Available on the GitHub repository system, the project is free to use for academic purposes of learning and experimenting, or for practical purposes by building on it.
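
A minimal sketch of the back-end query pattern described above follows: pull a user's goals from MongoDB via PyMongo and print a daily snapshot of selected metrics against targets. The database, collection, and field names are hypothetical, not the project's actual schema.

```python
from datetime import datetime, timedelta
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["health_dashboard"]

def daily_snapshot(user_id: str, metrics: list[str]) -> None:
    goals = db.users.find_one({"_id": user_id})["goals"]
    since = datetime.utcnow() - timedelta(days=1)
    for metric in metrics:
        readings = db.readings.find({"user": user_id, "metric": metric,
                                     "ts": {"$gte": since}})
        values = [r["value"] for r in readings]
        latest = values[-1] if values else None
        print(f"{metric}: latest={latest}, goal={goals.get(metric)}")

daily_snapshot("alice", ["steps", "heart_rate"])
```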

Keywords: Flask, Java, JavaScript, health monitoring, long term care, Mongo, Python, smart home, software engineering, webserver.
