Search results for: automated testing
2788 A Study of NT-ProBNP and ETCO2 in Patients Presenting with Acute Dyspnoea
Authors: Dipti Chand, Riya Saboo
Abstract:
OBJECTIVES: Early and correct diagnosis can present a significant clinical challenge in patients presenting to the Emergency Department with acute dyspnoea. The common causes of acute dyspnoea and respiratory distress in the Emergency Department are decompensated Heart Failure (HF), Chronic Obstructive Pulmonary Disease (COPD), asthma, pneumonia, Acute Respiratory Distress Syndrome (ARDS), Pulmonary Embolism (PE), and other causes such as anaemia. The aim of the study was to measure NT-pro Brain Natriuretic Peptide (NT-proBNP) and exhaled End-Tidal Carbon Dioxide (ETCO2) in patients presenting with dyspnoea. MATERIAL AND METHODS: This prospective, cross-sectional and observational study was performed at the Government Medical College and Hospital, Nagpur, between October 2019 and October 2021 in patients admitted to the Medicine Intensive Care Unit. Three groups of patients were compared: (1) HF-related acute dyspnoea group (n = 52), (2) pulmonary (COPD/PE)-related acute dyspnoea group (n = 31) and (3) sepsis with ARDS-related dyspnoea group (n = 13). All patients underwent initial clinical examination with a recording of initial vital parameters along with on-admission ETCO2 measurement, NT-proBNP testing, arterial blood gas analysis, lung ultrasound examination, 2D echocardiography, chest X-rays, and other relevant diagnostic laboratory testing. RESULTS: 96 patients were included in the study. Median NT-proBNP was highest in the Heart Failure group (11,480 pg/ml), followed by the sepsis group (780 pg/ml) and the pulmonary group (231 pg/ml). The mean ETCO2 value was highest in the pulmonary group (48.61 mmHg), followed by the Heart Failure group (31.51 mmHg) and the sepsis group (19.46 mmHg). The results were statistically significant (P < 0.05). CONCLUSION: NT-proBNP has high diagnostic accuracy in differentiating acute HF-related dyspnoea from pulmonary (COPD and ARDS)-related acute dyspnoea. 
The higher levels of ETCO2 help in diagnosing patients with COPD.
Keywords: NT-proBNP, ETCO2, dyspnoea, lung USG
Procedia PDF Downloads 75
2787 Unearthing Air Traffic Control Officers Decision Instructional Patterns From Simulator Data for Application in Human Machine Teams
Authors: Zainuddin Zakaria, Sun Woh Lye
Abstract:
Despite the continuous advancements in automated conflict resolution tools, there is still a low rate of adoption of automation by Air Traffic Control Officers (ATCOs). Trust or acceptance in these tools and conformance to individual ATCO preferences in strategy execution for conflict resolution are two key factors that impact their use. This paper proposes a methodology to unearth and classify ATCO conflict resolution strategies from simulator data of trained and qualified ATCOs. The methodology involves the extraction of ATCO executive control actions and the establishment of a strategy resolution classification system based on ATCO radar commands and prevailing flight parameters in deconflicting a pair of aircraft. Six main strategies used to handle various categories of conflict were identified and discussed. It was found that ATCOs were about twice as likely to choose only vertical maneuvers in conflict resolution as horizontal maneuvers or a combination of both vertical and horizontal maneuvers.
Keywords: air traffic control strategies, conflict resolution, simulator data, strategy classification system
Procedia PDF Downloads 147
2786 RFID Laptop Monitoring and Management System
Authors: Francis E. Idachaba, Sarah Uyimeh Tommy
Abstract:
This paper describes the design of an RFID laptop monitoring and management system. Laptops embedded with RFID chips are monitored and tracked, providing a system for tracking as well as monitoring the movement of laptops in and out of a building. The proposed system is implemented with both hardware and software components. The hardware architecture consists of RFID passive tags, RFID modules (readers), and a server hosting the application and database. The RFID readers are distributed at the major exits of a building or premises. The tags, programmed with the owners' laptop details, are concealed in the laptops. The software architecture consists of application software with the APIs (Application Programming Interfaces) necessary to interface the RFID system with the PC to achieve an automated laptop monitoring system, a friendly graphical user interface (GUI), and a database that saves all readings and owner details. The system is capable of reducing laptop theft, especially in students' hostels, as laptops can be monitored as they are taken either in or out of the building.
Keywords: asset tracking, GUI, laptop monitoring, radio frequency identification, passive tags
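The gate-check logic described in this abstract can be sketched as follows; every identifier here (the tag IDs, owner names, and the MovementLog class) is hypothetical and not part of the authors' implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class MovementLog:
    """Hypothetical log of tag reads at a building-exit reader."""
    events: List[dict] = field(default_factory=list)

    def record(self, tag_id: str, owners: dict, direction: str) -> str:
        # Look the tag up in the owner database; unknown tags raise an alert.
        owner: Optional[str] = owners.get(tag_id)
        status = "registered" if owner else "unknown tag - alert"
        self.events.append({
            "tag": tag_id,
            "owner": owner,
            "direction": direction,  # "in" or "out"
            "status": status,
            "time": datetime.now().isoformat(timespec="seconds"),
        })
        return status

# Hypothetical owner database keyed by tag ID.
owners = {"04A1B2": "student A", "09C3D4": "student B"}
log = MovementLog()
print(log.record("04A1B2", owners, "out"))  # registered laptop leaving
print(log.record("FFFFFF", owners, "out"))  # unregistered tag -> alert
```

Every read is kept in the database-backed event list, so the movement history of each laptop can be queried later.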
Procedia PDF Downloads 388
2785 End-to-End Control and Management of Multi-AS Virtual Service Networks Using SDN and Autonomic Computing Architecture
Authors: Yong Xue, Daniel A. Menascé
Abstract:
Automated, end-to-end network resource management and provisioning for virtual service networks in a multiple autonomous systems (a.k.a. multi-AS) environment is a challenging and open problem. This paper proposes a novel, scalable and interoperable high-level architecture that incorporates a number of emerging enabling technologies, including Software Defined Networking (SDN), Network Function Virtualization (NFV), Service Oriented Architecture (SOA), and Autonomic Computing. The proposed architecture can be used not only to automate network resource management and provisioning for virtual service networks across multiple autonomous substrate networks, but also to provide an adaptive capability for achieving optimal network resource management and maintaining network-level end-to-end performance. The paper argues that this SDN- and autonomic-computing-based architecture lays a solid foundation that can facilitate the development of the future Internet based on the pluralistic paradigm.
Keywords: virtual network, software defined network, virtual service network, adaptive resource management, SOA, multi-AS, inter-domain
Procedia PDF Downloads 531
2784 Reproducibility of Shear Strength Parameters Determined from CU Triaxial Tests: Evaluation of Results from Regression of Different Failure Stress Combinations
Authors: Henok Marie Shiferaw, Barbara Schneider-Muntau
Abstract:
Test repeatability and data reproducibility are a concern in many geotechnical laboratory tests due to inherent soil variability, inhomogeneous sample preparation and measurement inaccuracy. Test results on comparable test specimens vary to a considerable extent; thus, the shear strength parameters derived from triaxial tests are also affected. In this contribution, we present the reproducibility of effective shear strength parameters from consolidated undrained triaxial tests on plain soil and cement-treated soil specimens. Six remolded test specimens were prepared for the plain soil and for the cement-treated soil. Conventional testing at three levels of effective consolidation pressure (100 kPa, 200 kPa and 300 kPa) was performed. At each effective consolidation pressure, two tests were done on comparable test specimens. Care was taken to achieve the same mean dry density and the same water content during sample preparation for the two specimens. The cement-treated specimens were tested after 28 days of curing. Shearing of the test specimens was carried out at a deformation rate of 0.4 mm/min after sample saturation at a back pressure of 900 kPa, followed by consolidation. The effective peak and residual shear strength parameters were then estimated from regression analysis of 21 different combinations of the failure stresses from the six tests conducted for both the plain soil and the cement-treated soil samples. The 21 different stress combinations were constructed by picking three, four, five and six failure stresses at a time in different combinations. Results indicate that the effective shear strength parameters estimated from the regression of different combinations of the failure stresses vary. The effective critical friction angle was found to be more consistent than the effective peak friction angle, with a smaller standard deviation. 
The reproducibility of the shear strength parameters for the cement-treated specimens was even lower than that of the untreated specimens.
Keywords: shear strength parameters, test repeatability, data reproducibility, triaxial soil testing, cement improvement of soils
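As an illustration of deriving a friction angle by regressing failure stresses (a simplified sketch, not the authors' 21-combination procedure), the effective friction angle can be estimated from the slope M of the failure line in p'-q stress space; the stress values below are invented for the example.

```python
import math

# Hypothetical failure states (p' = mean effective stress, q = deviator
# stress, both in kPa) from a CU test series; points lie near q = M * p'.
points = [(100.0, 118.0), (200.0, 242.0), (300.0, 362.0)]

# Least-squares slope of a line through the origin: M = sum(p*q) / sum(p*p)
M = sum(p * q for p, q in points) / sum(p * p for p, _ in points)

# Standard relation for triaxial compression: sin(phi') = 3M / (6 + M)
phi = math.degrees(math.asin(3.0 * M / (6.0 + M)))
print(round(M, 3), round(phi, 1))
```

Repeating such a fit over different subsets of the failure points, as the paper does, quantifies how sensitive the fitted friction angle is to the chosen stress combination.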
Procedia PDF Downloads 31
2783 Non-Destructive Test of Bar for Determination of Critical Compression Force Directed towards the Pole
Authors: Boris Blostotsky, Elia Efraim
Abstract:
The phenomenon of buckling of structural elements under compression is revealed in many cases of loading and is found in many structures and mechanisms. In the present work, the method and results of a dynamic test for buckling of a bar loaded by a compression force directed towards the pole are considered. Experimental determination of the critical force for such a system has not been made previously. The tested object is a bar with a semi-rigid connection to the base at one of its ends, and with a hinge moving along a circle at the other. The test includes measuring the natural frequency of the bar at different values of compression load. The lateral stiffness is calculated based on the natural frequency and the reduced mass at the bar's movable end. The critical load is determined by extrapolating the values of lateral stiffness to zero. For the experimental investigation, a special test-bed was created that allows stability testing at positive and negative curvature of the movable end's trajectory, as well as varying the rotational stiffness of the other end's connection. Decreasing friction at the movable end allows extending the range of applied compression force. The testing method includes:
- a methodology for planning the experiment, which determines the required number of tests at various load values in the defined range and the type of extrapolating function;
- a methodology for experimental determination of the reduced mass at the bar's movable end, including its own mass;
- a methodology for experimental determination of the lateral stiffness of the uncompressed bar's rotational semi-rigid connection at the base.
For planning the experiment and for comparison of the experimental results with the theoretical values of the critical load, the analytical dependencies of the lateral stiffness of the bar with the defined end conditions on the compression load were derived. 
In the particular case of a perfectly rigid connection of the bar to the base, the critical load value corresponds to the solution by S. P. Timoshenko. Correspondence of the calculated and experimental values was obtained.
Keywords: non-destructive test, buckling, dynamic method, semi-rigid connections
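The stiffness-extrapolation step can be illustrated with a minimal numerical sketch. The load-frequency pairs and the reduced mass below are invented for the example, not measured values: lateral stiffness is recovered from each natural frequency via k = m(2*pi*f)^2, fitted linearly against load, and extrapolated to k = 0.

```python
import math

# Hypothetical measurements: compression load P (N) and natural frequency
# f (Hz) of the bar; m_red is the reduced mass at the movable end (kg).
m_red = 2.0
data = [(0.0, 5.0), (200.0, 4.2), (400.0, 3.2), (600.0, 1.9)]

# Lateral stiffness from each frequency: k = m * (2*pi*f)**2
pts = [(P, m_red * (2 * math.pi * f) ** 2) for P, f in data]

# Linear least-squares fit k(P) = a + b*P, then extrapolate to k = 0.
n = len(pts)
sx = sum(P for P, _ in pts); sy = sum(k for _, k in pts)
sxx = sum(P * P for P, _ in pts); sxy = sum(P * k for P, k in pts)
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n
P_cr = -a / b  # load at which the lateral stiffness vanishes
print(round(P_cr, 1))
```

Because the bar is never loaded anywhere near P_cr, the test is non-destructive: the critical force is inferred from the trend of stiffness with load, not observed directly.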
Procedia PDF Downloads 354
2782 Statistical Quality Control on Assignable Causes of Variation on Cement Production in Ashaka Cement PLC Gombe State
Authors: Hamisu Idi
Abstract:
The present study focuses on the impact of assignable causes of variation on the quality of cement production. Exploratory research was done on a monthly basis, where data were obtained from a secondary source, i.e., the records kept by an automated recompilation machine. The machine keeps all the records of the mills' downtime, which the process manager checks for validation and refers any fault to the department responsible for maintenance or measurement so as to prevent future occurrences. The findings indicated that the products of Ashaka Cement Plc. were of acceptable quality, since all the production processes were found to be in control (within preset specifications), with the exception of the natural causes of variation, which are normal in the production process and do not affect the outcome of the product. Such variation is reduced to the barest minimum, since it cannot be totally eliminated. It is hoped that the findings of this study will be of great assistance to the management of the Ashaka cement factory, and to the process manager in particular, at various levels in the monitoring and implementation of statistical process control. This study is therefore a contribution to the knowledge in this area, and it is hoped that it will open more research in that direction.
Keywords: cement, quality, variation, assignable cause, common cause
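The in-control check described above can be sketched with a Shewhart individuals chart. The readings below are hypothetical, not the plant's data; short-term sigma is estimated from the average moving range (divided by the standard constant d2 = 1.128 for spans of two), and points outside mean +/- 3*sigma signal an assignable cause.

```python
# Hypothetical process readings with one deliberate outlier (335).
samples = [312, 315, 311, 314, 313, 316, 312, 335, 313, 314]

mean = sum(samples) / len(samples)
# Moving ranges between consecutive readings estimate short-term variation.
moving_ranges = [abs(y - x) for x, y in zip(samples, samples[1:])]
sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128

ucl, lcl = mean + 3 * sigma, mean - 3 * sigma
out_of_control = [x for x in samples if not (lcl <= x <= ucl)]
print(out_of_control)  # the 335 reading is flagged
```

Flagged points are then investigated for an assignable cause (e.g. a mill fault), exactly the referral step the abstract describes; in-control points are attributed to common-cause variation and left alone.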
Procedia PDF Downloads 259
2781 Spatio-Temporal Dynamic of Woody Vegetation Assessment Using Oblique Landscape Photographs
Authors: V. V. Fomin, A. P. Mikhailovich, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova
Abstract:
Ground-level landscape photos can be used as a source of objective data on woody vegetation and vegetation dynamics. We proposed a method for processing, analyzing, and presenting ground photographs, which has the following features: 1) the researcher forms a holistic representation of the study area as a set of overlapping ground-level landscape photographs; 2) characteristics of the landscape, objects, and phenomena present in the photographs are defined or obtained; 3) new textual descriptions and annotations for the ground-level landscape photographs are created, or existing ones are supplemented; 4) single or multiple ground-level landscape photographs can be used to develop specialized geoinformation layers, schematic maps or thematic maps; 5) quantitative data describing both the images as a whole and the displayed objects and phenomena are determined using algorithms for automated image analysis. It is suggested to match each photo with a polygonal geoinformation layer, which is a sector consisting of areas corresponding to the parts of the landscape visible in the photo. Calculation of visibility areas is performed in a geoinformation system within a sector using a digital relief model of the study area and visibility analysis functions. Superposition of the visibility sectors corresponding to various camera viewpoints allows matching landscape photos with each other to create a complete representation of the space in question. It is further suggested to mark user-defined objects or phenomena on the images, with subsequent superposition over the visibility sector in the form of map symbols. The technology of spatially superimposing geoinformation layers over the visibility sector creates opportunities for image geotagging using quantitative data obtained from raster or vector layers within the sector, with the ability to generate annotations in natural language. 
The proposed method has proven itself well for relatively open and clearly visible areas with well-defined relief, for example, in mountainous areas in the treeline ecotone. When the polygonal layers of visibility sectors for a large number of different camera viewpoints are topologically superimposed, a visibility layer covering the sections of the entire study area displayed in the photographs is formed. As a result of this overlapping of sectors, areas that do not appear in any photo are assessed as gaps. From the results of this procedure, it becomes possible to determine which photos display a specific area and from which camera viewpoints it is visible. This information may be obtained either as a query on the map or as a query on the attribute table of the layer. The method was tested using repeated photos taken from forty camera viewpoints located on the Ray-Iz mountain massif (Polar Urals, Russia) from 1960 until 2023. It has been successfully used in combination with other ground-based and remote sensing methods of studying the climate-driven dynamics of woody vegetation in the Polar Urals. Acknowledgment: This research was collaboratively funded by the Russian Ministry for Science and Education project No. FEUG-2023-0002 (image representation) and Russian Science Foundation project No. 24-24-00235 (automated textual description).
Keywords: woody vegetation, repeated photographs
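A toy sketch of the visibility-sector idea, with all coordinates and parameters invented for illustration: a camera viewpoint with a known azimuth and angular field of view defines a sector, and a terrain point is a candidate for appearing in the photo if it falls inside that sector (the real method additionally uses the digital relief model for occlusion analysis).

```python
import math

def in_sector(cam, point, azimuth_deg, fov_deg, max_range):
    """Is `point` inside the camera's angular sector and range? (2D sketch)"""
    dx, dy = point[0] - cam[0], point[1] - cam[1]
    if math.hypot(dx, dy) > max_range:
        return False
    bearing = math.degrees(math.atan2(dx, dy)) % 360  # 0 deg = north (+y)
    # Signed angular difference folded into [-180, 180), then absolute value.
    diff = abs((bearing - azimuth_deg + 180) % 360 - 180)
    return diff <= fov_deg / 2

cam = (0.0, 0.0)  # hypothetical camera viewpoint, facing north
print(in_sector(cam, (100.0, 1000.0), azimuth_deg=0, fov_deg=60, max_range=2000))   # True
print(in_sector(cam, (-1000.0, 100.0), azimuth_deg=0, fov_deg=60, max_range=2000))  # False
```

Superimposing many such sectors (one per viewpoint) yields, for each terrain cell, the list of photographs that can display it; cells covered by no sector are the gaps the abstract mentions.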
Procedia PDF Downloads 86
2780 Characterization of Lahar Sands for Reclamation Projects in the Manila Bay, Philippines
Authors: Julian Sandoval, Philipp Schober
Abstract:
Lahar sand (lahars) is a material that originates from volcanic debris flows. During and after a volcanic eruption, lahars can move at speeds of 22 meters per hour or more, so they can easily cover extensive areas and destroy any structure in their path. The Mount Pinatubo eruption (1991) brought lahars to its vicinity, and their use has been a matter of research ever since. Lahars are often used as fill material for land reclamation projects in the Manila Bay, Philippines. After reclamation, some deep loose deposits may still be present, and these are prone to liquefaction. To mitigate the risk of liquefaction of such deposits, vibro compaction has been proposed and used as a ground improvement technique. Cone penetration testing (CPT) campaigns are usually initiated to monitor the effectiveness of the ground improvement works by vibro compaction. The CPT cone resistance is used to analyse the in-situ relative density of the reclaimed sand before and after compaction. Available correlations between the CPT cone resistance and the relative density are only valid for non-crushable sands. Due to the partially crushable nature of lahars, the CPT data require adjustment to allow for a correct interpretation. The objective of this paper is to characterize the chemical and mechanical properties of the lahar sands used for an ongoing project in the Port of Manila, which comprises reclamation activities using lahars from the east of Mount Pinatubo, and to investigate their effect on the proposed correction factor. Additionally, numerous CPTs were carried out in a test trial and during the execution of the project. Based on these data, the influence of the grid spacing, compaction steps and the holding time on the compaction results is analyzed. Moreover, the so-called "aging effect" of the lahars is studied by comparing the results of the CPT testing campaign at different times after the vibro compaction activities. 
A considerable increase in the tip resistance of the CPT was observed over time.
Keywords: vibro compaction, CPT, lahar sands, correction factor, chemical composition
Procedia PDF Downloads 231
2779 Investigation of Shear Strength, and Dilative Behavior of Coarse-grained Samples Using Laboratory Test and Machine Learning Technique
Authors: Ehsan Mehryaar, Seyed Armin Motahari Tabari
Abstract:
Coarse-grained soils are well known and commonly used in a wide range of geotechnical projects, including high earth dams and embankments, for their high shear strength. The most important engineering property of these soils is the friction angle, which represents the interlocking between soil particles and is widely applied in designing and constructing these earth structures. The friction angle and dilative behavior of coarse-grained soils can be estimated from empirical correlations with in-situ testing and physical properties of the soil, or measured directly in the laboratory by performing direct shear or triaxial tests. Unfortunately, large-scale testing is difficult, challenging, and expensive, and is not possible in most soil mechanics laboratories. It is therefore common to remove the large particles before testing, which cannot be counted as an exact estimation of the parameters and behavior of the original soil. This paper describes a new methodology that scales the particle grading distribution of a well-graded gravel sample down to a smaller sample that can be tested in an ordinary direct shear apparatus, in order to estimate the stress-strain behavior, friction angle, and dilative behavior of the original coarse-grained soil, considering its confining pressure and relative density, using a machine learning method. A total of 72 direct shear tests were performed on 6 different sizes, at 3 different confining pressures and 4 different relative densities. The Multivariate Adaptive Regression Spline (MARS) technique was used to develop an equation predicting shear strength and dilative behavior based on the size distribution of the coarse-grained soil particles. An uncertainty analysis was also performed to examine the reliability of the proposed equation.
Keywords: MARS, coarse-grained soil, shear strength, uncertainty analysis
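The hinge basis functions that MARS builds its equations from can be illustrated with a minimal sketch; the knot location, data, and coefficients below are synthetic and unrelated to the soil model developed in the paper. MARS represents the response as a sum of terms like max(0, x - t), which a simple least-squares fit can recover when the knot is known.

```python
def hinge(x, t):
    """MARS-style hinge basis function: max(0, x - t)."""
    return max(0.0, x - t)

# Synthetic data: response flat (5.0) below knot t = 10, slope 2 above it.
xs = list(range(0, 21))
ys = [5.0 + 2.0 * hinge(x, 10) for x in xs]

# Least-squares fit of y = a + b * hinge(x, 10) via the normal equations.
h = [hinge(x, 10) for x in xs]
n = len(xs)
sh = sum(h); sy = sum(ys)
shh = sum(v * v for v in h); shy = sum(v * y for v, y in zip(h, ys))
b = (n * shy - sh * sy) / (n * shh - sh * sh)
a = (sy - b * sh) / n
print(round(a, 6), round(b, 6))  # recovers intercept 5.0 and slope 2.0
```

The full MARS algorithm additionally searches over candidate knots and variables and prunes terms by cross-validation, which is what allows it to express interactions such as the one between grain size and confining pressure.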
Procedia PDF Downloads 160
2778 Syllogistic Reasoning with 108 Inference Rules While Case Quantities Change
Authors: Mikhail Zarechnev, Bora I. Kumova
Abstract:
A syllogism is a deductive inference scheme used to derive a conclusion from a set of premises. In a categorical syllogism, there are only two premises, and every premise and conclusion is given in the form of a quantified relationship between two objects. The different orders of objects in the premises give rise to the classification known as figures. We have shown that the ordered combinations of 3 generalized quantifiers with a given figure provide a total of 108 syllogistic moods, which can be considered as different inference rules. The classical syllogistic system makes it possible to model human thought, and reasoning with syllogistic structures has always attracted the attention of cognitive scientists. Since automated reasoning is considered part of the learning subsystem of AI agents, the syllogistic system can be applied to this approach. Another application of the syllogistic system is related to inference mechanisms in Semantic Web applications. In this paper, we propose a mathematical model and algorithm for syllogistic reasoning. The model of iterative syllogistic reasoning for continuous flows of incoming data, based on case-based reasoning, and possible applications of the proposed system are also discussed.
Keywords: categorical syllogism, case-based reasoning, cognitive architecture, inference on the semantic web, syllogistic reasoning
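The mood count quoted above can be reproduced with a short enumeration, assuming the four classical figures and one of three generalized quantifiers in each of the three propositions (two premises plus the conclusion): 3^3 quantifier combinations times 4 figures gives 108.

```python
from itertools import product

# Illustrative quantifier labels; the paper's generalized quantifiers may
# differ, but the counting argument only needs that there are three of them.
quantifiers = ["all", "some", "no"]
figures = [1, 2, 3, 4]  # classical figures: orders of terms in the premises

# One mood = (figure, quantifier of premise 1, premise 2, conclusion).
moods = [(fig, q1, q2, qc)
         for fig in figures
         for q1, q2, qc in product(quantifiers, repeat=3)]
print(len(moods))  # 108
```

Each tuple is a candidate inference rule; the reasoning system then keeps only those moods that are valid under the chosen semantics of the quantifiers.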
Procedia PDF Downloads 410
2777 Sensitivity Analysis of Oil Spills Modeling with ADIOS II for Iranian Fields in Persian Gulf
Authors: Farzingohar Mehrnaz, Yasemi Mehran, Esmaili Zinat, Baharlouian Maedeh
Abstract:
Aboozar (Ardeshir) and Bahregansar are two important Iranian oilfields in Persian Gulf waters. Operational activities cause spills which impact the marine environment. Assumed spills are modeled with ADIOS II (Automated Data Inquiry for Oil Spills), NOAA's oil weathering software. Various atmospheric and marine data with different oil types were used for the modeling. Numerous scenarios for 100 bbls, with mean daily air temperature and wind speed, were input for 5 days. To find the model's sensitivity to each setting, one parameter was changed while the others were held constant. In both fields, the evaporated and dispersed output values increased, hence the remaining fraction was reduced. The results clarified that wind speed first, air temperature second, and finally oil type were the most influential factors in the oil weathering process. The obtained results can help emergency response systems to predict the floating (dispersed and remaining) spill volume in order to select suitable cleanup tools and methods.
Keywords: ADIOS, modeling, oil spill, sensitivity analysis
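The one-at-a-time procedure (change one parameter, hold the rest constant) can be sketched with a toy response function. The function and its exponents below are invented for illustration and are not ADIOS II's weathering model, though they are chosen so that the resulting ranking mirrors the one reported above.

```python
def evaporated_fraction(wind, temp, volatility):
    # Toy response: stronger dependence on wind, then temperature, then
    # oil type (volatility). NOT the ADIOS II model.
    return 0.01 * wind ** 1.5 * temp ** 0.8 * volatility ** 0.3

base = {"wind": 5.0, "temp": 30.0, "volatility": 0.4}  # hypothetical inputs
base_out = evaporated_fraction(**base)

# One-at-a-time: perturb each input by +20% with the others fixed.
effects = {}
for name in base:
    perturbed = dict(base)
    perturbed[name] = base[name] * 1.2
    effects[name] = abs(evaporated_fraction(**perturbed) - base_out)

ranking = sorted(effects, key=effects.get, reverse=True)
print(ranking)  # ['wind', 'temp', 'volatility']
```

Ranking the absolute output changes identifies the most influential parameter, which is exactly how the scenario runs above establish wind speed as the dominant factor.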
Procedia PDF Downloads 298
2776 Local Boundary Analysis for Generative Theory of Tonal Music: From the Aspect of Classic Music Melody Analysis
Authors: Po-Chun Wang, Yan-Ru Lai, Sophia I. C. Lin, Alvin W. Y. Su
Abstract:
The Generative Theory of Tonal Music (GTTM) provides systematic approaches to recognizing local boundaries of music, and its rules have been implemented in several automated melody segmentation algorithms. There are also deep learning methods that apply GTTM features to boundary detection tasks. However, these studies can face constraints such as a lack of, or inconsistent, label data. The GTTM database is currently the most widely used, and it includes manually labeled GTTM rules and local boundaries. Even so, we found some problems with these labels: they sometimes contain discrepancies with the GTTM rules, and since labeling was done at different times by multiple musicians, the labels are not always consistent in scope. Therefore, in this paper, we examine this database with musicians from the perspective of classical music and relabel the scores. The relabeled database - GTTM Database v2.0 - will be released for academic research use. Although the experimental and statistical results show that the relabeled database is more consistent, the improvement in boundary detection is not substantial. It seems that clues beyond the GTTM rules will be needed for boundary detection in the future.
Keywords: dataset, GTTM, local boundary, neural network
Procedia PDF Downloads 144
2775 Repositioning Nigerian University Libraries for Effective Information Provision and Delivery in This Age of Globalization
Authors: S. O. Uwaifo
Abstract:
The paper examines the pivotal role of the library in university education through the provision of a wide range of information materials (print and non-print) required for the teaching, learning and research activities of the university. However, certain impediments to the effectiveness of Nigerian university libraries were identified, such as financial constraints, high foreign exchange rates, global disparities in accessing the internet, lack of local area networks, erratic electric power supply, absence of ICT literacy, and poor maintenance culture. The necessity of repositioning Nigerian university libraries for effective information provision and delivery was also stressed by pointing out its benefits, such as users' access to the Directory of Open Access Journals (DOAJ), Online Public Access Catalogues (OPAC), institutional repositories, electronic document delivery, and social media networks. It therefore becomes necessary for the libraries to be repositioned by being adequately automated or digitized for effective service delivery in this age of globalization. Based on the barriers identified in this paper, some recommendations are proffered.
Keywords: repositioning, Nigerian university libraries, effective information provision and delivery, globalization
Procedia PDF Downloads 324
2774 Minimization of the Abrasion Effect of Fiber Reinforced Polymer Matrix on Stainless Steel Injection Nozzle through the Application of Laser Hardening Technique
Authors: Amessalu Atenafu Gelaw, Nele Rath
Abstract:
Laser hardening is currently becoming one of the most efficient and effective hardening techniques due to its significant advantages: the localized heat source, the absence of a cooling medium, the self-quenching property, low distortion due to localized heat input, environmentally friendly behavior, and short operation time. Today, a variety of injection machines are used in the plastic, textile, electrical and mechanical industries. Due to the fast growth of composite technology, fiber reinforced polymer matrix composites are becoming an optional solution in these industries. Because of the abrasive nature of fiber reinforced polymer matrix composites on injection components, many parts wear out before the end of their design life. Niko, a company specialized in injection molded products, suffers from the short lifetime of the injection nozzles of its molds due to the use of fiber reinforced and, therefore, more abrasive polymer matrices. To prolong the lifetime of these molds, hardening the susceptible components, such as the injection nozzles, was necessary. In this paper, the laser hardening process is investigated on Unimax, a type of stainless steel. The investigation to obtain optimal results for the nozzle case was performed in three steps. First, the optimal parameters for the maximum possible hardenability of the investigated nozzle material were determined on a flat sample, using experimental testing as well as thermal simulation. Next, the effect of an inclination on the maximum temperature was analyzed, both by experimental testing and by validation through simulation. Finally, the data were combined and applied to the nozzle. This paper describes possible strategies and methods for laser hardening the nozzle to reach a hardness of at least 720 HV for the material investigated. 
It has been proven that the nozzle can be laser hardened to over 900 HV, with the option of even higher results when more precise positioning of the laser can be assured.
Keywords: absorptivity, fiber reinforced matrix, laser hardening, Nd:YAG laser
Procedia PDF Downloads 155
2773 The Role of Creative Works Dissemination Model in EU Copyright Law Modernization
Authors: Tomas Linas Šepetys
Abstract:
On online content-sharing service platforms, the ability of creators to restrict illicit use of audiovisual creative works has effectively been abolished, largely due to an infrastructure in which a huge volume of copyrighted audiovisual content can be made available to the public. The European Union legislator has attempted to strengthen the position of creators in the realm of online content-sharing services. Article 17 of the new Digital Single Market Directive deems online content-sharing service providers to carry out acts of communication to the public of any creative content uploaded to their platforms by users and imposes requirements to obtain licensing agreements. While such regulation intends to assert authors' ability to effectively control the dissemination of their creative works, it also creates the threat of parody content overblocking through automated content monitoring. Such a potentially paradoxical outcome of the EU legislator's efforts to deliver economic safeguards for creators on online content-sharing service platforms suggests a lack of insight on the legislator's part regarding the economic exploitation opportunities that the online content-sharing infrastructure provides to creators. The analysis conducted in this research shows that the aforementioned irregularities in the dissemination of parody and other creative content are caused by the EU legislator's failure to assess the value extraction conditions for parody creators on online content-sharing service platforms. Application of historical and modeling research methods reveals the existence of two creative content dissemination models and their unique mechanisms of commercial value creation. 
The obligations to obtain licenses, and the liability for creative content uploaded by users, set out in Article 17 of the Digital Single Market Directive represent a technological replication of the proprietary dissemination model, in which the creator is able to restrict access to creative content outside licensed retail channels. The online content-sharing service platforms represent an open dissemination model, in which the economic potential of creative content is based on an infrastructure of unrestricted access by users and partnership with the advertising services offered by the platform. Balanced modeling of the proprietary dissemination model in such an infrastructure requires not only automated content monitoring measures but also additional regulatory monitoring solutions to separate parody from other types of creative content. The example of the Digital Single Market Directive proves that regulation can dictate not only the technological establishment of a proprietary dissemination model but also a partial reduction of the open dissemination model, causing an imbalance between the economic interests of creators relying on these models. The results of this research confirm the informative role of the creative works dissemination model in the EU copyright law modernization process. A thorough understanding of the commercial prospects of the open dissemination model intrinsic to the structure of online content-sharing service platforms requires and encourages EU legislators to regulate safeguards for parody content dissemination. Implementing such safeguards would result in a common application of the proprietary and open dissemination models on online content-sharing service platforms and balanced protection of creators' economic interests explicitly based on those dissemination models.
Keywords: copyright law, creative works dissemination model, digital single market directive, online content-sharing services
Procedia PDF Downloads 74
2772 Privacy Label: An Alternative Approach to Present Privacy Policies from Online Services to the User
Authors: Diego Roberto Goncalves De Pontes, Sergio Donizetti Zorzo
Abstract:
Studies show that most users do not read the privacy policies of the online services they use. Some authors claim that one of the main causes of this is that policies are long and usually hard to understand, which makes users lose interest in reading them. In this scenario, users may agree to terms without knowing what kind of data is being collected and why. Given that, we aimed to develop a model that presents the contents of privacy policies in an easy and graphical way for the user to understand. We call it the Privacy Label. Using information retrieval techniques, we propose an architecture that is able to extract from the policies what kind of data is being collected and to what end, and to show it to the user in an automated way. To assess our model, we calculated the precision, recall, and F-measure metrics on the information extracted by our technique. The results were 68.53%, 85.61%, and 76.13%, respectively, making it possible for the final user to understand which data is being collected without reading the whole policy. Our proposal can also facilitate notice-and-choice by presenting privacy policy information in an alternative way to online users.
Keywords: privacy, policies, user behavior, computer human interaction
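The precision, recall, and F-measure figures reported above follow from standard definitions over the extraction counts; a minimal sketch (the counts below are illustrative, not taken from the study):

```python
def prf_metrics(tp, fp, fn):
    """Precision, recall and F-measure from true-positive,
    false-positive and false-negative extraction counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Illustrative counts only, not the study's data:
p, r, f = prf_metrics(tp=6, fp=2, fn=1)
```

With these toy counts, precision is 0.75, recall 6/7, and the F-measure their harmonic mean, 0.8.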
Procedia PDF Downloads 304
2771 Dynamic Test for Stability of Bar Loaded by a Compression Force Directed Towards the Pole
Authors: Elia Efraim, Boris Blostotsky
Abstract:
The phenomenon of buckling of structural elements under compression arises in many cases of loading and must be considered in many structures and mechanisms. The present work describes the method and results of a dynamic test for buckling of a bar loaded by a compression force directed towards the pole. Experimental determination of the critical force for such a system has not been made previously. The tested object is a bar with a semi-rigid connection to the base at one of its ends, and with a hinge moving along a circle at the other. The test consists of measuring the natural frequency of the bar at different values of the compression load. The lateral stiffness is calculated from the natural frequency and the reduced mass at the bar's movable end. The critical load is determined by extrapolating the values of lateral stiffness down to zero. For the experimental investigation, a special test-bed was created that allows stability testing at positive and negative curvature of the movable end's trajectory, as well as varying the rotational stiffness of the connection at the other end. Reducing the friction at the movable end extends the range of applied compression force. The testing method includes:
- a methodology for planning the experiment, which determines the required number of tests at various load values in the defined range and the type of extrapolating function;
- a methodology for the experimental determination of the reduced mass at the bar's movable end, including its own mass;
- a methodology for the experimental determination of the lateral stiffness of the uncompressed bar's rotational semi-rigid connection at the base.
For planning the experiment and for comparing the experimental results with the theoretical values of the critical load, the analytical dependencies of the lateral stiffness of the bar with the defined end conditions on the compression load were derived.
In the particular case of a perfectly rigid connection of the bar to the base, the critical load value corresponds to the solution by S.P. Timoshenko. Good agreement between the calculated and experimental values was obtained.
Keywords: buckling, dynamic method, end-fixity factor, force directed towards a pole
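The stiffness-from-frequency step and the extrapolation to zero stiffness can be sketched numerically. This is a simplified single-mode mass-spring idealization with synthetic numbers, not the paper's measured data:

```python
import numpy as np

def lateral_stiffness(freq_hz, reduced_mass_kg):
    # Single-mode idealization: k = m * (2*pi*f)^2
    return reduced_mass_kg * (2.0 * np.pi * freq_hz) ** 2

def critical_load(loads, stiffnesses):
    # Fit k(P) with a straight line and extrapolate to k = 0
    slope, intercept = np.polyfit(loads, stiffnesses, 1)
    return -intercept / slope

# Synthetic data: stiffness falls linearly from 1000 N/m at P = 0
loads = np.array([0.0, 100.0, 200.0, 300.0])
stiff = 1000.0 - 2.0 * loads  # vanishes at P = 500 N
p_cr = critical_load(loads, stiff)
```

In practice the fitted function need not be linear; the paper's experiment-planning methodology is precisely about choosing the extrapolating function and the number of load points.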
Procedia PDF Downloads 349
2770 Optimizing Usability Testing with Collaborative Method in an E-Commerce Ecosystem
Authors: Markandeya Kunchi
Abstract:
Usability testing (UT) is one of the vital steps in the user-centred design (UCD) process when designing a product. In an e-commerce ecosystem, UT becomes essential, as new products, features, and services are launched very frequently, and there are losses for the company if an unusable and inefficient product is put on the market and rejected by customers. This paper tries to answer why UT is important in the product life-cycle of an e-commerce ecosystem. Secondary user research was conducted to find out the work patterns, development methods, types of stakeholders, technology constraints, etc. of a typical e-commerce company. Qualitative user interviews were conducted with product managers and designers to find out the structure, project planning, product management method, and role of the design team in a mid-level company. The paper addresses the usual apprehensions of a company about inculcating UT within the team, and identifies factors like monetary resources, the lack of a usability expert, narrow timelines, and a lack of understanding from higher management as some primary reasons. Outsourcing UT to vendors is also very prevalent among mid-level e-commerce companies, but it has severe repercussions, such as very little team involvement, huge cost, misinterpretation of the findings, elongated timelines, and a lack of empathy towards the customer. The shortfalls of having no UT process in place within the team, or of conducting UT through vendors, are bad user experiences for customers interacting with the product and badly designed products that are neither useful nor usable. As a result, companies see dipping conversion rates in apps and websites, huge bounce rates, and increased uninstall rates. Thus, there was a need for a leaner UT system that could solve all these issues for the company. This paper highlights optimizing the UT process with a collaborative method.
The degree of optimization and the structure of the collaborative method are the highlights of this paper. The collaborative method of UT is one in which the company's centralised design team takes responsibility for conducting and analysing the UT. The UT is usually formative: designers take the findings into account and use them in the ideation process. The success of the collaborative method of UT is due to its ability to sync with the product management method employed by the company or team. The collaborative method focuses on engaging various teams (design, marketing, product, administration, IT, etc.), each with its own defined roles and responsibilities, in conducting a smooth UT with users in-house. The paper finally highlights the positive results of the collaborative UT method after conducting more than 100 in-lab interviews with users across the different lines of business. Some of these are improved interaction between stakeholders and the design team, empathy towards users, improved design iteration, better sanity checks of design solutions, optimization of time and money, and effective and efficient design solutions. The future scope of collaborative UT is to make this method leaner by reducing the number of days to complete the entire project, from planning between teams to publishing the UT report.
Keywords: collaborative method, e-commerce, product management method, usability testing
Procedia PDF Downloads 118
2769 A Tool Tuning Approximation Method: Exploration of the System Dynamics and Its Impact on Milling Stability When Amending Tool Stickout
Authors: Nikolai Bertelsen, Robert A. Alphinas, Klaus B. Orskov
Abstract:
The shortest possible tool stickout has been the traditional go-to approach, with expectations of increased stability and productivity. However, experimental studies at the Danish Advanced Manufacturing Research Center (DAMRC) have proven that for some tool stickout lengths there exist local productivity optimums when utilizing Stability Lobe Diagrams for chatter avoidance. This contradicts traditional logic and the best practices taught to machinists. This paper explores the vibrational characteristics and behaviour of a milling system over the tool stickout length. The experimental investigation was conducted by tap testing multiple endmills while varying the tool stickout length. For each length, the modal parameters were recorded and mapped to visualize behavioural tendencies. Furthermore, the paper explores the correlation between the modal parameters and the Stability Lobe Diagram to outline the influence and importance of each parameter in a multi-mode system. The insights are conceptualized into a tool tuning approximation solution. It builds on an almost linear change in the natural frequencies when amending tool stickout, which results in changed positions of the chatter-free stability lobes. Furthermore, if the natural frequencies of two modes become too close, the dynamic absorber effect sets in. This phenomenon increases the critical stable depth of cut, allowing for a more stable milling process. Validation tests of the tool tuning approximation solution have shown varying success. This outlines the need for further research on the boundary conditions of the solution, to understand under which conditions the tool tuning approximation solution is applicable.
If the conditions are defined, the conceptualized tool tuning approximation solution outlines an approach for quickly and roughly approximating tool stickouts, with the potential for increased stiffness and optimized productivity.
Keywords: milling, modal parameters, stability lobes, tap testing, tool tuning
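The link between the tapped modal parameters and the chatter-free depth of cut can be illustrated with the classic single-mode regenerative-chatter limit, a_lim = -1 / (2 * Kt * Re[G(i*omega)]). This is a textbook sketch, not DAMRC's model, and the modal values below are placeholders:

```python
import numpy as np

def sdof_frf(omega, k, zeta, omega_n):
    """Receptance of one mode: G(i*omega) = (1/k) / (1 - r^2 + 2j*zeta*r)."""
    r = omega / omega_n
    return (1.0 / k) / (1.0 - r**2 + 2j * zeta * r)

def limiting_depth(omega, k, zeta, omega_n, kt):
    """Classic regenerative-chatter limit; physically meaningful only
    where the real part of the FRF is negative (above resonance)."""
    re_g = sdof_frf(omega, k, zeta, omega_n).real
    return -1.0 / (2.0 * kt * re_g)

# Placeholder modal parameters: k = 2e7 N/m, zeta = 3 %, f_n = 800 Hz
omega_n = 2.0 * np.pi * 800.0
a_lim = limiting_depth(1.1 * omega_n, 2e7, 0.03, omega_n, 6e8)
```

Because the natural frequency enters through the FRF, the near-linear shift of omega_n with stickout length moves the lobes along the spindle-speed axis, which is what the tool tuning approximation exploits.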
Procedia PDF Downloads 155
2768 Non-Conformance Clearance through an Intensified Mentorship towards ISO 15189 Accreditation: The Case of Jimma and Hawassa Hospital Microbiology Laboratories, Ethiopia
Authors: Dawit Assefa, Kassaye Tekie, Gebrie Alebachew, Degefu Beyene, Bikila Alemu, Naji Mohammed, Asnakech Agegnehu, Seble Tsehay, Geremew Tasew
Abstract:
Background: Implementation of a Laboratory Quality Management System (LQMS) is critical to ensure accurate, reliable, and efficient laboratory testing of antimicrobial resistance (AMR). However, in Ethiopia, LQMS implementation and progress toward accreditation in the AMR surveillance laboratory testing setting remain limited. By addressing non-conformances (NCs) and working towards accreditation, microbiology laboratories can improve the quality of their services, increase staff competence, and contribute to mitigating the spread of AMR. Methods: Using standard ISO 15189 horizontal and vertical assessment checklists, certified assessors identified NCs at the Hawassa and Jimma Hospital microbiology laboratories. The Ethiopian Public Health Institute AMR mentors and IDDS staff prioritized closing the NCs through the implementation of an intensified mentorship program that included ISO 15189 orientation training, resource allocation, and action plan development. Results: For the two facilities to clear their NCs, an intensified mentorship approach was adopted: ISO 15189 orientation training was provided; buffer reagents, controls, standards, and auxiliary equipment were supplied; and equipment maintenance and calibration were facilitated. Method verification and competency assessment were also conducted, along with the implementation of standard operating procedures and recommended corrective actions. This approach enhanced the laboratories' readiness for accreditation. After addressing their NCs, the two laboratories applied to the Ethiopian Accreditation Services for ISO 15189 accreditation. Conclusions: Clearing NCs through intensified mentorship was crucial in preparing the two laboratories for accreditation and improving the quality of laboratory test results. This approach can guide other microbiology laboratories' accreditation attainment efforts.
Keywords: non-conformance clearance, intensified mentorship, accreditation, ISO 15189
Procedia PDF Downloads 89
2767 Monitoring of Educational Achievements of Kazakhstani 4th and 9th Graders
Authors: Madina Tynybayeva, Sanya Zhumazhanova, Saltanat Kozhakhmetova, Merey Mussabayeva
Abstract:
One of the leading indicators of education quality is the level of students' educational achievements. The processes of modernization of the Kazakhstani education system have predetermined the need to improve the national system of assessing the quality of education. The results of assessment greatly contribute to addressing questions about the current state of the educational system in the country. The monitoring of students' educational achievements (MEAS) is the systematic measurement of the quality of education for compliance with the state obligatory standard of Kazakhstan. This systematic measurement is independent of educational organizations and approved by order of the Minister of Education and Science of Kazakhstan. The MEAS was conducted in the regions of Kazakhstan for the first time in April 2022 by the National Testing Centre; the measurement has no legal consequences, either for students or for educational organizations. Students' achievements were measured in three subject areas: reading, mathematics, and science literacy. 105 thousand students from 1,436 schools of Kazakhstan took part in the testing. The monitoring was accompanied by a survey of students, teachers, and school leaders, with the goal of identifying which contextual factors affect learning outcomes. The testing was carried out in a computer format. The test tasks of MEAS are ranked according to three levels of difficulty: basic, medium, and high. Fourth graders were asked to complete 30 closed-type tasks. The average score was 21 points out of 30, meaning that 70% of tasks were successfully completed. The total number of test tasks for 9th grade students was 75 questions. The results of ninth graders were comparatively lower, with a success rate of 63%. MEAS did not reveal a statistically significant gap in results in terms of the language of instruction, territorial status, or type of school.
The trend of reducing the gap in these indicators has also been noted in recent international studies conducted across the country, in particular PISA for schools in Kazakhstan. However, there is a regional gap in MEAS performance. The difference between the highest and lowest regional scores was 11% of task completion in the 4th grade and 14% in the 9th grade. The results of 4th grade students in reading, mathematics, and science literacy were 71.5%, 70%, and 66.9%, respectively. The results of ninth graders in reading, mathematics, and science literacy were 69.6%, 54%, and 60.8%, respectively. The surveys revealed that students' educational achievements are considerably influenced by such factors as the subject competences of teachers, as well as the school climate and the motivation of students. Thus, the results of MEAS indicate the need for an integrated approach to improving the quality of education. In particular, a combination is required of improving the content of curricula and textbooks, internal and external assessment of students' educational achievements, educational programs of pedagogical specialties, and advanced training courses.
Keywords: assessment, secondary school, monitoring, functional literacy, Kazakhstan
Procedia PDF Downloads 104
2766 A Bayesian Approach for Analyzing Academic Article Structure
Authors: Jia-Lien Hsu, Chiung-Wen Chang
Abstract:
Research articles may follow a simple and succinct structure of organizational patterns, called moves. For example, considering extended abstracts, we observe that an extended abstract usually consists of five moves: Background, Aim, Method, Results, and Conclusion. As another example, when publishing articles in PubMed, authors are encouraged to provide a structured abstract, which is an abstract with distinct and labeled sections (e.g., Introduction, Methods, Results, Discussion) for rapid comprehension. This paper introduces a method for the computational analysis of move structures (i.e., Background-Purpose-Method-Result-Conclusion) in the abstracts and introductions of research documents, replacing a manual analysis process that is time-consuming and labor-intensive. In our approach, sentences in a given abstract and introduction are automatically analyzed and labeled with a specific move (i.e., B-P-M-R-C in this paper) to reveal their rhetorical status. As a result, it is expected that the automatic analytical tool for move structures will help non-native speakers or novice writers to be aware of appropriate move structures and to internalize relevant knowledge to improve their writing. In this paper, we propose a Bayesian approach to determine move tags for research articles. The approach consists of two phases, a training phase and a testing phase. In the training phase, we build a Bayesian model based on a couple of given initial patterns and the corpus, a subset of CiteSeerX. In the beginning, the prior probability of the Bayesian model relies solely on the initial patterns. Subsequently, with respect to the corpus, we process each document one by one: extract features, determine tags, and update the Bayesian model iteratively. In the testing phase, we compare our results with tags manually assigned by experts.
In our experiments, the accuracy of the proposed approach reaches a promising 56%.
Keywords: academic English writing, assisted writing, move tag analysis, Bayesian approach
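A Bayesian move tagger of this general kind can be sketched as a multinomial naive-Bayes classifier over bag-of-words sentences. The paper's actual features and update scheme are richer; the toy training sentences and labels below are invented for illustration:

```python
from collections import Counter, defaultdict
import math

class MoveTagger:
    """Multinomial naive Bayes over bag-of-words sentences,
    with add-one smoothing, for move tags such as B/P/M/R/C."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # tag -> word frequencies
        self.tag_counts = Counter()              # tag -> sentence count
        self.vocab = set()

    def train(self, sentences, tags):
        for sent, tag in zip(sentences, tags):
            self.tag_counts[tag] += 1
            for w in sent.lower().split():
                self.word_counts[tag][w] += 1
                self.vocab.add(w)

    def tag(self, sentence):
        words = sentence.lower().split()
        def log_score(t):
            total = sum(self.word_counts[t].values())
            s = math.log(self.tag_counts[t])
            for w in words:
                s += math.log((self.word_counts[t][w] + 1)
                              / (total + len(self.vocab)))
            return s
        return max(self.tag_counts, key=log_score)

# Invented toy corpus, one sentence per move tag:
tagger = MoveTagger()
tagger.train(
    ["previous studies have examined this problem",
     "we aim to develop a model",
     "the results show high accuracy"],
    ["B", "P", "R"],
)
```

The iterative aspect of the paper's training phase corresponds to calling train repeatedly as newly tagged documents are folded back into the counts.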
Procedia PDF Downloads 330
2765 Upper Bound of the Generalized P-Value for the Difference between Two Future Population Means
Authors: Rada Somkhuean, Sa-aat Niwitpong, Suparat Niwitpong
Abstract:
This paper presents generalized p-values for testing the difference between two future population means when the variances are unknown, covering both the case of equal variances and the case of unequal variances. We also derive a closed-form expression for the upper bound of the proposed generalized p-value.
Keywords: generalized p-value, two future population means, upper bound, variances
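Generalized p-values of this kind are typically evaluated by Monte Carlo over generalized pivotal quantities. The sketch below handles the standard Behrens-Fisher case of two current (not future) population means, so it illustrates the machinery rather than the paper's own derivation or its closed-form bound:

```python
import numpy as np

def generalized_p_value(xbar1, s1, n1, xbar2, s2, n2,
                        delta0=0.0, draws=100_000, seed=0):
    """Monte Carlo generalized p-value for H0: mu1 - mu2 <= delta0
    with unknown, possibly unequal variances. Uses the standard
    generalized pivotal quantity for each mean:
    R_mu = xbar - Z * s / sqrt(n * U / (n - 1)),
    where Z ~ N(0, 1) and U ~ chi-square(n - 1)."""
    rng = np.random.default_rng(seed)
    z1, z2 = rng.standard_normal(draws), rng.standard_normal(draws)
    u1 = rng.chisquare(n1 - 1, draws)
    u2 = rng.chisquare(n2 - 1, draws)
    r1 = xbar1 - z1 * s1 / np.sqrt(n1 * u1 / (n1 - 1))
    r2 = xbar2 - z2 * s2 / np.sqrt(n2 * u2 / (n2 - 1))
    return float(np.mean(r1 - r2 <= delta0))
```

When the observed means coincide the p-value hovers around 0.5, and it shrinks toward zero as the observed difference grows relative to the standard errors.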
Procedia PDF Downloads 383
2764 IoT Based Monitoring Temperature and Humidity
Authors: Jay P. Sipani, Riki H. Patel, Trushit Upadhyaya
Abstract:
Today there is a demand to monitor environmental factors in almost all research institutes and industries, and even for domestic use. Analog data measurement requires manual effort to note readings, with the possibility of human error, and such systems fail to provide and store precise parameter values with high accuracy. Analog systems also have the drawback of limited storage/memory. Therefore, there is a requirement for a smart system that is fully automated, accurate, and capable of monitoring all the environmental parameters with the utmost possible accuracy. Besides, it should be cost-effective as well as portable. This paper presents Wireless Sensor (WS) data communication using a DHT11, an Arduino, a SIM900A GSM module, a mobile device, and a Liquid Crystal Display (LCD). The experimental setup includes the heating arrangement of the DHT11 and transmission of its data using the Arduino and the SIM900A GSM shield. The mobile device receives the data via the Arduino and GSM shield and displays it on the LCD as well. The heating arrangement is used to heat and cool the temperature sensor to study its characteristics.
Keywords: wireless communication, Arduino, DHT11, LCD, SIM900A GSM module, mobile phone SMS
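On the receiving side, the readings the Arduino emits have to be parsed before they can be logged or forwarded by SMS. A minimal host-side sketch, assuming a hypothetical "T:<degC>,H:<percent>" line format that the paper does not specify:

```python
def parse_reading(line):
    """Parse one hypothetical 'T:25.4,H:60.2' serial line into floats."""
    fields = dict(part.split(":", 1) for part in line.strip().split(","))
    return float(fields["T"]), float(fields["H"])

def format_alert_sms(temp_c, humidity):
    """Body text that a GSM module such as the SIM900A could send
    (the wording here is an illustrative choice, not the paper's)."""
    return f"ALERT temp={temp_c:.1f}C humidity={humidity:.1f}%"
```

In a real deployment, parse_reading would consume lines from a serial port and format_alert_sms would feed the module's SMS-sending command.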
Procedia PDF Downloads 281
2763 The Forensic Swing of Things: The Current Legal and Technical Challenges of IoT Forensics
Authors: Pantaleon Lutta, Mohamed Sedky, Mohamed Hassan
Abstract:
The inability of organizations to put in place management control measures for Internet of Things (IoT) complexities persists as a risk concern. Policy makers have been left to scramble for measures to combat these security and privacy concerns. IoT forensics is a cumbersome process, as there is no standardization of IoT products and no or only limited historical data are stored on the devices. This paper highlights why IoT forensics is a unique adventure and brings out the legal challenges encountered in the investigation process. A quadrant model is presented to study the conflicting aspects of IoT forensics. The model analyses the effectiveness of the forensic investigation process versus the admissibility and integrity of the evidence, taking into account user privacy and the providers' compliance with laws and regulations. Our analysis concludes that a semi-automated forensic process using machine learning could eliminate the human factor from the profiling and surveillance processes, and hence resolve the issues of data protection (privacy and confidentiality).
Keywords: cloud forensics, data protection laws, GDPR, IoT forensics, machine learning
Procedia PDF Downloads 149
2762 Durability of Light-Weight Concrete
Authors: Rudolf Hela, Michala Hubertova
Abstract:
The paper focuses on research into the durability and lifetime of dense light-weight concrete with the artificial light-weight aggregate Liapor exposed to various types of aggressive environment. The experimental part describes the testing of the designed concrete of various strength classes and volume weights exposed to cyclical freezing, frost with chemical de-icers, and various types of chemically aggressive environment.
Keywords: aggressive environment, durability, physical-mechanical properties, light-weight concrete
Procedia PDF Downloads 266
2761 Preparation Nanocapsules of Chitosan Modified With Selenium Extracted From the Lactobacillus Acidophilus and Their Anticancer Properties
Authors: Akbar Esmaeili, Mahnoosh Aliahmadi
Abstract:
This study synthesized a modified imaging agent of gallium@deferoxamine/folic acid/chitosan/polyaniline/polyvinyl alcohol (Ga@DFA/FA/CS/PANI/PVA) containing Morus nigra extract, with selenium nanoparticles prepared from Lactobacillus acidophilus. Using the impregnation method, the Se nanoparticles were then deposited on Ga@DFA/FA/CS/PANI/PVA. The modified contrast agents were mixed with M. nigra extract, and their antibacterial activities were investigated by applying them to L929 cell lines. The influence of variable factors was investigated, including (1) surfactant, (2) solvent, (3) aqueous phase, (4) pH, (5) buffer, (6) minimum inhibitory concentration (MIC), (7) minimum bactericidal concentration (MBC), (8) cytotoxicity on cancer cells, (9) antibiotic, (10) antibiogram, (11) release and loading, (12) the emotional effect, (13) the concentration of nanoparticles, (14) olive oil, and (15) thermal methods. The structure and morphology of the synthesized contrast agents were characterized by zeta potential sizer analysis (ZPS), X-ray diffraction (XRD), Fourier-transform infrared (FT-IR) spectroscopy, energy-dispersive X-ray (EDX) analysis, ultraviolet-visible (UV-Vis) spectra, and scanning electron microscopy (SEM). The experimental section was conducted and monitored by response surface methods (RSM), MTT, MIC, MBC, and cancer cytotoxic conversion assays. Antibiogram testing of the NCs on Pseudomonas aeruginosa bacteria was successful, and MIC = 2 was obtained with a less harmful effect. All experimental sections confirmed that our synthesized particles have potent antioxidant properties, and antibiogram testing revealed that the NPs could kill P. aeruginosa. A variety of synthetic conditions were explored using the emulsion diffusion method by varying parameters; the optimum state of DFA release from Ga@DFA/FA/CS/PANI/PVA NPs (6 ml) was achieved with pH = 5.5, time = 3 h, NCs and DFA (3 mg), and buffer (20 ml).
DFA in Ga@DFA/FA/CS/PANI/PVA was released and showed an absorption peak at 378 nm when a 300-rpm magnetic stirring rate was applied. In this report, Ga decreased the harmful effect on the human body.
Keywords: nanocapsules, technology, biology, nano
Procedia PDF Downloads 39
2760 Data Mining Meets Educational Analysis: Opportunities and Challenges for Research
Authors: Carla Silva
Abstract:
Recent developments in information and communication technology enable us to acquire, collect, and analyse data in various fields of socioeconomic-technological systems. Along with the increase of economic globalization and the evolution of information technology, data mining has become an important approach for economic data analysis. As a result, there has been a critical need for automated approaches to the effective and efficient usage of massive amounts of educational data, in order to support institutions in strategic planning and investment decision-making. In this article, we address data from several different perspectives and define the data applied to the sciences. Many believe that 'big data' will transform business, government, and other aspects of the economy. We discuss how new data may impact educational policy and educational research. Large-scale administrative data sets and proprietary private-sector data can greatly improve the way we measure, track, and describe educational activity and educational impact. We also consider whether the big data predictive modeling tools that have emerged in statistics and computer science may prove useful in educational research and, furthermore, in economics. Finally, we highlight a number of challenges and opportunities for future research.
Keywords: data mining, research analysis, investment decision-making, educational research
Procedia PDF Downloads 356
2759 Shared Decision Making in Oropharyngeal Cancer: The Development of a Decision Aid for Resectable Oropharyngeal Carcinoma, a Mixed Methods Study
Authors: Anne N. Heirman, Lisette van der Molen, Richard Dirven, Gyorgi B. Halmos, Michiel W.M. van den Brekel
Abstract:
Background: Due to the rising incidence of oropharyngeal squamous cell cancer (OPSCC), many patients face the choice between transoral (robotic) surgery and radiotherapy, which have equal survival and oncological outcomes; functional outcomes also differ little over the years. With this study, the wants and needs of patients and caregivers were identified in order to develop a comprehensible patient decision aid (PDA). Methods: The development of this PDA is based on the International Patient Decision Aid Standards criteria. In phase 1, the relevant literature was reviewed and compared to current counseling papers, and we interviewed ten post-treatment patients and ten doctors from four head and neck centers in the Netherlands; the interviews were transcribed verbatim and analyzed. With these results, the first draft of the PDA was developed. Phase 2 involved testing the first draft for comprehensibility and usability, and phase 3 involved testing for feasibility. After this phase, the final version of the PDA was developed. Results: All doctors and patients agreed that a PDA was needed. Phase 1 showed that 50% of patients felt well-informed after standard care and 35% missed information about treatment possibilities. Side effects and functional outcomes were rated as the most important for decision-making. With this information, the first version was developed. Doctors and patients stated in phase 2 that they were satisfied with the comprehensibility and usability, but that there was too much text. The PDA underwent text-reduction revisions and received more graphics. After the revisions, all doctors found the PDA feasible and felt it would contribute to regular counseling. Patients were satisfied with the results and wished they had seen it before their treatment. Conclusion: Decision-making for OPSCC should focus on differences in side effects and functional outcomes. Patients and doctors found the PDA to be of great value.
Future research will explore the benefits of the PDA in clinical practice.
Keywords: head-and-neck oncology, oropharyngeal cancer, patient decision aid, development, shared decision making
Procedia PDF Downloads 141