Search results for: degree of operating leverage (DOL)
4901 Cash Flow Position and Corporate Performance: A Study of Selected Manufacturing Companies in Nigeria
Authors: Uzoma Emmanuel Igboji
Abstract:
The study investigates the effects of cash flow position on corporate performance in the manufacturing sector of Nigeria, using multiple regression techniques. The study involved a survey of five (5) manufacturing companies quoted on the Nigerian Stock Exchange. The data were obtained from the annual reports of the selected companies under study. The result shows that operating and financing cash flow have a significant positive relationship with corporate performance, while investing cash flow position has a significant negative relationship. The researcher recommended that the regulatory authorities should encourage external auditors of these quoted companies to use cash flow ratios in evaluating the performance of a company before expressing an independent opinion on the financial statement. This will give detailed financial information to existing and potential investors to make informed economic decisions.
Keywords: cash flow, financing, performance, operating
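To make the regression setup in the abstract concrete, the following is a minimal sketch of regressing a performance measure on the three cash flow components. All variable names and figures are synthetic assumptions for illustration, not data from the study.

```python
import numpy as np

# Illustrative multiple regression: performance vs. operating, investing, and
# financing cash flow positions. All figures below are synthetic placeholders.
rng = np.random.default_rng(0)
n = 50  # hypothetical firm-year observations
operating = rng.normal(100, 20, n)
investing = rng.normal(-40, 10, n)
financing = rng.normal(30, 15, n)
# Hypothetical performance proxy (e.g., a return-on-assets-like measure).
performance = 0.8 * operating - 0.5 * investing + 0.3 * financing + rng.normal(0, 10, n)

X = np.column_stack([np.ones(n), operating, investing, financing])
coefs, residuals, rank, _ = np.linalg.lstsq(X, performance, rcond=None)
for name, b in zip(["intercept", "operating", "investing", "financing"], coefs):
    print(f"{name:>10}: {b:+.3f}")
```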
Procedia PDF Downloads 315
4900 Primal Instinct: Formation of Food Aversion
Authors: Zihuan (Dylan) Wang
Abstract:
This paper analyzes the formation of human food aversion from a biological perspective. It points out that this biased behavior is formed through the accumulation of long-term survival and life experiences. By introducing the "Food Chain Energy Pyramid" model and the analogous deduction of the "Human Food Aversion Pyramid," with energy conversion efficiency as the primary reason, it analyzes the underlying reasons for the formation of food preferences. Food industry professionals can gain inspiration from this article to combine the theory presented with their expertise in order to leverage product quality and promote environmentally conscious practices.
Keywords: food aversion, food preference, energy conversion efficiency, food and culture, nutrition, research and development
Procedia PDF Downloads 59
4899 Cartel Formation with Differentiated Products, Asymmetric Cost, and Quantity Competition: The Case of Three Firms
Authors: Burkhard Hehenkamp, Tahmina Faizi
Abstract:
In this paper, we analyze the formation of cartels along with the stability of the cartel for the case of three firms that produce differentiated products and differ in their cost of production. Both cost and demand are linear, and firms compete in quantities once a cartel has been formed (or not). It turns out that the degree of product differentiation has a direct effect on the incentive to form a cartel. Firstly, when goods are complements or close substitutes, firms form a grand coalition. Secondly, for weak and medium substitutes, the firm with the lowest cost prefers to remain independent, while both other firms form a coalition. We also find that the producer profit of the stable coalition structure is nonmonotonic in the degree of product differentiation.
Keywords: collusion, cartel formation, cartel stability, differentiated market, quantity competition, oligopolies
Procedia PDF Downloads 96
4898 An Optimized Method for Calculating the Linear and Nonlinear Response of SDOF System Subjected to an Arbitrary Base Excitation
Authors: Hossein Kabir, Mojtaba Sadeghi
Abstract:
Finding the linear and nonlinear responses of a typical single-degree-of-freedom (SDOF) system has always been regarded as a time-consuming process. This study attempts to provide modifications to the renowned Newmark method in order to make it more time efficient than it used to be and more accurate by modifying the system in its own non-linear state. The efficacy of the presented method is demonstrated by applying three base excitations (the Tabas 1978, El Centro 1940, and Mexico City/SCT 1985 earthquakes) to an SDOF system to compute the strength reduction factor, yield pseudo-acceleration, and ductility factor.
Keywords: single-degree-of-freedom system (SDOF), linear acceleration method, nonlinear excited system, equivalent displacement method, equivalent energy method
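For reference, the sketch below shows a plain (unmodified) linear Newmark-beta time-stepping integration of an SDOF oscillator under base acceleration, with a synthetic ground motion standing in for the recorded earthquakes. It is not the authors' modified algorithm, only the baseline scheme the abstract starts from.

```python
import numpy as np

def newmark_sdof(ag, dt, omega_n, zeta, gamma=0.5, beta=0.25):
    """Linear Newmark-beta response of a unit-mass SDOF oscillator to base
    acceleration ag (array, m/s^2). Returns relative displacement history."""
    m = 1.0
    k = m * omega_n**2
    c = 2.0 * zeta * m * omega_n
    n = len(ag)
    u = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
    p = -m * ag                          # effective load from base excitation
    a[0] = (p[0] - c * v[0] - k * u[0]) / m
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    A = m / (beta * dt) + gamma * c / beta
    B = m / (2 * beta) + dt * c * (gamma / (2 * beta) - 1.0)
    for i in range(n - 1):
        dp = (p[i + 1] - p[i]) + A * v[i] + B * a[i]   # incremental effective load
        du = dp / k_eff
        dv = gamma * du / (beta * dt) - gamma * v[i] / beta + dt * (1 - gamma / (2 * beta)) * a[i]
        da = du / (beta * dt**2) - v[i] / (beta * dt) - a[i] / (2 * beta)
        u[i + 1] = u[i] + du
        v[i + 1] = v[i] + dv
        a[i + 1] = a[i] + da
    return u

# Synthetic base excitation standing in for a recorded accelerogram (assumption).
dt = 0.01
t = np.arange(0, 20, dt)
ag = 2.0 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.1 * t)
u = newmark_sdof(ag, dt, omega_n=2 * np.pi, zeta=0.05)  # 1 Hz oscillator, 5% damping
print(f"peak relative displacement: {np.abs(u).max():.4f} m")
```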
Procedia PDF Downloads 320
4897 Modeling Aerosol Formation in an Electrically Heated Tobacco Product
Authors: Markus Nordlund, Arkadiusz K. Kuczaj
Abstract:
Philip Morris International (PMI) is developing a range of novel tobacco products with the potential to reduce individual risk and population harm in comparison to smoking cigarettes. One of these products is the Tobacco Heating System 2.2 (THS 2.2) (named the Electrically Heated Tobacco System (EHTS) in this paper), already commercialized in a number of countries (e.g., Japan, Italy, Switzerland, Russia, Portugal and Romania). During use, the patented EHTS heats a specifically designed tobacco product (Electrically Heated Tobacco Product (EHTP)) when inserted into a Holder (heating device). The EHTP contains tobacco material in the form of a porous plug that undergoes a controlled heating process to release chemical compounds into vapors, from which an aerosol is formed during cooling. The aim of this work was to investigate the aerosol formation characteristics for realistic operating conditions of the EHTS as well as for relevant gas mixture compositions measured in the EHTP aerosol, consisting mostly of water, glycerol and nicotine, but also other compounds at much lower concentrations. The nucleation process taking place in the EHTP during use when operated in the Holder has therefore been modeled numerically using an extended Classical Nucleation Theory (CNT) for multicomponent gas mixtures. Results from the performed simulations demonstrate that aerosol droplets are formed only in the presence of an aerosol former, mainly glycerol. Minor compounds in the gas mixture were not able to reach a supersaturated state alone and therefore could not generate aerosol droplets from the multicomponent gas mixture at the operating conditions simulated. For the analytically characterized aerosol composition and estimated operating conditions of the EHTS and EHTP, glycerol was shown to be the main aerosol former triggering the nucleation process in the EHTP. This implies that according to the CNT, an aerosol former, such as glycerol, needs to be present in the gas mixture for an aerosol to form under the tested operating conditions. To assess if these conclusions are sensitive to the initial amount of the minor compounds and to include and represent the total mass of the aerosol collected during the analytical aerosol characterization, simulations were carried out with initial masses of the minor compounds increased by as much as a factor of 500. Despite this extreme condition, no aerosol droplets were generated when glycerol, nicotine and water were treated as inert species and therefore not actively contributing to the nucleation process. This implies that according to the CNT, an aerosol cannot be generated without the help of an aerosol former from the multicomponent gas mixtures at the compositions and operating conditions estimated for the EHTP, even if all minor compounds are released or generated in a single puff.
Keywords: aerosol, classical nucleation theory (CNT), electrically heated tobacco product (EHTP), electrically heated tobacco system (EHTS), modeling, multicomponent, nucleation
Procedia PDF Downloads 277
4896 Assessing the Values and Destruction Degree of Archaeological Sites in Taiwan
Authors: Yung-Chung Chuang
Abstract:
The current situation and accumulated development of archaeological sites have a very high impact on their preservation value. This research set 3 archaeological sites in Taiwan as study areas. Assessment of the degree of destruction of cultural layers due to land use change and geomorphological change was conducted with aerial photographs (1976-1978; 2016-2017) and digital aerial survey technology on 2D and 3D geographic information system platforms. The results showed that the archaeological sites were all seriously affected by the high land use intensity between 1976 and 2017. Geomorphological changes caused by human cultivation and engineering construction were the main causes of site destruction, especially on private lands. Therefore, urban planning methods for land acquisition or land regulation are necessary.
Keywords: archaeological sites, accumulated development, destruction of cultural layers, geomorphological changes
Procedia PDF Downloads 208
4895 Non-Coplanar Nuclei in Heavy-Ion Reactions
Authors: Sahila Chopra, Hemdeep, Arshdeep Kaur, Raj K. Gupta
Abstract:
In recent times, we noticed an interesting and important role of the non-coplanar degree-of-freedom (Φ ≠ 0°) in heavy-ion reactions. Using the dynamical cluster-decay model (DCM) with the Φ degree-of-freedom included, we have studied three compound systems: 246Bk∗, 164Yb∗ and 105Ag∗. Here, within the DCM with the pocket formula for the nuclear proximity potential, we look for the effects of including compact, non-coplanar configurations (Φc ≠ 0°) on the non-compound nucleus (nCN) contribution in the total fusion cross section σfus. For 246Bk∗, formed in the 11B+235U and 14N+232Th reaction channels, the DCM with coplanar nuclei (Φc = 0°) shows an nCN contribution for the 11B+235U channel, but none for the 14N+232Th channel, which on including Φ gives both reaction channels as pure compound nucleus decays. In the case of 164Yb∗, formed in 64Ni+100Mo, the small nCN effects for Φ = 0° are reduced to almost zero for Φ ≠ 0°. Interestingly, however, 105Ag∗ for Φ = 0° shows a small nCN contribution, which gets strongly enhanced for Φ ≠ 0°, such that the characteristic property of PCN presents a change of behaviour, like that of a strongly fissioning superheavy element to a weakly fissioning nucleus; note that 105Ag∗ is a weakly fissioning nucleus and Psurv behaves like one for a weakly fissioning nucleus for both Φ = 0° and Φ ≠ 0°. Apparently, Φ is presenting itself as a good degree-of-freedom in the DCM.
Keywords: dynamical cluster-decay model, fusion cross sections, non-compound nucleus effects, non-coplanarity
Procedia PDF Downloads 302
4894 Radio-Frequency Technologies for Sensing and Imaging
Authors: Cam Nguyen
Abstract:
Rapid, accurate, and safe sensing and imaging of physical quantities or structures finds many applications and is of significant interest to society. Sensing and imaging using radio-frequency (RF) techniques, particularly, has gone through significant development and subsequently established itself as a unique territory in the sensing world. RF sensing and imaging has played a critical role in providing us with many sensing and imaging abilities beyond our human capabilities, benefiting both civilian and military applications - for example, from sensing abnormal conditions underneath some structures’ surfaces to detection and classification of concealed items, hidden activities, and buried objects. We present the developments of several sensing and imaging systems implementing RF technologies like ultra-wide band (UWB), synthetic-pulse, and interferometry. These systems are fabricated completely using RF integrated circuits. The UWB impulse system operates over multiple pulse durations from 450 to 1170 ps with 5.5-GHz RF bandwidth. It performs well through tests of various samples, demonstrating its usefulness for subsurface sensing. The synthetic-pulse system operating from 0.6 to 5.6 GHz can accurately assess subsurface structures. The synthetic-pulse system operating from 29.72-37.7 GHz demonstrates abilities for various surface and near-surface sensing such as profile mapping, liquid-level monitoring, and anti-personnel mine locating. The interferometric system operating at 35.6 GHz demonstrates its multi-functional capability for measurement of displacements and slow velocities. These RF sensors are attractive and useful for various surface and subsurface sensing applications. This paper was made possible by NPRP grant # 6-241-2-102 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
Keywords: RF sensors, radars, surface sensing, subsurface sensing
Procedia PDF Downloads 316
4893 Effects of Training on Self-Efficacy, Competence, and Target Complaints of Dementia Family Support Program Facilitators
Authors: Myonghwa Park, Eun Jeong Choi
Abstract:
Persons with dementia living at home have complex caregiving demands, which can be significant sources of stress for the family caregivers. Thus, dementia family support program facilitators struggle to provide various health and social services, facing diverse challenges. The purpose of this study was to examine the effects of a training program for dementia family support program facilitators on their self-efficacy, competence, and target complaints concerning operating their programs. We created a training program with systematic contents, composed of 10 sessions, and provided it to the facilitators. The participants were 32 people at 28 community dementia support centers who manage dementia family support programs, and they completed quantitative and qualitative self-report questionnaires before and after participating in the training program. Descriptive statistics were used, and pretest and posttest scores of self-efficacy, competence, and target complaints were compared with paired t-tests. We used Statistical Package for the Social Sciences (SPSS) statistics (Version 21) to analyze the data. The average age of the participants was 39.6 years old, and 84.4% of participants were nurses. There were statistically meaningful increases in facilitators’ self-efficacy scores (t = -4.45, p < .001) and competence scores (t = -2.133, p = 0.041) after participating in the training program and operating their own dementia family support programs. Also, the facilitators’ difficulties in conducting their dementia family support programs, assessed as target complaints, decreased. In particular, the facilitators’ lack of dementia expertise and experience decreased statistically significantly (t = 3.520, p = 0.002). Findings provided evidence of the benefits of the training program for facilitators to enhance managing dementia family support programs by improving the facilitators’ self-efficacy and competence and decreasing their difficulties regarding operating their programs.
Keywords: competence, dementia, facilitator, family, self-efficacy, training
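A minimal sketch of the pretest/posttest comparison reported above, using a paired t-test; the scores are hypothetical placeholders, not the study's questionnaire data.

```python
import numpy as np
from scipy import stats

# Hypothetical pretest/posttest self-efficacy scores for illustration only.
pre = np.array([3.1, 2.8, 3.4, 3.0, 2.9, 3.3, 3.2, 2.7, 3.0, 3.1])
post = np.array([3.6, 3.2, 3.5, 3.4, 3.3, 3.7, 3.4, 3.1, 3.3, 3.5])

t_stat, p_value = stats.ttest_rel(pre, post)  # paired (dependent) samples t-test
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```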
Procedia PDF Downloads 212
4892 Introducing a Proper Total Quality Management Model for Libraries
Authors: Alireza Shahraki, Kaveh Keshmiry Zadeh
Abstract:
Total quality management in libraries is of particular importance because high-quality libraries can facilitate the sustained development process in countries. This study has been conducted to examine the feasibility of implementing total quality management in the libraries of Sistan and Baluchestan and to provide an appropriate model for this concern. All of the officials and employees of Sistan and Baluchestan libraries (23 individuals) constitute the population of the study. The data gathering tool is a questionnaire designed based on ISO 9000. The data extracted from the questionnaires were analyzed using SPSS software. Results indicate that the highest degree of conformance to the 8 principles of ISO 9000 is attributed to the principle of 'users' (69.9%) and the lowest degree is associated with 'decision making based on facts' (39.1%). Moreover, a significant relationship was observed among the items (1 and 3), (2 and 5), (2 and 7), (3 and 5), (4 and 5), (4 and 7), (4 and 8), (5 and 7), and (7 and 8). According to the research findings, it can generally be said that it is not currently feasible to implement TQM in the libraries of Sistan and Baluchestan.
Keywords: quality management, total quality, university libraries, libraries management
Procedia PDF Downloads 340
4891 Performance Comparison of Joint Diagonalization Structure (JDS) Method and Wideband MUSIC Method
Authors: Sandeep Santosh, O. P. Sahu
Abstract:
We simulate an efficient multiple wideband and nonstationary source localization algorithm by exploiting both the non-stationarity of the signals and the array geometric information. This algorithm is based on the joint diagonalization structure (JDS) of a set of short-time power spectrum matrices at different time instants in each frequency bin. JDS can be used for quick and accurate multiple non-stationary source localization. The JDS algorithm is a one-stage process, i.e., it directly searches the directions of arrival (DOAs) over the continuous location parameter space. The JDS method requires that the number of sensors is not less than the number of sources. By observing the simulation results, one can conclude that the JDS method can localize two sources when their angular separation is not less than 7 degrees, whereas wideband MUSIC is only able to localize two sources separated by at least 18 degrees.
Keywords: joint diagonalization structure (JDS), wideband direction of arrival (DOA), wideband MUSIC
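For intuition about subspace-based DOA estimation, here is a narrowband MUSIC pseudospectrum sketch on a uniform linear array. It is neither the wideband MUSIC variant nor the JDS algorithm compared in the abstract, and all array parameters, source angles, and noise levels are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

# Narrowband MUSIC sketch on a uniform linear array (ULA); illustrative only.
rng = np.random.default_rng(1)
M, N = 8, 200                              # sensors, snapshots (assumed)
d = 0.5                                    # element spacing in wavelengths
true_doas = np.deg2rad([-10.0, 8.0])       # assumed source directions

def steering(theta):
    # ULA steering vectors, shape (M, len(theta))
    return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

A = steering(true_doas)
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise

R = X @ X.conj().T / N                     # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
En = eigvecs[:, : M - 2]                   # noise subspace (2 sources assumed known)

scan = np.deg2rad(np.linspace(-90, 90, 721))
spec = []
for th in scan:
    a = steering(np.array([th]))
    proj = En.conj().T @ a                 # projection onto the noise subspace
    spec.append(1.0 / np.real(proj.conj().T @ proj)[0, 0])
spec = np.array(spec)

idx, _ = find_peaks(spec)
top2 = idx[np.argsort(spec[idx])[-2:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(scan[top2])))
```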
Procedia PDF Downloads 468
4890 Obtaining High-Dimensional Configuration Space for Robotic Systems Operating in a Common Environment
Authors: U. Yerlikaya, R. T. Balkan
Abstract:
In this research, a method is developed to obtain high-dimensional configuration space for path planning problems. In typical cases, path planning problems are solved directly in the 3-dimensional (3-D) workspace. However, this method is inefficient in handling robots with various geometrical and mechanical restrictions. To overcome these difficulties, path planning may be formalized and solved in a new space which is called configuration space. The number of dimensions of the configuration space comes from the degrees of freedom of the system of interest. The method can be applied in two ways. In the first way, the point clouds of all the bodies of the system and their interactions are used. The second way is performed using the clearance function of simulation software, where the minimum distances between surfaces of bodies are simultaneously measured. A double-turret system is considered within the scope of this study. The 4-D configuration space of a double-turret system is obtained in these two ways. As a result, the difference between these two methods is around 1%, depending on the density of the point cloud. The disparity between the two methods steadily decreases as the point cloud density increases. At the end of the study, in order to verify the 4-D configuration space obtained, the 4-D path planning problem was realized as 2-D + 2-D, and a sample path planning was carried out using the A* algorithm. Then, the accuracy of the configuration space was verified using the obtained paths on the simulation model of the double-turret system.
Keywords: A* algorithm, autonomous turrets, high-dimensional C-space, manifold C-space, point clouds
Procedia PDF Downloads 139
4889 Speed Optimization Model for Reducing Fuel Consumption Based on Shipping Log Data
Authors: Ayudhia P. Gusti, Semin
Abstract:
It is known that the total operating cost of a vessel is dominated by the cost of fuel consumption. The question that arises is how to reduce a ship's fuel consumption so that the operational costs of fuel are minimized. As the basis of this kind of problem, sailing speed determination is an important factor to be considered by a shipping company. Optimal speed determination has a significant influence on the route and berth schedules of ships, which also affect vessel operating costs. The purpose of this paper is to clarify some important issues about ship speed optimization. Sailing speed, displacement, sailing time, and specific fuel consumption were obtained from shipping log data to be further analyzed for modeling the speed optimization. The presented speed optimization model is expected to affect the fuel consumption and to reduce the cost of fuel consumption.
Keywords: maritime transportation, reducing fuel, shipping log data, speed optimization
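A common simplifying assumption (not stated in the abstract) is that fuel consumption per unit time scales roughly with the cube of sailing speed. Under that assumption, the sketch below trades off fuel cost against time-dependent cost over a fixed leg; the reference consumption and all cost figures are illustrative, since the paper derives its relationships from shipping log data instead.

```python
import numpy as np

# Minimal speed-optimization sketch. Cubic fuel-speed law and all figures are
# assumptions for illustration only.
distance_nm = 5000.0          # leg length (nautical miles)
v_ref, f_ref = 14.0, 30.0     # reference speed (kn) and fuel use (t/day)
fuel_price = 600.0            # USD per tonne
daily_cost = 8000.0           # time-dependent operating cost, USD/day

speeds = np.linspace(8.0, 20.0, 200)           # candidate speeds (kn)
days = distance_nm / (speeds * 24.0)           # voyage duration in days
fuel_per_day = f_ref * (speeds / v_ref) ** 3   # cubic-law assumption
total_cost = days * (fuel_per_day * fuel_price + daily_cost)

best = speeds[np.argmin(total_cost)]
print(f"cost-minimizing speed ~ {best:.1f} kn, cost ~ {total_cost.min():,.0f} USD")
```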
Procedia PDF Downloads 568
4888 An Online Master's Degree Program for the Preparation of Adapted Physical Education Teachers for Children with Significant Developmental Disabilities
Authors: Jiabei Zhang
Abstract:
Online programs developed for preparing qualified teachers have significantly increased over the years in the United States of America (USA). However, no online graduate programs for training adapted physical education (APE) teachers for children with significant developmental disabilities are currently available in the USA. The purpose of this study was to develop an online master’s degree program for the preparation of APE teachers to serve children with significant developmental disabilities. The characteristics demonstrated by children with significant developmental disabilities, the competencies required for certified APE teachers, and the evidence-based positive behavioral interventions (PBI) documented for teaching children with significant developmental disabilities were fully reviewed in this study. An online graduate program with 14 courses for 42 credit hours (3 credit hours per course) was then developed for training APE teachers to serve children with significant developmental disabilities. Included in this online program are five components: (a) 2 capstone courses, (b) 4 APE courses, (c) 4 PBI courses, (d) 2 elective courses, and (e) 2 capstone courses. All courses will be delivered online through Desire2Learn administered by the Extended University Programs at Western Michigan University (WMU). An applicant who has a bachelor’s degree in physical education or special education is eligible for this proposed program. A student enrolled in this program is expected to complete all courses in 2.5 years while staying in their local area. This program will be submitted to the WMU curriculum committee for approval in the fall of 2018.
Keywords: adapted physical education, online program, teacher preparation, and significant disabilities
Procedia PDF Downloads 148
4887 Bit Error Rate Monitoring for Automatic Bias Control of Quadrature Amplitude Modulators
Authors: Naji Ali Albakay, Abdulrahman Alothaim, Isa Barshushi
Abstract:
The most common quadrature amplitude modulator (QAM) applies two Mach-Zehnder modulators (MZM) and one phase shifter to generate high-order modulation formats. The bias of the MZMs changes over time due to temperature, vibration, and aging factors. The change in the biasing causes distortion of the generated QAM signal, which leads to deterioration of bit error rate (BER) performance. Therefore, it is critical to be able to lock the MZM’s Q point to the required operating point for good performance. We propose a technique for automatic bias control (ABC) of a QAM transmitter using BER measurements and a gradient descent optimization algorithm. The proposed technique is attractive because it uses the pertinent metric, BER, which compensates for bias drifting independently from other system variations such as laser source output power. The performance and operating principles of the proposed scheme are simulated using OptiSystem simulation software for 4-QAM and 16-QAM transmitters.
Keywords: automatic bias control, optical fiber communication, optical modulation, optical devices
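To illustrate the control idea only, here is a toy bias-control loop: a stand-in BER model penalizes deviation of two MZM bias voltages and the phase-shifter bias from their ideal points, and finite-difference gradient descent walks the biases back. The BER model, the ideal bias values, and all constants are assumptions for illustration, not the authors' OptiSystem setup.

```python
import numpy as np

# Toy automatic-bias-control loop with a synthetic BER model (assumption).
TARGET = np.array([0.0, 0.0, np.pi / 2])      # assumed ideal I-bias, Q-bias, phase bias

def measured_ber(bias, floor=1e-6):
    err = bias - TARGET
    return floor * np.exp(12.0 * np.dot(err, err))   # BER grows as the bias drifts

bias = TARGET + np.array([0.3, -0.2, 0.25])   # drifted starting point
step, delta = 0.02, 1e-3
for it in range(200):
    grad = np.zeros(3)
    for k in range(3):                        # finite-difference gradient of log-BER
        e = np.zeros(3); e[k] = delta
        grad[k] = (np.log(measured_ber(bias + e)) - np.log(measured_ber(bias - e))) / (2 * delta)
    bias -= step * grad
    if it % 50 == 0:
        print(f"iter {it:3d}: BER = {measured_ber(bias):.3e}")
print("final bias offsets:", np.round(bias - TARGET, 4))
```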
Procedia PDF Downloads 189
4886 Corrosion Behavior of Steels in Molten Salt Reactors
Authors: Jana Rejková, Marie Kudrnová
Abstract:
This paper deals with research on materials for one of the Generation IV reactor types: the molten salt reactor. One of the advantages of molten salts applied as a coolant in reactors is the ability to operate at relatively low pressures, as opposed to cooling with water or gases. Compared to liquid metal cooling, which also allows lower operating pressures, salt melts are less prone to chemical reactions. The service life of the construction materials used is limited by the operating temperatures of the reactor and the content of impurities in the salts. For the corrosion resistance research, an experimental device was designed and assembled, enabling exposure at high temperatures, without access to oxygen, in a flowing inert gas atmosphere. Nickel alloys Inconel 601, 617, and 625 were tested in a mixture of chloride salts, LiCl–KCl (58.2–41.8 wt. %). The experiment showed high resistance of the materials used, and based on the results and XPS analysis, other construction materials were proposed for further experiments.
Keywords: molten salt, corrosion, nuclear reactor, nickel alloy
Procedia PDF Downloads 164
4885 Tribological Investigation of Piston Ring Liner Assembly
Authors: Bharatkumar Sutaria, Tejaskumar Chaudhari
Abstract:
Engine performance can be increased by minimizing losses. Various losses are observed in engines, i.e., thermal, heat, and mechanical losses. Mechanical losses are to the tune of 15 to 20% of the overall losses. The piston ring assembly contributes the highest share of friction among the mechanical frictional losses. Because piston speed varies over the stroke length, the friction force development is not uniform. In the present work, a comparison has been made between theoretical and experimental friction forces under different operating conditions. The experiments are performed using variable operating parameters such as load, speed, temperature and lubricants. It is found that the decreasing trend of friction force and friction coefficient agrees well with the mixed lubrication regime of the Stribeck curve. Overall, laboratory tests of the segmented piston ring assembly using multi-grade oil offer reasonably good results at room and elevated temperatures.
Keywords: friction force, friction coefficient, piston rings, Stribeck curve
Procedia PDF Downloads 486
4884 Energy Performance of Buildings Due to Downscaled Seasonal Models
Authors: Anastasia K. Eleftheriadou, Athanasios Sfetsos, Nikolaos Gounaris
Abstract:
The present work examines the suitability of a seasonal forecasting model downscaled with a very high spatial resolution in order to assess the energy performance and requirements of buildings. The developed model is applied to Greece with a forecast horizon of 5 months into the future. Greece, as a country in the middle of a financial crisis and facing serious societal challenges, is also very sensitive to climate change. The commonly used method for correlating climate change with building energy consumption is the concept of Degree Days (DD). This method can be applied to heating and cooling systems for better management of environmental, economic and energy crises, and can be used as a medium-term (3-6 months) planning tool to predict building needs and the country’s requirements for residential energy use.
Keywords: downscaled seasonal models, degree days, energy performance
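The standard degree-day bookkeeping referred to above sums daily temperature deficits (heating) and excesses (cooling) against a base temperature. A minimal sketch follows; the 18 °C base and the temperature series are illustrative assumptions.

```python
import numpy as np

# Heating and cooling degree days from daily mean temperatures (illustrative data).
base_c = 18.0
daily_mean_c = np.array([4.2, 6.8, 10.5, 15.1, 20.3, 25.7, 28.4,
                         27.9, 23.6, 17.2, 11.0, 6.1])  # sample daily means, deg C

hdd = np.clip(base_c - daily_mean_c, 0.0, None).sum()   # heating degree days
cdd = np.clip(daily_mean_c - base_c, 0.0, None).sum()   # cooling degree days
print(f"HDD = {hdd:.1f}, CDD = {cdd:.1f}")
```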
Procedia PDF Downloads 453
4883 Evaluation of Transfer Capability Considering Uncertainties of System Operating Condition and System Cascading Collapse
Authors: Nur Ashida Salim, Muhammad Murtadha Othman, Ismail Musirin, Mohd Salleh Serwan
Abstract:
Over the past few decades, the power system industry in many developing and developed countries has gone through a restructuring process, moving towards a deregulated power industry. This situation leads to competition among generation and distribution companies to provide quality and efficient production of electric energy, which will reduce the price of electricity. Therefore, it is important to obtain accurate values of the Available Transfer Capability (ATC) and Transmission Reliability Margin (TRM) in order to ensure effective power transfer between areas when uncertainties occur in the system. In this paper, the TRM and ATC are determined by taking into consideration the uncertainties of the system operating condition and system cascading collapse by applying the bootstrap technique. A case study of the IEEE RTS-79 is employed to verify the robustness of the technique proposed in the determination of TRM and ATC.
Keywords: available transfer capability, bootstrap technique, cascading collapse, transmission reliability margin
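As an illustration of the bootstrap step only, the sketch below resamples a set of transfer-capability values and derives a margin from the spread of the bootstrap means. The ATC samples and the specific margin definition (mean minus the 5th percentile of bootstrap means) are assumptions for demonstration, not the paper's formulation.

```python
import numpy as np

# Illustrative bootstrap of a transfer-capability margin (all values assumed).
rng = np.random.default_rng(42)
atc_samples = rng.normal(400.0, 35.0, size=200)   # hypothetical ATC values (MW)

boot_means = np.array([
    rng.choice(atc_samples, size=atc_samples.size, replace=True).mean()
    for _ in range(5000)
])
trm = atc_samples.mean() - np.percentile(boot_means, 5)  # assumed margin definition
print(f"mean ATC = {atc_samples.mean():.1f} MW, bootstrap margin ~ {trm:.1f} MW")
```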
Procedia PDF Downloads 408
4882 An Improved Method to Compute Sparse Graphs for Traveling Salesman Problem
Authors: Y. Wang
Abstract:
The traveling salesman problem (TSP) is NP-hard in combinatorial optimization. Research shows that algorithms for the TSP on sparse graphs have shorter computation times than those on the corresponding complete graphs. We present an improved iterative algorithm to compute sparse graphs for the TSP using frequency graphs computed with frequency quadrilaterals. The iterative algorithm is enhanced by adjusting two parameters of the algorithm. The computation time of the algorithm is O(CNmaxn²), where C is the number of iterations, Nmax is the maximum number of frequency quadrilaterals containing each edge, and n is the scale of the TSP. The experimental results showed the computed sparse graphs generally have fewer than 5n edges for most of these Euclidean instances. Moreover, the maximum degree and minimum degree of the vertices in the sparse graphs do not differ much. Thus, the computation time of the methods to resolve the TSP on these sparse graphs will be greatly reduced.
Keywords: frequency quadrilateral, iterative algorithm, sparse graph, traveling salesman problem
Procedia PDF Downloads 233
4881 Enhanced Analysis of Spatial Morphological Cognitive Traits in Lidukou Village through the Application of Space Syntax
Authors: Man Guo
Abstract:
This paper delves into the intricate interplay between spatial morphology and spatial cognition in Lidukou Village, utilizing a combined approach of spatial syntax and field data. Through a comparative analysis of the gathered data, it emerges that the spatial integration level of Lidukou Village exhibits a direct positive correlation with the spatial cognitive preferences of its inhabitants. Specifically, the areas within the village that exhibit a higher degree of spatial cognition are predominantly distributed along the axis primarily defined by Shuxiang Road. However, the accessibility to historical relics remains limited, lacking a coherent systemic relationship. To address the morphological challenges faced by Lidukou Village, this study proposes optimization strategies that encompass diverse perspectives, including the refinement of spatial mechanisms and the shaping of strategic spatial nodes.
Keywords: traditional villages, spatial syntax, spatial integration degree, morphological problem
Procedia PDF Downloads 43
4880 Intersubjectivity of Forensic Handwriting Analysis
Authors: Marta Nawrocka
Abstract:
In every legal proceeding in which expert evidence is taken, a major concern is the assessment of the evidential value of expert reports. Judicial institutions, while making decisions, rely heavily on the expert reports, because they usually do not possess 'special knowledge' of certain fields of science, which makes it impossible for them to verify the results presented in the proceedings. In handwriting studies, standards of analysis have been developed. They unify the procedures used by experts in comparing signs and in constructing expert reports. However, the methods used by experts are usually of a qualitative nature. They rely on the application of the expert's knowledge and experience and in effect leave a significant margin in the assessment. Moreover, the standards used by experts are still not very precise, and the process of reaching the conclusions is poorly understood. The above-mentioned circumstances indicate that expert opinions in the field of handwriting analysis, for many reasons, may not be sufficiently reliable. It is assumed that this state of affairs has its source in a very low level of intersubjectivity of the measuring scales and analysis procedures which constitute elements of this kind of analysis. Intersubjectivity is a feature of cognition which (in relation to methods) indicates the degree of consistency of results that different people receive using the same method. The higher the level of intersubjectivity is, the more reliable and credible the method can be considered. The aim of the conducted research was to determine the degree of intersubjectivity of the methods used by experts in the field of handwriting analysis. 30 experts took part in the study, and each of them received two signatures, with varying degrees of readability, for analysis. Their task was to distinguish graphic characteristics in the signature, estimate the evidential value of the found characteristics and estimate the evidential value of the signature. The obtained results were compared with each other using Krippendorff's alpha statistic, which numerically determines the degree of compatibility of the results (assessments) that different people receive under the same conditions using the same method. The estimation of the degree of compatibility of the experts' results for each of these tasks made it possible to determine the degree of intersubjectivity of the studied method. The study showed that during the analysis, the experts identified different signature characteristics and attributed different evidential value to them. In this scope, intersubjectivity turned out to be low. In addition, it turned out that experts named and described the same characteristics in various ways, and the language used was often inconsistent and imprecise. Thus, significant differences have been noted on the basis of language and applied nomenclature. On the other hand, experts attributed a similar evidential value to the entire signature (set of characteristics), which indicates that in this range, they were relatively consistent.
Keywords: forensic sciences experts, handwriting analysis, inter-rater reliability, reliability of methods
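A minimal implementation of Krippendorff's alpha for nominal data over a complete raters-by-units matrix is sketched below; the tiny rating matrix is hypothetical and not the study's data.

```python
import numpy as np

def krippendorff_alpha_nominal(ratings):
    """Krippendorff's alpha for nominal data, complete raters-by-units matrix."""
    ratings = np.asarray(ratings)
    _, n_units = ratings.shape
    values = sorted(set(ratings.ravel()))
    idx = {v: i for i, v in enumerate(values)}
    o = np.zeros((len(values), len(values)))       # coincidence matrix
    for u in range(n_units):
        col = ratings[:, u]
        m = len(col)
        for i in range(m):
            for j in range(m):
                if i != j:
                    o[idx[col[i]], idx[col[j]]] += 1.0 / (m - 1)
    n_c = o.sum(axis=1)
    n_total = n_c.sum()
    d_observed = o.sum() - np.trace(o)             # disagreeing coincidences
    d_expected = (n_total**2 - np.sum(n_c**2)) / (n_total - 1)
    return 1.0 - d_observed / d_expected

# Hypothetical ratings: 4 experts (rows) assigning an evidential-value category
# (1-3) to 6 signature characteristics (columns).
ratings = [[1, 2, 3, 3, 2, 1],
           [1, 2, 3, 3, 2, 2],
           [1, 3, 3, 2, 2, 1],
           [2, 2, 3, 3, 2, 1]]
print(f"alpha = {krippendorff_alpha_nominal(ratings):.3f}")
```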
Procedia PDF Downloads 149
4879 Complex Network Analysis of Seismicity and Applications to Short-Term Earthquake Forecasting
Authors: Kahlil Fredrick Cui, Marissa Pastor
Abstract:
Earthquakes are complex phenomena, exhibiting complex correlations in space, time, and magnitude. Recently, the concept of complex networks has been used to shed light on the statistical and dynamical characteristics of regional seismicity. In this work, we study the relationships and interactions of seismic regions in Chile, Japan, and the Philippines through weighted and directed complex network analysis. Geographical areas are digitized into cells of fixed dimensions, which in turn become the nodes of the network when an earthquake has occurred therein. Nodes are linked if a correlation exists between them, as determined and measured by a correlation metric. The networks are found to be scale-free, exhibiting power-law behavior in the distributions of their different centrality measures: the in- and out-degree and the in- and out-strength. Evidence is also found of preferential interaction between seismically active regions through their degree-degree correlations, suggesting that seismicity is dictated by the activity of a few active regions. The importance of a seismic region to the overall seismicity is measured using a generalized centrality metric taken to be an indicator of its activity or passivity. The spatial distribution of earthquake activity indicates the areas where strong earthquakes have occurred in the past, while the passivity distribution points toward the likely locations where an earthquake would occur whenever another one happens elsewhere. Finally, we propose a method that would project the location of the next possible earthquake using the generalized centralities coupled with correlations calculated between the latest earthquakes and a geographical point in the future.
Keywords: complex networks, correlations, earthquake, hazard assessment
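A minimal sketch of the network construction described above: cells become nodes, and a directed weighted edge is added when a correlation metric between two cells exceeds a threshold; in- and out-degree and the weighted strengths then follow directly. The edge list and threshold are toy stand-ins, not values computed from an earthquake catalog.

```python
import networkx as nx

# Toy cell-to-cell correlation links (assumed values) and a threshold.
links = [("cell_A", "cell_B", 0.82), ("cell_A", "cell_C", 0.41),
         ("cell_B", "cell_C", 0.67), ("cell_C", "cell_A", 0.55),
         ("cell_D", "cell_A", 0.73), ("cell_B", "cell_D", 0.38)]
threshold = 0.4

G = nx.DiGraph()
G.add_weighted_edges_from((u, v, w) for u, v, w in links if w >= threshold)

in_deg = dict(G.in_degree())                      # number of incoming links
out_deg = dict(G.out_degree())
in_strength = dict(G.in_degree(weight="weight"))  # weighted (strength) versions
out_strength = dict(G.out_degree(weight="weight"))
for node in G.nodes:
    print(node, in_deg[node], out_deg[node],
          round(in_strength[node], 2), round(out_strength[node], 2))
```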
Procedia PDF Downloads 212
4878 Going Horizontal: Confronting the Challenges When Transitioning to Cloud
Authors: Harvey Hyman, Thomas Hull
Abstract:
As one of the largest cancer treatment centers in the United States, we continuously confront the challenge of how to leverage the best possible technological solutions in order to provide the highest quality of service to our customers – the doctors, nurses and patients at Moffitt who are fighting every day for the prevention and cure of cancer. This paper reports on the transition from a vertical to a horizontal IT infrastructure. We discuss how new frameworks and methods, such as public, private and hybrid cloud and the brokering of cloud services, are replacing the traditional vertical paradigm for computing. We also report on the impact of containers, microservices, and the shift to continuous integration/continuous delivery. These impacts and changes in delivery methodology for computing are driving how we accomplish our strategic IT goals across the enterprise.
Keywords: cloud computing, IT infrastructure, IT architecture, healthcare
Procedia PDF Downloads 380
4877 Knowledge Management Barriers: A Statistical Study of Hardware Development Engineering Teams within Restricted Environments
Authors: Nicholas S. Norbert Jr., John E. Bischoff, Christopher J. Willy
Abstract:
Knowledge Management (KM) is globally recognized as a crucial element in securing competitive advantage through building and maintaining organizational memory, codifying and protecting intellectual capital and business intelligence, and providing mechanisms for collaboration and innovation. KM frameworks and approaches have been developed and defined, identifying critical success factors for conducting KM within numerous industries, ranging from scientific to business, and for organization scales ranging from small groups to large enterprises. However, engineering and technical teams operating within restricted environments are subject to unique barriers and KM challenges which cannot be directly treated using the approaches and tools prescribed for other industries. This research identifies barriers to conducting KM within Hardware Development Engineering (HDE) teams and statistically compares the significance of barriers to upholding the four KM pillars of organization, technology, leadership, and learning for HDE teams. HDE teams suffer from restrictions in knowledge sharing (KS) due to classification of information (national security risks), customer proprietary restrictions (non-disclosure agreement execution for designs), types of knowledge, complexity of knowledge to be shared, and knowledge seeker expertise. As KM has evolved, leveraging information technology (IT) and web-based tools and approaches from Web 1.0 to Enterprise 2.0, it may also seek to leverage emergent tools and analytics, including expert locators and hybrid recommender systems, to enable KS across the barriers of technical teams. The research statistically tests hypotheses evaluating whether KM barriers for HDE teams affect the general set of expected benefits of a KM system identified through previous research. If correlations are identified, then generalizations of success factors and approaches may also be garnered for HDE teams. Expert elicitation will be conducted using a questionnaire hosted on the internet and delivered to a panel of experts including engineering managers, principal and lead engineers, senior systems engineers, and knowledge management experts. The feedback to the questionnaire will be processed using analysis of variance (ANOVA) to identify and rank statistically significant barriers of HDE teams within the four KM pillars. Subsequently, KM approaches will be recommended for upholding the KM pillars within the restricted environments of HDE teams.
Keywords: engineering management, knowledge barriers, knowledge management, knowledge sharing
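The ANOVA step described above reduces to a one-way comparison of barrier ratings grouped by pillar. A minimal sketch follows; the ratings are placeholders, not the questionnaire responses.

```python
from scipy import stats

# One-way ANOVA over hypothetical barrier-severity ratings grouped by KM pillar.
organization = [4.1, 3.8, 4.5, 4.2, 3.9]
technology   = [3.2, 2.9, 3.5, 3.1, 3.3]
leadership   = [4.4, 4.6, 4.2, 4.7, 4.3]
learning     = [3.6, 3.4, 3.9, 3.7, 3.5]

f_stat, p_value = stats.f_oneway(organization, technology, leadership, learning)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```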
Procedia PDF Downloads 279
4876 Nursing Workers’ Capacity of Resilience at a Psychiatric Hospital in Brazil
Authors: Cheila Cristina Leonardo Oliveira Gaioli, Fernanda Ludmilla Rossi Rocha, Sandra Cristina Pillon
Abstract:
Resilience is a psychological process that facilitates the maintenance of health, developed in response to numerous existing stressors in daily life. Furthermore, resilience can be described as the ability that allows an individual or group to hold up well in the face of unfavorable situations. This study aimed to identify nursing workers’ resilience at a psychiatric hospital in Brazil. This is an exploratory study with a quantitative approach. The sample consisted of 56 workers, assessed using the Resilience Scale. Of the 56 subjects, 45 (80.4%) were women; 22 (39.2%) were 20 to 40 years old and 30 (53.6%) were 41 to 60 years old; 11 (19.6%) were nurses and 45 (80.4%) were technicians or nursing assistants. The results also showed that 50% of subjects showed a high degree of resilience and 42.9% an average degree. Thus, it was found that workers seek to develop protective factors in coping with a work environment that does not value individual subjectivity and does not allow professional development, discouraging workers.
Keywords: health promotion, nursing, occupational health, resilience
Procedia PDF Downloads 518
4875 Enhancement of Material Removal Rate of Complex Featured Surfaces in Vibratory Finishing
Authors: Kunal Ahluwalia, Ampara Aramcharoen, Chan Wai Luen, Swee Hock Yeo
Abstract:
The different process engineering applications of vibratory finishing technology have led to its versatile use in the development of aviation components. The most noteworthy applications of vibratory finishing include deburring and imparting the required surface finish. In this paper, vibratory finishing has been used to study its effectiveness in the removal of laser shock peened (LSP) layers from titanium workpieces. A vibratory trough operating at a frequency of 25 Hz and an amplitude of 3.5 mm, and titanium specimens (Ti-6Al-4V, Grade 5) of dimensions 50 x 50 x 10 mm³, were utilized for the experiments. A vibrating fixture operating at 200 Hz was used to provide vibration to the test piece and was immersed in the vibratory trough. It was evident that introducing the vibrating fixture into the vibratory finishing setup increased the removal efficiency of the complex featured layer and produced a smoother surface finish compared to the conventional vibratory finishing setup, in which the fixture does not vibrate.
Keywords: laser shock peening, material removal, surface roughness, vibrating fixture, vibratory finishing
Procedia PDF Downloads 222
4874 Equilibrium Modeling of a Two Stage Downdraft Gasifier Using Different Gasification Fluids
Authors: F. R. M. Nascimento, E. E. S. Lora, J. C. E. Palácio
Abstract:
A mathematical model has been developed to investigate the performance of a two-stage fixed bed downdraft gasifier operating with air, steam and oxygen mixtures as the gasifying fluid. Various mixture conditions for double-stage fluid entry have been evaluated. The model has been validated through a series of experimental tests performed by NEST – The Excellence Group in Thermal and Distributed Generation of the Federal University of Itajubá. The influence of the mixtures is analyzed through the Steam to Biomass (SB), Equivalence Ratio (ER) and Oxygen Concentration (OP) parameters in order to predict the best operating conditions for adequate output gas quality, since this is a key parameter for subsequent gas processing in the synthesis of biofuels and in heat and electricity generation. Results show that there is an optimal combination in the steam and oxygen content of the gasifying fluid which allows the user to find the best conditions to design and operate the equipment according to the desired application.
Keywords: air, equilibrium, downdraft, fixed bed gasification, mathematical modeling, mixtures, oxygen steam
Procedia PDF Downloads 481
4873 Emoji, the Language of the Future: An Analysis of the Usage and Understanding of Emoji across User-Groups
Authors: Sakshi Bhalla
Abstract:
On the one hand, given their seemingly simplistic, near universal usage and understanding, emoji are dismissed as a potential step back in the evolution of communication. On the other, their effectiveness, pervasiveness, and adaptability across and within contexts are undeniable. In this study, the responses of 40 people (categorized by age) were recorded based on a uniform two-part questionnaire where they were required to a) identify the meaning of 15 emoji when placed in isolation, and b) interpret the meaning of the same 15 emoji when placed in a context-defining posting on Twitter. Their responses were studied on the basis of deviation from their responses that identified the emoji in isolation, as well as the originally intended meaning ascribed to the emoji. Based on an analysis of these results, it was discovered that each of the five age categories uses, understands and perceives emoji differently, which could be attributed to the degree of exposure they have undergone. For example, in the case of the youngest category (aged < 20), it was observed that they were the least accurate at correctly identifying emoji in isolation (~55%). Further, their proclivity to change their response with respect to the context was also the least (~31%). However, an analysis of each of their individual responses showed that these first-borns of social media seem to have reached a point where emoji no longer inspire their most literal meanings to them. The meaning and implication of these emoji have evolved to imply their context-derived meanings, even when placed in isolation. These trends carry forward meaningfully for the other four groups as well. In the case of the oldest category (aged > 35), however, the trends indicated inaccuracy and therefore a higher proclivity to change their responses. When studied in a continuum, the responses indicate that slowly and steadily, emoji are evolving from pictograms to ideograms. That is to suggest that they do not just indicate a one-to-one relation between a singular form and singular meaning. In fact, they communicate increasingly complicated ideas. This is much like the evolution of ancient hieroglyphics on papyrus reed or cuneiform on Sumerian clay tablets, which evolved from simple pictograms to progressively more complex ideograms. This evolution within communication is parallel to and contingent on the simultaneous evolution of communication. What’s astounding is the capacity of humans to leverage different platforms to facilitate such changes. Twitterese, as it is now called, is one of the instances where language is adapting to the demands of the digital world. That it does not have a spoken component, an ostensible grammar, and lacks standardization of use and meaning, as some might suggest, may seem like impediments in qualifying it as the 'language' of the digital world. However, that kind of a declarative remains a function of time, and time alone.
Keywords: communication, emoji, language, Twitter
Procedia PDF Downloads 95
4872 The Relationship between Human Neutrophil Elastase Levels and Acute Respiratory Distress Syndrome in Patients with Thoracic Trauma
Authors: Wahyu Purnama Putra, Artono Isharanto
Abstract:
Thoracic trauma is trauma affecting the thoracic wall or intrathoracic organs, caused by either blunt or penetrating injury. Thoracic trauma often causes impaired ventilation-perfusion due to damage to the lung parenchyma. This results in impaired tissue oxygenation, which is one of the causes of acute respiratory distress syndrome (ARDS). These changes are caused by the release of pro-inflammatory mediators, plasmatic proteins, and proteases into the alveolar space, associated with ongoing edema, as well as oxidative products that ultimately result in severe inhibition of the surfactant system. This study aims to predict the incidence of acute respiratory distress syndrome (ARDS) through human neutrophil elastase levels. This study examines plasma elastase levels as a predictor of the incidence of ARDS in thoracic trauma patients in Malang. This is an observational cohort study. Data analysis uses the Pearson correlation test and the ROC curve (receiver operating characteristic curve). It can be concluded that there is a significant (p = 0.000, r = -0.988) relationship between elastase levels and BGA-3. If the elastase level is around 23.79 ± 3.95, the patient will experience mild ARDS; if it is around 57.68 ± 18.55, the patient will experience moderate ARDS; and if it is around 107.85 ± 5.04, the patient will likely experience severe ARDS. Neutrophil elastase levels correlate with the degree of severity of ARDS incidence.
Keywords: ARDS, human neutrophil elastase, severity, thoracic trauma
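The two analysis steps named in the abstract, Pearson correlation and an ROC curve, are sketched below on hypothetical values; these are not the study's measurements, and the BGA-3 interpretation in the comment is an assumption.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical elastase levels and ARDS outcomes (1 = moderate/severe), plus a
# hypothetical blood gas measure standing in for BGA-3; illustrative data only.
elastase = np.array([21.5, 25.0, 23.1, 58.4, 61.2, 55.9, 104.7, 110.2, 106.9, 24.3])
bga3     = np.array([310, 295, 305, 210, 200, 220, 110, 95, 105, 300])
severe   = np.array([0, 0, 0, 1, 1, 1, 1, 1, 1, 0])

r, p = pearsonr(elastase, bga3)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")

auc = roc_auc_score(severe, elastase)
fpr, tpr, thresholds = roc_curve(severe, elastase)
print(f"ROC AUC = {auc:.3f}; candidate cut-offs: {np.round(thresholds, 1)}")
```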
Procedia PDF Downloads 148