Search results for: point of sale (pos)
4258 Turkish Validation of the Nursing Outcomes for Urinary Incontinence and Their Sensitivities on Nursing Interventions
Authors: Dercan Gencbas, Hatice Bebis, Sue Moorhead
Abstract:
In the nursing process, many nursing classification systems were created for international use, among them NANDA-I, the Nursing Outcomes Classification (NOC) and the Nursing Interventions Classification (NIC). The main objective of this study is to establish a model for caregivers in hospitals and communities in Turkey and to ensure that nursing outcomes are assessed with NOC-based measures. Although many scales exist to measure Urinary Incontinence (UI), which is very common in children, in old age and after vaginal birth, the NOC scales are well suited to the nursing process because they support a comprehensive and holistic assessment. For this reason, the purpose of this study is to evaluate the validity of the NOC outcomes and indicators used for the UI NANDA-I diagnoses. The research is a methodological study. In addition to validating the scale indicators, experts assessed how much each indicator would contribute to recovery after a nursing intervention. Content validity was calculated according to Fehring's (1987) work model, which defines inclusion criteria and scores for the experts: at least four years of clinical experience scored 4 points, at least one year of experience with a nursing classification system scored 1 point, a publication on nursing classification scored 1 point, a doctoral degree in nursing scored 2 points, and a master's degree scored 1 point. A total of 55 experts reached Fehring's "senior degree" level with a score of 90 on this scoring. The experts were asked to what extent the indicators would contribute to recovery after the nursing interventions to be applied. For content validity tailored to Fehring's model, the specialists scored each NOC and NOC indicator on a scale of 1-5, from 1 = not important to 5 = very important.
After the expert opinions, the weighted scores obtained for each NOC and NOC indicator were classified as follows: ≥ 0.8 critical, 0.5-0.8 supplemental, and < 0.5 excluded. In the NANDA-I/NOC/NIC linkage guideline, 5 NOCs are proposed for the UI nursing diagnoses: Urinary Continence, Urinary Elimination, Tissue Integrity, Self-Care: Toileting, and Medication Response. After the scales were translated into Turkish, the weighted averages of the specialists' scores for the coverage of all 5 NOCs and for the contribution of the nursing interventions exceeded 0.8. Based on the expert opinions, 79 of the 82 indicators were rated critical and 3 supplemental; since no indicator scored below 0.5, none was removed. All NOC outcomes were identified as valid and usable scales in Turkey. In this study, five NOC outcomes were validated for evaluating the care of individuals with UI and its variant types. Nurses in Turkey can use the NOC outcome scales to deliver incontinence care, including for the elderly.
Keywords: nursing outcomes, content validity, nursing diagnosis, urinary incontinence
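The Fehring-style weighted classification described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the 1-5 rating to 0-1 weight mapping follows Fehring's published method, and the sample ratings are invented.

```python
# Fehring (1987) content validity: map each expert rating (1-5) to a weight
# (0, 0.25, 0.5, 0.75, 1.0), average per indicator, then classify.
WEIGHTS = {1: 0.0, 2: 0.25, 3: 0.5, 4: 0.75, 5: 1.0}

def classify_indicator(ratings):
    """Return (weighted score, category) for one indicator's expert ratings."""
    score = sum(WEIGHTS[r] for r in ratings) / len(ratings)
    if score >= 0.8:
        label = "critical"
    elif score >= 0.5:
        label = "supplemental"
    else:
        label = "excluded"
    return round(score, 3), label

print(classify_indicator([5, 5, 4, 5, 4]))  # high agreement -> critical
```

With this rule, the abstract's result (79 of 82 indicators critical, 3 supplemental, none below 0.5) means no indicator had to be dropped.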
Procedia PDF Downloads 124
4257 Performance Evaluation of the CareSTART S1 Analyzer for Quantitative Point-Of-Care Measurement of Glucose-6-Phosphate Dehydrogenase Activity
Authors: Haiyoung Jung, Mi Joung Leem, Sun Hwa Lee
Abstract:
Background & Objective: Glucose-6-phosphate dehydrogenase (G6PD) deficiency is a genetic abnormality that results in an inadequate amount of G6PD, leading to increased susceptibility of red blood cells to reactive oxygen species and hemolysis. The present study aimed to evaluate the careSTART S1 analyzer for measuring the ratio of G6PD activity to hemoglobin (Hb). Methods: Precision for G6PD activity and hemoglobin measurement was evaluated using control materials at two levels, with five repeated runs per day for five days. The analytic performance of the careSTART S1 analyzer was compared with spectrophotometry in 40 patient samples. Reference ranges suggested by the manufacturer were validated in 20 healthy males and 20 healthy females. Results: The careSTART S1 analyzer demonstrated a precision of 6.0% for the low-level (14~45 U/dL) and 2.7% for the high-level (60~90 U/dL) G6PD activity control, and 1.4% for hemoglobin (7.9~16.3 g/dL). A comparison of the G6PD-to-Hb ratio between the careSTART S1 analyzer and spectrophotometry showed an average difference of 29.1%, with a positive bias for the careSTART S1 analyzer. All samples from the healthy population fell within the suggested reference ranges for males (≥2.19 U/g Hb) and females (≥5.83 U/g Hb). Conclusion: The careSTART S1 analyzer demonstrated good analytical performance and can replace the current spectrophotometric measurement of G6PD enzyme activity. From the standpoint of clinical laboratory management, it is a reasonable option as a point-of-care analyzer: it requires minimal handling of samples and reagents and automatically calculates the ratio of measured G6PD activity to Hb concentration, minimizing the clerical errors involved in manual calculation.
Keywords: POCT, G6PD, performance evaluation, careSTART
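The two headline statistics in this abstract, within-run precision expressed as a coefficient of variation (CV%) and the average percent difference against the reference method, reduce to simple arithmetic. A minimal sketch, not the study's actual code, with invented sample values:

```python
# CV% = sample standard deviation / mean * 100, over repeated control runs.
# Mean percent bias = average of per-sample (test - reference) / reference * 100.
from statistics import mean, stdev

def cv_percent(values):
    return stdev(values) / mean(values) * 100

def mean_percent_bias(test_values, ref_values):
    diffs = [(t - r) / r * 100 for t, r in zip(test_values, ref_values)]
    return mean(diffs)

print(round(cv_percent([9, 10, 11]), 1))              # tight runs -> low CV%
print(round(mean_percent_bias([110, 121], [100, 110]), 1))  # positive bias
```

A positive mean percent bias, as reported here (29.1%), indicates the point-of-care analyzer reads systematically higher than spectrophotometry.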
Procedia PDF Downloads 61
4256 The Effects of Production, Transportation and Storage Conditions on Mold Growth in Compound Feeds
Authors: N. Cetinkaya
Abstract:
The objective of the present study is to determine the critical control points during the production, transportation and storage of compound feeds, for use in the Hazard Analysis Critical Control Point (HACCP) feed safety management system. A total of 40 feed samples were taken after storage periods of 20 and 40 days from 10 dairy and 10 beef cattle farms, following transportation of the compound feeds from the factory. In addition, 20 samples (10 dairy and 10 beef) were taken immediately after production, before transport from the factory, as day 0. Chemical composition and total aflatoxin levels were determined in all feed samples. The aflatoxin levels in all feed samples, with the exception of 2 dairy cattle feeds, were below the maximum acceptable level. With increasing storage period, the aflatoxin level in dairy feed rose to 4.96 ppb only at farm BS8; this value is below the maximum permissible level (10 ppb) for beef cattle feed. The aflatoxin levels of dairy feed samples taken after production varied between 0.44 and 2.01 ppb. Aflatoxin levels were between 0.89 and 3.01 ppb in dairy cattle feeds taken on the 20th day of storage at the 10 dairy cattle farms. On the 40th day, feed aflatoxin levels at the same farms were between 1.12 and 7.83 ppb. After a storage period of 40 days, the aflatoxin levels had increased to 7.83 and 6.31 ppb at 2 dairy farms; these values are above the maximum permissible level for dairy cattle feeds. Storage for 40 days in pellet form can therefore be considered a critical control point in the HACCP feed safety management system.
Keywords: aflatoxin, beef cattle feed, compound feed, dairy cattle feed, HACCP
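The limit check implied by the abstract can be sketched as a small screening function. The measured levels and the 10 ppb beef-feed limit come from the abstract; the 5 ppb dairy limit is an assumption (an EU-style aflatoxin B1 feed limit, not stated in the text), and the farm IDs other than BS8 are illustrative.

```python
def flag_samples(samples, limits):
    """Return (farm_id, level) pairs whose aflatoxin exceeds their feed type's limit."""
    return [(s["farm"], s["ppb"]) for s in samples
            if s["ppb"] > limits[s["type"]]]

# 10 ppb for beef feed is stated in the abstract; 5 ppb for dairy feed is an
# assumed limit, not taken from the text.
limits = {"beef": 10.0, "dairy": 5.0}
samples = [
    {"farm": "BS8", "type": "dairy", "ppb": 4.96},  # day-20 value from the text
    {"farm": "D1",  "type": "dairy", "ppb": 7.83},  # day-40 values from the text
    {"farm": "D2",  "type": "dairy", "ppb": 6.31},
]
print(flag_samples(samples, limits))  # only the two day-40 samples exceed
```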
Procedia PDF Downloads 395
4255 Circle of Learning Using High-Fidelity Simulators Promoting a Better Understanding of Resident Physicians on Point-of-Care Ultrasound in Emergency Medicine
Authors: Takamitsu Kodama, Eiji Kawamoto
Abstract:
Introduction: Ultrasound in the emergency room has the advantages of being safe, fast, repeatable and noninvasive. In particular, focused Point-Of-Care Ultrasound (POCUS) is used daily for prompt and accurate diagnosis and for quickly identifying critical and life-threatening conditions, which is why ultrasound has demonstrated its usefulness in emergency medicine. The true value of ultrasound has been recognized anew in recent years, and all resident physicians working in the emergency room should arguably be able to perform an ultrasound scan to interpret the signs and symptoms of deteriorating patients. However, practical education on ultrasound is still in development. To address this issue, we established a new educational program using high-fidelity simulators and evaluated its efficacy. Methods: The educational program comprises didactic lectures and skill stations in a half-day course. An instructor gives a lecture on POCUS, such as the Rapid Ultrasound in Shock (RUSH) and/or Focused Assessment Transthoracic Echo (FATE) protocol, at the beginning of the course. Attendees then train in scanning with the cooperation of simulated patients with normal findings. Finally, attendees learn how to apply focused POCUS skills in clinical situations using high-fidelity simulators such as SonoSim® (SonoSim, Inc.) and SimMan® 3G (Laerdal Medical). Evaluation was conducted through questionnaires given to 19 attendees after two pilot courses, focused on understanding of the course concept and on satisfaction. Results: All attendees answered the questionnaires. With respect to the degree of understanding, 12 attendees (number of valid responses: 13) scored four or more points out of five. The high-fidelity simulators, especially SonoSim®, were highly appreciated by 11 attendees (number of valid responses: 12) for enhancing learning of how to handle ultrasound at an actual practice site.
All attendees would encourage colleagues to take this course, reflecting the high level of satisfaction achieved. Discussion: The newly introduced educational course using high-fidelity simulators realizes a circle of learning that deepens understanding of focused POCUS in gradual stages. SonoSim® can faithfully reproduce scan images with pathologic ultrasound findings and provide experiential learning for a growing number of beginners such as resident physicians. In addition, valuable education can be provided when it is combined with SimMan® 3G. Conclusions: The newly introduced educational course using high-fidelity simulators appears to be effective and to provide better education than conventional courses for emergency physicians.
Keywords: point-of-care ultrasound, high-fidelity simulators, education, circle of learning
Procedia PDF Downloads 280
4254 Organized Crime - A Social Challenge for Kosovo towards European Union Integration
Authors: Samedin Mehmeti
Abstract:
The very tense political and economic situation, and in particular the armed conflicts that accompanied the breakup of the former Yugoslavia, drove migration and the displacement of populations. The imposition of international sanctions and embargoes especially influenced the creation of organized criminal groups. Many members of the former Yugoslav security apparatus, in collaboration with ordinary criminal groups, engaged in smuggling of goods, petroleum and arms, the sale and transport of drugs, contract killings, damage to public property, kidnappings, extortion, racketeering, etc. This tradition of criminality, in other forms and with other methods, continued after the conflicts and continues with high intensity even today. One of the most delicate problems of organized crime activity is its impact on the economic sphere, where organized crime opposes and severely damages national security and the economy by criminalizing certain sectors. Organized crime groups that find Kosovo a suitable place to develop their criminal activities are characterized by the loyalty of many people, especially through family connections and kinship, in carrying out criminal activities, and by a powerful leadership hierarchy which in many cases includes corrupt officials of the state apparatus. The groups have a clear hierarchy and a flexible command structure, and each member within the criminal group knows his concrete duties. According to statistics presented in police reports, it is notable that Kosovo has a large number of organized crime cases involving the cultivation, trafficking and possession of narcotics. As is well known, one of the primary conditions that must be fulfilled on the path toward integration into the European Union is precisely the prevention and combating of organized crime. Kosovo also has serious problems with its prosecutorial and judicial system.
Furthermore, the misuse of public funds, even those coming directly from the EU budget or the budgets of European Union member states, has a negative impact on this process. The economic crisis that has gripped some EU countries has created an environment with far fewer resources and opportunities to invest in preventing and combating organized crime within member states. This automatically reduces the level of financial support for other countries in the fight against organized crime. Kosovo, as a poor country, is now less likely to benefit from the support tools that Europe may eventually offer in this area.
Keywords: police, european integration, organized crime, narcotics
Procedia PDF Downloads 438
4253 Marketing and Pharmaceutical Analysis of Medical Cosmetics in Bulgaria and Japan
Authors: V. Petkova, V. Valchanova, D. Grekova, K. Andreevska, S. T. Geurguiev, V. Madgarov, D. Grekov
Abstract:
Introduction: The production, distribution and sale of cosmetics is a global industry in which the European Union (EU), the US and Japan play key roles. A major participant is the EU, whose cosmetics market is larger than that of the US and twice as large as Japan's. The output value of the cosmetics industry in the EU was estimated at about €35 billion in 2001. Nearly 5 billion cosmetic products (number of packages) are sold annually in the EU, and the main markets are France, Germany, Italy, Spain and the UK. The aim of the study is a legal and marketing analysis of cosmetic products dispensed in pharmacies. Materials and methodology: Historical legislative analysis, applied to the changes in the legislative regulation of cosmetic products in Japan and Bulgaria, and comparative legislative analysis, applied when comparing the legislative requirements for cosmetic products in the two countries. Both methods are applied to the following regulations: 1) the Japanese Pharmaceutical Affairs Law (Tokyo, Japan, Ministry of Health, Labour and Welfare); 2) the Bulgarian Law on Medicinal Products for Human Use, effective from 3.01.2014.
Results: The legislative frameworks for medicinal products in Bulgaria and Japan are close and generally share common elements: definition of a medicinal product; categorization of drugs (with differences in sub-categories); pre-registration and marketing approval by the competent authorities; compulsory compliance with GMP (unlike cosmetics); a regulatory focus on product quality, efficacy and safety; labeling obligations for such products; and established pharmacovigilance systems with commitment from all parties, industry and health professionals alike. The main similarities in the regulation of products classified as cosmetics are in the following segments: full producer responsibility for product safety; market surveillance by regulatory authorities; no need for pre-registration or pre-marketing approval (a basic notification requirement instead); no restrictions on sales channels; GMP manuals for cosmetics; a regulatory focus on product safety (rather than efficacy); and general labeling requirements. The main differences in the regulation of products classified as cosmetics lie in the level of detail of the rules for cosmetic products. Future convergence of the regulatory frameworks can contribute to the removal of barriers to trade and encourage innovation, while simultaneously ensuring a high level of consumer safety protection.
Keywords: cosmetics, legislation, comparative analysis, Bulgaria, Japan
Procedia PDF Downloads 591
4252 Estimation of Energy Losses of Photovoltaic Systems in France Using Real Monitoring Data
Authors: Mohamed Amhal, Jose Sayritupac
Abstract:
Photovoltaic (PV) systems have risen as one of the modern renewable energy sources widely used to produce electricity and deliver it to the electrical grid. In parallel, monitoring systems have been deployed as a key element to track energy production and to forecast total production for the coming days. The reliability of PV energy production has become a crucial point in the analysis of PV systems. A deeper understanding of each phenomenon that causes a gain or a loss of energy is needed to better design, operate and maintain PV systems. This work analyzes the distribution of losses in PV systems, starting from the available solar energy, going through the DC side and the AC side, to the delivery point. Most of the phenomena linked to energy losses and gains are considered and modeled, based on real-time monitoring data and on the datasheets of the PV system components. The order of magnitude of each loss is compared to the current literature and to commercial software. To date, analysis of PV system performance based on a breakdown structure of energy losses and gains is not well covered in the literature, although the concept is common in some software. The cutting edge of the current analysis is the implementation of software tools for energy loss estimation in PV systems based on several energy loss definitions and estimation techniques. The developed tools have been validated and tested on several PV plants in France that have been operating for years. Among the major findings of the current study: first, PV plants in France show very low rates of soiling and aging; second, the distribution of the other losses is comparable to the literature; third, all reported losses are correlated with operational and environmental conditions.
For future work, an extended analysis of further PV plants in France and abroad will be performed.
Keywords: energy gains, energy losses, losses distribution, monitoring, photovoltaic, photovoltaic systems
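The breakdown-structure idea described in this abstract, tracing energy from available irradiation through the DC and AC sides to the delivery point, can be illustrated as a loss waterfall: each stage's loss is the relative drop from the previous stage. This is a toy sketch, not the authors' tool; the stage names and energy values are hypothetical.

```python
def loss_waterfall(stages):
    """stages: ordered (name, energy_kwh) pairs from potential to delivered.
    Returns (name, loss %) for each stage relative to the preceding one."""
    out = []
    for (_, prev_e), (name, e) in zip(stages, stages[1:]):
        out.append((name, round((prev_e - e) / prev_e * 100, 2)))
    return out

stages = [
    ("available solar energy", 1500.0),
    ("after soiling + shading", 1455.0),
    ("DC output", 1320.0),
    ("AC output (inverter)", 1280.0),
    ("delivered to grid", 1270.0),
]
print(loss_waterfall(stages))
```

Summing such per-stage percentages against monitoring data is what lets a study attribute, for example, a low share of total losses to soiling and aging.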
Procedia PDF Downloads 174
4251 Optimizing Residential Housing Renovation Strategies at Territorial Scale: A Data Driven Approach and Insights from the French Context
Authors: Rit M., Girard R., Villot J., Thorel M.
Abstract:
In a scenario of extensive residential housing renovation, stakeholders need models that support decision-making through a deep understanding of the existing building stock and accurate energy demand simulations. To address this need, we have adapted an optimization model using open data that enables the study of renovation strategies at both territorial and national scales. This approach provides (1) a strategy definition that simplifies the decision trees drawn from the theoretical combinations, (2) input to decision makers on real-world renovation constraints, (3) more reliable identification of energy-saving measures (changes in technology or behaviour), and (4) the discrepancies between currently planned and actually achieved strategies. The main contribution of the studies described in this document is the geographic scale: all residential buildings in the areas of interest were modeled and simulated using national data (geometries and attributes). These buildings were then renovated, when necessary, in accordance with the environmental objectives, taking into account the constraints applicable to each territory (number of renovations per year) or at the national level (renovation of thermally deficient buildings, i.e., Energy Performance Certificate classes F and G). This differs from traditional approaches that focus only on a few buildings or archetypes. The model can also be used to analyze the evolution of a building stock as a whole, as it can take into account the construction of new buildings as well as their demolition or sale. Using specific case studies of French territories, this paper highlights a significant discrepancy between the strategies currently advocated by decision-makers and those proposed by our optimization model. This discrepancy is particularly evident in critical metrics such as the relationship between the number of renovations per year and the achievable climate targets, or between the financial support currently available to households and the remaining costs.
In addition, users are free to seek optimizations for their building stock across a range of metrics (e.g., financial, energy, environmental, or life cycle analysis). These results are a clear call to re-evaluate existing renovation strategies and to take a more nuanced and customized approach. As the climate crisis moves inexorably forward, harnessing the potential of advanced technologies and data-driven methodologies is imperative.
Keywords: residential housing renovation, MILP, energy demand simulations, data-driven methodology
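The paper formulates renovation planning as a MILP; as a toy stand-in, the greedy sketch below picks, within an annual renovation quota, the buildings with the best energy savings per unit cost while prioritizing the worst EPC classes (F/G), mirroring the national constraint mentioned in the abstract. All building data, field names, and the ranking rule itself are hypothetical simplifications, not the authors' model.

```python
def plan_year(buildings, quota):
    """Greedy stand-in for the MILP: worst EPC labels (F/G) first,
    then best savings-per-cost ratio, truncated to the annual quota."""
    ranked = sorted(buildings,
                    key=lambda b: (b["epc"] not in ("F", "G"),
                                   -b["savings_kwh"] / b["cost"]))
    return [b["id"] for b in ranked[:quota]]

stock = [
    {"id": "A", "epc": "G", "savings_kwh": 12000, "cost": 30000},
    {"id": "B", "epc": "D", "savings_kwh": 9000,  "cost": 9000},
    {"id": "C", "epc": "F", "savings_kwh": 6000,  "cost": 24000},
    {"id": "D", "epc": "E", "savings_kwh": 4000,  "cost": 20000},
]
print(plan_year(stock, 2))  # the F/G buildings are renovated first
```

A real MILP would instead optimize the whole multi-year schedule jointly under budget and quota constraints, which is exactly where greedy choices like this one can be suboptimal.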
Procedia PDF Downloads 67
4250 Influence of Internal Topologies on Components Produced by Selective Laser Melting: Numerical Analysis
Authors: C. Malça, P. Gonçalves, N. Alves, A. Mateus
Abstract:
Regardless of the manufacturing process used, subtractive or additive, and of material, purpose and application, produced components are conventionally a solid mass with a more or less complex shape depending on the production technology selected. Aspects such as reducing the weight of components, associated with the low volume of material required and the almost non-existent material waste, speed and flexibility of production and, above all, high mechanical strength combined with high structural performance, are competitive advantages in any industrial sector, from automotive, molds, aviation, aerospace, construction, pharmaceuticals and medicine to, more recently, human tissue engineering. Such features, properties and functionalities are attained in metal components produced with the additive rapid prototyping technique based on metal powders commonly known as Selective Laser Melting (SLM), with optimized internal topologies and varying densities. In order to produce components with high strength and high structural and functional performance, regardless of the type of application, three different internal topologies were developed and analyzed using numerical computational tools. The developed topologies were numerically subjected to mechanical compression and four-point bending tests. The Finite Element Analysis results demonstrate how different internal topologies can improve mechanical properties, even with a high degree of porosity relative to fully dense components. The results are very promising, not only from the point of view of mechanical strength, but especially because considerable variation in density was achieved without loss of high structural and functional performance.
Keywords: additive manufacturing, internal topologies, porosity, rapid prototyping, selective laser melting
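To give a feel for the porosity-stiffness trade-off the abstract exploits, a commonly cited scaling law for cellular solids (Gibson-Ashby) relates relative stiffness to relative density: E/E_s ≈ C (ρ/ρ_s)^n, with n ≈ 2 for bending-dominated lattices. This law is not from the paper itself; it is a standard idealized model, and the constant C = 1 and the sample density are illustrative only.

```python
# Gibson-Ashby style scaling: relative modulus as a power of relative density.
# Not the paper's FEA model - an idealized textbook relation for cellular solids.
def relative_modulus(relative_density, c=1.0, exponent=2.0):
    return c * relative_density ** exponent

# A lattice at 60% relative density (40% porosity) keeps ~36% of the bulk
# stiffness under this idealized model, at 60% of the weight.
print(round(relative_modulus(0.6), 2))
```

Optimized internal topologies, as studied in the paper, aim to beat such generic scaling by placing material where the load paths need it.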
Procedia PDF Downloads 329
4249 First Step into a Smoke-Free Life: The Effectivity of Peer Education Programme of Midwifery Students
Authors: Rabia Genc, Aysun Eksioglu, Emine Serap Sarican, Sibel Icke
Abstract:
Today the habit of cigarette smoking is among the most important public health concerns because of the health problems it leads to. The group most at risk of using tobacco and tobacco products is adolescents and teenagers, and one of the most effective ways to prevent them from starting to smoke is education. This research is an educational intervention study carried out to evaluate the effect of peer education on teenagers' knowledge about smoking. The research was carried out between October 15, 2013 and September 9, 2015 at Ege University Ataturk Vocational Health School. The population of the research comprised the students studying at the Ege University Ataturk Vocational Health School, Midwifery Department (N=390). The peer educator group that would give training on smoking consisted of 10 people, and the peer groups to be trained were divided via simple randomization into an experimental group (n=185) and a control group (n=185). A questionnaire, an information evaluation form, and informed consent forms were used as data collection tools. The analysis of the collected data was carried out with the Statistical Package for the Social Sciences (SPSS 15.0). It was found that 62.5% of the students in the peer educator group had smoked at some period of their lives; however, none of them continued to smoke. When asked about their reasons for starting to smoke, 25% said they just wanted to try it, and 25% answered that it was because of their friend groups. When the pre- and post-education point averages of the peer educator group were compared, there was a significant difference between them (p < 0.05). Regarding cigarette use in the experimental and control groups, 18.2% of the experimental group and 24.2% of the control group still smoked.
9.1% of the experimental group and 14.8% of the control group stated that they started smoking because of their friend groups. Among the students who smoke, 15.9% of those in the experimental group and 21.9% of those in the control group stated that they were thinking of quitting. There was a statistically significant difference between the pre-education and post-education point averages of the experimental group (p ≤ 0.05); however, for the control group, there was no statistically significant difference between the pre-test and post-test averages. Nor was there a statistically significant difference between the experimental and control groups' pre-test and post-test averages (p > 0.05). The study thus found that the peer education programme was not effective on the smoking habits of the Vocational Health School students. When future studies are planned to evaluate peer education activity, it should be considered that the peer education run over a longer term and that the students in the educator group be more enthusiastic and act as leaders in their environment.
Keywords: midwifery, peer, peer education, smoking
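The pre/post comparison of point averages reported above is one plausible place for a paired t statistic; the sketch below shows that calculation (the authors used SPSS, and the score vectors here are invented, not the study's data).

```python
# Paired (pre/post) t statistic: mean of the per-student differences divided
# by the standard error of those differences.
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

pre = [10, 12, 9, 11, 10]
post = [14, 15, 13, 15, 12]
print(round(paired_t(pre, post), 2))  # a large positive t: clear pre-to-post gain
```

The resulting t is then compared against the t distribution with n-1 degrees of freedom to obtain the p-value the abstract reports against the 0.05 threshold.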
Procedia PDF Downloads 222
4248 Software Architecture Implications on Development Productivity: A Case of Malawi Point of Care Electronic Medical Records
Authors: Emmanuel Mkambankhani, Tiwonge Manda
Abstract:
Software platform architecture includes system components, their relationships and design, as well as evolution principles. Software architecture and documentation affect a platform's customizability and openness to external innovators, and thus developer productivity. The Malawi Point of Care (POC) Electronic Medical Records System (EMRS) follows some architectural design standards, but it lacks third-party innovators and is difficult to customize compared with CommCare and District Health Information System 2 (DHIS2). Improving the software architecture and documentation of the Malawi POC would increase productivity and third-party contributions. A conceptual framework based on generativity and the Boundary Resource Model (BRM) was used to compare the three platforms. Interviews, observations, and document analysis were used to collect primary and secondary data. Themes were found by analyzing the qualitative and quantitative data, leading to the following results: configurable, flexible, and cross-platform software platforms, together with the availability of interfaces (boundary resources) that let internal and external developers interact with the platform's core functionality, boost developer productivity; furthermore, documentation increases developer productivity, while its absence inhibits the use of these resources. The study suggests that the architecture and openness of the Malawi POC EMR software platform would be improved by standardizing web application program interfaces (APIs) and providing user-customizable interfaces. In addition, increasing the availability of documentation and training will improve the use of boundary resources, thus improving internal and third-party development productivity.
Keywords: health systems, configurable platforms, software architecture, software documentation, software development productivity
Procedia PDF Downloads 87
4247 Discriminant Shooting-Related Statistics between Winners and Losers 2023 FIBA U19 Basketball World Cup
Authors: Navid Ebrahmi Madiseh, Sina Esfandiarpour-Broujeni, Rahil Razeghi
Abstract:
Introduction: Quantitative analysis of game-related statistical parameters is widely used to evaluate basketball performance at both the individual and team levels. Non-free-throw shooting plays a crucial role as the primary scoring method, holding significant importance in the technical aspect of the game. The predictive value of game-related statistics has been explored in relation to various contextual and situational variables, and many similarities and differences have been found between different age groups and levels of competition. For instance, in the World Basketball Championships after the 2010 rule change, 2-point field goals distinguished winners from losers in women's games but not in men's games, and the impact of successful 3-point field goals on women's games was minimal. The study aimed to identify and compare discriminant shooting-related statistics between winning and losing teams in the men's and women's FIBA U19 Basketball World Cup 2023 tournaments. Method: Data from 112 observations (2 per game) of 16 teams (for each gender) in the FIBA U19 Basketball World Cup 2023 were selected as samples. The data were obtained from the official FIBA website using Python. Specific information was extracted and organized into a DataFrame of twelve variables, including shooting percentages, attempts, and scoring ratios for 3-pointers, mid-range shots, paint shots, and free throws:
Made% = scoring type successful attempts / scoring type total attempts (1)
Free-throw-pts% (free throw score ratio) = (free throw score / total score) × 100 (2)
Mid-pts% (mid-range score ratio) = (mid-range score / total score) × 100 (3)
Paint-pts% (paint score ratio) = (paint score / total score) × 100 (4)
3p-pts% (three-point score ratio) = (three-point score / total score) × 100 (5)
Independent t-tests were used to examine significant differences in shooting-related statistical parameters between winning and losing teams for both genders. Statistical significance was set at p < 0.05.
All statistical analyses were completed with SPSS, Version 18. Results: The results showed that 3p-made%, mid-pts%, paint-made%, paint-pts%, mid-attempts, and paint-attempts differed significantly between winners and losers in men (t=-3.465, p<0.05; t=3.681, p<0.05; t=-5.884, p<0.05; t=-3.007, p<0.05; t=2.549, p<0.05; t=-3.921, p<0.05). For women, significant differences between winners and losers were found for 3p-made%, 3p-pts%, paint-made%, and paint-attempts (t=-6.429, p<0.05; t=-1.993, p<0.05; t=-1.993, p<0.05; t=-4.115, p<0.05; t=2.451, p<0.05). Discussion: The research aimed to compare shooting-related statistics between winners and losers in the men's and women's teams at the FIBA U19 Basketball World Cup 2023. Results indicated that men's winners excelled in 3p-made%, paint-made%, paint-pts%, paint-attempts, and mid-attempts, consistent with previous studies. This study found that losers among the men's teams had a higher mid-pts% than winners, which was inconsistent with previous findings; it has been suggested that winners tend to prioritize statistically efficient shots while forcing the opponent to take mid-range shots. In women's games, significant differences in 3p-made%, 3p-pts%, paint-made%, and paint-attempts were observed, indicating that winners relied on riskier outside scoring strategies. Overall, winners exhibited higher accuracy in paint and 3-point shooting than losers, but they also relied more on outside offensive strategies. Additionally, winners acquired a higher ratio of their points from 3-point shots, which demonstrates their confidence in their skills and willingness to take risks at this competitive level.
Keywords: gender, losers, shoot-statistic, U19, winners
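Formulas (1)-(5) in the Method section are direct ratios of box-score totals, and can be transcribed in a few lines (the abstract's analysis was done in SPSS; the sample numbers below are invented):

```python
# Formula (1): made percentage for a shot type.
def made_pct(made, attempts):
    return made / attempts

# Formulas (2)-(5): share of a team's total points from one scoring type.
def score_ratio(points_of_type, total_points):
    return points_of_type / total_points * 100

total = 80  # hypothetical team total
print(round(made_pct(9, 20), 2))        # e.g. 3-point Made%
print(round(score_ratio(12, total), 1))  # Free-throw-pts%
print(round(score_ratio(24, total), 1))  # 3p-pts%
```

Note that the four scoring-type ratios (2)-(5) for one team sum to 100 by construction, so they describe how a team's scoring is distributed rather than how efficient it is; that is why they are analyzed alongside the Made% values.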
Procedia PDF Downloads 95
4246 Construction and Optimization of Green Infrastructure Network in Mountainous Counties Based on Morphological Spatial Pattern Analysis and Minimum Cumulative Resistance Models: A Case Study of Shapingba District, Chongqing
Authors: Yuning Guan
Abstract:
Under the background of rapid urbanization, mountainous counties need to break through mountain barriers for urban expansion due to undulating topography, resulting in ecological problems such as landscape fragmentation and reduced biodiversity. Green infrastructure networks are constructed to alleviate the contradiction between urban expansion and ecological protection, promoting the healthy and sustainable development of urban ecosystems. This study applies the MSPA model, the MCR model, and Linkage Mapper Tools to identify eco-sources and eco-corridors in the Shapingba District of Chongqing, and combines them with landscape connectivity assessment and circuit theory to delineate importance levels and extract ecological pinch-point areas on the corridors. The results show that: (1) 20 ecological sources are identified, with a total area of 126.47 km², accounting for 31.88% of the study area, and showing a pattern of ‘one core, three corridors, multi-point distribution’. (2) 37 ecological corridors are formed in the area, with a total length of 62.52 km, in a ‘more in the west, less in the east’ pattern. (3) 42 ecological pinch points are extracted, accounting for 25.85% of the length of the corridors, mainly distributed in the eastern new area. Accordingly, this study proposes optimization strategies for sub-area protection of ecological sources, grade-level construction of ecological corridors, and precise restoration of ecological pinch points.
Keywords: green infrastructure network, morphological spatial pattern, minimal cumulative resistance, mountainous counties, circuit theory, Shapingba district
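The MCR model at the heart of the corridor extraction is, in essence, a least-cost accumulation over a resistance surface. A minimal sketch, with a tiny hypothetical resistance grid standing in for real land-cover data, is:

```python
import heapq

def min_cumulative_resistance(grid, source):
    """Dijkstra over a 4-connected resistance grid: the cost of stepping into a
    cell is its resistance value; returns cumulative resistance to every cell."""
    rows, cols = len(grid), len(grid[0])
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist

# Hypothetical resistance surface: low values = forest, high values = built-up land
resistance = [
    [1, 1, 8, 1],
    [1, 9, 8, 1],
    [1, 1, 1, 1],
]
costs = min_cumulative_resistance(resistance, (0, 0))
print(costs[(0, 3)])  # → 7: the corridor detours through the low-resistance row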
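(placeholder)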
Procedia PDF Downloads 42
4245 Analysis of Senior Secondary II Students' Performance/Approaches Exhibited in Solving Circle Geometry
Authors: Mukhtari Hussaini Muhammad, Abba Adamu
Abstract:
The paper will examine the approaches and solutions offered by Senior Secondary School II students (Demonstration Secondary School, Azare, Bauchi State, Northern Nigeria, a predominantly Hausa/Fulani area) toward solving exercises related to the circle theorem: the angle that an arc of a circle subtends at the center is twice that which it subtends at any point on the remaining part of the circumference. The students will be divided into two groups by giving them numbers 1, 2; 1, 2; 1, 2, so that all 1s form Group I and all 2s form Group II. Group I will be considered the control group, in which the traditional method will be applied during instruction: the researcher will revise the concept of the circle, state the theorem, prove the theorem, and then solve examples. Group II is the experimental group, in which the concept of the circle will be revised with the students, after which the students will be asked to draw different circles, mark arcs, draw the angle at the center and the angle at the circumference, and then measure the angles constructed. The students will be asked to explain what they can infer/deduce from the angles measured, and lastly, examples will be solved. During the next contact day, both groups will be subjected to solving classroom exercises related to the theorem. The solutions to the exercises will be marked and the scores compared/analysed using a relevant statistical tool. It is expected that Group II will perform better because the method/technique followed during instruction is more learner-centered: exploiting the talents of individual learners by listening to their views and asking them how they arrived at a solution will really improve learning and understanding.
Keywords: circle theorem, control group, experimental group, traditional method
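The circle theorem the exercises are built on can also be checked numerically, much as the experimental group checks it by measurement. The angle values below are arbitrary choices, with C placed on the remaining (major) arc:

```python
import math

def central_angle(a, b):
    # angle subtended at the center by the minor arc from A to B
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def inscribed_angle(a, b, c):
    # angle ACB at a circumference point C; all points given as angles on the unit circle
    ax, ay = math.cos(a), math.sin(a)
    bx, by = math.cos(b), math.sin(b)
    cx, cy = math.cos(c), math.sin(c)
    ca = (ax - cx, ay - cy)
    cb = (bx - cx, by - cy)
    dot = ca[0] * cb[0] + ca[1] * cb[1]
    return math.acos(dot / (math.hypot(*ca) * math.hypot(*cb)))

a, b, c = 0.3, 1.7, 4.0           # C lies on the remaining (major) arc
print(central_angle(a, b))         # ≈ 1.4 rad
print(2 * inscribed_angle(a, b, c))  # the theorem predicts the same value
```

Doubling the inscribed angle reproduces the central angle, exactly as the theorem states.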
Procedia PDF Downloads 191
4244 ChaQra: A Cellular Unit of the Indian Quantum Network
Authors: Shashank Gupta, Iteash Agarwal, Vijayalaxmi Mogiligidda, Rajesh Kumar Krishnan, Sruthi Chennuri, Deepika Aggarwal, Anwesha Hoodati, Sheroy Cooper, Ranjan, Mohammad Bilal Sheik, Bhavya K. M., Manasa Hegde, M. Naveen Krishna, Amit Kumar Chauhan, Mallikarjun Korrapati, Sumit Singh, J. B. Singh, Sunil Sud, Sunil Gupta, Sidhartha Pant, Sankar, Neha Agrawal, Ashish Ranjan, Piyush Mohapatra, Roopak T., Arsh Ahmad, Nanjunda M., Dilip Singh
Abstract:
Major research interests in quantum key distribution (QKD) are primarily focused on increasing (1) point-to-point transmission distance (1,000 km), (2) secure key rate (Mbps), and (3) security of the quantum layer (device independence). It is great to push the boundaries on these fronts, but these isolated approaches are neither scalable nor cost-effective due to the requirements of specialised hardware and different infrastructure. Current and future QKD networks require addressing different sets of challenges apart from distance, key rate, and quantum security. In this regard, we present ChaQra, a sub-quantum network with the following core features: (1) crypto agility (integration into already deployed telecommunication fibres), (2) software-defined networking (the SDN paradigm for routing between different nodes), (3) reliability (addressing denial of service with hybrid quantum-safe cryptography), (4) upgradability (module upgrades based on scientific and technological advancements), and (5) beyond QKD (using the QKD network for distributed computing, multi-party computation, etc.). Our results demonstrate a clear path to create and accelerate a quantum-secure Indian subcontinent under the national quantum mission.
Keywords: quantum network, quantum key distribution, quantum security, quantum information
Procedia PDF Downloads 53
4243 Thermal Image Segmentation Method for Stratification of Freezing Temperatures
Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka
Abstract:
The study uses an image analysis technique employing thermal imaging to measure the percentage of areas with various temperatures on a freezing surface. An image segmentation method using threshold values is applied to a sequence of images recording the freezing process. The phenomenon is transient, and temperatures vary quickly to reach the freezing point and complete the freezing process. Freezing salt water is subject to salt rejection, which makes the freezing point dynamic and dependent on the salinity at the phase interface. For a specific area of freezing, nucleation starts from one side and ends at the other, which causes a dynamic and transient temperature in that area. Thermal cameras are able to reveal differences in temperature owing to their sensitivity to infrared radiance. Using an experimental setup, a video is recorded by a thermal camera to monitor radiance and temperatures during the freezing process. Image processing techniques are applied to all frames to detect and classify temperatures on the surface. An image segmentation method is used to find contours with the same temperature on the icing surface. Each segment is obtained using the temperature range that appears in the image and the corresponding pixel values. Using the contours extracted from the image and the camera parameters, stratified areas with different temperatures are calculated. To observe temperature contours on the icing surface with the thermal camera, a salt-water sample is dropped onto a cold surface at a temperature of -20°C. A thermal video is recorded for 2 minutes to observe the temperature field. Examining the results obtained by the method against the experimental observations verifies the accuracy and applicability of the method.
Keywords: ice contour boundary, image processing, image segmentation, salt ice, thermal image
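The threshold-based stratification step can be sketched directly on an array of per-pixel temperatures. The 4x4 frame below is an invented stand-in for a real thermal image:

```python
import numpy as np

def stratify(temps, bounds):
    """Segment a thermal frame (2-D array of temperatures, deg C) into strata
    defined by consecutive threshold bounds; return the % area of each stratum."""
    total = temps.size
    out = {}
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (temps >= lo) & (temps < hi)  # pixels falling in [lo, hi)
        out[(lo, hi)] = 100.0 * mask.sum() / total
    return out

# Hypothetical 4x4 frame from a freezing sequence
frame = np.array([
    [-20, -18, -5, -2],
    [-19, -15, -4, -1],
    [-20, -12, -3,  0],
    [-17, -10, -2,  1],
])
areas = stratify(frame, [-25, -10, 0, 5])
print(areas)  # percentage of pixels in each temperature band
```

The percentages across the strata sum to 100, so each pixel is assigned to exactly one temperature contour band.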
Procedia PDF Downloads 318
4242 The Capabilities of New Communication Devices in Development of Informing: Case Study Mobile Functions in Iran
Authors: Mohsen Shakerinejad
Abstract:
Due to the growing momentum of technology, the present age is called the age of communication and information. With the astounding progress of communication and information tools, the current world is likened to a 'global village' in which a message can be sent from one point of the world to another in less than a minute. However, one of the new sociologists, Alain Touraine, in describing the destructive effects of the changes arising from the development of information appliances, refers to 'new fields for undemocratic social control and the incidence of acute social and political tensions and unrest'. Yet in this era, in which the advancement of industry has made people's lives industrial too, quick and accurate data transfer breathes new life into the body of society, and according to the features of each society and the progress of science and technology, various tools should be used. One of these communication tools is the mobile phone. As the communication and telecommunication revolution of recent years, the cellular phone has had a great influence on the individual and collective life of societies. This powerful communication tool has had an undeniable effect on all aspects of life, including the social, economic, cultural, and scientific, so that ignoring it in the design, implementation, and enforcement of any system is not wise. Nowadays, knowledge and information are among the most important aspects of human life. Therefore, this article tries to introduce the potential of the mobile phone for receiving and transmitting news and information. Among the numerous capabilities of current mobile phones, features such as sending text, photography, sound recording, filming, and Internet connectivity indicate the potential of this medium of communication in the process of sending and receiving information, so that nowadays mobile journalism, as an important component of citizen journalism, has a unique role in information dissemination.
Keywords: mobile, informing, receiving information, mobile journalism, citizen journalism
Procedia PDF Downloads 409
4241 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling
Authors: M. Almutairi, S. Hadjiloucas
Abstract:
The harmonic distortion of voltage is important in relation to power quality due to the interaction between the large diffusion of non-linear and time-varying single-phase and three-phase loads and power supply systems. Harmonic distortion levels can, however, be reduced by improving the design of polluting loads or by applying arrangements and adding filters. The application of passive filters is an effective solution for harmonic mitigation, mainly because filters offer high efficiency and simplicity and are economical. Additionally, their different possible frequency response characteristics can be used to achieve specific harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single tuned passive filter that works best in distribution networks, in order to economically limit violations caused at a given point of common coupling (PCC). This article suggests that a single tuned passive filter could be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level and thus improve the load power factor. The optimization technique minimizes the voltage total harmonic distortion (VTHD) and current total harmonic distortion (ITHD) while maintaining a given power factor within a specified range. According to IEEE Standard 519, both indices are viewed as constraints for the optimal passive filter design problem. The performance of this technique will be discussed using numerical examples taken from previous publications.
Keywords: harmonics, passive filter, power factor, power quality
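One common first-pass sizing rule for a single tuned filter (an assumption here, not the paper's constrained optimization) fixes the capacitor from the reactive-power requirement at the fundamental and tunes the inductor to the target harmonic. The plant values below are hypothetical:

```python
import math

def single_tuned_filter(q_var, v_rms, f1, h, q_factor=30):
    """Size a single tuned passive filter: capacitance from the reactive-power
    requirement at the fundamental, inductance from the tuning condition
    h*w1 = 1/sqrt(L*C), and resistance from the chosen quality factor."""
    w1 = 2 * math.pi * f1
    C = q_var / (w1 * v_rms ** 2)   # farads (first approximation, ignores the inductor)
    L = 1 / ((h * w1) ** 2 * C)     # henries, resonant at the h-th harmonic
    R = h * w1 * L / q_factor       # ohms, sets the sharpness of the notch
    return C, L, R

# Hypothetical plant: 300 kvar bank on an 11 kV, 50 Hz bus, tuned near the 5th harmonic
C, L, R = single_tuned_filter(300e3, 11e3, 50, 5)
f_res = 1 / (2 * math.pi * math.sqrt(L * C))
print(round(f_res, 1))  # 250.0 Hz, i.e. 5 x 50 Hz by construction
```

In practice the tuning frequency is placed slightly below the harmonic to allow for component tolerance and detuning, which is exactly the kind of trade-off the paper's optimization addresses.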
Procedia PDF Downloads 305
4240 Role of Finance in Firm Innovation and Growth: Evidence from African Countries
Authors: Gebrehiwot H., Giorgis Bahita
Abstract:
Firms in Africa face less developed financial markets than those in other emerging and developed countries, and thus lag behind the rest of the world in terms of innovation and growth. Though different factors must be considered, underdeveloped financial systems take the lion's share in hindering firm innovation and growth in Africa. Insufficient capacity to innovate is one of the problems facing African businesses, and a critical challenge faced by firms in Africa is access to finance and the inability of financially constrained firms to grow. Little is known about how different sources of finance affect firm innovation and growth in Africa, specifically the effects of formal and informal finance. This study aims to address this gap by examining the role of formal and informal finance for working capital and fixed capital in firm innovation and firm growth, using firm-level data from the World Bank Enterprise Survey 2006-2019 with a total of 5,661 sample firms from 14 countries, based on available data for the selected variables. Additionally, this study examines the factors behind access to credit from formal financial institutions. A logit model is used to examine the effect of finance on a firm's innovation and the factors behind access to formal finance, while an Ordinary Least Squares (OLS) regression model is used to investigate the effect of finance on firm growth. 2SLS instrumental variables are used to address possible endogeneity problems in the finance-growth and finance-innovation relationships. Results from the logistic regression indicate that both formal and informal finance used for working capital and investment in fixed capital have a significant positive association with product and process innovation. In the case of finance and growth, findings show a positive association for both formal and informal financing of working capital and new investment in fixed capital, though informal finance is positively related to firm growth as measured by sales growth but shows no significant association with employment growth. Formal finance shows a larger effect on innovation and growth when firms use it to finance investment in fixed capital, while informal finance shows a smaller effect; this confirms previous studies, as informal finance is mainly used for working capital in underdeveloped economies such as those of Africa. Age, firm size, managerial experience, exporting, gender, and foreign ownership are found to be significant determinants of access to credit from formal and informal sources among the selected sample countries.
Keywords: formal finance, informal finance, innovation, growth
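The logit estimation of credit access can be sketched with a maximum-likelihood fit. The data below are synthetic and the coefficients invented; they are not the survey estimates, only an illustration of the estimator:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic firm-level data: firm size and managerial experience (standardized)
# raise the probability of access to formal credit via an assumed logit model
n = 500
size = rng.normal(0, 1, n)
exper = rng.normal(0, 1, n)
logit_true = -0.5 + 1.2 * size + 0.8 * exper
access = rng.random(n) < 1 / (1 + np.exp(-logit_true))

X = np.column_stack([np.ones(n), size, exper])
y = access.astype(float)

def neg_loglik(beta):
    # negative Bernoulli log-likelihood of the logit model
    p = 1 / (1 + np.exp(-X @ beta))
    p = np.clip(p, 1e-9, 1 - 1e-9)  # guard against log(0) during line search
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

fit = minimize(neg_loglik, np.zeros(3), method="BFGS")
print(np.round(fit.x, 2))  # estimates should land near (-0.5, 1.2, 0.8)
```

A dedicated package would also report standard errors and marginal effects; this sketch only recovers the point estimates.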
Procedia PDF Downloads 75
4239 Imaging 255nm Tungsten Thin Film Adhesion with Picosecond Ultrasonics
Authors: A. Abbas, X. Tridon, J. Michelon
Abstract:
In the electronics and photovoltaic industries, components are made from wafers, which are stacks of thin-film layers from a few nanometers to several micrometers in thickness. Early evaluation of the bonding quality between the different layers of a wafer is one of the challenges of these industries, needed to avoid dysfunction of their final products. Traditional pump-probe experiments, which were developed in the 1970s, give a partial solution to this problem, but with a non-negligible drawback: on the one hand, these setups can generate and detect ultra-high ultrasound frequencies, which can be used to evaluate the adhesion quality of wafer layers; on the other hand, because of the quite long acquisition time needed to perform one measurement, these setups remain confined to point measurements of global sample quality. This last point can lead to misinterpretation of sample quality parameters, especially in the case of inhomogeneous samples. Asynchronous Optical Sampling (ASOPS) systems can perform sample characterization with picosecond acoustics up to 10⁶ times faster than traditional pump-probe setups. This allows picosecond ultrasonics to unlock acoustic imaging at the nanometric scale and to detect inhomogeneities in sample mechanical properties. This is illustrated by presenting an image of the measured acoustic reflection coefficients obtained by mapping, with an ASOPS setup, a 255nm thin-film tungsten layer deposited on a silicon substrate. Interpretation of the reflection coefficient in terms of bonding quality is also presented, and the origin of zones exhibiting good- and bad-quality bonding is discussed.
Keywords: adhesion, picosecond ultrasonics, pump-probe, thin film
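The mapped quantity, the acoustic reflection coefficient at the film-substrate interface, follows r = (Z₂ - Z₁)/(Z₂ + Z₁) with acoustic impedance Z = density x sound velocity. The material constants below are indicative literature values, not the measured data:

```python
def acoustic_impedance(density, velocity):
    # Z = rho * v, in kg/(m^2 s)
    return density * velocity

def reflection_coefficient(z1, z2):
    # amplitude reflection coefficient for a wave going from medium 1 into medium 2
    return (z2 - z1) / (z2 + z1)

# Indicative literature values (assumptions, not measured here):
z_w = acoustic_impedance(19300, 5200)   # tungsten film: ~19300 kg/m^3, ~5200 m/s
z_si = acoustic_impedance(2330, 8430)   # silicon substrate: ~2330 kg/m^3, ~8430 m/s
r = reflection_coefficient(z_w, z_si)
print(round(r, 2))  # negative: the substrate impedance is well below the film's
```

A degraded bond changes the effective impedance seen by the echo, which is why mapping r pixel by pixel reveals zones of good and bad adhesion.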
Procedia PDF Downloads 158
4238 Detection of the Effectiveness of Training Courses and Their Limitations Using CIPP Model (Case Study: Isfahan Oil Refinery)
Authors: Neda Zamani
Abstract:
The present study aimed to investigate the effectiveness of training courses and their limitations using the CIPP model, with Isfahan Refinery as the case study. In terms of purpose, the present paper is applied research; in terms of data gathering, it is descriptive field-survey research. The population of the study included participants in training courses, their supervisors, and experts of the training department. Probability-proportional-to-size (PPS) sampling was used. The sample included 195 participants in training courses, 30 supervisors, and 11 individuals from the training experts' group. To collect data, a questionnaire designed by the researcher and a semi-structured interview were used. The content validity of the instrument was confirmed by training management experts, and reliability was established with a Cronbach's alpha of 0.92. To analyze the data, descriptive statistics (tables, frequency, frequency percentage, and mean) and inferential statistics (Mann-Whitney and Wilcoxon tests, and the Kruskal-Wallis test to determine the significance of differences among the groups' opinions) were applied. Results of the study indicated that all groups, i.e., participants, supervisors, and training experts, absolutely believe in the importance of training courses; however, participants in training courses regard content, teacher, atmosphere and facilities, training process, managing process, and product to be at a relatively appropriate level. The supervisors also regard output to be at a relatively appropriate level, but training experts regard content, teacher, and managing processes to be at an appropriate, higher-than-average level.
Keywords: training courses, limitations of training effectiveness, CIPP model, Isfahan oil refinery company
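The reported reliability figure rests on Cronbach's alpha, which can be computed directly from the item-response matrix. The Likert responses below are invented for illustration, not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: items is a 2-D array, rows = respondents, cols = items.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of respondents' totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (6 respondents x 4 items)
responses = [
    [4, 5, 4, 5],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 4, 4],
    [3, 3, 3, 3],
]
print(round(cronbach_alpha(responses), 2))  # 0.96: items vary together strongly
```

Values above roughly 0.9, like the 0.92 reported here, indicate highly consistent responses across questionnaire items.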
Procedia PDF Downloads 74
4237 An Experimental Approach to the Influence of Tipping Points and Scientific Uncertainties in the Success of International Fisheries Management
Authors: Jules Selles
Abstract:
The Atlantic and Mediterranean bluefin tuna fishery has been considered the archetype of an overfished and mismanaged fishery. This crisis has demonstrated the role of public awareness and the importance of the interactions between science and management regarding scientific uncertainties. This work aims at investigating the policy-making process associated with a regional fisheries management organization. We propose a contextualized computer-based experimental approach in order to explore the effects of key factors on the cooperation process in a complex straddling-stock management setting. Namely, we analyze the effects of the introduction of a socio-economic tipping point and of the uncertainty surrounding the estimation of the resource level. Our approach is based on a Gordon-Schaefer bio-economic model which explicitly represents the decision-making process. Each participant plays the role of a stakeholder of ICCAT, represents a coalition of fishing nations involved in the fishery, and unilaterally decides a harvest policy for the coming year. The context of the experiment induces the incentives for exploitation and collaboration to achieve common sustainable harvest plans at the scale of the Atlantic bluefin tuna stock. Our rigorous framework allows testing how stakeholders who plan the exploitation of a fish stock (a common-pool resource) respond to two kinds of effects: i) the inclusion of a drastic shift in the management constraints (beyond a socio-economic tipping point) and ii) increasing uncertainty in the scientific estimation of the resource level.
Keywords: economic experiment, fisheries management, game theory, policy making, Atlantic bluefin tuna
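The Gordon-Schaefer dynamics behind the experiment can be sketched in discrete time. The parameters below are illustrative, not calibrated to bluefin tuna data:

```python
def gordon_schaefer(b0, r, K, q, effort, years):
    """Discrete-time Gordon-Schaefer surplus-production dynamics:
    B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - q*E*B[t], with catch = q*E*B[t]."""
    biomass, catches = [b0], []
    for _ in range(years):
        B = biomass[-1]
        catch = q * effort * B          # Schaefer catch equation
        catches.append(catch)
        biomass.append(B + r * B * (1 - B / K) - catch)
    return biomass, catches

# Illustrative parameters: effort chosen so that q*E = r/2, which drives the
# stock to B = K/2 and the catch to MSY = r*K/4
biomass, catches = gordon_schaefer(b0=80, r=0.5, K=100, q=0.01, effort=25, years=200)
print(round(biomass[-1], 1), round(catches[-1], 1))  # settles near 50.0 and 12.5
```

Raising effort beyond this level pushes the equilibrium biomass below K/2 and the long-run catch below MSY, which is the over-exploitation trap the experiment puts participants in front of.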
Procedia PDF Downloads 253
4236 Mobile Network Users Amidst Ultra-Dense Networks in 5G Using an Improved Coordinated Multipoint (CoMP) Technology
Authors: Johnson O. Adeogo, Ayodele S. Oluwole, O. Akinsanmi, Olawale J. Olaluyi
Abstract:
In 5G networks, supporting very high traffic density in densely populated areas is one of the key requirements. Radiation reduction becomes one of the major concerns for securing the future life of mobile network users in ultra-dense network areas using an improved coordinated multipoint technology. Coordinated Multi-Point (CoMP) is based on transmission and/or reception at multiple separated points, with improved coordination among them, to actively manage interference for the users. Small cells have two major objectives. First, they provide good coverage and/or performance: network users can maintain a good-quality signal by connecting directly to the cell. Second, with CoMP, multiple base stations (MBS) cooperate by transmitting and/or receiving at the same time in order to reduce the possibility of increased electromagnetic radiation. Therefore, the influence of a screen guard and rubber cover on mobile transceivers, as one major piece of equipment radiating electromagnetic energy, was investigated for mobile network users amidst ultra-dense networks in 5G. The results were compared with the same mobile transceivers without screen guards and rubber covers under the same network conditions. A 5 cm distance from the mobile transceivers was measured with the help of a ruler, and the intensity of radio frequency (RF) radiation was measured using an RF meter. The results show that the intensity of radiation from the various mobile transceivers without screen guards and covers was higher than from the mobile transceivers with screen guards and covers when a call conversation was on at both ends.
Keywords: ultra-dense networks, mobile network users, 5G, coordinated multi-point
Procedia PDF Downloads 102
4235 Classification on Statistical Distributions of a Complex N-Body System
Authors: David C. Ni
Abstract:
Contemporary models for N-body systems are based on temporal, two-body, mass-point representations of Newtonian mechanics. Other mainstream models include the 2D and 3D Ising models, based on local neighborhoods of lattice structures. In quantum mechanics, theories of collective modes address superconductivity and long-range quantum entanglement. However, these models are still mainly for specific phenomena with a set of designated parameters. We are therefore motivated to develop a new construction directly from complex-variable N-body systems based on the extended Blaschke functions (EBF), which represent a non-temporal and nonlinear extension of the Lorentz transformation on the complex plane – the normalized momentum space. A point on the complex plane represents a normalized state of particle momenta observed from a reference frame in the theory of special relativity. There are only two key parameters for modelling: normalized momentum and nonlinearity. An algorithm similar to the Jenkins-Traub method is adopted for solving the EBF iteratively. Through iteration, the solution sets take the form σ + i[-t, t], where σ and t are real numbers, and the interval [-t, t] shows various distributions, such as 1-peak, 2-peak, and 3-peak distributions, some of which are analogous to canonical distributions. The results of the numerical analysis demonstrate continuum-to-discreteness transitions, evolutional invariance of distributions, phase transitions with conjugate symmetry, etc., which manifest the construction as a potential candidate for the unification of statistics. We hereby classify the observed distributions on the finite convergent domains. Continuous and discrete distributions both exist and are predictable for given partitions in different regions of the parameter pair. We further compare these distributions with canonical distributions and address the impacts on existing applications.
Keywords: Blaschke, Lorentz transformation, complex variables, continuous, discrete, canonical, classification
Procedia PDF Downloads 309
4234 Trinary Affinity—Mathematic Verification and Application (1): Construction of Formulas for the Composite and Prime Numbers
Authors: Liang Ming Zhong, Yu Zhong, Wen Zhong, Fei Fei Yin
Abstract:
Trinary affinity is a description of existence: every object exists, as it is known and spoken of, in a system of 2 differences (denoted dif₁, dif₂) and 1 similarity (Sim), equivalently expressed as dif₁ / Sim / dif₂ and kn / 0 / tkn (kn = the known, tkn = the 'to be known', 0 = the zero point of knowing). These are mathematically verified and illustrated in this paper by the arrangement of all integers onto 3 columns, where each number exists as a difference in relation to another number as another difference, with the 2 difs arbitrated by a third number as the Sim, resulting in a trinary affinity or trinity of 3 numbers, of which one is the known, another the 'to be known', and the third the zero (0) from which both the kn and tkn are measured and specified. Consequently, any number is horizontally specified either as 3n, or as '3n - 1' or '3n + 1', and vertically as 'Cn + c', so that any number occurs at the intersection of its X and Y axes and is represented by its X and Y coordinates, as any point on Earth's surface is by its latitude and longitude. Technically, i) primes are viewed and treated as progenitors, and composites as descending from them, forming families of composites, each capable of being measured and specified from its own zero, called in this paper the realistic zero (denoted 0r, as contrasted with the mathematic zero, 0m), which corresponds to the constant c and whose nature separates the composite and prime numbers; and ii) any number is considered as having a magnitude as well as a position, so that a number is verified as a prime first by referring to its descriptive formula and then by making sure that no composite number can possibly occur at its position, by dividing it by factors provided by the composite-number formulas. The paper consists of 3 parts: 1) a brief explanation of the trinary affinity of things, 2) the 8 formulas that represent all the primes, and 3) families of composite numbers, each represented by a formula. A composite-number family is described as 3n + f₁·f₂. Since there are infinitely many composite-number families, to verify the primality of a large probable prime, we have to divide it by several or many an f₁ from a range of composite-number formulas, a procedure that is as laborious as it is the surest way of verifying a great number's primality. (So, it is possible to substitute planned division for trial division.)
Keywords: trinary affinity, difference, similarity, realistic zero
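The three-column arrangement and the division-based primality check can be sketched as follows; plain trial division stands in here for the paper's composite-family factors f₁:

```python
def column(n):
    """Horizontal position of n among the three columns 3n, 3n - 1, 3n + 1."""
    r = n % 3
    return {0: "3n", 2: "3n - 1", 1: "3n + 1"}[r]

def is_prime(n):
    # trial division; each divisor plays the role of an f1 from a composite-number family
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

print(column(15), column(14), column(16))   # 3n, 3n - 1, 3n + 1
print([p for p in range(2, 30) if is_prime(p)])
```

Note that every prime greater than 3 necessarily falls in the '3n - 1' or '3n + 1' column, since every number in the 3n column is divisible by 3.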
Procedia PDF Downloads 209
4233 A Handheld Light Meter Device for Methamphetamine Detection in Oral Fluid
Authors: Anindita Sen
Abstract:
Oral fluid is a promising diagnostic matrix for drugs of abuse compared to urine and serum. Detection of methamphetamine in oral fluid would pave the way for the easy evaluation of impairment in drivers during roadside drug testing, as well as ensure safe working environments by facilitating the evaluation of impairment in employees at workplaces. A membrane-based, point-of-care (POC) friendly pre-treatment technique has been developed that eliminates the interferences caused by salivary proteins and facilitated the demonstration of methamphetamine detection in saliva using a gold-nanoparticle-based colorimetric aptasensor platform. It was found that the colorimetric response in saliva was always suppressed owing to matrix effects. By navigating these challenging interference issues in saliva, we were able to detect methamphetamine at nanomolar levels, offering immense promise for the translation of these platforms to on-site diagnostic systems. This subsequently motivated the development of a handheld portable light meter device that can reliably transduce the aptasensor's colorimetric response into absorbance, facilitating quantitative on-site detection of analyte concentrations. This is crucial given the prevalent unreliability and sensitivity problems of conventional drug testing kits. The fabricated light meter device was validated against a standard UV-Vis spectrometer to confirm reliability. The portable and cost-effective handheld detector features sensitivity comparable to the well-established benchtop UV-Vis instrument, and the easy-to-use device could potentially serve as a prototype for a commercial device in the future.
Keywords: aptasensors, colorimetric gold nanoparticle assay, point-of-care, oral fluid
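Transducing light-meter readings into absorbance follows the Beer-Lambert relation. The photodiode counts and the molar absorptivity below are assumptions for illustration, not the device's calibration data:

```python
import math

def absorbance(intensity, reference):
    # A = -log10(I / I0): transmitted light through the sample relative to a blank
    return -math.log10(intensity / reference)

def concentration(a, epsilon, path_cm):
    # Beer-Lambert law: A = epsilon * c * l  =>  c = A / (epsilon * l)
    return a / (epsilon * path_cm)

# Hypothetical photodiode readings: blank vs sample counts
i0, i = 1000.0, 250.0
a = absorbance(i, i0)
print(round(a, 3))  # 0.602
print(concentration(a, epsilon=1.2e4, path_cm=1.0))  # molar, with an assumed epsilon
```

This log-ratio conversion is what lets an inexpensive photodiode front end report the same quantity as a benchtop UV-Vis spectrometer.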
Procedia PDF Downloads 55
4232 The Extent of Virgin Olive-Oil Prices' Distribution Revealing the Behavior of Market Speculators
Authors: Fathi Abid, Bilel Kaffel
Abstract:
The olive tree, the olive harvest during the winter season, and the production of olive oil, better known to professionals as the crushing operation, have interested institutional traders such as olive-oil offices, private companies in the food industry refining and extracting pomace olive oil, and export-import public and private companies specializing in olive oil. The major problem facing producers of olive oil each winter campaign, contrary to what might be expected, is not whether the harvest will be good but whether the sale price will allow them to cover production costs and achieve a reasonable margin of profit. These questions are entirely legitimate given the importance of the issue and the heavy complexity of the uncertainty and competition, made tougher by a high level of indebtedness and by the experience and expertise of speculators and producers whose objectives are sometimes conflicting. The aim of this paper is to study the formation mechanism of olive oil prices in order to learn about speculators' behavior and expectations in the market, how they contribute through their industry knowledge and financial alliances, and the size of the financial challenge that may be involved for them in building private information channels globally to take advantage. The methodology used in this paper is based on two stages: the first stage studies econometrically the formation mechanisms of the olive oil price in order to understand market participant behavior, implementing ARMA, SARMA, and GARCH models and stochastic diffusion processes; the second stage is devoted to prediction, using a combined wavelet-ANN approach. Our main findings indicate that olive oil market participants interact with each other in a way that promotes the formation of stylized facts. Unstable participant behavior creates volatility clustering, nonlinear dependence, and cyclicity phenomena. By imitating each other in some periods of the campaign, different participants contribute to the fat tails observed in the olive oil price distribution. The best prediction model for the olive oil price is based on a back-propagation artificial neural network with inputs based on wavelet decomposition and the recent past history.
Keywords: olive oil price, stylized facts, ARMA model, SARMA model, GARCH model, combined wavelet-artificial neural network, continuous-time stochastic volatility model
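Fitting the full ARMA/SARMA/GARCH family needs a dedicated econometrics library, but the flavor of the first-stage modelling can be sketched with a conditional least-squares AR(1) fit on a simulated price series (the persistence parameter below is invented, not an olive-oil estimate):

```python
import numpy as np

def fit_ar1(x):
    """Conditional least-squares fit of x[t] = c + phi * x[t-1] + e[t],
    the simplest member of the ARMA family used for price dynamics."""
    y, lag = x[1:], x[:-1]
    X = np.column_stack([np.ones(len(lag)), lag])
    (c, phi), *_ = np.linalg.lstsq(X, y, rcond=None)
    return c, phi

# Simulated log-price series with known persistence phi = 0.8
rng = np.random.default_rng(1)
x = [0.0]
for _ in range(2000):
    x.append(0.5 + 0.8 * x[-1] + rng.normal(0, 0.1))

c, phi = fit_ar1(np.array(x))
print(round(phi, 1))  # close to the true 0.8
```

Stylized facts such as volatility clustering require conditionally heteroskedastic extensions (the GARCH stage); an AR fit alone captures only the linear persistence in the mean.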
Procedia PDF Downloads 338
4231 Ferromagnetic Potts Models with Multi Site Interaction
Authors: Nir Schreiber, Reuven Cohen, Simi Haber
Abstract:
The Potts model has been widely explored in the literature over the last few decades. While many analytical and numerical results concern the traditional two-site interaction model in various geometries and dimensions, little is yet known about models where more than two spins interact simultaneously. We consider a ferromagnetic four-site interaction Potts model on the square lattice (FFPS), where the four spins reside at the corners of an elementary square. Each spin can take an integer value 1, 2, ..., q. We write the partition function as a sum over clusters consisting of monochromatic faces. When the number of faces becomes large, tracing out spin configurations is equivalent to enumerating large lattice animals. It is known that the asymptotic number of animals with k faces is governed by λᵏ, with λ ≈ 4.0626. Based on this observation, systems with q < 4 and q > 4 exhibit second- and first-order phase transitions, respectively. The nature of the transition in the q = 4 case is borderline. For any q, a critical giant component (GC) is formed. In the first-order case, the GC is simple, while it is fractal when the transition is continuous. Using simple equilibrium arguments, we obtain a (zeroth-order) bound on the transition point. It is claimed that this bound should apply to other lattices as well. Next, taking into account higher-order site contributions, the critical bound becomes tighter. Moreover, for q > 4, if corrections due to contributions from small clusters are negligible in the thermodynamic limit, the improved bound should be exact. The improved bound is used to relate the critical point to the finite correlation length. Our analytical predictions are confirmed by an extensive numerical study of the FFPS using the Wang-Landau method. In particular, the q = 4 marginal case is supported by a highly ambiguous pseudo-critical finite-size behavior.
Keywords: entropic sampling, lattice animals, phase transitions, Potts model
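The lattice-animal growth constant λ ≈ 4.0626 cited above can be approached numerically by brute-force enumeration of fixed polyominoes (lattice animals counted up to translation). A minimal sketch in pure Python, feasible only for small sizes since the counts grow roughly like λᵏ:

```python
def polyominoes(n):
    """Count fixed polyominoes (distinct up to translation) of sizes 1..n."""
    def normalize(cells):
        # Translate so the minimum x and y coordinates are zero.
        mx = min(x for x, y in cells)
        my = min(y for x, y in cells)
        return frozenset((x - mx, y - my) for x, y in cells)

    current = {frozenset([(0, 0)])}
    counts = [len(current)]
    for _ in range(n - 1):
        nxt = set()
        for poly in current:
            # Grow each animal by one edge-adjacent cell; dedupe via normalize.
            for x, y in poly:
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    c = (x + dx, y + dy)
                    if c not in poly:
                        nxt.add(normalize(poly | {c}))
        current = nxt
        counts.append(len(current))
    return counts

counts = polyominoes(7)
print(counts)                   # → [1, 2, 6, 19, 63, 216, 760]
print(counts[-1] / counts[-2])  # successive ratios grow slowly toward λ ≈ 4.0626
```

The successive-ratio convergence is slow (760/216 ≈ 3.52 at k = 7), which is why precise estimates of λ rely on far more sophisticated transfer-matrix enumerations than this sketch.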
4230 Blueprinting of Normalized Supply Chain Processes: Results in Implementing Normalized Software Systems
Authors: Bassam Istanbouli
Abstract:
With technology evolving every day and with the increase in global competition, industries are always under pressure to be the best. They need to provide good-quality products at competitive prices, when and how the customer wants them. In order to achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise, changes will be extremely expensive, slow, and subject to many combinatorial effects. Those combinatorial effects impact the whole organizational structure from management, financial, documentation, and logistics perspectives, and especially from the perspective of the Enterprise Resource Planning (ERP) information system. By applying the normalized systems concept to segments of the supply chain, we believe such changes will have minimal effects, especially at the time of launching an organization-wide global software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch or implement an existing ERP system for its business needs, and if its business processes are normalized and modular, then this will most probably yield a normalized and modular software system that can be easily modified as the business evolves. Another important goal of this paper is to increase awareness regarding the design of business processes in a software implementation project. If the blueprints created are normalized, then the software developers and configurators will use those modular blueprints to map them into modular software. This paper only prepares the ground for further studies; the above concept will be supported by going through the steps of developing, configuring, and/or implementing a software system for an organization using two methods: the Software Development Lifecycle (SDLC) method and the Accelerated SAP (ASAP) implementation method.
Both methods start with the customer requirements, then blueprint the business processes, and finally map those processes into a software system. Since those requirements and processes are the starting point of the implementation, normalizing those processes will result in normalized software.
Keywords: blueprint, ERP, modular, normalized
4229 Changes in Geospatial Structure of Households in the Czech Republic: Findings from Population and Housing Census
Authors: Jaroslav Kraus
Abstract:
Spatial information about demographic processes is a standard part of statistical outputs in the Czech Republic. That was also the case for the Population and Housing Census held in 2011, which is the starting point for a follow-up study devoted to two basic types of households: single-person households and households of one complete family. Together they make up more than 80 percent of all households, but their share and spatial structure have been changing over the long term. The increase in single-person households results from the long-term decline in fertility and the rise in divorce, but also from the possibility of living separately. There are regions in the Czech Republic with traditional demographic behavior, and regions, such as the capital Prague and some others, with changing patterns. The population census is based, in accordance with international standards, on the concept of the currently living population. Three types of geospatial approaches will be used for the analysis: (i) measures of geographic distribution; (ii) mapping clusters to identify the locations of statistically significant hot spots, cold spots, spatial outliers, and similar features; and (iii) analyzing patterns as a starting point for more in-depth analyses (geospatial regression) in the future. For this type of data, the numbers of households by type are treated as distinct objects, and all events in a meaningfully delimited study region (e.g., municipalities) are included in the analysis. Commonly produced measures of central tendency and spread will include identification of the location of the center of the point set (at the NUTS3 level) and identification of the median center; standard distance, weighted standard distance, and standard deviational ellipses will also be used.
Identifying that clustering exists in census household datasets does not by itself provide a detailed picture of the nature and pattern of that clustering, but it is helpful for applying simple hot-spot (and cold-spot) identification techniques to such datasets. Once the spatial structure of households is determined, a particular measure of autocorrelation can be constructed by defining a way of measuring the difference between location attribute values. The most widely used measure is Moran’s I, which will be applied to the municipal units for which the numerical ratio is calculated. Local statistics arise naturally out of any of the methods for measuring spatial autocorrelation and will be applied to develop localized variants of almost any standard summary statistic. Local Moran’s I will give an indication of household data homogeneity and diversity at the municipal level.
Keywords: census, geo-demography, households, the Czech Republic
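Global Moran’s I, the measure named above, can be sketched in a few lines of pure Python. The toy values and the four-unit row of “municipalities” below are illustrative assumptions, not census data; they are chosen so that similar values adjoin, which yields positive spatial autocorrelation.

```python
def morans_i(values, weights):
    """Global Moran's I: (n / W) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2.

    values  -- attribute value per areal unit (e.g. single-person household counts)
    weights -- dict mapping (i, j) -> spatial weight w_ij (here: binary contiguity)
    """
    n = len(values)
    m = sum(values) / n
    dev = [v - m for v in values]
    W = sum(weights.values())
    num = sum(w * dev[i] * dev[j] for (i, j), w in weights.items())
    den = sum(d * d for d in dev)
    return (n / W) * num / den

# Four municipalities in a row; neighbors share an edge (binary weights, both directions).
values = [10, 12, 3, 2]  # high values cluster on one side, low on the other
weights = {(0, 1): 1, (1, 0): 1, (1, 2): 1, (2, 1): 1, (2, 3): 1, (3, 2): 1}

print(round(morans_i(values, weights), 3))  # → 0.271 (positive spatial autocorrelation)
```

In practice the weights matrix would come from municipal contiguity or distance bands, and significance would be judged against a permutation distribution rather than the raw statistic; the local (per-unit) variant mentioned in the abstract decomposes the same cross-product term by unit.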