Search results for: performance of process
15008 Reworking of the Anomalies in the Discounted Utility Model as a Combination of Cognitive Bias and Decrease in Impatience: Decision Making in Relation to Bounded Rationality and Emotional Factors in Intertemporal Choices
Authors: Roberta Martino, Viviana Ventre
Abstract:
Every day we face choices whose consequences are deferred in time. These are intertemporal choices, and they play an important role in the social, economic, and financial world. The Discounted Utility Model is the reference mathematical model for calculating the utility of intertemporal prospects. The discount rate is the main element of the model, as it describes how the individual perceives the indeterminacy of subsequent periods. Empirical evidence has shown a discrepancy between the behavior predicted by the model and the choices actually made by decision makers. In particular, the term temporal inconsistency denotes choices that do not remain optimal with the passage of time. This phenomenon has been described with hyperbolic models of the discount rate which, unlike the linear or exponential form assumed by the Discounted Utility Model, is not constant over time. This paper explores the problem of inconsistency by tracing the decision-making process through the concept of impatience. The degree of impatience and the degree of decrease in impatience are two parameters that make it possible to quantify the weight of emotional factors and cognitive limitations during the evaluation and selection of alternatives. Although the theory assumes perfectly rational decision makers, behavioral finance and cognitive psychology have made it possible to understand that cognitive distortions and emotional influences have an inevitable impact on the decision-making process. The degree to which impatience decreases is the focus of the first part of the study. By comparing preferences that are consistent and inconsistent over time, it was possible to verify that some anomalies in the Discounted Utility Model result from the combination of cognitive bias and emotional factors. In particular: the delay effect and the interval effect are compared through the concept of misperception of time; starting from psychological considerations, a criterion is proposed to identify the causes of the magnitude effect that considers the differences between outcomes rather than their ratio; and the sign effect is analyzed by integrating the psychological aspects of loss aversion provided by Prospect Theory into the evaluation of prospects with negative outcomes. An experiment confirms three findings: the greatest variation in the degree of decrease in impatience corresponds to shorter intervals close to the present; the greatest variation in the degree of impatience occurs for outcomes of lower magnitude; and the variation in the degree of impatience is greatest for negative outcomes. The experimental phase was implemented by constructing the hyperbolic factor through the administration of questionnaires built for each anomaly. This work formalizes the underlying causes of the discrepancy between the Discounted Utility Model and the empirical evidence of preference reversal.
Keywords: decreasing impatience, discounted utility model, hyperbolic discount, hyperbolic factor, impatience
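As a rough illustration of the preference-reversal mechanism described above (not the authors' experimental design or parameter values), the following sketch compares a constant-rate exponential discount factor with a simple hyperbolic one; only the hyperbolic form switches preference between a smaller-sooner and a larger-later reward as both recede from the present. All numbers are assumptions chosen for illustration.

```python
# Illustrative sketch: exponential vs. hyperbolic discounting and the
# preference reversal that only the hyperbolic (decreasing-impatience)
# form can produce. Parameter values are assumed, not taken from the paper.

def exponential_discount(delay, rate=0.05):
    """Constant-rate discount factor of the Discounted Utility Model."""
    return (1.0 + rate) ** (-delay)

def hyperbolic_discount(delay, k=0.25):
    """Simple hyperbolic discount factor (impatience decreases with delay)."""
    return 1.0 / (1.0 + k * delay)

def present_value(amount, delay, discount):
    return amount * discount(delay)

small, large = 100.0, 120.0   # smaller-sooner vs. larger-later reward

for horizon in (0, 10):       # evaluate now, then with both rewards 10 periods further away
    t_small, t_large = 1 + horizon, 3 + horizon
    hyp = (present_value(small, t_small, hyperbolic_discount),
           present_value(large, t_large, hyperbolic_discount))
    exp = (present_value(small, t_small, exponential_discount),
           present_value(large, t_large, exponential_discount))
    print(f"delays ({t_small}, {t_large}): "
          f"hyperbolic prefers {'small' if hyp[0] > hyp[1] else 'large'}, "
          f"exponential prefers {'small' if exp[0] > exp[1] else 'large'}")
```

With these assumed parameters the hyperbolic evaluator prefers the smaller-sooner reward when it is close and switches to the larger-later reward when both are distant, while the exponential evaluator never reverses.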
Procedia PDF Downloads 102
15007 The Competitiveness of Small and Medium Sized Enterprises: Digital Transformation of Business Models
Authors: Chante Van Tonder, Bart Bossink, Chris Schachtebeck, Cecile Nieuwenhuizen
Abstract:
Small and Medium-Sized Enterprises (SMEs) play a key role in national economies around the world, being contributors to economic and social well-being. Due to this, the success, growth and competitiveness of SMEs are critical. However, there are many factors that undermine this, such as resource constraints, poor information and communication technology (ICT) infrastructure, skills shortages and poor management. The Fourth Industrial Revolution offers the SME sector new tools and opportunities, such as digital transformation and business model innovation (BMI), to enhance its competitiveness. Adopting and leveraging digital technologies such as cloud, mobile technologies, big data and analytics can significantly improve business efficiencies, value propositions and customer experiences. Digital transformation can contribute to the growth and competitiveness of SMEs. However, SMEs are lagging behind in their participation in digital transformation. Extant literature lacks conceptual and empirical research on how digital transformation drives BMI and the impact it has on the growth and competitiveness of SMEs. The purpose of the study is, therefore, to close this gap by developing and empirically validating a conceptual model to determine whether SMEs are achieving BMI through digital transformation and how this is impacting their growth, competitiveness and overall business performance. To achieve this purpose, an empirical study is being conducted on 300 SMEs, consisting of 150 South African and 150 Dutch SMEs. Structural equation modeling is used, since it is a multivariate statistical analysis technique for analysing structural relationships and a suitable research method for testing the hypotheses in the model. Empirical research is needed to gather more insight into whether and how SMEs are digitally transformed and how BMI can be driven through digital transformation. The findings of this study can be used by SME business owners, managers and employees at all levels. The findings will indicate whether digital transformation can indeed impact the growth, competitiveness and overall performance of an SME, reiterating the importance and potential benefits of adopting digital technologies. In addition, the findings will also exhibit how BMI can be achieved in light of digital transformation. This study contributes to the body of knowledge on a highly relevant and important topic in management studies by analysing the impact of digital transformation on BMI for a large number of SMEs that are distinctly different in economic and cultural factors.
Keywords: business models, business model innovation, digital transformation, SMEs
Procedia PDF Downloads 237
15006 Problem Gambling in the Conceptualization of Health Professionals: A Qualitative Analysis of the Discourses Produced by Psychologists, Psychiatrists and General Practitioners
Authors: T. Marinaci, C. Venuleo
Abstract:
Different conceptualizations of disease affect patient care, yet how health professionals conceptualize problem gambling has received little attention. This study aims to address this gap by exploring how health professionals conceptualize the gambling problem, addiction, and the goals of the recovery process. In-depth, semi-structured, open-ended interviews were conducted with Italian psychologists, psychiatrists, general practitioners, and support staff (N = 114) working within health centres for the treatment of addiction (public health services or therapeutic communities) or medical offices. A Lexical Correspondence Analysis (LCA) was applied to the verbatim transcripts. LCA made it possible to identify two main factorial dimensions, which organize similarity and dissimilarity in the discourses of the interviewees. The first dimension, labelled 'Models of relationship with the problem', concerns two different models of relationship with the health problem: one related to the request for help and the process of taking charge, and the other related to the identification of the psychopathology underlying the disorder. The second dimension, labelled 'Organisers of the intervention', reflects the dialectic between two ways of addressing the problem. On the one hand, the intervention is organized around the gambling dynamics and their immediate life consequences (whatever the request of the user is); on the other hand, it is organized around the procedures and tools that characterize the health service (whatever the user's problem is and despite the specificity of the user's request). The results highlight how, despite the differences, the respondents share a central assumption: understanding the gambling problem implies reference to the gambler's identity more than, for instance, to the relational, social, cultural or political context where the gambler lives. A passive stance is attributed to the user, who does not play any role in the definition of the goal of the intervention. The results will be discussed to highlight the relationship between professional models and users' ways of understanding and dealing with the problems related to gambling.
Keywords: cultural models, health professionals, intervention models, problem gambling
Procedia PDF Downloads 154
15005 Synthesis and Characterization of Iron and Aluminum-Containing AFm Phases
Authors: Aurore Lechevallier, Mohend Chaouche, Jerome Soudier, Guillaume Renaudin
Abstract:
The cement industry accounts for 8% of global CO₂ emissions, and approximately 60% of these emissions are associated with Portland cement clinker production, from the decarbonization of limestone (CaCO₃). Their impact on the greenhouse effect has resulted in growing social awareness. Therefore, the CO₂ footprint is becoming a product selection criterion, and substituting Portland cement with a lower-CO₂-footprint alternative binder is sought. In this context, new hydraulic binders have been studied as potential Ordinary Portland Cement substitutes. Many of them are composed of iron oxides and aluminum oxides, present in the Ca₄Al₂₋ₓFe₂₊ₓO₁₀-like phase and forming Ca-LDH (i.e. AFm) as a hydration product. It has therefore become essential to study the possible existence of Fe/Al AFm solid solutions to characterize the hydration process properly. Ca₂Al₂₋ₓFeₓ(OH)₆·X·nH₂O layered AFm samples intercalated with either nitrate or chloride X anions were synthesized by the co-precipitation method under a nitrogen atmosphere to avoid the carbonation effect. AFm samples intercalated with carbonate anions were synthesized by an anionic exchange process, using AFm-NO₃ as the source material. These three AFm samples were synthesized with varying Fe/Al molar ratios. The experimental conditions were optimized to make the formation of Al-AFm and Fe-AFm possible using the same parameters (namely pH value and salt concentration). Rietveld refinements were performed to demonstrate the existence of a solid solution between the two trivalent metallic end members. Spectroscopic analyses were used to confirm the intercalation of the targeted anion; secondary electron images were taken to analyze the morphology of the AFm samples, and energy-dispersive X-ray spectroscopy (EDX) was carried out to determine their elemental composition. The results of this study make it possible to quantify the Al/Fe ratio of the AFm phases precipitated in our hydraulic binder, thanks to the Vegard's law determined for the corresponding solid solutions.
Keywords: AFm phase, iron-rich binder, low-carbon cement, solid solution
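For reference, the Vegard's law invoked above is, in its generic form, a linear interpolation of the lattice parameter between the two end members; the statement below is the textbook relation, and the end-member parameters are placeholders rather than values refined in this work.

```latex
% Generic form of Vegard's law for the Al/Fe AFm solid solution
% (a_{Al-AFm} and a_{Fe-AFm} stand for the refined end-member lattice
% parameters, which are not reproduced here):
\[
  a(x) \;=\; (1 - x)\,a_{\mathrm{Al\text{-}AFm}} \;+\; x\,a_{\mathrm{Fe\text{-}AFm}},
  \qquad x \;=\; \frac{n_{\mathrm{Fe}}}{n_{\mathrm{Fe}} + n_{\mathrm{Al}}}
\]
```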
Procedia PDF Downloads 135
15004 Implementation of ADETRAN Language Using Message Passing Interface
Authors: Akiyoshi Wakatani
Abstract:
This paper describes the Message Passing Interface (MPI) implementation of the ADETRAN language and its evaluation on SX-ACE supercomputers. The ADETRAN language includes the pdo statement, which specifies the data distribution and parallel computations, and the pass statement, which specifies the redistribution of arrays. Two methods for the implementation of the pass statement are discussed, and a performance evaluation using the Splitting-Up CG method is presented. The effectiveness of the parallelization is evaluated, and the advantage of one-dimensional distribution is empirically confirmed by the results of the experiments.
Keywords: iterative methods, array redistribution, translator, distributed memory
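As an illustration of what an array redistribution of the kind performed by the pass statement can look like in MPI (a generic sketch, not the ADETRAN translator's actual output or the paper's two implementation methods), the following Python/mpi4py fragment moves a block-distributed 1-D array into a cyclic distribution with an all-to-all exchange.

```python
# Generic redistribution sketch with mpi4py (assumed setup, not ADETRAN code).
# Run with e.g.: mpiexec -n 4 python pass_demo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 16                               # global array length (assumed divisible by size)
old_chunk = n // size                # elements owned under the old block distribution
local = np.arange(rank * old_chunk, (rank + 1) * old_chunk, dtype=float)

# New distribution: cyclic, element i goes to process i % size.
# Build one send list per destination process.
send_lists = [[] for _ in range(size)]
for offset, value in enumerate(local):
    global_index = rank * old_chunk + offset
    send_lists[global_index % size].append((global_index, value))

# All-to-all exchange: every process hands each peer its bucket.
received = comm.alltoall(send_lists)
new_local = sorted(pair for bucket in received for pair in bucket)
print(f"rank {rank} now owns global indices {[i for i, _ in new_local]}")
```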
Procedia PDF Downloads 268
15003 Detection of Flood Prone Areas Using Multi Criteria Evaluation, Geographical Information Systems and Fuzzy Logic. The Ardas Basin Case
Authors: Vasileiou Apostolos, Theodosiou Chrysa, Tsitroulis Ioannis, Maris Fotios
Abstract:
The severity of extreme phenomena is due to their ability to cause severe damage in a small amount of time. Compared with other annual natural disasters, floods affect the greatest number of people and induce the greatest damage. The detection of potential flood-prone areas constitutes one of the fundamental components of the European Natural Disaster Management Policy, directly connected to European Directive 2007/60. The aim of the present paper is to develop a new methodology that combines geographical information, fuzzy logic and multi-criteria evaluation methods so that the most vulnerable areas are defined. Therefore, ten factors related to the geophysical, morphological, climatological/meteorological and hydrological characteristics of the basin were selected. Afterwards, two models were created to detect the areas most prone to flooding. The first model defined the weight of each factor using the Analytical Hierarchy Process (AHP), and the final map of possible flood spots was created using GIS and Boolean algebra. The second model made use of the combination of fuzzy logic and GIS, and a respective map was created. The application area of the aforementioned methodologies was the Ardas basin, due to the frequent and important floods that have taken place there in recent years. The results were then compared to the already observed floods. The result analysis shows that both models can detect possible flood spots with great precision. As the fuzzy logic model is less time-consuming, it is considered the ideal model to apply to other areas. These results are capable of contributing to the delineation of high-risk areas and to the creation of successful management plans dealing with floods.
Keywords: analytical hierarchy process, flood prone areas, fuzzy logic, geographic information system
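A minimal sketch of the second (fuzzy logic plus GIS) model's idea is given below; the three factors, membership breakpoints and weights are hypothetical placeholders rather than the ten factors and weights used for the Ardas basin.

```python
# Illustrative sketch: fuzzy membership functions turn raw factor rasters into
# [0, 1] susceptibility scores, which are combined in a weighted overlay.
# Factors, breakpoints and weights are hypothetical, not the study's values.
import numpy as np

def linear_membership(values, low, high, increasing=True):
    """Piecewise-linear fuzzy membership: 0 below `low`, 1 above `high`."""
    mu = np.clip((values - low) / (high - low), 0.0, 1.0)
    return mu if increasing else 1.0 - mu

rng = np.random.default_rng(0)
shape = (4, 5)                                   # a tiny raster for the demo
rainfall = rng.uniform(400, 900, shape)          # mm/year
slope = rng.uniform(0, 25, shape)                # degrees
dist_to_river = rng.uniform(0, 2000, shape)      # metres

layers = {
    "rainfall": (linear_membership(rainfall, 500, 800), 0.5),
    "slope": (linear_membership(slope, 2, 15, increasing=False), 0.3),
    "river": (linear_membership(dist_to_river, 100, 1500, increasing=False), 0.2),
}

susceptibility = sum(w * mu for mu, w in layers.values())
print(np.round(susceptibility, 2))               # 1.0 = most flood-prone cell
```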
Procedia PDF Downloads 377
15002 Systematic and Simple Guidance for Feed Forward Design in Model Predictive Control
Authors: Shukri Dughman, Anthony Rossiter
Abstract:
This paper builds on earlier work which demonstrated that Model Predictive Control (MPC) may give a poor choice of default feed forward compensator. By first demonstrating the impact that future information about target changes has on performance, this paper proposes a pragmatic method for identifying the amount of future target information that can be utilised effectively in both finite and infinite horizon algorithms. Numerical illustrations in MATLAB give evidence of the efficacy of the proposal.
Keywords: model predictive control, tracking control, advance knowledge, feed forward
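The following sketch (not the paper's MATLAB code) illustrates the underlying idea: an unconstrained MPC tracking law in which only the first n_advance samples of the future target are assumed known and the target is held constant beyond that point. The plant, horizon and weights are arbitrary illustrative choices.

```python
# Minimal unconstrained MPC tracking sketch with limited target preview.
# Plant, horizon and weights are assumed values, not from the paper.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])   # simple discrete-time plant
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
N, lam = 15, 0.05                         # prediction horizon, input weight

# Prediction matrices: Y = F x0 + G U over the horizon.
F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(N)])
G = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        G[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()

def mpc_step(x0, future_targets, n_advance):
    """First control move using only n_advance samples of target preview."""
    r = np.array(future_targets[:n_advance], dtype=float)
    r = np.concatenate([r, np.full(N - len(r), r[-1])])   # hold last known value
    U = np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ (r - F @ x0))
    return U[0]

x = np.zeros(2)
target = [0.0] * 5 + [1.0] * 30                 # a step change 5 samples ahead
for k in range(10):
    u = mpc_step(x, target[k:k + N], n_advance=8)
    x = A @ x + B.ravel() * u
    print(f"k={k:2d}  y={(C @ x).item():+.3f}  u={u:+.3f}")
```

Varying n_advance in this sketch shows how little or how much of the future target actually changes the closed-loop response, which is the question the paper's method addresses systematically.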
Procedia PDF Downloads 545
15001 Zinc Oxide Varistor Performance: A 3D Network Model
Authors: Benjamin Kaufmann, Michael Hofstätter, Nadine Raidl, Peter Supancic
Abstract:
ZnO varistors are the leading overvoltage protection elements in today’s electronic industry. Their highly non-linear current-voltage characteristics, very fast response times, good reliability and attractive cost of production are unique in this field. Nevertheless, there are still unsolved challenges and questions. In particular, the urge to create even smaller, versatile and reliable parts that fit industry’s demands brings manufacturers to the limits of their abilities. Although the varistor effect of sintered ZnO has been known since the 1960s, and a lot of work has been done in this field to explain the sudden exponential increase of conductivity, the strict dependency on sinter parameters, as well as the influence of the complex microstructure, is not sufficiently understood. For further enhancement and down-scaling of varistors, a better understanding of the microscopic processes is needed. This work attempts a microscopic approach to investigate ZnO varistor performance. In order to cope with the polycrystalline varistor ceramic and to account for all possible current paths through the material, a preferably realistic model of the microstructure was set up in the form of three-dimensional networks where every grain has a constant electric potential, and voltage drops occur only at the grain boundaries. The electro-thermal workload, depending on different grain size distributions, was investigated, as well as the influence of the metal-semiconductor contact between the electrodes and the ZnO grains. A number of experimental methods are used, firstly, to feed the simulations with realistic parameters and, secondly, to verify the obtained results. These methods are: a micro 4-point probes method system (M4PPS) to investigate the current-voltage characteristics between single ZnO grains and between ZnO grains and the metal electrode inside the varistor, micro lock-in infrared thermography (MLIRT) to detect current paths, electron backscattering diffraction and piezoresponse force microscopy to determine grain orientations, atom probe to determine atomic substituents, and Kelvin probe force microscopy for investigating grain surface potentials. The simulations showed that, within a critical voltage range, the current flow is localized along paths which represent only a tiny part of the available volume. This effect could be observed via MLIRT. Furthermore, the simulations show that the electric power density, which is inversely proportional to the number of active current paths, since this number determines the electrically active volume, is dependent on the grain size distribution. M4PPS measurements showed that the electrode-grain contacts behave like Schottky diodes and are crucial for asymmetric current path development. Furthermore, evaluation of the data suggests that current flow is influenced by grain orientations. The present results deepen the knowledge of the microscopic factors influencing ZnO varistor performance and can give some recommendations on fabrication for obtaining more reliable ZnO varistors.
Keywords: metal-semiconductor contact, Schottky diode, varistor, zinc oxide
Procedia PDF Downloads 281
15000 Managing Inter-Organizational Innovation Project: Systematic Review of Literature
Authors: Lamin B Ceesay, Cecilia Rossignoli
Abstract:
Inter-organizational collaboration is a growing phenomenon in both research and practice. The partnership between organizations enables firms to leverage external resources, experiences, and technology that lie with other firms. This collaborative practice is a source of improved business model performance, technological advancement, and increased competitive advantage for firms. However, the competitive intents, and even diverse institutional logics, of firms make inter-firm innovation-based partnership even more complex, and its governance more challenging. The purpose of this paper is to present a systematic review of research linking the inter-organizational relationships of firms with their innovation practices and to specify the different project management issues and gaps addressed in previous research. To do this, we employed a systematic review of the literature on inter-organizational innovation using two complementary scholarly databases, ScienceDirect and Web of Science (WoS). Article scoping relied on a combination of keywords based on similar terms used in the literature: (1) inter-organizational relationship, (2) business network, (3) inter-firm project, and (4) innovation network. These searches were conducted in the title, abstract, and keywords of conceptual and empirical research papers written in English. Our search covers 2010 to 2019. We applied several exclusion criteria: papers published outside the years under review, papers in a language other than English, papers listed in neither WoS nor ScienceDirect, and papers not sharply related to the inter-organizational innovation-based partnership were removed. After all relevant search criteria were applied, a final list of 84 papers constitutes the data for this review. Our review revealed an increasing evolution of inter-organizational relationship research during the period under review. The descriptive analysis of papers according to journal outlets finds that the International Journal of Project Management (IJPM), the Journal of Industrial Marketing, the Journal of Business Research (JBR), etc., are the leading journal outlets for research on inter-organizational innovation projects. The review also finds that qualitative methods and quantitative approaches, respectively, are the leading research methods adopted by scholars in the field, whereas literature reviews and conceptual papers constitute the smallest share. During the content analysis of the selected papers, we read the content of each paper and found that the selected papers try to address one of three phenomena in inter-organizational innovation research: (1) project antecedents, (2) project management, and (3) project performance outcomes. We found that these categories are not mutually exclusive but rather interdependent. This categorization also helped us to organize the fragmented literature in the field. While a significant percentage of the literature discussed project management issues, we found less extant literature on project antecedents and performance. As a result, we organized the future research agenda addressed in several papers by linking them with the under-researched themes in the field, thus providing great potential to advance the future research agenda, especially in the under-researched themes. Finally, our paper reveals that research on inter-organizational innovation projects is generally fragmented, which hinders a better understanding of the field. Thus, this paper contributes to the understanding of the field by organizing and discussing the extant literature to advance the theory and application of inter-organizational relationships.
Keywords: inter-organizational relationship, inter-firm collaboration, innovation projects, project management, systematic review
Procedia PDF Downloads 110
14999 Electrochemical Activity of NiCo-GDC Cermet Anode for Solid Oxide Fuel Cells Operated in Methane
Authors: Kamolvara Sirisuksakulchai, Soamwadee Chaianansutcharit, Kazunori Sato
Abstract:
Solid Oxide Fuel Cells (SOFCs) have been considered one of the most efficient large-unit power generators for household and industrial applications. The efficiency of an electrochemical cell depends mainly on the electrochemical reactions in the anode. The development of anode materials has been intensely studied to achieve higher kinetic rates of redox reactions and lower internal resistance. Recent studies have introduced an efficient cermet (ceramic-metallic) material for its ability in fuel oxidation and oxide conduction. This could expand the reactive site, also known as the triple-phase boundary (TPB), thus increasing the overall performance. In this study, a bimetallic catalyst Ni₀.₇₅Co₀.₂₅Oₓ was combined with Gd₀.₁Ce₀.₉O₁.₉₅ (GDC) to be used as a cermet anode (NiCo-GDC) for an anode-supported type SOFC. The synthesis of Ni₀.₇₅Co₀.₂₅Oₓ was carried out by ball milling NiO and Co₃O₄ powders in ethanol and calcining at 1000 °C. The Gd₀.₁Ce₀.₉O₁.₉₅ was prepared by a urea co-precipitation method. Precursors of Gd(NO₃)₃·6H₂O and Ce(NO₃)₃·6H₂O were dissolved in distilled water with the addition of urea and were heated subsequently. The heated mixture product was filtered and rinsed thoroughly, then dried and calcined at 800 °C and 1500 °C, respectively. The two powders were combined, followed by pelletization and sintering at 1100 °C to form an anode support layer. The electrolyte and cathode layers were then fabricated. The electrochemical performance in H₂ was measured from 800 °C to 600 °C, while for CH₄ it was measured from 750 °C to 600 °C. The maximum power density at 750 °C in H₂ was 13% higher than in CH₄. The difference in performance was due to higher polarization resistances, confirmed by the impedance spectra. According to the standard enthalpy, the dissociation energy of the C-H bonds in CH₄ is slightly higher than that of the H-H bond in H₂. The dissociation of CH₄ could be the cause of resistance within the anode material. The results at lower temperatures showed a descending trend of power density in relation to the increased polarization resistance. This was due to lower conductivity as the temperature decreases. The long-term stability was measured at 750 °C in CH₄, with monitoring at 12-hour intervals. The maximum power density tended to increase gradually with time while the resistances were maintained. This suggests enhanced stability from charge transfer activities in doped ceria due to the Ce⁴⁺ ↔ Ce³⁺ transition at low oxygen partial pressure and in a high-temperature atmosphere. However, the power density started to drop after 60 h, and the cell potential also dropped from 0.3249 V to 0.2850 V. These phenomena were confirmed by shifted impedance spectra indicating a higher ohmic resistance. The observations by FESEM and EDX mapping suggest degradation due to mass transport of ions in the electrolyte, while the anode microstructure was still maintained. In summary, the electrochemical test and the 60 h stability test were achieved with the NiCo-GDC cermet anode. Coke deposition was not detected after operation in CH₄; hence, this confirms the superior properties of the bimetallic cermet anode over typical Ni-GDC.
Keywords: bimetallic catalyst, ceria-based SOFCs, methane oxidation, solid oxide fuel cell
Procedia PDF Downloads 152
14998 Single-Molecule Optical Study of Cholesterol-Mediated Dimerization Process of EGFRs in Different Cell Lines
Authors: Chien Y. Lin, Jung Y. Huang, Leu-Wei Lo
Abstract:
A growing body of data reveals that membrane cholesterol molecules can alter the signaling pathways of living cells. However, understanding of how membrane cholesterol modulates receptor proteins is still lacking. Single-molecule tracking can effectively probe the microscopic environments and thermal fluctuations of receptor proteins in a living cell. In this study we apply single-molecule optical tracking to the ligand-induced dimerization process of EGFRs in the plasma membranes of two cancer cell lines (HeLa and A431) and one normal endothelial cell line (MCF12A). We tracked individual EGFRs and dual receptors diffusing in a correlated manner in the plasma membranes of live cells. We developed an energetic model by integrating the generalized Langevin equation with the Cahn-Hilliard equation to help extract important information from single-molecule trajectories. From this study, we discovered that ligand-bound EGFRs move from non-raft areas into lipid raft domains. This ligand-induced motion is a common behavior in both cancer and normal cells. By manipulating the total amount of membrane cholesterol with methyl-β-cyclodextrin and the local concentration of membrane cholesterol with nystatin, we further found that the amount of cholesterol can affect the stability of EGFR dimers. The EGFR dimers in the plasma membrane of normal cells are more sensitive to local concentration changes of cholesterol than the EGFR dimers in the cancer cells. Our method successfully captures dynamic interactions of receptors at the single-molecule level and provides insight into the functional architecture of both the diffusing EGFR molecules and their local cellular environment.
Keywords: membrane proteins, single-molecule tracking, Cahn-Hilliard equation, EGFR dimers
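A standard first step in this kind of single-molecule analysis, shown below as a generic sketch (not the authors' generalized Langevin / Cahn-Hilliard model), is the time-averaged mean-squared displacement of a trajectory, whose scaling exponent separates free diffusion from the confined motion expected inside a raft domain. The diffusion coefficient and frame time are assumed values.

```python
# Time-averaged mean-squared displacement (MSD) of a single-molecule track.
# MSD(t) ~ t^alpha: alpha ~ 1 for free diffusion, alpha < 1 for confinement.
import numpy as np

def time_averaged_msd(track, max_lag):
    """track: (n_frames, 2) array of x, y positions; returns MSD per lag."""
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        displacements = track[lag:] - track[:-lag]
        msd[lag - 1] = np.mean(np.sum(displacements**2, axis=1))
    return msd

# Synthetic 2-D Brownian trajectory (assumed D and frame time, for the demo).
rng = np.random.default_rng(1)
D, dt, n_frames = 0.05, 0.02, 500          # um^2/s, s, frames
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_frames, 2))
trajectory = np.cumsum(steps, axis=0)

lags = np.arange(1, 21)
msd = time_averaged_msd(trajectory, lags.size)
alpha = np.polyfit(np.log(lags * dt), np.log(msd), 1)[0]
print(f"estimated anomalous exponent alpha = {alpha:.2f} (about 1 for free diffusion)")
```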
Procedia PDF Downloads 417
14997 Dielectric Response Analysis Measurement for Diagnostic Oil-Paper Insulation System on Aged Inter Bus Transformer 3x10 MVA
Authors: Eki Farlen, Akas
Abstract:
Condition assessment of oil-paper-insulated power transformers, particularly of their water content, is becoming increasingly important for aged transformers. As insulation ages, it can produce water, which reduces its dielectric strength, accelerates the cellulose ageing process, and causes gas bubbles to form at high temperatures. This paper mainly assesses the condition of the oil-paper insulation system of an Inter Bus Transformer (IBT) 30 MVA, 150/30 kV at the PT PLN Jelok substation, which has been operating for 41 years, since 1974. Valuable information about the condition of high-voltage insulation may be obtained by measuring its dielectric response. This paper describes in detail the interpretation of Dielectric Response Analysis (DIRANA) measurements and compares the results with other insulation tests, such as the tan delta test, oil characteristic tests and the Dissolved Gas Analysis (DGA) test, to obtain deeper diagnostic information. The paper mainly discusses the relationships between moisture content, water content, acidity, oil conductivity and dissipation factor. The results and analysis show that phases U and W of the IBT 30 MVA Jelok have aged due to a high acidity level (>0.2 mg KOH/g), which causes high moisture in the cellulose/paper: the moisture contents, about 4.7% and 5%, fall into the wet category, and the water contents in oil are about 3.13 ppm and 3.33 ppm at a temperature of 20 °C. A high acidity level promotes oxidation, producing water in the paper and particles that can decrease the interfacial tension (IFT) to below 22 mN/m (poor category) for both phases U and W. Even though the paper insulation of the transformer is in a wet condition, the dissipation factor and capacitance at the same frequency (50 Hz) from both the DIRANA test and the tangent delta test give almost the same results, 0.69% and 0.71% (<1%), which may be acceptable and need not be investigated further. The DGA results show that the TDCG is in condition level one and that no key gases were found, which means that the transformer experienced no failures during operation, such as arcing, partial discharge or thermal faults in the oil or cellulose.
Keywords: diagnostic, inter-bus transformer, oil-paper insulation, moisture, dissipation factor
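For readers unfamiliar with the quantity, the dissipation factor obtained from a dielectric response measurement is defined, in its textbook form, as the ratio of the loss to the storage part of the complex permittivity; the expression below is generic and not a result from this paper.

```latex
% Textbook definition of the dissipation factor measured in dielectric
% response analysis (generic relation, not a value from this study):
\[
  \tan\delta(\omega) \;=\; \frac{\varepsilon''(\omega)}{\varepsilon'(\omega)}
\]
% Moisture and ageing by-products in the oil-paper system increase the loss
% part \varepsilon'' and therefore raise \tan\delta.
```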
Procedia PDF Downloads 278
14996 Detecting Impact of Allowance Trading Behaviors on Distribution of NOx Emission Reductions under the Clean Air Interstate Rule
Authors: Yuanxiaoyue Yang
Abstract:
Emissions trading, or 'cap-and-trade', has long been promoted by economists as a more cost-effective pollution control approach than traditional performance standard approaches. While there is a large body of empirical evidence for the overall effectiveness of emissions trading, relatively little attention has been paid to the unintended consequences it brings. One important consequence is that cap-and-trade could introduce the risk of creating high-level emission concentrations in areas where emitting facilities purchase a large number of emission allowances, which may cause an unequal distribution of environmental benefits. This study contributes to the current environmental policy literature by linking trading activity with environmental injustice concerns and empirically analyzing the causal relationship between trading activity and emission reductions under a cap-and-trade program for the first time. To investigate the potential environmental injustice concern in cap-and-trade, this paper uses a difference-in-differences (DID) with instrumental variable method to identify the causal effect of allowance trading behaviors on emission reduction levels under the Clean Air Interstate Rule (CAIR), a cap-and-trade program targeting the power sector in the eastern US. The major data source is facility-year level emissions and allowance transaction data collected from the US EPA air market databases. While polluting facilities from CAIR are the treatment group under our DID identification, we use non-CAIR facilities from the Acid Rain Program - another NOx control program without a trading scheme - as the control group. To isolate the causal effects of trading behaviors on emission reductions, we also use eligibility for CAIR participation as the instrumental variable. The DID results indicate that the CAIR program was able to reduce NOx emissions from affected facilities by about 10% more than from facilities that did not participate in the CAIR program. Therefore, CAIR achieves excellent overall performance in emission reductions. The IV regression results also indicate that, compared with non-CAIR facilities, purchasing emission permits still decreases a CAIR-participating facility's emission level significantly. This result implies that even buyers under the cap-and-trade program have achieved a great amount of emission reductions. Therefore, we conclude there is little evidence of environmental injustice from the CAIR program.
Keywords: air pollution, cap-and-trade, emissions trading, environmental justice
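The identification strategy can be sketched on synthetic data as follows; the variable names, coefficients and data-generating process are hypothetical stand-ins for the EPA facility-year panel, and the snippet only illustrates the two-stage least squares logic of instrumenting allowance purchases with CAIR eligibility.

```python
# Schematic 2SLS on synthetic data (not the real EPA facility-year panel):
# CAIR eligibility interacted with the post period instruments for allowance
# purchases, which are endogenous because of an unobserved facility factor.
import numpy as np

rng = np.random.default_rng(42)
n = 5000
eligible = rng.integers(0, 2, n)                 # instrument: CAIR eligibility
post = rng.integers(0, 2, n)                     # post-policy period indicator
confounder = rng.normal(size=n)                  # unobserved facility factor

# Endogenous regressor: purchases depend on eligibility-in-post and confounder.
purchases = 2.0 * eligible * post + 0.8 * confounder + rng.normal(size=n)
# Outcome: log NOx emissions; the true causal effect of purchases is -0.3.
emissions = 5.0 - 0.3 * purchases - 0.5 * post + 0.6 * confounder + rng.normal(size=n)

def ols(y, X):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X_exog = np.column_stack([np.ones(n), post])     # included exogenous controls
Z = np.column_stack([X_exog, eligible * post])   # instruments
purchases_hat = Z @ ols(purchases, Z)            # stage 1: project on instruments
beta = ols(emissions, np.column_stack([X_exog, purchases_hat]))  # stage 2
print(f"2SLS estimate of the purchase effect: {beta[-1]:+.3f} (true -0.3)")
```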
Procedia PDF Downloads 148
14995 Valorization of Waste and By-products for Protein Extraction and Functional Properties
Authors: Lorena Coelho, David Ramada, Catarina Nobre, Joaquim Gaião, Juliana Duarte
Abstract:
The development of processes that allow the valorization of waste and by-products generated by industries is crucial to promote symbiotic relationships between different sectors and is mandatory to "close the loop" in the circular economy paradigm. In recent years, by-products and waste from the agro-food and forestry sectors have attracted attention due to their potential applications and technical characteristics. The extraction of bio-based active compounds for reuse is in line with circular bioeconomy trends, combining the use of renewable resources with process circularity, aiming at waste reduction and encouraging reuse and recycling. Among the different types of bio-based materials that are being explored and can be extracted, protein fractions are becoming an attractive new raw material. Within this context, the BioTrace4Leather project, a collaboration between two technological centres, CeNTI and CTIC, and a leather tanning and finishing company, Curtumes Aveneda, aims to develop innovative and biologically sustainable solutions for the leather industry and to meet the market's circularity trends. Specifically, it aims at the valorisation of waste and by-products from the tannery industry through protein extraction and the development of innovative and biologically sustainable materials. The results achieved show that keratin, gelatine and collagen fractions can be successfully extracted from bovine hair and leather waste. These products could be reintegrated into the industrial manufacturing process to obtain innovative and functional textile and leather substrates. Acknowledgement: This work has been developed under the scope of BioTrace4Leather, a project co-funded by the Operational Program for Competitiveness and Internationalization (COMPETE) of PORTUGAL2020, through the European Regional Development Fund (ERDF), under grant agreement No. POCI-01-0247-FEDER-039867.
Keywords: leather by-products, circular economy, sustainability, protein fractions
Procedia PDF Downloads 156
14994 Improved Technology Portfolio Management via Sustainability Analysis
Authors: Ali Al-Shehri, Abdulaziz Al-Qasim, Abdulkarim Sofi, Ali Yousef
Abstract:
The oil and gas industry has played a major role in improving the prosperity of mankind and driving the world economy. According to International Energy Agency (IEA) and Integrated Environmental Assessment (EIA) estimates, the world will continue to rely heavily on hydrocarbons for decades to come. This growing energy demand mandates taking sustainability measures to prolong the availability of reliable and affordable energy sources and to lower their environmental impact. Unlike in any other industry, oil and gas upstream operations are energy-intensive and scattered over large zonal areas. These challenging conditions require unique sustainability solutions. In recent years there has been a concerted effort by the oil and gas industry to develop and deploy innovative technologies to maximize efficiency, reduce carbon footprint, reduce CO2 emissions, and optimize resource and material consumption. In the past, the main driver for research and development (R&D) in the exploration and production sector was primarily maximizing profit through higher hydrocarbon recovery and new discoveries. Environmentally friendly and sustainable technologies are increasingly being deployed to balance sustainability and profitability. Analyzing technology and its sustainability impact is increasingly being used in corporate decision-making for improved portfolio management and for allocating valuable resources toward technology R&D. This paper articulates and discusses a novel workflow to identify strategic sustainable technologies for improved portfolio management by addressing existing and future upstream challenges. It uses a systematic approach that relies on sustainability key performance indicators (KPIs), including the energy efficiency quotient, carbon footprint, and CO2 emissions. The paper provides examples of various technologies, including CCS, reducing water cuts, automation, using renewables, energy efficiency, etc. The use of 4IR technologies such as artificial intelligence, machine learning, and data analytics is also discussed. Overlapping technologies, areas of collaboration and synergistic relationships are identified. The unique sustainability analyses provide improved decision-making on technology portfolio management.
Keywords: sustainability, oil & gas, technology portfolio, key performance indicator
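A minimal sketch of the kind of KPI-weighted ranking such a workflow produces is shown below; the technologies, KPI scores and weights are hypothetical placeholders, not figures from the paper.

```python
# Illustrative KPI-weighted portfolio ranking. Scores (0-10, higher is more
# sustainable) and weights are hypothetical placeholders, not paper data.
candidates = {
    # technology: (energy efficiency quotient, carbon footprint, CO2 reduction)
    "CCS":                 (4, 9, 10),
    "Water-cut reduction": (7, 6, 5),
    "Process automation":  (8, 5, 4),
    "Field renewables":    (6, 8, 8),
}
weights = (0.4, 0.3, 0.3)   # assumed relative importance of the three KPIs

def portfolio_score(kpis, weights):
    return sum(k * w for k, w in zip(kpis, weights))

ranking = sorted(candidates.items(),
                 key=lambda item: portfolio_score(item[1], weights),
                 reverse=True)
for tech, kpis in ranking:
    print(f"{tech:20s} weighted score = {portfolio_score(kpis, weights):.2f}")
```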
Procedia PDF Downloads 181
14993 The Study of ZigBee Protocol Application in Wireless Networks
Authors: Ardavan Zamanpour, Somaieh Yassari
Abstract:
The ZigBee protocol was developed by industry together with an MIT laboratory in 1997. ZigBee is a wireless networking technology maintained by the ZigBee Alliance and designed for low-bandwidth, low data rate applications. It is a protocol that connects electrical devices at very low energy and cost. The first version of IEEE 802.15.4, on which ZigBee is built, was based on the 2.4 GHz, 912 MHz and 868 MHz frequency bands. The name of the system recalls the seemingly random paths that bees traverse during pollination, analogous to the ways in which information packets are routed within the mesh network. This paper aims to study the performance and effectiveness of this protocol in wireless networks.
Keywords: ZigBee, protocol, wireless, networks
Procedia PDF Downloads 368
14992 Cryptocurrency-Based Mobile Payments with Near-Field Communication-Enabled Devices
Authors: Marko Niinimaki
Abstract:
Cryptocurrencies are becoming increasingly popular, but very few of them can be conveniently used in daily mobile phone purchases. To solve this problem, we demonstrate how to build a functional prototype of a mobile cryptocurrency-based e-commerce application that communicates with Near-Field Communication (NFC) tags. Using the system, users are able to purchase physical items with an NFC tag that contains an e-commerce URL. The payment is made simply by touching the tag with a mobile device and accepting the payment. Our method is constructive: we describe the design and technologies used in the implementation and evaluate the security and performance of the solution. Our main finding is that the analysis and measurements show that our solution is feasible for e-commerce.
Keywords: cryptocurrency, e-commerce, NFC, mobile devices
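Assuming the tag stores the URL as a standard NDEF well-known URI record (an assumption about the setup, not a detail confirmed by the abstract), a minimal encoder and decoder for such a record could look like this; the shop URL is a hypothetical example.

```python
# Minimal sketch (an assumption about the setup, not the paper's code) of how
# an e-commerce URL can be stored on a tag as a single short NDEF URI record,
# which is what a phone reads when it is tapped against the tag.
URI_PREFIXES = {0x01: "http://www.", 0x02: "https://www.",
                0x03: "http://", 0x04: "https://"}

def encode_ndef_uri(url: str) -> bytes:
    """Encode a URL as one short NDEF URI record (MB|ME|SR set, TNF=well-known)."""
    prefix_code, rest = 0x00, url
    for code, prefix in URI_PREFIXES.items():
        if url.startswith(prefix):
            prefix_code, rest = code, url[len(prefix):]
            break
    payload = bytes([prefix_code]) + rest.encode("utf-8")
    header = bytes([0xD1, 0x01, len(payload), 0x55])  # flags, type len, payload len, 'U'
    return header + payload

def decode_ndef_uri(record: bytes) -> str:
    """Inverse of the encoder, e.g. what the mobile app does after a tap."""
    payload = record[4:4 + record[2]]
    return URI_PREFIXES.get(payload[0], "") + payload[1:].decode("utf-8")

tag_bytes = encode_ndef_uri("https://shop.example.com/item/42?pay=btc")  # hypothetical URL
print(tag_bytes.hex())
print(decode_ndef_uri(tag_bytes))
```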
Procedia PDF Downloads 181
14991 Rubric in Vocational Education
Authors: Azmanirah Ab Rahman, Jamil Ahmad, Ruhizan Muhammad Yasin
Abstract:
A rubric is a very important tool for teachers and students for a variety of purposes. Teachers use rubrics for evaluating student work, while students use rubrics for self-assessment. Therefore, this paper emphasizes the scoring rubric as a scoring tool for teachers in the environment of Competency Based Education and Training (CBET) in Malaysian vocational colleges. A total of three teachers in the fields of electrical and electronics engineering were interviewed to identify how rubrics have been used since the vocational transformation was implemented in 2012. Overall, a holistic rubric is used to determine the performance of students in the skills area.
Keywords: rubric, vocational education, teachers, CBET
Procedia PDF Downloads 505
14990 Multi-Criterial Analysis: Potential Regions and Height of Wind Turbines, Rio de Janeiro, Brazil
Authors: Claudio L. M. Souza, Milton Erthal, Aldo Shimoya, Elias R. Goncalves, Igor C. Rangel, Allysson R. T. Tavares, Elias G. Figueira
Abstract:
The process of choosing a region for the implementation of wind farms involves factors such as the wind regime, economic viability, land value, topography, and accessibility. This work presents results obtained by multi-criteria decision analysis and establishes a hierarchy, regarding the installation of wind farms, among geopolitical regions in the state of Rio de Janeiro, Brazil: 'Regiao Norte-RN', 'Regiao dos Lagos-RL' and 'Regiao Serrana-RS'. The wind regime map indicates only these three possible regions with an average annual wind speed above 6.0 m/s. The method applied was the Analytical Hierarchy Process (AHP), designed to prioritize and rank the three regions based on four criteria, as follows: 1) potential of the site, with average wind speeds above 6.0 m/s; 2) average land value; 3) distribution and interconnection to the electric network, favouring the highest number of electricity stations; and 4) accessibility, considering proximity to and quality of highways and flat topography. The values of energy generation were calculated for wind turbines 50, 75, and 100 meters high, considering the site production (GWh/km²) and the annual production (GWh). The weight of each criterion was attributed by six engineers and by analysis of the road map, the map of the electric system, the map of the wind regime, and the annual land value report. The results indicated that in 'RS' the demand was estimated at 2,000 GWh, so a wind farm can operate efficiently with 50 m turbines. This region is mainly mountainous, with difficult access and lower land value. With respect to 'RL', the wind turbines have to be installed at a height of 75 m to reach a demand of 6,300 GWh. This region is very flat, with easy access and low land value. Finally, 'RN' was evaluated as very flat and with expensive land. In this case, wind turbines of 100 m can reach an annual production of 19,000 GWh. In this region, the coastal area was classified as having the greatest logistic, productivity and economic potential.
Keywords: AHP, renewable energy, wind energy
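A compact sketch of the AHP step for the four criteria named above is given below; the pairwise comparison judgements are hypothetical placeholders, not the values elicited from the six engineers in the study.

```python
# Illustrative AHP sketch for the four criteria named in the abstract.
# The Saaty-scale pairwise comparisons below are hypothetical placeholders.
import numpy as np

criteria = ["wind potential", "land value", "grid interconnection", "accessibility"]
A = np.array([                     # entry (i, j) = importance of criterion i over j
    [1.0, 3.0, 5.0, 4.0],
    [1/3, 1.0, 2.0, 2.0],
    [1/5, 1/2, 1.0, 1.0],
    [1/4, 1/2, 1.0, 1.0],
])

# Priority vector = principal right eigenvector, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency ratio (CR < 0.10 is the usual acceptability threshold).
lambda_max = eigvals.real[principal]
ci = (lambda_max - len(A)) / (len(A) - 1)
ri = 0.90                          # Saaty's random index for n = 4
print({c: round(w, 3) for c, w in zip(criteria, weights)})
print(f"consistency ratio = {ci / ri:.3f}")
```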
Procedia PDF Downloads 149
14989 Argumentation Frameworks and Theories of Judging
Authors: Sonia Anand Knowlton
Abstract:
With the rise of artificial intelligence, computer science is becoming increasingly integrated into virtually every area of life. Of course, the law is no exception. Through argumentation frameworks (AFs), computer scientists have used abstract algebra to structure the legal reasoning process in a way that allows conclusions to be drawn from a formalized system of arguments. In AFs, arguments compete against each other for logical success and are related to one another through the binary attack relation. The prevailing arguments make up the preferred extension of the given argumentation framework, telling us what set of arguments must be accepted from a logical standpoint. There have been several developments of AFs since their original conception in the early 1990s, in efforts to make them more aligned with the human reasoning process. Generally, these developments have sought to add nuance to the factors that influence the logical success of competing arguments (e.g., giving an argument more logical strength based on the underlying value it promotes). The most cogent development was that of the Extended Argumentation Framework (EAF), in which attacks can themselves be attacked by other arguments, and the promotion of different competing values can be formalized within the system. This article applies the logical structure of EAFs to current theoretical understandings of judicial reasoning, in order to contribute simultaneously to theories of judging and to the evolution of AFs. The argument is that the main limitation of EAFs, when applied to judicial reasoning, is that they require judges to themselves assign values to different arguments and then lexically order these values to determine the given framework's preferred extension. Drawing on John Rawls' Theory of Justice, the examination that follows asks whether values are lexical and commensurable to this extent. The analysis then suggests a potential extension of the EAF system with an approach that formalizes different 'planes of attack' for competing arguments that promote lexically ordered values. This article concludes with a summary of how these insights contribute to theories of judging and of legal reasoning more broadly, specifically in indeterminate cases where judges must turn to value-based approaches.
Keywords: computer science, mathematics, law, legal theory, judging
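As a minimal illustration of the basic Dung-style semantics the article builds on (a made-up toy framework, not an example from the article or an EAF with value orderings), the following sketch enumerates admissible sets and preferred extensions by brute force.

```python
# Brute-force computation of admissible sets and preferred extensions for a
# small abstract argumentation framework. The arguments and attacks below are
# a toy example, not taken from the article.
from itertools import combinations

arguments = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}   # a and b attack each other; b attacks c

def conflict_free(s):
    return not any((x, y) in attacks for x in s for y in s)

def defends(s, arg):
    """s defends arg if every attacker of arg is attacked by some member of s."""
    attackers = {x for (x, y) in attacks if y == arg}
    return all(any((z, x) in attacks for z in s) for x in attackers)

def admissible(s):
    return conflict_free(s) and all(defends(s, arg) for arg in s)

subsets = [frozenset(c) for r in range(len(arguments) + 1)
           for c in combinations(sorted(arguments), r)]
admissible_sets = [s for s in subsets if admissible(s)]
preferred = [s for s in admissible_sets
             if not any(s < t for t in admissible_sets)]     # maximal admissible sets
print("admissible:", [sorted(s) for s in admissible_sets])
print("preferred: ", [sorted(s) for s in preferred])
```

In this toy framework the preferred extensions are {a, c} and {b}, which shows why further machinery (such as the value orderings of EAFs) is needed when a single accepted set must be chosen.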
Procedia PDF Downloads 59
14988 Outcomes-Based Qualification Design and Vocational Subject Literacies: How Compositional Fallacy Short-Changes School-Leavers’ Literacy Development
Authors: Rose Veitch
Abstract:
Learning outcomes-based qualifications have been heralded as the means to raise vocational education and training (VET) standards, meet the needs of the changing workforce, and establish equivalence with existing academic qualifications. Characterized by explicit, measurable performance statements and atomistically specified assessment criteria, the outcomes model has been adopted by many VET systems worldwide since its inception in the United Kingdom in the 1980s. Debate to date centers on how the outcomes model treats knowledge. Flaws have been identified in terms of the overemphasis on end-points, neglect of process, and a failure to treat curricula coherently. However, much of this censure has evaluated the outcomes model from a theoretical perspective; to date, there has been scant empirical research to support these criticisms. Various issues therefore remain unaddressed. This study investigates how the outcomes model impacts the teaching of subject literacies. This is of particular concern for subjects on the academic-vocational boundary such as Business Studies, since many of these students progress to higher education in the United Kingdom. This study also explores the extent to which the outcomes model is compatible with borderline vocational subjects. To fully understand whether this qualification model is fit for purpose in the 16-18 phase of education, it is necessary to investigate how teachers interpret their qualification specifications in terms of curriculum, pedagogy and assessment. Of particular concern is the nature of the interaction between the outcomes model and teachers' understandings of their subject-procedural knowledge, and how this affects their capacity to embed literacy into their teaching. The present study is part of a broader doctoral research project which seeks to understand if and how content-area, disciplinary literacy and genre approaches can be adapted to outcomes-based VET qualifications. This qualitative research investigates the 'what' and 'how' of literacy embedding from the perspective of in-service teacher development in the 16-18 phase of education. Using ethnographic approaches, it is based on fieldwork carried out in one Further Education college in the United Kingdom. Emergent findings suggest that the outcomes model is not fit for purpose in the context of borderline vocational subjects. It is argued that the outcomes model produces inferior qualifications due to compositional fallacy: the sum of a subject's components does not add up to the whole. Findings indicate that procedural knowledge, largely unspecified by some outcomes-based qualifications, is where subject literacies are situated, and that this often gets lost in 'delivery'. It seems that the outcomes model provokes an atomistic treatment of knowledge amongst teachers, along with the privileging of propositional knowledge over procedural knowledge. In other words, outcomes-based VET is a hostile environment for subject-literacy embedding. It is hoped that this research will produce useful suggestions for how this problem can be ameliorated and will provide an empirical basis for the potential reforms required to address these issues in vocational education.
Keywords: literacy, outcomes-based, qualification design, vocational education
Procedia PDF Downloads 8
14987 Anomalies of Visual Perceptual Skills Amongst School Children in Foundation Phase in Olievenhoutbosch, Gauteng Province, South Africa
Authors: Maria Bonolo Mathevula
Abstract:
Background: Children are important members of communities, playing a major role in the future of any given country (Pera, Fails, Gelsomini, & Garzotto, 2018). Visual perceptual skills (VPSs) in children are an important health aspect of early childhood development through the Foundation Phase in school. Consequently, children should undergo visual screening before the commencement of schooling for early diagnosis of VPS anomalies, because the primary role of VPSs is to equip children for academic performance in general. Aim: The aim of this study was to determine the anomalies of VPSs amongst school children in the Foundation Phase. The study's objectives were to determine the prevalence of VPS anomalies amongst school children in the Foundation Phase, to determine the relationship between children's academic performance and VPS anomalies, and to investigate the relationship between VPS anomalies and refractive error. Methodology: This study used a mixed method in which triangulated qualitative (interviews) and quantitative (questionnaire and clinical) data were used; it was therefore descriptive in nature. The study's target population was school children in the Foundation Phase. The study followed a purposive sampling method: school children in the Foundation Phase were purposively sampled to form part of this study, provided their parents had given signed consent. Data were collected by the use of standardized interviews, a questionnaire, a clinical data card, and the TVPS standard data card. Results: Although the study is still ongoing, the preliminary outcomes based on data collected from one of the Foundation Phase schools suggest the following. While VPS anomalies are not prevalent, they nevertheless have an indirect relationship with children's academic performance in the Foundation Phase. Notably, VPS anomalies and refractive error are directly related, since the majority of children with refractive error, specifically compound hyperopic astigmatism, failed most subtests of the TVPS standard tests. Conclusion: Based on the study's preliminary findings, it is clear that optometrists still have a lot to do as far as research on VPSs is concerned. Furthermore, the researcher recommends that optometrists, as primary healthcare professionals, should also conduct school-readiness pre-assessments on children before the commencement of their grades in the Foundation Phase.
Keywords: foundation phase, visual perceptual skills, school children, refractive error
Procedia PDF Downloads 100
14986 Development of a Decision Model to Optimize Total Cost in Food Supply Chain
Authors: Henry Lau, Dilupa Nakandala, Li Zhao
Abstract:
All along the length of the supply chain, fresh food firms face the challenge of managing both product quality, due to the perishable nature of the products, and product cost. This paper develops a method to assist logistics managers upstream in the fresh food supply chain in making cost-optimized decisions regarding transportation, with the objective of minimizing the total cost while maintaining the quality of food products above acceptable levels. Considering the case of multiple fresh food products collected from multiple farms and transported to a warehouse or a retailer, this study develops a total cost model that includes the various costs incurred during transportation. The practical application of the model is illustrated by using several computational intelligence approaches, including Genetic Algorithms (GA), Fuzzy Genetic Algorithms (FGA), and an improved Simulated Annealing (SA) procedure applied with a repair mechanism for efficiency benchmarking. We demonstrate the practical viability of these approaches by using a simulation study based on pertinent data and evaluate the simulation outcomes. The application of the proposed total cost model was demonstrated using the three approaches of GA, FGA and SA with a repair mechanism. All three approaches are adoptable; however, based on the performance evaluation, it was evident that the FGA is more likely to produce a better performance than the other two approaches. This study provides a pragmatic approach for supporting logistics and supply chain practitioners in the fresh food industry in making important decisions on the arrangements and procedures related to the transportation of multiple fresh food products from multiple farms to a warehouse in a cost-effective way without compromising product quality. The study extends the literature on cold supply chain management by investigating cost and quality optimization in a multi-product scenario from farms to a retailer, minimizing cost while managing quality above expected levels at delivery. The scalability of the proposed generic function enables application to alternative situations in practice, such as different storage environments and transportation conditions.
Keywords: cost optimization, food supply chain, fuzzy sets, genetic algorithms, product quality, transportation
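A minimal sketch of the simulated annealing idea on a toy instance is shown below; the farm coordinates, quality-decay penalty and neighbourhood move are illustrative assumptions, not the calibrated total cost model or the repair mechanism used in the study.

```python
# Toy simulated annealing: choose the pickup order of several farms so that
# transport cost plus a quality-decay penalty (growing with time in transit)
# is minimized. All figures are hypothetical, not the study's cost model.
import math
import random

random.seed(7)
farms = [(2, 3), (8, 1), (5, 9), (1, 7), (9, 6)]   # farm coordinates, depot at (0, 0)
decay_penalty = 0.8                                 # cost per unit ride time of quality loss

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def total_cost(order):
    route = [(0, 0)] + [farms[i] for i in order] + [(0, 0)]
    travel = sum(dist(route[i], route[i + 1]) for i in range(len(route) - 1))
    # Produce picked up earlier rides longer, so it decays more.
    ride_time = sum(sum(dist(route[j], route[j + 1]) for j in range(i + 1, len(route) - 1))
                    for i in range(len(order)))
    return travel + decay_penalty * ride_time

current = list(range(len(farms)))
best, best_cost, temperature = current[:], total_cost(current), 10.0
for step in range(5000):
    candidate = current[:]
    i, j = random.sample(range(len(farms)), 2)      # neighbourhood move: swap two pickups
    candidate[i], candidate[j] = candidate[j], candidate[i]
    delta = total_cost(candidate) - total_cost(current)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        current = candidate
    temperature *= 0.999                            # geometric cooling
    if total_cost(current) < best_cost:
        best, best_cost = current[:], total_cost(current)
print("best pickup order:", best, "cost:", round(best_cost, 2))
```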
Procedia PDF Downloads 223
14985 Sensory and Microbiological Sustainability of Smoked Meat Products–Smoked Ham in Order to Determine the Shelf-Life under the Changed Conditions at +15°C
Authors: Radovan Čobanović, Milica Rankov Šicar
Abstract:
Meat belongs to the group of perishable foods, which can spoil very rapidly if stored at room temperature. Salting in combination with smoking is intended to extend shelf life and also to form the specific taste, odor and color. Smoke does not affect only the taste and flavor of the product; it has a bactericidal and oxidative effect, which is why smoked products are less susceptible to oxidation and decay processes. Accordingly, the goal of this study was to evaluate the shelf life of smoked ham stored at an elevated temperature (+15 °C). For the purposes of this study, analyses were conducted on eight samples of smoked ham every 7th day, from the day of reception until the 21st day. During this period, the smoked ham was subjected to sensory analysis (appearance, odor, taste, color, aroma) and bacteriological analyses (Listeria monocytogenes, Salmonella spp., and yeasts and molds) according to Serbian state regulations. All analyses were performed according to ISO methodology: sensory analysis ISO 6658, Listeria monocytogenes ISO 11290-1, Salmonella spp. ISO 6579, and yeasts and molds ISO 21527-2. The results of the sensory analysis of smoked ham indicate that after the first seven days of storage the samples showed visual changes at the surface in the form of salt deposits, most likely due to the drying out of the internal parts of the product. The sample, after fifteen days of storage, had intensive exterior changes, but the taste was still acceptable. Between the fifteenth and twenty-first day of storage, unacceptable changes occurred on the surface and inside the product, with the occurrence of molds and yeasts, but neither of the analyzed pathogens was found. Based on the obtained results, it can be concluded that this type of product cannot be stored for more than seven days at an elevated temperature of +15 °C, because the visual changes that occur would certainly influence customers' purchasing decisions for this product.
Keywords: sustainability, smoked meat products, food engineering, agricultural process engineering
Procedia PDF Downloads 359
14984 Impact of Different Ripening Accelerators on the Microbial Load and Proximate Composition of Plantain (Musa paradisiaca) and Banana (Musa sapientum), during the Ripening Process, and the Nutrition Implication for Food Security
Authors: Wisdom Robert Duruji, Oluwasegun Christopher Akinleye
Abstract:
This study reports on the impact of different ripening accelerators on the microbial load and proximate composition of plantain (Musa paradisiaca) and banana (Musa sapientum) during the ripening process, and the nutrition implications for food security. The study comprised four treatments, namely calcium carbide, Irvingia gabonensis fruits, Newbouldia laevis leaves, and a control, in which no ripening accelerator was applied to the fingers of plantain and banana. The unripe and ripened plantain and banana were subjected to microbial analysis by isolating and enumerating their microflora using the pour plate method; their proximate composition was also determined using standard methods. The results indicated that the bacterial count of plantain increased from 3.25 ± 0.33 for unripe to 5.31 ± 0.30 log cfu/g for (treated) ripened fruit, and that of banana increased from 3.69 ± 0.11 for unripe to 5.26 ± 0.21 log cfu/g for ripened fruit. Also, the fungal count of plantain increased from 3.20 ± 0.16 for unripe to 4.88 ± 0.22 log sfu/g for ripened fruit, and that of banana increased from 3.61 ± 0.19 for unripe to 5.43 ± 0.26 for ripened fruit. Ripened plantain fingers without any ripening accelerator (control) had significantly (p < 0.05) higher values of crude protein 3.56 ± 0.06%, crude fat 0.42 ± 0.04%, total ash 2.74 ± 0.15% and carbohydrate 31.10 ± 0.20%, but a significantly lower moisture value of 62.14 ± 0.07%, when compared with treated plantain. The proximate composition trend of the treated banana fingers and the banana control is similar to that of the treated plantain and its control, except that a higher moisture content of 75.11 ± 0.07% and less protein, crude fat, total ash and carbohydrate were obtained from the treated and ripened banana when the treatments were compared with those of plantain. The study concluded that plantain is more nutritious (mealy) than banana and that the ripening accelerators increased the microbial load and reduced the nutritional status of plantain and banana.
Keywords: food nutrition, calcium carbide, Irvingia gabonensis, Newbouldia laevis, plantain, banana
14983 Recycling, Reuse and Reintegration of Steel Plant Fines
Authors: R. K. Agrawal, Shiv Agrawal
Abstract:
Fines and micro-fines create fundamental respiratory problems. From mines to mills, steel plants generate a lot of pollutants. Legislation and government regulations are becoming stricter day by day, and each plant has to think about recycling, reuse, and reintegration of the pollutants generated during steelmaking. This paper deals with experiments conducted at Bhilai Steel Plant and Real Ispat and Power Limited to reuse, recycle, and reintegrate some of the steelmaking process fines. Iron ore fines with binders have been agglomerated to be used as part of the charge for small furnaces; this will improve yield at nominal cost. Rolling mill fines have been recycled to increase the yield of sinter making, which will solve the problem of fines disposal, and huge savings will be achieved on account of recycling. Lime fines, after briquetting, are used along with prime lime. Lime fines have also been used as a binding material in the production of fly ash bricks, where they serve as a low-cost binder. Experiments have been conducted with coke breeze and gas cleaning plant sludge; as a result, an anti-slopping compound has been developed for converter vessels. Dolo char and char from sponge iron production have been successfully used in power generation and brick making. Pellets have been made with ventilation dust and flue dust, and these samples have been tried as a coolant in the converter. Pellets have also been made from sinter plant electrostatic precipitator micro-fines with a liquid binder, and trials have been conducted to reuse these pellets in sinter making. Coke breeze from coke oven fines and mill scale, along with binders, was agglomerated and used in the furnace after attaining the required screening and reactivity index. These measures will bring clear social, economic, and environmental benefits. Keywords: briquette, dolo char, electrostatic precipitator, pellet, sinter
Procedia PDF Downloads 389
14982 Nanomaterials-Assisted Drilling Fluids for Application in Oil Fields - Challenges and Prospects
Authors: Husam Mohammed Saleh Alziyadi
Abstract:
Drilling fluid has a significant impact on drilling efficiency. Drilling fluids perform several functions that make them essential to the drilling process, such as lubricating and cooling the drill bit, removing cuttings from the bottom of the hole, preventing formation damage, suspending drill cuttings, and sealing permeable formations so that the flow of fluid into the formation is delayed. In the oil and gas sector, unconventional shale reserves have been a central player in meeting world energy demands. Oil-based drilling fluids (OBM) are generally favored for drilling shale plays due to negligible chemical interactions. Nevertheless, strict environmental regulations have driven the industry to design water-based drilling fluids (WBM) capable of regulating shale-water interactions to boost their efficiency. However, traditional additives are too large to plug the micro-fractures and nanopores of the shale. Recently, nanotechnology has shown a lot of promise in the oil and gas industries, especially with drilling fluids based on nanoparticles. Nanotechnology has already made a huge contribution to technical developments in the energy sector, and in the drilling industry it can bring revolutionary changes. Nanotechnology provides nanomaterials with many attractive properties that can play an important role in improving the consistency of the mud cake, reducing friction, preventing differential pipe sticking, preserving the stability of the borehole, protecting reservoirs, and improving the recovery of oil and gas. The selection of suitable nanomaterials should be based on the characteristics of the shale formation to be drilled. The size, concentration, and stability of the nanoparticles are three further important considerations, since these materials are highly sensitive to environmental effects such as changes in ionic strength, temperature, or pH, all of which occur under downhole conditions. This review focuses on previous research and recent developments in environmentally friendly drilling fluids in light of regulatory and cost challenges. Keywords: nanotechnology, WBM, drilling fluid, nanofluids
Procedia PDF Downloads 124
14981 Methylphenidate and Placebo Effect on Brain Activity and Basketball Free Throw: A Randomized Controlled Trial
Authors: Mohammad Khazaei, Reza Rostami, Hasan Gharayagh Zandi, Rouhollah Basatnia, Mahbubeh Ghayour Najafabadi
Abstract:
Objective: Methylphenidate has been demonstrated to enhance attention and cognitive processes, and placebo treatments have also been found to improve attention and cognitive processes. Additionally, methylphenidate may have positive effects on motion perception and sports performance. Nevertheless, additional research is needed to fully comprehend the neural mechanisms underlying the effects of methylphenidate and placebo on cognitive and motor functions. Methods: In this randomized controlled trial, 18 young semi-professional basketball players aged 18-23 years were randomly and equally assigned to either a Ritalin or a Placebo group. The participants performed 20 consecutive free throws, and their scores were recorded on a 0-3 scale. The participants' brain activity was recorded using electroencephalography (EEG) for 5 minutes while seated with their eyes closed. The Ritalin group received a 10 mg dose of methylphenidate, while the Placebo group received a 10 mg placebo. The EEG was obtained 90 minutes after the drug was administered. Results: There was no significant difference in the absolute power of brain waves between the pre-test and post-test in the Placebo group. However, in the Ritalin group, a significant difference in the absolute power of brain waves was observed in the theta band (5-6 Hz) and beta band (21-30 Hz) between pre- and post-tests at Fp2, F8, and Fp1. In these areas, the absolute power of beta waves was higher during the post-test than during the pre-test. The Placebo group showed a more significant difference in free throw scores than the Ritalin group. Conclusions: In conclusion, these results suggest that Ritalin has an effect on brain activity in areas associated with attention and cognitive processes, as well as on basketball free throws. The placebo, by contrast, had no significant effect on brain activity but significantly improved free throw performance. Further research is needed to fully understand the effects of methylphenidate and placebo on cognitive and motor functions. Keywords: methylphenidate, placebo effect, electroencephalography, basketball free throw
Procedia PDF Downloads 79
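Absolute band power of the kind compared between the pre- and post-tests is commonly estimated by integrating a Welch power spectral density over the band of interest. The sketch below assumes a single EEG channel stored as a NumPy array and uses the theta (5-6 Hz) and beta (21-30 Hz) bands quoted in the abstract; the sampling rate and synthetic signal are illustrative, not the study's recordings:

import numpy as np
from scipy.signal import welch

def absolute_band_power(eeg: np.ndarray, fs: float, band: tuple) -> float:
    """Absolute power of one EEG channel in a frequency band, estimated by
    integrating the Welch power spectral density over that band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))  # 4-second windows
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

# Illustrative use with synthetic data (not the study's recordings):
fs = 256.0
signal = np.random.randn(int(5 * 60 * fs))           # 5 minutes of "eyes-closed" samples
theta = absolute_band_power(signal, fs, (5, 6))       # theta band as defined in the abstract
beta = absolute_band_power(signal, fs, (21, 30))      # beta band as defined in the abstract
print(theta, beta)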
14980 Developing NAND Flash-Memory SSD-Based File System Design
Authors: Jaechun No
Abstract:
This paper focuses on I/O optimizations of N-hybrid (New-Form of hybrid), which provides a hybrid file system space constructed on SSD and HDD. Although the promising potential of SSDs, such as the absence of mechanical moving parts and high random I/O throughput, has drawn a lot of attention from IT enterprises, their high cost-per-capacity ratio makes it less desirable to build a large-scale data storage subsystem composed only of SSDs. In this paper, we present N-hybrid, which attempts to integrate the strengths of SSD and HDD to offer a single, large hybrid file system space. Several experiments were conducted to verify the performance of N-hybrid. Keywords: SSD, data section, I/O optimizations, hybrid system
Procedia PDF Downloads 416
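The abstract does not describe N-hybrid's placement policy, so the following is only a toy illustration of the general SSD/HDD tiering idea it builds on: small, recently accessed files stay on the SSD tier, while large or cold files are demoted to the HDD tier. The mount points and thresholds are assumptions for the example, not values from the paper:

import os
import shutil
import time

SSD_DIR = "/mnt/ssd/hybrid"          # hypothetical SSD mount point
HDD_DIR = "/mnt/hdd/hybrid"          # hypothetical HDD mount point
HOT_SIZE_LIMIT = 4 * 1024 * 1024     # assumed threshold: files <= 4 MiB may stay on SSD
COLD_AGE_SECONDS = 7 * 24 * 3600     # assumed threshold: untouched for a week -> HDD

def target_tier(path: str) -> str:
    """Return the directory a file should live in under this toy tiering policy."""
    st = os.stat(path)
    age = time.time() - st.st_atime
    if st.st_size <= HOT_SIZE_LIMIT and age < COLD_AGE_SECONDS:
        return SSD_DIR
    return HDD_DIR

def migrate(path: str) -> str:
    """Move a file to its target tier and return its new location."""
    target = os.path.join(target_tier(path), os.path.basename(path))
    if os.path.abspath(path) != os.path.abspath(target):
        shutil.move(path, target)
    return target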
14979 Investigation of Delivery of Triple Play Service in GE-PON Fiber to the Home Network
Authors: Anurag Sharma, Dinesh Kumar, Rahul Malhotra, Manoj Kumar
Abstract:
Fiber-based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the new technologies that has emerged in recent years is the Passive Optical Network (PON). This paper is aimed at showing the simultaneous delivery of triple play services (data, voice, and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases due to the increase in bit error rate. Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT
Procedia PDF Downloads 732
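The reported trade-off between data rate and the number of accommodated users can be illustrated with textbook relations: BER as a function of the receiver Q factor, and the splitter ratio permitted by the optical power budget. The parameter values below are assumptions for illustration, not the paper's simulation settings:

import math

def ber_from_q(q: float) -> float:
    """Bit error rate for a given Q factor: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

def max_split_ratio(tx_power_dbm: float, rx_sensitivity_dbm: float, fiber_km: float,
                    fiber_loss_db_per_km: float = 0.35, split_stage_loss_db: float = 3.5,
                    margin_db: float = 3.0) -> int:
    """Largest 1:2^n split ratio that fits inside the optical power budget.
    Each 1:2 splitter stage is assumed to cost about 3.5 dB (ideal 3 dB plus excess loss)."""
    budget_db = tx_power_dbm - rx_sensitivity_dbm - fiber_km * fiber_loss_db_per_km - margin_db
    stages = max(int(budget_db // split_stage_loss_db), 0)
    return 2 ** stages

# Illustrative numbers (assumed): +3 dBm launch power, -24 dBm receiver sensitivity,
# 20 km of fiber. A higher data rate generally worsens the receiver sensitivity,
# shrinking the budget and therefore the number of users served per PON.
print(ber_from_q(6.0))                      # ~1e-9, a commonly used acceptance threshold
print(max_split_ratio(3.0, -24.0, 20.0))    # -> 16 users on one PON in this example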