Search results for: commercialization in business of industry of the Europe and Ukraine
458 Analysis of Unconditional Conservatism and Earnings Quality before and after the IFRS Adoption
Authors: Monica Santi, Evita Puspitasari
Abstract:
International Financial Reporting Standards (IFRS) are principle-based accounting standards. On this basis, the IASB eliminated the conservatism concept from the accounting framework. The conservatism concept represents a prudent reaction to uncertainty, intended to ensure that the uncertainties and risks inherent in business situations are adequately considered. It has two ingredients: conditional conservatism, or ex-post (news-dependent) prudence, and unconditional conservatism, or ex-ante (news-independent) prudence. IFRS in substance disregards unconditional conservatism because it can cause the understatement of assets or the overstatement of liabilities, rendering the financial statements irrelevant, since the information no longer represents the underlying facts. This is why the IASB eliminated the conservatism concept. The elimination, however, has not reduced the practice of unconditional conservatism in financial statement reporting. We therefore expected earnings quality to be affected by this situation, even though the IFRS implementation was expected to increase earnings quality. The objective of this study was to provide empirical findings on unconditional conservatism and earnings quality before and after IFRS adoption. The earnings-per-accrual measure was used as the proxy for unconditional conservatism: if earnings per accrual were negative (positive), the company was classified as conservative (not conservative). Earnings quality was defined as the ability of current earnings to reflect future earnings, considering earnings persistence and stability. We used the earnings response coefficient (ERC) as the proxy for earnings quality. The ERC measures the extent of a security's abnormal market return in response to the unexpected component of the reported earnings of the firm issuing that security; a higher ERC indicates higher earnings quality.
The manufacturing companies listed on the Indonesian Stock Exchange (IDX) were used as the sample, with 2009-2010 representing the period before IFRS adoption and 2011-2013 the period after. Data were analyzed using the Mann-Whitney test and regression analysis. We used firm size as a control variable, on the consideration that firm size affects a company's earnings quality. This study showed that unconditional conservatism did not change between the periods before and after IFRS adoption. We found different results for earnings quality, however: earnings quality decreased after the IFRS adoption period, implying that earnings quality before IFRS adoption was higher. This study also found that unconditional conservatism had a positive but insignificant influence on earnings quality. The findings imply that the implementation of IFRS has neither decreased the practice of unconditional conservatism nor altered the earnings quality of manufacturing companies, and that unconditional conservatism does not affect earnings quality: although the empirical results show a positive influence of unconditional conservatism on earnings quality, the influence is not significant. We therefore conclude that the implementation of IFRS did not increase earnings quality.
Keywords: earnings quality, earnings response coefficient, IFRS adoption, unconditional conservatism
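As an illustration (not the study's data), the two statistical steps named above, estimating an ERC by regressing abnormal returns on unexpected earnings and comparing two periods with a Mann-Whitney test, can be sketched in Python with synthetic numbers:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Earnings response coefficient (ERC): slope of cumulative abnormal
# returns regressed on unexpected earnings. A steeper slope indicates
# higher earnings quality under the study's definition.
ue = rng.normal(0.0, 0.05, size=200)           # unexpected earnings (synthetic)
car = 2.0 * ue + rng.normal(0.0, 0.01, 200)    # abnormal returns, true ERC = 2.0
X = np.column_stack([np.ones_like(ue), ue])
intercept, erc = np.linalg.lstsq(X, car, rcond=None)[0]

# Mann-Whitney U test: compare a metric (e.g., an accrual-based
# conservatism proxy) between pre- and post-adoption samples.
pre = rng.normal(0.00, 1.0, size=100)
post = rng.normal(0.05, 1.0, size=100)
stat, p_value = mannwhitneyu(pre, post, alternative="two-sided")

print(round(erc, 1), round(p_value, 3))
```

The regression recovers the ERC from the synthetic data; the U test returns the p-value that would drive the before/after comparison.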
Procedia PDF Downloads 261
457 Prevalence of Work-Related Musculoskeletal Disorder among Dental Personnel in Perak
Authors: Nursyafiq Ali Shibramulisi, Nor Farah Fauzi, Nur Azniza Zawin Anuar, Nurul Atikah Azmi, Janice Hew Pei Fang
Abstract:
Background: Work-related musculoskeletal disorders (WRMSD) among dental personnel have been underestimated and under-reported worldwide, and specifically in Malaysia. The problem arises and progresses slowly over time, as it results from injury accumulated throughout the working period. Several risk factors, such as repetitive movement, static posture, vibration, and poor working postures, have been identified as contributing to WRMSD in dental practice. Dental personnel are at higher risk of this problem because of the nature of their core work. WRMSD causes pain and dysfunction syndromes, resulting in absence from work and substandard service to patients. Methodology: A cross-sectional study involving 19 government dental clinics in Perak was conducted over a period of 3 months. Those who met the criteria were selected to participate in this study. The Malay version of the Self-Reported Nordic Musculoskeletal Discomfort Form was used to identify the prevalence of WRMSD, while the intensity of pain in each region was evaluated using a 10-point scale according to 'Pain as The 5ᵗʰ Vital Sign' by MOH Malaysia; data were analyzed using SPSS version 25. Descriptive statistics, including mean with SD and median with IQR, were used for numerical data. Categorical data were described by percentages. Pearson's chi-square test and Spearman's correlation were used to find associations between the prevalence of WRMSD and socio-demographic data. Results: 159 dentists, 73 dental therapists, 26 dental lab technicians, 81 dental surgery assistants, and 23 dental attendants participated in this study. The mean age of the participants was 34.9±7.4 years, and their mean years of service was 9.97±7.5. Most were female (78.5%), Malay (71.3%), married (69.6%), and right-handed (90.1%).
The highest prevalence of WRMSD was at the neck (58.0%), followed by the shoulder (48.1%), upper back (42.0%), lower back (40.6%), hand/wrist (31.5%), feet (21.3%), knee (12.2%), thigh (7.7%), and lastly elbow (6.9%). Most of those who reported neck pain rated their pain at 2 out of 10 (19.5%), while most of those who suffered upper back discomfort rated their pain at 6 out of 10 (17.8%). There was a significant relationship between age and pain in the neck (p=0.007), elbow (p=0.027), lower back (p=0.032), thigh (p=0.039), knee (p=0.001), and feet (p<0.001) regions. Job position was also found to have a significant relationship with pain experienced in the lower back (p=0.018), thigh (p=0.011), and knee and feet (p<0.001). Conclusion: The prevalence of WRMSD among dental personnel in Perak was found to be high. Age and job position had significant relationships with pain experienced in several regions. Intervention programs should be planned and conducted to prevent and reduce the occurrence of WRMSD, and harmful or unergonomic practices should be avoided.
Keywords: WRMSD, ergonomic, dentistry, dental
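The reported associations rest on Pearson's chi-square test; a minimal sketch with a hypothetical 2x2 contingency table (illustrative counts, not the study's data) looks like this:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: rows are two job positions, columns are
# participants with / without lower-back pain.
table = [[60, 40],   # e.g., dental surgery assistants: pain / no pain
         [30, 70]]   # e.g., dental lab technicians: pain / no pain

chi2, p, dof, expected = chi2_contingency(table)
print(dof, p < 0.05)   # 1 True
```

With these (made-up) counts the association is significant at the 5% level; the study applies the same test to its observed prevalence tables.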
Procedia PDF Downloads 88
456 Analysis of Interparticle Interactions in High Waxy-Heavy Clay Fine Sands for Sand Control Optimization
Authors: Gerald Gwamba
Abstract:
Sand production from formations in oil wells is one of the greatest and oldest concerns of the oil and gas industry. The quantity of produced sand may range from very small, limited amounts to far more elevated levels that can plug the pore spaces near the perforations or block production at the surface facilities. Timely and reliable investigation of the conditions leading to the onset of sanding, and quantification of sanding during production, is therefore imperative. The challenges of sand production are even greater when producing from waxy and heavy wells with clay fine sands (WHFC). Existing research argues that waxy and heavy hydrocarbons exhibit quite different characteristics, waxy crudes being more paraffinic and heavy crude oils more asphaltenic. Moreover, the combined WHFC conditions present more complexity in production than the individual effects taken separately, since the factors act together as a combined opposing force. Research on a combined high-WHFC system could therefore better represent field conditions, where a one-sided view of individual effects on sanding has been argued to be somewhat misrepresentative, since in the field all factors act in combination. Recognizing the limited customized research on sand production under the combined WHFC effect, our research applies the Design of Experiments (DOE) methodology, based on the latest literature, to analyze the relationship between various interparticle factors in relation to selected sand control methods. Our research aims to develop a better understanding of how the combined effect of interparticle factors, including strength, cementation, particle size, and production rate, among others, can assist in the design of an optimal sand control system for WHFC well conditions.
In this regard, we seek to answer the following research question: How does the combined effect of interparticle factors affect the optimization of sand control systems for WHFC wells? Results from the experimental data will inform a better-justified sand control design for WHFC. In doing so, we hope to contribute to the debate around earlier findings suggesting that sand production could enable self-enhancement of well permeability through new flow channels created by the loosening and detachment of sand grains. We hope that our research will contribute to future sand control designs capable of adapting to flexible production adjustments in controlled sand management. This paper presents results that are part of ongoing research toward the author's PhD project on the optimization of sand control systems for WHFC wells.
Keywords: waxy-heavy oils, clay-fine sands, sand control optimization, interparticle factors, design of experiments
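The DOE starting point can be sketched by enumerating a two-level full-factorial design over the four interparticle factors named above; the factor levels are placeholders, not measured values from this research:

```python
from itertools import product

# Two illustrative levels per interparticle factor.
factors = {
    "strength": ["low", "high"],
    "cementation": ["weak", "strong"],
    "particle_size": ["fine", "coarse"],
    "production_rate": ["low", "high"],
}

# Full factorial: every combination of levels is one experimental run.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(runs))   # 2^4 = 16 experimental runs
```

A fractional design would reduce the run count when interactions of higher order can be neglected; the full factorial above is the simplest baseline.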
Procedia PDF Downloads 133
455 The Superior Performance of Investment Bank-Affiliated Mutual Funds
Authors: Michelo Obrey
Abstract:
Traditionally, mutual funds in the U.S. have long been regarded as stand-alone entities. However, the prevalence of fund families' affiliation with financial conglomerates is eroding this striking feature. Mutual fund families' affiliation with financial conglomerates can be an important source of either superior performance or cost to the affiliated funds' investors. On the one hand, financial conglomerate affiliation offers mutual funds access to abundant resources, better research quality, private material information, and business connections within the financial group. On the other hand, conflicts of interest are bound to arise between the financial conglomerate relationship and fund management. Using a sample of U.S. domestic equity mutual funds from 1994 to 2017, this paper examines whether fund family affiliation with an investment bank helps the affiliated mutual funds deliver superior performance through the private material information possessed by the investment bank, or whether it costs affiliated mutual fund shareholders through conflicts of interest. Robust to alternative risk adjustments and cross-sectional regression methodologies, this paper finds that investment bank-affiliated mutual funds significantly outperform mutual funds that are not affiliated with an investment bank. Interestingly, the outperformance is confined to the holding return, a return measure that captures investment talent uninfluenced by transaction costs, fees, and other expenses. Further analysis shows that investment bank-affiliated mutual funds specialize in hard-to-value stocks, which unaffiliated funds are not more likely to hold. Consistent with the information advantage hypothesis, the paper finds that affiliated funds holding covered stocks outperform affiliated funds without covered stocks, lending no support to the hypothesis that affiliated mutual funds simply attract superior stock-picking talent.
Overall, the findings are consistent with the idea that investment banks maximize fee income by monopolistically exploiting their private information, strategically transferring performance to their affiliated mutual funds. This paper contributes to the extant literature on the agency problem in mutual fund families. It adds to this stream of research by showing that the agency problem is prevalent not only in fund families but also in financial organizations, such as investment banks, that have affiliated mutual fund families. The results show evidence of the exploitation of synergies, such as the sharing of private material information, that benefit mutual fund investors affiliated with a financial conglomerate. This research also has a normative dimension: such incestuous behavior, insider trading and the exploitation of superior information, not only negatively affects unaffiliated fund investors but also leads to an unfair and unlevel playing field in the financial market.
Keywords: mutual fund performance, conflicts of interest, informational advantage, investment bank
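Risk-adjusted outperformance of the kind discussed above is typically estimated by regressing fund excess returns on factor returns. The following one-factor (CAPM-style) sketch uses synthetic data, not the paper's sample or its multi-factor risk adjustments:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly excess returns over 20 years: the "fund" is built
# with a true alpha of 0.002 (20 bps/month) and a market beta of 1.1.
market = rng.normal(0.005, 0.04, size=240)
fund = 0.002 + 1.1 * market + rng.normal(0.0, 0.005, 240)

# OLS of fund excess returns on a constant and the market factor:
# the intercept is the risk-adjusted performance (alpha).
X = np.column_stack([np.ones_like(market), market])
alpha, beta = np.linalg.lstsq(X, fund, rcond=None)[0]
print(round(beta, 1))   # close to the true beta of 1.1
```

Comparing the estimated alphas of affiliated versus unaffiliated funds, under a richer factor model, is the essence of the paper's performance test.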
Procedia PDF Downloads 191
454 A Qualitative Study Identifying the Complexities of Early Childhood Professionals' Use and Production of Data
Authors: Sara Bonetti
Abstract:
The use of quantitative data to support policies and justify investments has become imperative in many fields, including education. However, the topic of data literacy has only marginally touched the early care and education (ECE) field. Within California's ECE workforce, there is a group of professionals working in policy and advocacy who use quantitative data regularly and whose educational and professional experiences have been neglected by existing research. This study aimed to analyze these experiences of accessing, using, and producing quantitative data. It utilized semi-structured interviews to capture differences in educational and professional backgrounds, policy contexts, and power relations. The participants were three key professionals from county-level organizations and one working at a State Department, to allow for a broader systems-level perspective. The study followed Núñez's multilevel model of intersectionality. The key to Núñez's model is the intersection of multiple levels of analysis and influence, from the individual to the system level, and the identification of institutional power dynamics that perpetuate the marginalization of certain groups within society. In a similar manner, this study looked at the dynamic interaction of influences at the individual, organizational, and system levels that might intersect and affect ECE professionals' experiences with quantitative data. At the individual level, an important element identified was the participants' educational background, as it was possible to observe a relationship between that background and their positionality, both with respect to working with data and with respect to their power within an organization and at the policy table.
For example, those with a background in child development were aware of how their formal education had failed to train them in the skills necessary to work in policy and advocacy, and especially to work with quantitative data, compared to those with a background in administration and/or business. At the organizational level, the interviews showed a connection between the participants' position within their organization, their organization's position with respect to others, and their degree of access to quantitative data. This in turn affected their sense of empowerment and agency in dealing with data, such as shaping what data are collected and made available. These differences were reflected in the interviewees' perceptions of and expectations for the ECE workforce. For example, one interviewee pointed out that many ECE professionals happen to use data out of the necessity of the moment; this lack of intentionality is a cause of, and at the same time translates into, missed training opportunities. Another interviewee pointed to issues related to the professionalism of the ECE workforce by remarking on the inadequacy of ECE students' training in working with data. In conclusion, Núñez's model helped in understanding the different elements that affect ECE professionals' experiences with quantitative data. In particular, it became clear that these professionals are not being provided with the necessary support and that the field is not intentional in developing their data literacy skills, despite what is asked of them in their work.
Keywords: data literacy, early childhood professionals, intersectionality, quantitative data
Procedia PDF Downloads 254
453 Assumption of Cognitive Goals in Science Learning
Authors: Mihail Calalb
Abstract:
The aim of this research is to identify ways of achieving sustainable conceptual understanding in science lessons. For this purpose, a set of teaching and learning strategies, part of the theory of visible teaching and learning (VTL), is studied. As a result, a new didactic approach named "learning by being" is proposed, and its correlation with educational paradigms currently existing in the science teaching domain is analysed. In the context of VTL, the author describes the main strategies of learning by being, such as guided self-scaffolding, structuring of information, and the recurrent use of previous knowledge or help seeking. Due to the synergy of these learning strategies applied simultaneously in class, the impact factor of learning by being on students' cognitive achievement is up to 93% (the benchmark level is 40%, obtained when an experienced teacher applies the same conventional strategy consistently over two academic years). The key idea in learning by being is the student's assumption of cognitive goals. From this perspective, the article discusses the role of the student's personal learning effort within several teaching strategies employed in VTL. The research results emphasize that three mandatory student-related moments are present in every constructivist teaching approach: a) students' personal learning effort, b) mutual student-teacher feedback, and c) metacognition. Thus, a successful educational strategy will aim to involve students in the class process as deeply as possible, so that they not only know the learning objectives but also assume them. In this way, we arrive at the ownership of cognitive goals, or students' deep intrinsic motivation.
A series of approaches are inherent to students' ownership of cognitive goals: independent research (with an impact factor on cognitive achievement of 83%, according to the results of VTL); knowledge of success criteria (impact factor 113%); and the ability to reveal similarities and patterns (impact factor 132%). Although it is generally accepted that the school is a public service, it does not belong to the entertainment industry, and in most cases education declared to be student-centered actually hides the central role of the teacher. Even given the proliferation of constructivist concepts, mainly at the level of science education research, we must underline that conventional, or frontal, teaching will never disappear. Research results show that no modern method can replace an experienced teacher with strong pedagogical content knowledge. Such a teacher will inspire and motivate his or her students to love and learn physics. The teacher is precisely the condensation point for an efficient didactic strategy, be it constructivist or conventional. In this sense, we could speak of "hybridized teaching", where both the student and the teacher have their share of responsibility. In conclusion, the core of the learning-by-being approach is guided learning effort, which corresponds to the notion of a teacher-student harmonic oscillator: both things, guidance from the teacher and the student's effort, are equally important.
Keywords: conceptual understanding, learning by being, ownership of cognitive goals, science learning
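Impact factors of the kind quoted above derive from standardized effect sizes comparing a treatment group with a control group. A minimal sketch of the underlying computation (Cohen's d with a pooled standard deviation) on purely illustrative class scores, not VTL data:

```python
import math

# Illustrative test scores for a treatment class and a control class.
treatment = [72, 75, 78, 80, 74, 77, 79, 76]
control = [65, 68, 70, 66, 69, 67, 71, 68]

def cohens_d(a, b):
    """Effect size: mean difference divided by the pooled standard deviation."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    pooled = math.sqrt(((len(a) - 1) * va + (len(b) - 1) * vb)
                       / (len(a) + len(b) - 2))
    return (ma - mb) / pooled

print(round(cohens_d(treatment, control), 2))
```

A positive d means the treatment class outperformed the control class, in units of the pooled spread of the scores.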
Procedia PDF Downloads 170
452 Investigating the Algorithm to Maintain a Constant Speed in the Wankel Engine
Authors: Adam Majczak, Michał Bialy, Zbigniew Czyż, Zdzislaw Kaminski
Abstract:
Increasingly stringent emission standards for passenger cars require alternative drives to be found. The share of electric vehicles in new car sales increases every year. However, their performance and, above all, their range cannot yet be successfully compared to those of cars with a traditional internal combustion engine. Battery recharging lasts hours, which is hard to accept given the few minutes needed to refill a fuel tank. Therefore, ways are being sought to reduce the adverse features of cars equipped only with electric motors. One method is the combination of an electric motor as the main source of power with a small internal combustion engine acting as an electricity generator. This type of drive enables an electric vehicle to achieve a radically increased range with low emissions of toxic substances. For several years, leading automotive manufacturers such as Mazda and Audi, together with top automotive engineering companies such as AVL, have developed electric drive systems capable of recharging themselves while driving, known as range extenders. In these, the electricity generator is powered by a Wankel engine, which had seemed to pass into history. This small, lightweight engine with a rotating piston and a very low vibration level has turned out to be an excellent source in such applications. Its operation as an energy source for a generator almost entirely eliminates its disadvantages, such as high fuel consumption, high emission of toxic substances, and the short lifetime typical of its traditional application. Operating the engine at a constant rotational speed significantly increases its lifetime, and its small external dimensions allow compact modules to drive even small urban cars like the Audi A1 or the Mazda 2. The algorithm to maintain a constant speed was investigated on an engine dynamometer with an eddy current brake and the necessary measuring apparatus.
The research object was the Aixro XR50 rotary engine with an electronic power supply system developed at the Lublin University of Technology. The load torque of the engine was altered during the research by means of the eddy current brake, capable of applying any number of load cycles. The parameters recorded included speed and torque, as well as the position of the throttle in the inlet system. Increasing and decreasing the load did not significantly change the engine speed, which means that the control algorithm parameters were correctly selected. This work has been financed by the Polish Ministry of Science and Higher Education.
Keywords: electric vehicle, power generator, range extender, Wankel engine
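A constant-speed loop of the kind investigated can be sketched as a simple integral controller acting on a toy quasi-static engine model. All gains, plant constants, and load values below are illustrative, not parameters of the Aixro XR50 test stand:

```python
# Integral control of speed under a step load change (eddy-current brake).
def run(setpoint=3000.0, steps=4000, dt=0.01, ki=0.00005):
    throttle = 0.0   # throttle position, 0..1
    speed = 0.0      # rpm
    for i in range(steps):
        load = 10.0 if i < steps // 2 else 18.0   # brake applies a load step
        error = setpoint - speed
        # integral action only: throttle ramps until the error vanishes
        throttle = min(1.0, max(0.0, throttle + ki * error * dt))
        # quasi-static toy plant: speed settles instantly at each step
        speed = max(0.0, 4000.0 * throttle - 20.0 * load)
    return speed

final = run()
print(round(final))
```

Despite the load step halfway through, the integrator restores the speed to the setpoint, which is the behaviour observed on the test stand when the algorithm parameters are correctly selected.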
Procedia PDF Downloads 157
451 Advancing Women's Participation in SIDS' Renewable Energy Sector: A Multicriteria Evaluation Framework
Authors: Carolina Mayen Huerta, Clara Ivanescu, Paloma Marcos
Abstract:
Due to their unique geographic challenges and the imperative to combat climate change, Small Island Developing States (SIDS) are experiencing rapid growth in the renewable energy (RE) sector. However, women's representation in formal employment within this burgeoning field remains significantly lower than their male counterparts. Conventional methodologies often overlook critical geographic data that influence women's job prospects. To address this gap, this paper introduces a Multicriteria Evaluation (MCE) framework designed to identify spatially enabling environments and restrictions affecting women's access to formal employment and business opportunities in the SIDS' RE sector. The proposed MCE framework comprises 24 key factors categorized into four dimensions: Individual, Contextual, Accessibility, and Place Characterization. "Individual factors" encompass personal attributes influencing women's career development, including caregiving responsibilities, exposure to domestic violence, and disparities in education. "Contextual factors" pertain to the legal and policy environment, influencing workplace gender discrimination, financial autonomy, and overall gender empowerment. "Accessibility factors" evaluate women's day-to-day mobility, considering travel patterns, access to public transport, educational facilities, RE job opportunities, healthcare facilities, and financial services. Finally, "Place Characterization factors" enclose attributes of geographical locations or environments. This dimension includes walkability, public transport availability, safety, electricity access, digital inclusion, fragility, conflict, violence, water and sanitation, and climatic factors in specific regions. The analytical framework proposed in this paper incorporates a spatial methodology to visualize regions within countries where conducive environments for women to access RE jobs exist. 
In areas where these environments are absent, the methodology serves as a decision-making tool to reinforce critical factors, such as transportation, education, and internet access, which currently hinder access to employment opportunities. This approach is designed to equip policymakers and institutions with data-driven insights, enabling them to make evidence-based decisions that consider the geographic dimensions of disparity. These insights, in turn, can help ensure the efficient allocation of resources to achieve gender equity objectives.
Keywords: gender, women, spatial analysis, renewable energy, access
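The MCE combination step can be sketched as a weighted overlay of normalized criterion layers. The three layers, the weights, and the threshold below are illustrative stand-ins for the framework's 24 factors and their expert-derived weights:

```python
import numpy as np

# Toy normalized criterion layers on a 2x2 grid (0 = worst, 1 = best).
accessibility = np.array([[0.2, 0.8], [0.5, 0.9]])
safety        = np.array([[0.6, 0.7], [0.3, 1.0]])
connectivity  = np.array([[0.4, 0.9], [0.2, 0.8]])

# Illustrative weights summing to 1.
weights = {"accessibility": 0.5, "safety": 0.3, "connectivity": 0.2}
score = (weights["accessibility"] * accessibility
         + weights["safety"] * safety
         + weights["connectivity"] * connectivity)

# Cells above a threshold flag spatially "enabling environments".
enabling = score >= 0.7
print(score.round(2))
```

In a real application each layer would be a georeferenced raster, and the resulting score surface would be the map of enabling and restricting regions described above.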
Procedia PDF Downloads 70
449 Fuel Cells Not Only for Cars: Technological Development in Railways
Authors: Marita Pigłowska, Beata Kurc, Paweł Daszkiewicz
Abstract:
Railway vehicles are divided into two groups: traction (powered) vehicles and wagons. Traction vehicles include locomotives (line and shunting), railcars (sometimes referred to as railbuses), and multiple units (electric and diesel) consisting of several or a dozen carriages. In vehicles with diesel traction, fuel energy (petrol, diesel, or compressed gas) is converted into mechanical energy either directly in the internal combustion engine or via electricity. In the latter case, a combustion engine generator produces electricity that is then used to drive the vehicle (diesel-electric drive or electric transmission). In Poland, this solution dominates in both heavy line and shunting locomotives. The classic diesel drive is used in the lightest shunting locomotives, railcars, and passenger diesel multiple units. Vehicles with electric traction do not have their own source of energy; they use pantographs to obtain electricity from the traction network. To determine the competitiveness of a hydrogen propulsion system, it is essential to understand how it works. The basic elements of a railway vehicle drive system that uses hydrogen as the source of traction force are fuel cells, batteries, fuel tanks, traction motors, and the main and auxiliary converters. The compressed hydrogen is stored in tanks usually located on the roof of the vehicle and is replenished using specialized infrastructure while the vehicle is stationary. Hydrogen is supplied to the fuel cell, where it is oxidized. The products of this chemical reaction are electricity and water (in two forms, liquid and water vapor). Electricity is stored in batteries (so far, lithium-ion batteries are used) and is then used to drive the traction motors and supply onboard equipment.
The current generated by the fuel cell passes through the main converter, whose task is to adjust it to the values required by the consumers, i.e., the batteries and the traction motor. This work will attempt to construct a fuel cell with unique electrodes, in a line of research that connects industry with science. The first goal is to obtain hydrogen on a large scale in tube furnaces, to thoroughly analyze the obtained structures (IR), and to apply the method in fuel cells. The second goal is to create a low-energy storage and distribution station for hydrogen and electric vehicles. The scope of the research includes obtaining a carbon variety and oxide systems on a large scale using a tubular furnace and then supplying vehicles. Acknowledgments: This work is supported by the Polish Ministry of Science and Education, project "The best of the best! 4.0", number 0911/MNSW/4968 (M.P.), and grant 0911/SBAD/2102 (B.K.).
Keywords: railway, hydrogen, fuel cells, hybrid vehicles
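The hydrogen oxidation described above fixes the theoretical cell voltage; a short worked computation from standard thermodynamic constants:

```python
# Reversible cell voltage of the hydrogen-oxygen reaction
#   H2 + 1/2 O2 -> H2O(l),  E = -dG / (n * F)
delta_g = -237.1e3   # J/mol, standard Gibbs free energy of formation of liquid water
n = 2                # electrons transferred per H2 molecule
faraday = 96485.0    # C/mol, Faraday constant

voltage = -delta_g / (n * faraday)
print(round(voltage, 2))   # about 1.23 V per cell
```

Real cells operate well below this reversible value because of activation, ohmic, and concentration losses, which is why stacks of many cells in series are used on board.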
Procedia PDF Downloads 189
448 Characterization of Thin Woven Composites Used in Printed Circuit Boards by Combining Numerical and Experimental Approaches
Authors: Gautier Girard, Marion Martiny, Sebastien Mercier, Mohamad Jrad, Mohamed-Slim Bahi, Laurent Bodin, Francois Lechleiter, David Nevo, Sophie Dareys
Abstract:
Reliability of electronic devices has always been of the highest interest for Aero-MIL and space applications. In any electronic device, the Printed Circuit Board (PCB), providing interconnection between components, is key to reliability. During the last decades, PCB technologies evolved to sustain and/or fulfill increased original equipment manufacturer (OEM) requirements and specifications: higher densities and better performances, faster time to market and longer lifetime, newer materials and mixed buildups. From the very beginning of the PCB industry up to recently, qualification, experiments, and trial and error were the most popular methods to assess system (PCB) reliability. Nowadays OEMs, PCB manufacturers, and scientists are working together in a close relationship in order to develop predictive models for PCB reliability and lifetime. To achieve that goal, it is fundamental to characterize precisely the base materials (laminates, electrolytic copper, …) in order to understand failure mechanisms and simulate PCB aging under environmental constraints, for example by means of the finite element method. The laminates are woven composites and thus have an orthotropic behaviour. The in-plane properties can be measured by combining classical uniaxial testing and digital image correlation. Nevertheless, the out-of-plane properties cannot be evaluated directly due to the thickness of the laminate (a few hundred microns). It has to be noted that knowledge of the out-of-plane properties is fundamental to investigating the lifetime of high-density printed circuit boards. A homogenization method combining analytical and numerical approaches has been developed in order to obtain the complete elastic orthotropic behaviour of a woven composite from its precise 3D internal structure and its experimentally measured in-plane elastic properties. Since the mechanical properties of the resin surrounding the fibres are unknown, an inverse method is proposed to estimate them.
The methodology has been applied to a laminate used in hyperfrequency space applications in order to get its elastic orthotropic behaviour at different temperatures in the range [-55°C; +125°C]. Next, numerical simulations of a plated through hole in a double-sided PCB are performed. Results show the major influence of the out-of-plane properties, and of their temperature dependence, on the lifetime of a printed circuit board. Acknowledgements: The support of the French ANR agency through the Labcom program ANR-14-LAB7-0003-01, and the support of CNES, Thales Alenia Space, and Cimulec, is acknowledged.
Keywords: homogenization, orthotropic behaviour, printed circuit board, woven composites
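The structure of the inverse step described above (find the unknown resin modulus that makes the homogenized model reproduce the measured in-plane stiffness) can be sketched in miniature. The real study homogenizes the full 3D weave geometry; here a deliberately simplified Voigt rule-of-mixtures surrogate stands in for that forward model, and all moduli are invented illustration values.

```python
# Sketch of the inverse identification: given a measured in-plane modulus,
# find the unknown resin modulus. A rule-of-mixtures surrogate stands in
# for the full 3D homogenization used in the study; values are invented.

def forward_model(e_resin_gpa, e_fibre_gpa=73.0, v_fibre=0.45):
    """Toy homogenization: Voigt rule of mixtures for the in-plane modulus."""
    return v_fibre * e_fibre_gpa + (1.0 - v_fibre) * e_resin_gpa

def identify_resin_modulus(e_measured_gpa, lo=0.1, hi=20.0, tol=1e-6):
    """Bisection on the resin modulus; valid because the forward model is
    monotonically increasing in the resin stiffness."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if forward_model(mid) < e_measured_gpa:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    e_meas = forward_model(3.5)  # synthetic "measurement" of the laminate
    print(round(identify_resin_modulus(e_meas), 3))  # recovers ~3.5 GPa
```

Swapping the surrogate for a finite-element homogenization of the weave, as in the paper, keeps the same outer structure: only the forward model changes.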
Procedia PDF Downloads 205
447 Investigating the Flow Physics within Vortex-Shockwave Interactions
Authors: Frederick Ferguson, Dehua Feng, Yang Gao
Abstract:
No doubt, current CFD tools have many technical limitations, and active research is being done to overcome them. Current areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions to the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One approach commonly implemented is known as 'direct numerical simulation' (DNS). This approach requires a spatial grid fine enough to capture the smallest length scale of the turbulent fluid motion, known as the 'Kolmogorov scale'. It is of interest to note that the Kolmogorov scale must be resolved throughout the domain of interest and at a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid becomes prohibitively large. As a result, the available computational resources are usually inadequate for DNS-related tasks. At this time in its development, DNS is not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for steady and unsteady, viscous and inviscid, compressible and incompressible, and both high and low Reynolds number flow fields. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems with the goal of investigating the nonstationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error capability will be described.
Further, the IDS will be used to solve the inviscid and viscous Burgers equations, with the goal of analyzing their solutions over a considerable length of time, thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems involving strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave-vortex interaction problem for low supersonic conditions, and the reflected oblique shock-vortex interaction problem. The IDS solutions obtained in each of these cases will be explored further in efforts to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effects of the Mach number on the intensity of vortex-shockwave interactions.
Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme
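The viscous Burgers equation named above as a validation case, u_t + u u_x = ν u_xx, is the standard entry point for unsteady scheme testing. The IDS itself is the authors' scheme and is not reproduced here; as a stand-in, the following is a plain explicit finite-difference solver for the same benchmark, with an arbitrary square-pulse initial condition and illustrative grid and viscosity parameters.

```python
# Explicit finite-difference solution of the viscous Burgers equation
#   u_t + u * u_x = nu * u_xx
# A generic benchmark solver (upwind convection, central diffusion),
# not the authors' Integro-Differential Scheme.

def solve_burgers(nx=101, nt=500, nu=0.07, x_max=2.0, dt=1e-4):
    dx = x_max / (nx - 1)
    # square pulse: u = 1 on [0.5, 1.0], zero elsewhere
    u = [1.0 if 0.5 <= i * dx <= 1.0 else 0.0 for i in range(nx)]
    for _ in range(nt):
        un = u[:]
        for i in range(1, nx - 1):
            conv = un[i] * (un[i] - un[i - 1]) / dx           # upwind u*u_x
            diff = nu * (un[i + 1] - 2 * un[i] + un[i - 1]) / dx ** 2
            u[i] = un[i] + dt * (diff - conv)
        u[0], u[-1] = 0.0, 0.0                                # Dirichlet ends
    return u
```

With these parameters the diffusion number ν·dt/dx² ≈ 0.018 and the convective CFL number ≈ 0.005, so the explicit update is stable; the pulse advects to the right while the viscosity smears its edges, exactly the long-time behaviour the abstract proposes to track with the IDS.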
Procedia PDF Downloads 139
446 Inhibition of the Activity of Polyphenol Oxidase Enzyme Present in Annona muricata and Musa acuminata by the Experimentally Identified Natural Anti-Browning Agents
Authors: Michelle Belinda S. Weerawardana, Gobika Thiripuranathar, Priyani A. Paranagama
Abstract:
Most fresh vegetables and fruits available in retail markets undergo a physiological disorder in appearance and coloration, which discourages consumer purchase. This pronounced color reaction, called enzymatic browning, costs the food industry millions of dollars yearly. It is driven by the catalytic activity of an oxidoreductase enzyme, polyphenol oxidase (PPO). The enzyme oxidizes the phenolic compounds that are abundant in fruits and vegetables into quinones, which can react with proteins in their surroundings to generate black pigments, called melanins, which are highly UV-active compounds. Annona muricata (Katu anoda) and Musa acuminata (Ash plantains) are a fruit and a vegetable widely consumed by Sri Lankans due to their high nutritional value, medicinal properties, and economic importance. The objective of the present study was to evaluate and determine effective natural anti-browning inhibitors that could prevent PPO activity in the selected fruit and vegetable. Enzyme extracts from Annona muricata and Musa acuminata were prepared by homogenizing with analytical-grade acetone, and the pH of each enzyme extract was maintained at 7.0 using a phosphate buffer. The inhibitor extracts were prepared from powdered ginger rhizomes and essential oil from the bark of Cinnamomum zeylanicum. Water extracts of ginger were prepared, and the essential oil from Ceylon cinnamon bark was extracted using the steam distillation method. Since the essential oil is not soluble in water, 0.1 µl of cinnamon bark oil was mixed with 0.1 µl of Triton X-100 emulsifier and 5.00 ml of water. The effect of each inhibitor on the PPO activity was investigated using catechol (0.1 mol dm-3) as the substrate and the two enzyme extracts prepared.
The dosages of the prepared cinnamon bark oil and ginger (two samples) used to measure the activity were 0.0035 g/ml, 0.091 g/ml, and 0.087 g/ml, respectively. The measurements of the inhibitory activity were obtained at a wavelength of 525 nm using a UV-visible spectrophotometer. The results revealed that the % inhibition observed with cinnamon bark oil and ginger for Annona muricata was 51.97% and 60.90%, respectively. The effects of cinnamon bark oil and ginger extract on the PPO activity of Musa acuminata were 49.51% and 48.10%. The experimental findings thus revealed that Cinnamomum zeylanicum bark oil was the more effective inhibitor for the PPO enzyme present in Musa acuminata, and ginger was more effective for the PPO enzyme present in Annona muricata. Overall, both inhibitors proved effective against the PPO activity in both samples. These inhibitors can thus be corroborated as effective, natural, non-toxic, anti-browning extracts which, when added to the above fruit and vegetable, will increase shelf life and consumer acceptance of the product.
Keywords: anti-browning agent, enzymatic browning, inhibitory activity, polyphenol oxidase
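Percent-inhibition figures like those reported above are conventionally computed from enzyme activity (here, the rate of absorbance increase at 525 nm) measured with and without the inhibitor. A minimal sketch follows; the absorbance-rate values in it are made up for illustration and are not the study's data.

```python
# Percent inhibition of PPO from spectrophotometric activity readings.
# The absorbance-rate values below are invented for illustration.

def percent_inhibition(activity_control, activity_with_inhibitor):
    """% inhibition = (A_control - A_inhibited) / A_control * 100."""
    return (activity_control - activity_with_inhibitor) / activity_control * 100.0

if __name__ == "__main__":
    a_control = 0.412    # rate of A525 increase, no inhibitor (hypothetical)
    a_cinnamon = 0.198   # rate with cinnamon bark oil added (hypothetical)
    print(f"{percent_inhibition(a_control, a_cinnamon):.1f}% inhibition")
```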
Procedia PDF Downloads 276
445 Leadership and Entrepreneurship in Higher Education: Fostering Innovation and Sustainability
Authors: Naziema Begum Jappie
Abstract:
Leadership and entrepreneurship in higher education have become critical components in navigating the evolving landscape of academia in the 21st century. This abstract explores the multifaceted relationship between leadership and entrepreneurship within the realm of higher education, emphasizing their roles in fostering innovation and sustainability. Higher education institutions, often characterized as slow-moving and resistant to change, are facing unprecedented challenges. Globalization, rapid technological advancements, changing student demographics, and financial constraints necessitate a reimagining of traditional models. Leadership in higher education must embrace entrepreneurial thinking to effectively address these challenges. Entrepreneurship in higher education involves cultivating a culture of innovation, risk-taking, and adaptability. Visionary leaders who promote entrepreneurship within their institutions empower faculty and staff to think creatively, seek new opportunities, and engage with external partners. These entrepreneurial efforts lead to the development of novel programs, research initiatives, and sustainable revenue streams. Innovation in curriculum and pedagogy is a central aspect of leadership and entrepreneurship in higher education. Forward-thinking leaders encourage faculty to experiment with teaching methods and technology, fostering a dynamic learning environment that prepares students for an ever-changing job market. Entrepreneurial leadership also facilitates the creation of interdisciplinary programs that address emerging fields and societal challenges. Collaboration is key to entrepreneurship in higher education. Leaders must establish partnerships with industry, government, and non-profit organizations to enhance research opportunities, secure funding, and provide real-world experiences for students. 
Entrepreneurial leaders leverage their institutions' resources to build networks that extend beyond campus boundaries, strengthening their positions in the global knowledge economy. Financial sustainability is a pressing concern for higher education institutions. Entrepreneurial leadership involves diversifying revenue streams through innovative fundraising campaigns, partnerships, and alternative educational models. Leaders who embrace entrepreneurship are better equipped to navigate budget constraints and ensure the long-term viability of their institutions. In conclusion, leadership and entrepreneurship are intertwined elements essential to the continued relevance and success of higher education institutions. Visionary leaders who champion entrepreneurship foster innovation, enhance the student experience, and secure the financial future of their institutions. As academia continues to evolve, leadership and entrepreneurship will remain indispensable tools in shaping the future of higher education. This abstract underscores the importance of these concepts and their potential to drive positive change within the higher education landscape.
Keywords: entrepreneurship, higher education, innovation, leadership
Procedia PDF Downloads 71
444 Mega Sporting Events and Branding: Marketing Implications for the Host Country’s Image
Authors: Scott Wysong
Abstract:
Qatar will spend billions of dollars to host the 2022 World Cup. While football fans around the globe get excited to cheer on their favorite team every four years, critics debate the merits of a country hosting such an expensive and large-scale event. That is, host countries spend billions of dollars on stadiums and infrastructure to attract these mega sporting events in the hope of equitable returns in economic impact and job creation. Yet, in many cases, the host countries are left in debt with decaying venues. There are benefits beyond the economic impact of hosting mega-events. For example, citizens are often proud of their city/country hosting these famous events. Yet, often overlooked in the literature is the proposition that serving as the host of a mega-event may enhance the country’s brand image, not only as a tourist destination but also for the products made in that country of origin. This research aims to explore this phenomenon by taking an exploratory look at consumer perceptions of three host countries of mega-events in sports. In 2014, U.S., Chinese, and Finnish consumer attitudes toward Brazil and its products were measured before and after the World Cup via surveys (n=89). An analysis of variance (ANOVA) revealed that there were no statistically significant differences in the pre- and post-World Cup perceptions of Brazil’s brand personality or country-of-origin image. After the World Cup in 2018, qualitative interviews were held with U.S. sports fans (n=17) to further explore consumer perceptions of products made in the host country: Russia. A consistent theme of distrust and corruption around Russian products emerged despite the country's hosting of this prestigious global event. In late 2021, U.S. football (soccer) fans (n=42) and non-fans (n=37) were surveyed about the upcoming 2022 World Cup.
A regression analysis revealed that the degree to which an individual identified as a soccer fan did not significantly influence their desire to visit Qatar or try products from Qatar in the future, even though the country was hosting the World Cup. In the end, hosting a mega-event as grand as the World Cup showcases the country to the world, yet it seems to have little impact on consumer perceptions of the country as a whole or of its brands. That is, the World Cup appeared to reinforce already pre-existing stereotypes about Brazil (e.g., beaches, partying, and fun, yet with crime and poverty), Russia (e.g., cold weather, vodka, and business corruption), and Qatar (desert and oil). Moreover, across all three countries, respondents could rarely name a brand from the host country. Because mega-events cost a great deal of time and money, countries need to do more to market themselves and their brands when hosting. In addition, these countries would be wise to measure the impact of the event from different perspectives. Hence, we put forth a comprehensive future research agenda to further the understanding of how countries, and their brands, can benefit from hosting a mega sporting event.
Keywords: branding, country-of-origin effects, mega sporting events, return on investment
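The regression reported above, fandom score as predictor of desire to visit the host country, can be illustrated with a minimal ordinary-least-squares slope computation. The survey responses below are invented for the sketch; a slope near zero is what a "no significant effect" finding like the paper's would look like.

```python
# Minimal OLS sketch of the kind of test reported above: does self-rated
# soccer fandom predict desire to visit the host country?
# All response data are invented for illustration.

def ols_slope(x, y):
    """Least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# fandom (1-7 scale) and "desire to visit" (1-7 scale), hypothetical respondents
fandom = [1, 2, 3, 4, 5, 6, 7, 2, 5, 6]
desire = [3, 2, 4, 3, 3, 2, 4, 3, 2, 3]

slope = ols_slope(fandom, desire)
# A slope near zero mirrors the paper's finding of no significant effect.
```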
Procedia PDF Downloads 282
443 Tourism Management of the Heritage and Archaeological Sites in Egypt
Authors: Sabry A. El Azazy
Abstract:
Archaeological heritage sites are among the most important tourist attractions worldwide. Egypt has various archaeological sites and historical locations classified within the list of the world's archaeological heritage destinations, such as Cairo, Luxor, Aswan, Alexandria, and Sinai. This study focuses on how to manage archaeological sites and provide them with all services according to travelers' needs. Tourism management depends on strategic planning for supporting the national economy and sustainable development. Additionally, tourism management has to apply highly effective standards of security, promotion, advertisement, sales, and marketing while taking into consideration the preservation of monuments. In Egypt, the archaeological heritage sites must be well managed and protected, which would assist tourism management, especially in times of crisis. Recently, monumental places and archaeological heritage sites have been affected by unstable conditions and have been threatened. It is essential to focus on preserving our heritage. Moreover, more effort and cooperation between tourism organizations and the Ministry of Archaeology are needed in order to protect the archaeology and promote the tourism industry. Methodology: Qualitative methods have been used as the overall approach to this study. Interviews and observations have provided the researcher with the required in-depth insight into the research subject. The researcher was a lecturer in tourist guidance, which allowed visits to all historical sites in Egypt. Additionally, the researcher had the privilege to communicate with tourism specialists and attend meetings, conferences, and events focused on the research subject.
Objectives: The main purpose of the research was to gain information in order to develop theoretical work on how to benefit from those historical sites both economically and culturally, and to pursue further research and scientific studies suited to the tourism and hospitality sector. The researcher works to present further studies in a field related to tourism and archaeological heritage using previous experience. Pursuing this course of study enables the researcher to acquire the necessary abilities and competencies to achieve the set goal successfully. Results: Professional tourism management focuses on making Egypt one of the most important destinations in the world and on providing the heritage and archaeological sites with all services that will place those locations on the international tourism map. Tourist interest in visiting Egypt and a flourishing tourism sector support and strengthen Egypt's national economy and local communities, while taking into consideration the preservation of our heritage and archaeology. Conclusions: Egypt has many tourism attractions represented by its heritage, archaeological sites, and touristic places. These places need more attention and effort to be included in tourism programs and opened to visitors from all over the world. These efforts will encourage both local and international tourism to see our great civilization and will provide different touristic activities.
Keywords: archaeology, archaeological sites, heritage, ministry of archaeology, national economy, touristic attractions, tourism management, tourism organizations
Procedia PDF Downloads 145
442 Articulating the Colonial Relation, a Conversation between Afropessimism and Anti-Colonialism
Authors: Thomas Compton
Abstract:
As decolonialism becomes an important topic in political theory, the rupture between the colonized and the colonizer has lost attention. Focusing on the anti-colonial activist Mahdi Amel, we shall consider his attention to the permanence of the colonial relation and how it preempts Frank Wilderson's formulation of (white) culturally necessary anti-Black violence. Both projects draw attention away from empirical accounts of oppression, instead focusing on the structural relation that precipitates them. As Amel says that we should stop thinking of the 'underdeveloped' as beyond the colonial relation, Wilderson says we should stop thinking of Black rights as having surpassed the role of the slave. However, Amel moves beyond his idol Althusser's structuralism toward a formulation of the colonial relation as a source of domination. Our analysis will take a Lacanian turn in considering how this non-relation was formulated as a relation, and how this space of negativity became an ideological opportunity for colonial domination. Wilderson's work shall problematise this as we conclude with his criticisms of structural accounts for their failure to consider how Black social death exists as more than necessity: a site of white desire. Amel, a Lebanese activist and scholar (re)discovered by Hicham Safieddine, argues that colonialism is more than the theft of land; it is a privatization of collective property and a form of investment which (re)produces the status of the capitalist in spaces 'outside' the market. Although Amel was a true Marxist-Leninist, who exposited the economic determinacy of the colonial mode of production, we are reading this account through Alenka Zupančič's reformulation of the 'invisible hand job of the market'. Amel points to the signifier 'underdeveloped' as buttressed on a pre-colonial epistemic break, as the Western investor (debt collector) sees in the (post?)colony its narcissistic image.
However, the colony can never become a site of class conflict, as the workers are not unified but exist between two countries. In industry, they are paid in colonial subjectivisation, the promise of market (self)pleasure; at home, they are refugees. They are not, as Wilderson states, in the permanent social death of the slave, but they are less than the white worker. This is formulated as citizen (white), non-citizen (colonized), anti-citizen (Black/slave). Here we may also think of how indentured Indians were used as instruments of colonial violence. Wilderson's aphorism 'there is no analogy to anti-Black violence' lays bare his fundamental opposition between colonial and specifically anti-Black violence. It is not only that the debt collector, landowner, or other owner of production pleasures himself as if his hand were invisible. The absolute negativity between colony and colonized provides a new frontier for desire: the development of a colonial mode of production, an invention inside the colonial structure that is generative of class substitution. We shall explore how Amel ignores the role of the slave and how Wilderson forecloses the history of African anti-colonialism.
Keywords: afropessimism, fanon, marxism, postcolonialism
Procedia PDF Downloads 155
441 The Misuse of Free Cash and Earnings Management: An Analysis of the Extent to Which Board Tenure Mitigates Earnings Management
Authors: Michael McCann
Abstract:
Managerial theories propose that, in joint stock companies, executives may be tempted to waste excess free cash on unprofitable projects in order to keep control of resources. To conceal their projects' poor performance, they may seek to engage in earnings management. On the one hand, managers may manipulate earnings upwards in order to post 'good' performances and safeguard their positions. On the other, since managers' pursuit of unrewarding investments is likely to lead to low long-term profitability, managers will use negative accruals to reduce the current year's earnings, smoothing earnings over time in order to conceal the negative effects. Agency models argue that boards of directors are delegated by shareholders to ensure that companies are governed properly. Part of that responsibility is ensuring the reliability of financial information. Analyses of the impact of board characteristics, particularly board independence, on the misuse of free cash flow and earnings management find conflicting evidence. However, existing characterizations of board independence do not account for such directors gaining firm-specific knowledge over time, which influences their monitoring ability. Further, there is little analysis of the influence of the relative experience of independent directors and executives on decisions surrounding the use of free cash. This paper contributes to the literature on the heterogeneous characteristics of boards by investigating the influence of independent director tenure on earnings management, as well as the relative tenures of independent directors and chief executives. A balanced panel dataset comprising 51 companies across 11 annual periods from 2005 to 2015 is used for the analysis. In each annual period, firms were classified as conducting earnings management if their discretionary accruals fell in the bottom quartile (downwards) or top quartile (upwards) of the distributed values for the sample.
Logistic regressions were conducted to determine the marginal impact of independent board tenure and a number of control variables on the probability of conducting earnings management. The findings indicate that both absolute and relative measures of board independence and experience do not have a significant impact on the likelihood of earnings management. It is the level of free cash flow that is the major influence on the probability of earnings management: higher free cash flow increases the probability of earnings management significantly. The research also investigates whether board monitoring of earnings management is contingent on the level of free cash flow. However, the results suggest that board monitoring is not amplified when free cash flow is higher. This suggests that the extent of earnings management in companies is determined by a range of company, industry, and situation-specific factors.
Keywords: corporate governance, boards of directors, agency theory, earnings management
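The quartile rule used above to flag firm-years as suspected earnings managers (discretionary accruals in the bottom or top sample quartile) can be sketched directly. The accrual values below are invented, and simple order-statistic quartiles stand in for whatever interpolation the study used.

```python
# Flag firm-years as suspected earnings management if their discretionary
# accruals fall in the bottom (downward) or top (upward) sample quartile.
# Accrual values are invented; order-statistic quartiles are an assumption.

def quartiles(values):
    """Simple order-statistic Q1 and Q3 of a sample."""
    s = sorted(values)
    n = len(s)
    return s[n // 4], s[(3 * n) // 4]

def classify(values):
    """Label each observation: 'downward', 'upward', or 'none'."""
    q1, q3 = quartiles(values)
    labels = []
    for v in values:
        if v <= q1:
            labels.append("downward")   # income-decreasing accruals
        elif v >= q3:
            labels.append("upward")     # income-increasing accruals
        else:
            labels.append("none")
    return labels
```

These labels would then serve as the binary dependent variable in the logistic regressions described above.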
Procedia PDF Downloads 236
440 Simultaneous Optimization of Design and Maintenance through a Hybrid Process Using Genetic Algorithms
Authors: O. Adjoul, A. Feugier, K. Benfriha, A. Aoussat
Abstract:
In general, issues related to design and maintenance are considered independently. However, the decisions made in these two areas influence each other. Design for maintenance is considered an opportunity to optimize the life cycle cost of a product, particularly in the nuclear or aeronautical fields, where maintenance expenses represent more than 60% of life cycle costs. The design of large-scale systems starts with product architecture: a choice of components in terms of cost, reliability, weight, and other attributes corresponding to the specifications. On the other hand, the design must take maintenance into account by improving, in particular, real-time monitoring of equipment through the integration of new technologies such as connected sensors and intelligent actuators. We noticed that the different approaches used in Design for Maintenance (DFM) methods are limited to the simultaneous characterization of the reliability and maintainability of a multi-component system. This article proposes a DFM method that helps designers propose dynamic maintenance for multi-component industrial systems. The term 'dynamic' refers to the ability to integrate available monitoring data to adapt the maintenance decision in real time. The goal is to maximize the availability of the system at a given life cycle cost. This paper presents an approach for simultaneous optimization of the design and maintenance of multi-component systems. Here the design is characterized by four decision variables for each component (reliability level, maintainability level, redundancy level, and level of monitoring data). The maintenance is characterized by two decision variables (the dates of the maintenance stops and the maintenance operations to be performed on the system during these stops). The DFM model helps designers choose technical solutions for large-scale industrial products.
Large-scale refers to complex multi-component industrial systems with long life cycles, such as trains, aircraft, etc. The method is based on a two-level hybrid algorithm for simultaneous optimization of design and maintenance, using genetic algorithms. The first level selects a design solution for a given system, considering the life cycle cost and the reliability. The second level consists of determining a dynamic and optimal maintenance plan to be deployed for that design solution. This level is based on the Maintenance Free Operating Period (MFOP) concept, which takes into account decision criteria such as total reliability, maintenance cost, and maintenance time. Depending on the life cycle duration, the desired availability, and the desired business model (sales or rental), this tool provides visibility of overall costs and optimal product architecture.
Keywords: availability, design for maintenance (DFM), dynamic maintenance, life cycle cost (LCC), maintenance free operating period (MFOP), simultaneous optimization
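The two-level scheme described above can be sketched in miniature: an outer genetic loop searches per-component design levels, while the fitness call stands in for the inner maintenance-plan optimization. Everything below is a toy, the cost model, the two-variable encoding (only reliability and monitoring levels, not all four), and all parameters are invented for illustration.

```python
import random

# Miniature sketch of the outer genetic loop of a two-level design/maintenance
# optimization. Each individual is a list of per-component
# (reliability_level, monitoring_level) pairs; the fitness function is a toy
# life-cycle-cost model standing in for the inner maintenance optimization.

random.seed(0)
N_COMPONENTS = 4
LEVELS = [0, 1, 2, 3]

def life_cycle_cost(design):
    # Toy trade-off: higher levels raise purchase cost but cut maintenance cost.
    purchase = sum(10 * (r + m) for r, m in design)
    maintenance = sum(100 / (1 + r + m) for r, m in design)
    return purchase + maintenance

def random_design():
    return [(random.choice(LEVELS), random.choice(LEVELS))
            for _ in range(N_COMPONENTS)]

def mutate(design):
    d = design[:]
    d[random.randrange(N_COMPONENTS)] = (random.choice(LEVELS),
                                         random.choice(LEVELS))
    return d

def crossover(a, b):
    cut = random.randrange(1, N_COMPONENTS)
    return a[:cut] + b[cut:]

def evolve(pop_size=20, generations=40):
    pop = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=life_cycle_cost)
        survivors = pop[: pop_size // 2]        # elitist selection
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=life_cycle_cost)

best = evolve()
```

In the paper's full scheme, `life_cycle_cost` would itself run a second genetic optimization of the maintenance stop dates and operations for the candidate design; here that inner level is collapsed into a closed-form toy.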
Procedia PDF Downloads 119
439 The Elimination of Fossil Fuel Subsidies from the Road Transportation Sector and the Promotion of Electro Mobility: The Ecuadorian Case
Authors: Henry Acurio, Alvaro Corral, Juan Fonseca
Abstract:
In Ecuador, subsidies on fossil fuels for the road transportation sector have always been part of the economy, mainly because of demagogy and populism from political leaders. It is clearly seen that the government cannot maintain the subsidies anymore due to its trade balance and its general state budget; subsidies are a key barrier to implementing the use of cleaner technologies. However, during the last few months, the elimination of subsidies has been carried out gradually with the purpose of reaching international prices. It is expected that with this measure, the population will opt for other means of transportation and that it will, in a certain way, promote the use of private electric vehicles and public ones, e.g., taxis and buses (urban transport). Considering the three main elements of sustainable development, an analysis of the social, economic, and environmental impacts of eliminating subsidies will be generated at the country level. To achieve this, four scenarios will be developed in order to determine how the subsidies will contribute to the promotion of electro-mobility: 1) a Business as Usual (BAU) scenario; 2) the introduction of 10,000 electric vehicles by 2025; 3) the introduction of 100,000 electric vehicles by 2030; 4) the introduction of 750,000 electric vehicles by 2040 (for all the scenarios, buses, taxis, light-duty vehicles, and private vehicles will be introduced, as established in the National Electro Mobility Strategy for Ecuador). The Low Emissions Analysis Platform (LEAP) will be used, which is suitable for determining the cost to the government of importing fossil fuel derivatives and the cost of electricity to power the electric fleet. The elimination of subsidies generates fiscal resources for the state that can be used to develop other kinds of projects that will benefit Ecuadorian society.
It will definitely change the energy matrix, and it will provide energy security for the country; it will be an opportunity for the government to incentivize a greater introduction of renewable energies, e.g., solar, wind, and geothermal. At the same time, it will also reduce greenhouse gas (GHG) emissions from the transportation sector, considering its mitigation potential, which will ameliorate inhabitants' quality of life by improving air quality, thereby reducing respiratory diseases associated with exhaust emissions and, consequently, advancing sustainability, the Sustainable Development Goals (SDGs), and compliance with the agreements established in the Paris Agreement at COP 21 in 2015. Electro-mobility in Latin America and the Caribbean can only be achieved through the implementation of the right policies by the central government, which need to be accompanied by a National Urban Mobility Policy (NUMP) and can encompass a greater vision to develop holistic, sustainable transport systems at the level of local governments.
Keywords: electro mobility, energy, policy, sustainable transportation
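In spirit, the scenario comparison the abstract proposes to run in LEAP weighs imported-fuel spending avoided against the electricity demanded by the growing EV fleet. A back-of-envelope sketch follows; every unit figure in it is an invented placeholder, not Ecuadorian data, and LEAP's actual accounting is far richer.

```python
# Back-of-envelope sketch of the scenario comparison done in LEAP:
# fuel-import spending avoided vs. electricity demanded by the EV fleet.
# All unit figures are invented placeholders, not Ecuadorian data.

SCENARIOS = {                 # EVs introduced by each scenario's horizon year
    "BAU": 0,
    "2025": 10_000,
    "2030": 100_000,
    "2040": 750_000,
}

FUEL_COST_PER_VEHICLE_YR = 1200.0   # USD of imported fuel per ICE vehicle-year
KWH_PER_VEHICLE_YR = 3000.0         # electricity demand per EV-year
USD_PER_KWH = 0.10

def annual_balance(n_evs):
    """Net annual saving: avoided fuel imports minus electricity cost."""
    avoided = n_evs * FUEL_COST_PER_VEHICLE_YR
    electricity = n_evs * KWH_PER_VEHICLE_YR * USD_PER_KWH
    return avoided - electricity

balances = {name: annual_balance(n) for name, n in SCENARIOS.items()}
```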
Procedia PDF Downloads 834
438 Determination of 1-Deoxynojirimycin and Phytochemical Profile from Mulberry Leaves Cultivated in Indonesia
Authors: Yasinta Ratna Esti Wulandari, Vivitri Dewi Prasasty, Adrianus Rio, Cindy Geniola
Abstract:
Mulberry is a plant widely cultivated around the world, mostly for the silk industry. In recent years, studies have shown that mulberry leaves have an anti-diabetic effect, which comes mostly from the compound 1-deoxynojirimycin (DNJ). DNJ is a very potent α-glucosidase inhibitor: it decreases the degradation rate of carbohydrates in the digestive tract, leading to slower glucose absorption and significantly reducing the post-prandial glucose level. Mulberry leaves are also known as the best source of DNJ. The DNJ content of mulberry leaves has therefore received considerable attention, owing to the increasing number of diabetic patients and rising public interest in more natural treatments for diabetes. The DNJ content in mulberry leaves varies depending on the species, the leaf's age, and the plant's growth environment. Among the mulberry varieties cultivated in Indonesia are Morus alba var. kanva-2, M. alba var. multicaulis, M. bombycis var. lembang, and M. cathayana. The lack of data on the phytochemicals contained in Indonesian mulberry leaves restrains their use in the medicinal field. The aim of this study is to enable the full use of mulberry leaves cultivated in Indonesia as a medicinal herb in local, national, and global communities by determining their DNJ and other phytochemical contents. This study used eight leaf samples: the young and mature leaves of Morus alba var. kanva-2, M. alba var. multicaulis, M. bombycis var. lembang, and M. cathayana. The DNJ content was analyzed by reverse-phase high-performance liquid chromatography (HPLC). The stationary phase was a silica C18 column, and the mobile phase was acetonitrile:acetic acid 0.1% (1:1) at an elution rate of 1 mL/min. Prior to HPLC analysis, the samples were derivatized with FMOC so that the DNJ could be detected by a VWD detector at 254 nm.
Results showed that the DNJ content of the samples ranged from 0.07 to 2.90 mg DNJ/g leaves, with the highest content found in M. cathayana mature leaves (2.90 ± 0.57 mg DNJ/g leaves). All mature leaf samples were also found to contain more DNJ than their respective young leaf samples. The phytochemicals in the leaf samples were screened with qualitative tests, which showed that all eight samples contain alkaloids, phenolics, flavonoids, tannins, and terpenes. The presence of these phytochemicals contributes to the therapeutic effect of mulberry leaves. Pyrolysis-gas chromatography-mass spectrometry (Py-GC-MS) was also performed on the eight samples to quantitatively determine their phytochemical content. The pyrolysis temperature was set at 400 °C, with a Phase Rtx-5MS 60 m × 0.25 mm ID capillary column as the stationary phase and helium as the mobile phase. Several of the terpenes found are known to have anticancer and antimicrobial properties. In summary, all four mulberry varieties cultivated in Indonesia contain DNJ and various phytochemicals, including alkaloids, phenolics, flavonoids, tannins, and terpenes, which are beneficial to health.
Keywords: Morus, 1-deoxynojirimycin, HPLC, Py-GC-MS
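Quantification against external standards, as in the HPLC-VWD procedure above, reduces to a linear calibration of peak area against concentration. The sketch below uses invented peak areas and standard concentrations, not data from this study.

```python
import numpy as np

# Illustrative external-standard calibration for DNJ by HPLC (VWD, 254 nm).
# Standard concentrations and peak areas are made up for the sketch.
std_conc = np.array([0.5, 1.0, 2.0, 4.0])           # mg DNJ / mL standard
peak_area = np.array([120.0, 240.0, 480.0, 960.0])  # arbitrary area units

slope, intercept = np.polyfit(std_conc, peak_area, 1)  # least-squares line

def dnj_conc(area: float) -> float:
    """Back-calculate DNJ concentration (mg/mL) from a sample peak area."""
    return (area - intercept) / slope

print(dnj_conc(600.0))  # midway on this perfectly linear toy curve
```

In practice the calibration would also report linearity (R²), LOD/LOQ, and recovery for the FMOC-derivatized standards.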
Procedia PDF Downloads 331
437 Applying Miniaturized Near Infrared Technology for Commingled and Microplastic Waste Analysis
Authors: Monika Rani, Claudio Marchesi, Stefania Federici, Laura E. Depero
Abstract:
Degradation of the aquatic environment by plastic litter, especially microplastics (MPs), i.e., any water-insoluble solid plastic particle with its longest dimension between 1 µm and 1000 µm (= 1 mm), is an unfortunate indication of the advancement of the Anthropocene on Earth. Microplastics formed by natural weathering processes are termed secondary microplastics, while those synthesized at that size by industry are called primary microplastics. Their presence from the highest peaks to the deepest explored points of the oceans, and their resistance to biological and chemical decay, have adversely affected the environment, especially marine life. Even though the presence of MPs in the marine environment is well reported, a validated analytical technique to sample, analyze, and quantify MPs is still in progress and under testing. Among characterization techniques, vibrational spectroscopy is widely adopted in the field of polymers, and the ongoing miniaturization of these methods is on the way to revolutionizing the plastic recycling industry. In this scenario, the capability and feasibility of miniaturized near-infrared (MicroNIR) spectroscopy combined with chemometric tools were investigated for the qualitative and quantitative analysis of urban plastic waste collected from a recycling plant and of microplastic mixtures fragmented in the lab. Based on the Resin Identification Code, 250 plastic samples were used for macroplastic analysis and to set up a library of polymers. Subsequently, the MicroNIR spectra were analysed by multivariate modelling. Principal Component Analysis (PCA) was used as an unsupervised tool to find trends within the data. After the exploratory PCA, a supervised classification tool was applied in order to distinguish the different plastic classes, and a database containing the NIR spectra of the polymers was built.
For the microplastic analysis, the three most abundant polymers in plastic litter (PE, PP, and PS) were mechanically fragmented to micron size in the laboratory. Blends of these three microplastics were prepared according to a designed ternary composition plot. After exploratory PCA, a quantitative Partial Least Squares Regression (PLSR) model allowed the percentage of each microplastic in the mixtures to be predicted. From a complete dataset of 63 compositions, the PLS model was calibrated with 42 data points and used to predict the composition of the 21 unknown mixtures of the test set. The advantage of the consolidated NIR-chemometric approach lies in the quick evaluation of whether a sample is macro or micro, contaminated or not, and coloured or not, with no sample pre-treatment. The technique can be applied to larger sample volumes, allows on-site evaluation, and thus satisfies the need for a high-throughput strategy.
Keywords: chemometrics, microNIR, microplastics, urban plastic waste
Procedia PDF Downloads 165
436 Phenomena-Based Approach for Automated Generation of Process Options and Process Models
Authors: Parminder Kaur Heer, Alexei Lapkin
Abstract:
Due to the global challenges of increased competition and demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may attain higher efficiency. However, very few PI options are generally considered, because processes are typically analysed at the unit-operation level, which limits the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The levels at which PI can be achieved are the unit-operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all intensification options can be described by their enhancement. The objective of the current work is therefore the generation of numerous process alternatives based on phenomena and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them; e.g., separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena that can overcome the difficulties or drawbacks of the current process, or enhance its effectiveness, are added to the list. For instance, a catalyst-separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense, and hence screening is carried out to discard the combinations that are meaningless.
For example, phase-change phenomena need the co-presence of energy-transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e., it might perform reaction alone or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function, and combining these options across the functions leads to a superstructure of process options. These process options, each defined by a list of phenomena per function, are passed to the model-generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then chosen subject to a performance criterion, for example product purity, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the higher product yield. The current methodology can identify, produce, and evaluate process intensification options from which the optimal process can be determined. It can be applied to any chemical or biochemical process because of its generic nature.
Keywords: phenomena, process intensification, process models, process options
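The enumerate-then-screen step described above can be sketched in a few lines. The phenomena list and the single screening rule are illustrative placeholders, with the rule taken from the phase-change example in the text; a real implementation would carry many such rules.

```python
from itertools import combinations

# Placeholder phenomena list (not the paper's actual set).
phenomena = ["mixing", "reaction", "heat_transfer", "phase_change", "vl_equilibrium"]

def feasible(combo):
    # Screening rule from the text: phase change requires energy transfer.
    if "phase_change" in combo and "heat_transfer" not in combo:
        return False
    return True

# Enumerate all non-empty combinations, then discard the meaningless ones.
options = [c for r in range(1, len(phenomena) + 1)
           for c in combinations(phenomena, r)
           if feasible(c)]
print(len(options), "feasible combinations out of", 2 ** len(phenomena) - 1)
```

The surviving combinations would then be assigned to functions (reaction, separation, ...) and encoded as the 1/0 vectors passed to the model-generation algorithm.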
Procedia PDF Downloads 234
435 Assessment of Current and Future Opportunities of Chemical and Biological Surveillance of Wastewater for Human Health
Authors: Adam Gushgari
Abstract:
The SARS-CoV-2 pandemic has catalyzed the rapid adoption of wastewater-based epidemiology (WBE) methodologies both domestically and internationally. To support the rapid scale-up of pandemic-response wastewater surveillance systems, multiple federal agencies (e.g., the US CDC), non-government organizations (e.g., the Water Environment Federation), and private charities (e.g., the Bill and Melinda Gates Foundation) have provided over $220 million USD in funding to develop surveillance methods and expand equitable access to them. Funds were primarily distributed directly to municipalities under the CARES Act (90.6%), followed by academic projects (7.6%) and initiatives developed by private companies (1.8%). In addition to federally funded wastewater monitoring, conducted primarily at wastewater treatment plants, state and local governments and private companies have leveraged wastewater sampling to obtain health and lifestyle data on student, prison-inmate, and employee populations. We explore the viable paths for expansion of the WBE methodology across a variety of analytical methods; the development of WBE-specific samplers and real-time wastewater sensors; and their application across government and private-sector industries. Considerable investment in, and public acceptance of, WBE suggests the methodology will be applied to other notifiable diseases and health risks in the future. Early research suggests that WBE methods can be applied to a host of additional biological insults, including communicable diseases and pathogens such as influenza, Cryptosporidium, Giardia, mycotoxin exposure, hepatitis, dengue, West Nile, Zika, and yellow fever. Interest in chemical insults is also likely, providing community health and lifestyle data on narcotics consumption, use of pharmaceutical and personal care products (PPCP), PFAS and hazardous chemical exposure, and microplastic exposure.
Successful application of WBE to monitor analytes correlated with carcinogen exposure, community stress prevalence, and dietary indicators has also been shown. Additionally, developments in in situ wastewater sensors, WBE-specific wastewater samplers, and the integration of artificial intelligence will drastically change the landscape of WBE through the development of "smart sewer" networks. The rapid expansion of the WBE field is creating significant business opportunities for professionals across the scientific, engineering, and technology industries, ultimately focused on improving community health.
Keywords: wastewater surveillance, wastewater-based epidemiology, smart cities, public health, pandemic management, substance abuse
Procedia PDF Downloads 111
434 Exploring the Social Health and Well-Being Factors of Hydraulic Fracturing
Authors: S. Grinnell
Abstract:
This PhD research project explores the social health and well-being impacts associated with hydraulic fracturing, with the aim of producing best-practice support guidance for those anticipating dealing with planning applications or submitting Environmental Impact Assessments (EIAs). Amid a possible global energy crisis, driven by factors including unstable political situations, world population growth, and people living longer, it is perhaps inevitable that hydraulic fracturing (commonly referred to as 'fracking') will become a major player in the global long-term energy and sustainability agenda. As there is currently no best-practice guidance for governing bodies, the Best Practice Support Document will be targeted at a number of audiences, including consultants undertaking EIAs, planning officers, those in industry commissioning EIAs, and interested public stakeholders. It will offer robust, evidence-based criteria and recommendations that provide a clear narrative, a consistent and shared approach to the language used, and an understanding of the issues identified. It is proposed that the document will also support the mitigation of the health impacts identified. The document will support the newly amended Environmental Impact Assessment Directive (2011/92/EU), to be transposed into UK law by 2017; a significant amendment focuses on a 'higher level of protection to the environment and health.' Methodology: This research takes a qualitative methods approach with a number of key stages. A literature review has been undertaken and critically reviewed and analysed. This was followed by a descriptive content analysis of a selection of international and national policies, programmes, and strategies, along with published Environmental Impact Assessments and associated planning guidance.
For data collection, a number of stakeholders were interviewed, as were several focus groups drawn from local community groups across the UK potentially affected by fracking. A thematic analysis of all the data collected and of the literature review will be undertaken using NVivo. The Best Practice Support Document will be developed from the outcomes of this analysis and will be tested and piloted in the professional field before a live launch. Concluding statement: Whilst fracking is not a new concept, the technology is now driving a new force behind the use of this engineering to supply fuels. A number of countries have pledged moratoria on fracking until the impacts on health are further investigated, whilst other countries, including Poland and the UK, are pushing to support its use. If this should be the case, it will be important that the public's concerns, perceptions, fears, and objections regarding the wider social health and well-being impacts are considered along with the more traditional biomedical health impacts.
Keywords: fracking, hydraulic fracturing, socio-economic health, well-being
Procedia PDF Downloads 244
433 An Overview on Micro Irrigation-Accelerating Growth of Indian Agriculture
Authors: Rohit Lall
Abstract:
The adoption of Micro Irrigation (MI) technologies in India has helped achieve higher cropping and irrigation intensity, with significant savings of resources such as labour and fertilizer and with improved crop yields. These technologies have received considerable attention from policymakers, growers, and researchers over the years for their perceived ability to contribute to agricultural productivity, economic growth, and the well-being of growers. To tap the large untapped theoretical potential, the government launched flagship programmes and central sector schemes with earmarked budgets, providing financial assistance to beneficiaries for adopting these water-saving technologies. India is an agrarian economy, with 75% of the population engaged directly or indirectly in the sector, including skilled and semi-skilled workers and entrepreneurs, and the sector has received focused attention and financial allocations from the government to cover the untapped potential under the 'Per Drop More Crop' component of the Pradhan Mantri Krishi Sinchayee Yojana (PMKSY). In 2004, a Task Force on Micro Irrigation was constituted to estimate the potential of these technologies in India; the Task Force report estimated it at 69.5 million hectares, of which only 10.49 million hectares have been achieved so far. Technology collaborations with leading overseas manufacturing companies have proved to be a stepping stone in technology advancement and product upgrading, with increased efficiencies.
Joint ventures by the leading MI companies have added large business volumes, which have not only accelerated momentum towards the desired goal in terms of area coverage but have also generated opportunities for polymer manufacturers in the country. To provide products matching global standards, the Bureau of Indian Standards has constituted a sectional technical committee under the Food and Agriculture Department (FAD-17) to formulate, devise, and revise standards pertaining to MI technologies. The research community has also contributed substantially, with in-situ analyses showing MI technologies to be a boon for the farming community of the country through resource conservation, of which water is of paramount importance. Micro Irrigation technologies have thus proved to be a key tool for meeting the growing demand on the food basket of an expanding population, besides maintaining soil health and contributing to the doubling of farmers' income.
Keywords: task force on MI, standards, per drop more crop, doubling farmers' income
Procedia PDF Downloads 118
432 Interactivity as a Predictor of Intent to Revisit Sports Apps
Authors: Young Ik Suh, Tywan G. Martin
Abstract:
Sports apps on a smartphone provide up-to-date information and fast, convenient access to live games. The sports app market has emerged as the second-fastest-growing app category worldwide. Many sports fans use their smartphones to check the schedules of sporting events, players' positions and bios, and videos and highlights. In recent years, a growing number of scholars and practitioners alike have emphasized the importance of interactivity in sports apps, hypothesizing that interactivity plays a significant role in enticing sports app users and that it is a key component in measuring the success of sports apps. Interactivity in sports apps focuses primarily on two functions: (1) two-way communication and (2) active user control, neither of which was available through traditional mass media and communication technologies. The purpose of this study is therefore to examine whether the interactivity functions of sports apps lead to positive outcomes such as intent to revisit. More specifically, this study investigates how three major interactivity functions (i.e., two-way communication, active user control, and real-time information) influence the attitude of sports app users and their intent to revisit the apps. The following hypothesis is proposed: interactivity functions will be positively associated with both attitudes toward sports apps and intent to revisit sports apps. The survey questionnaire includes four parts: (1) an interactivity scale, (2) an attitude scale, (3) a behavioral intention scale, and (4) demographic questions. Data are to be collected from users of the ESPN app. To examine the relationships among the observed and latent variables and to determine the reliability and validity of the constructs, confirmatory factor analysis (CFA) is conducted. Structural equation modeling (SEM) is utilized to test the hypothesized relationships among constructs.
Additionally, this study compares the proposed interactivity model with a rival model to identify the role of attitude as a mediating factor. The findings provide several theoretical and practical contributions by extending the research and literature on the role of interactivity functions in sports apps and sports media consumption behavior. Specifically, this study may improve theoretical understanding of whether interactivity functions influence user attitudes and intent to revisit sports apps, and it identifies which dimensions of interactivity are most important to sports app users. From a practitioner's perspective, the findings have significant implications: entrepreneurs and investors in the sport industry should recognize that high-resolution photos, live streams, and up-to-date stats sit in the sports app, right at fans' fingertips, and may need to develop sports mobile apps that offer greater interactivity to attract sports fans.
Keywords: interactivity, two-way communication, active user control, real-time information, sports apps, attitude, intent to revisit
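Before fitting the CFA/SEM models described above, scale reliability is commonly checked with Cronbach's alpha. The sketch below shows that computation on simulated 5-point Likert responses for a hypothetical four-item interactivity scale; none of the numbers come from this study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulated respondents: four items driven by one shared latent factor,
# rounded and clipped to a 1-5 Likert range.
rng = np.random.default_rng(1)
latent = rng.normal(0, 1, 200)
items = np.clip(np.round(3 + latent[:, None] + rng.normal(0, 0.7, (200, 4))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")  # coherent scales typically exceed 0.7
```

In the study itself, reliability and validity would be assessed within the CFA (composite reliability, AVE) rather than by alpha alone.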
Procedia PDF Downloads 147
431 Quality by Design in the Optimization of a Fast HPLC Method for Quantification of Hydroxychloroquine Sulfate
Authors: Pedro J. Rolim-Neto, Leslie R. M. Ferraz, Fabiana L. A. Santos, Pablo A. Ferreira, Ricardo T. L. Maia-Jr., Magaly A. M. Lyra, Danilo A. F. Fonte, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim
Abstract:
Initially developed as an antimalarial agent, hydroxychloroquine (HCQ) sulfate is often used as a slow-acting antirheumatic drug in the treatment of connective tissue disorders. The United States Pharmacopeia (USP) 37 provides a reversed-phase HPLC method for the quantification of HCQ. However, this method was not reproducible, producing asymmetric peaks over a long analysis time. Peak asymmetry may cause an incorrect calculation of the sample concentration, and the analysis time is unacceptable, especially for the routine of a pharmaceutical industry. The aim of this study was to develop a fast, easy, and efficient method for the quantification of HCQ sulfate by High-Performance Liquid Chromatography (HPLC) based on the Quality by Design (QbD) methodology. The method was optimized in terms of peak symmetry using response-surface plots as the Design of Experiments (DoE) output and the tailing factor (TF) as the indicator for the Design Space (DS). The reference method was that described in USP 37 for the quantification of the drug. For the optimized method, a 3³ factorial design was proposed, based on QbD concepts. The DS was built with the TF in a range between 0.98 and 1.2 in order to identify the ideal analytical conditions. Changes were made to the composition of the USP mobile phase (USP-MP): USP-MP:methanol at 90:10, 80:20, and 70:30 (v/v); to the flow rate (0.8, 1.0, and 1.2 mL/min); and to the oven temperature (30, 35, and 40 °C). The USP method quantified the drug only over a long run (40-50 minutes). In addition, it uses a high flow rate (1.5 mL/min), which increases the consumption of expensive HPLC-grade solvents. The main problem observed was the TF value (1.8), which would be acceptable only if the drug were not a racemic mixture, since co-elution of the isomers can make peak integration unreliable.
Therefore, optimization was undertaken to reduce the analysis time, aiming at better peak resolution and TF. From the response-surface plot, it was possible to confirm the ideal analytical condition: 45 °C, 0.8 mL/min, and 80:20 USP-MP:methanol. The optimized HPLC method enabled the quantification of HCQ sulfate with a high-resolution peak and a TF value of 1.17. This ensures good co-elution of the HCQ isomers and an accurate quantification of the raw material as a racemic mixture. The method also proved to be approximately 18 times faster than the reference method, using a lower flow rate and thereby further reducing solvent consumption and, consequently, analysis cost. Thus, an analytical method for the quantification of HCQ sulfate was optimized using the QbD methodology. The method proved faster and more efficient than the USP method regarding retention time and, especially, peak resolution. The higher resolution of the chromatographic peaks supports the implementation of the method for quantification of the drug as a racemic mixture, without requiring separation of the isomers.
Keywords: analytical method, hydroxychloroquine sulfate, quality by design, surface area graphic
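The 3³ factorial design above (three factors, three levels each, 27 runs) can be enumerated directly. The tailing-factor "response" below is a made-up quadratic standing in for measured TF values, so the located optimum is purely illustrative, not the study's result.

```python
import itertools

# Factor levels from the design: mobile-phase methanol %, flow rate, oven temp.
methanol = [10, 20, 30]   # % methanol in mobile phase
flow = [0.8, 1.0, 1.2]    # mL/min
temp = [30, 35, 40]       # degrees C

design = list(itertools.product(methanol, flow, temp))  # 3^3 = 27 runs

def fake_tf(m, f, t):
    # Invented quadratic response surface with a low-TF region near
    # (20 % methanol, 0.8 mL/min, 40 C); not fitted to any real data.
    return 1.1 + 0.001 * (m - 20) ** 2 + 0.5 * (f - 0.8) ** 2 - 0.002 * (t - 30)

tf = {run: fake_tf(*run) for run in design}
best = min(design, key=tf.get)
in_design_space = [run for run in design if 0.98 <= tf[run] <= 1.2]
print("Run with lowest tailing factor:", best)
print(len(in_design_space), "of 27 runs fall inside the 0.98-1.2 TF design space")
```

In a real QbD workflow, a second-order model would be fitted to the measured TF values and the design space delimited from the fitted surface, not from raw runs.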
Procedia PDF Downloads 639
430 Real-Time Monitoring of Complex Multiphase Behavior in a High Pressure and High Temperature Microfluidic Chip
Authors: Renée M. Ripken, Johannes G. E. Gardeniers, Séverine Le Gac
Abstract:
Controlling the multiphase behavior of aqueous biomass mixtures is essential in the biomass conversion industry. Here, the vapor/liquid equilibria (VLE) of ethylene glycol, glycerol, and xylitol were studied at temperatures between 25 and 200 °C and pressures of 1 to 10 bar. The experiments were performed in a microfluidic platform, which exhibits excellent heat-transfer properties so that equilibrium is reached quickly. First, the saturated vapor pressure as a function of temperature and substrate mole fraction was calculated using AspenPlus with a Redlich-Kwong-Soave Boston-Mathias (RKS-BM) model. Second, a high-pressure, high-temperature microfluidic set-up was developed for experimental validation, and the multiphase flow pattern that occurs once the saturation temperature is exceeded was studied. A glass-silicon microfluidic device containing a 0.4 or 0.2 m long meandering channel, 250 μm deep and 250 or 500 μm wide, was fabricated using standard microfabrication techniques. The device was placed in a dedicated chip holder, which includes a ceramic heater on the silicon side. The temperature was controlled and monitored by three K-type thermocouples: two located between the heater and the silicon substrate, one to set the temperature and one to measure it, and a third placed in a 300 μm wide, 450 μm deep groove on the glass side to determine the heat loss over the silicon. An adjustable back-pressure regulator and a pressure meter were added to control and evaluate the pressure during the experiment. Aqueous biomass solutions (10 wt%) were pumped at a flow rate of 10 μL/min with a syringe pump, and the temperature was slowly increased until the theoretical saturation temperature for the pre-set pressure was reached. Surprisingly, a significant difference was observed between the theoretical saturation temperatures and the experimental results.
The experimental values were tens of degrees higher than the calculated ones, and in some cases saturation could not be achieved. This discrepancy can be explained in several ways. First, the pressure in the microchannel is locally higher due to both the thermal expansion of the liquid and the Laplace pressure that has to be overcome before a gas bubble can form. Second, superheating effects are likely to be present. Once saturation was reached, the flow pattern of the gas/liquid multiphase system was recorded. In our device, the point of nucleation can be controlled by taking advantage of the pressure drop across the channel and the accurate control of the temperature; specifically, a higher temperature resulted in nucleation further upstream in the channel. As the void fraction increases downstream, the flow regime changes along the channel from bubbly flow to Taylor flow and later to annular flow, and all three flow regimes were observed simultaneously. The findings of this study are key for the development and optimization of a microreactor for hydrogen production from biomass.
Keywords: biomass conversion, high pressure and high temperature microfluidics, multiphase, phase diagrams, superheating
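The two pressure effects invoked above can be checked at the order-of-magnitude level for water-rich mixtures: the Antoine saturation pressure and the Laplace pressure a nucleating bubble must overcome. The Antoine constants below are textbook values for water, and the nucleus radius is an assumed order of magnitude, not a measurement from the device.

```python
import math

def p_sat_water_bar(t_celsius: float) -> float:
    """Antoine saturation pressure of water in bar (constants for T above ~100 C)."""
    a, b, c = 3.55959, 643.748, -198.043  # P in bar, T in K
    t_k = t_celsius + 273.15
    return 10 ** (a - b / (t_k + c))

def laplace_pressure_bar(radius_m: float, sigma: float = 0.059) -> float:
    """Excess pressure (bar) inside a spherical bubble; sigma ~ water near 100 C."""
    return 2 * sigma / radius_m / 1e5

print(f"p_sat(120 C) ~ {p_sat_water_bar(120):.2f} bar")
print(f"Laplace pressure for a 1 um nucleus ~ {laplace_pressure_bar(1e-6):.1f} bar")
```

A micron-scale nucleus thus adds on the order of a bar to the pressure a bubble must exceed, which, together with superheating, is consistent with boiling onset well above the bulk saturation temperature.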
Procedia PDF Downloads 218
429 Displaying Compostela: Literature, Tourism and Cultural Representation, a Cartographic Approach
Authors: Fernando Cabo Aseguinolaza, Víctor Bouzas Blanco, Alberto Martí Ezpeleta
Abstract:
Santiago de Compostela became a stable object of literary representation during the period between approximately 1840 and 1915. This study offers a partial cartographical look at this process, suggesting that the emergence of a cultural space like Compostela as an object of literary representation paralleled the first stages of its becoming a tourist destination. We use maps as a method of analysis to show the interaction between a corpus of novels and the emerging tradition of tourist guides on Compostela during the selected period. Often the novels constitute ways of presenting a city to the outside, marking it for the gaze of others, as guidebooks do. That leads us to examine the ways of constructing, and rendering communicable, the local in other contexts. Indeed, a good number of the narratives in the corpus evoke the representation of the city through the figure of one who comes from elsewhere: a traveler, a student, or a professor. The guidebooks coincide in this with the emerging fiction, of which the mimesis of a city is a key characteristic. The local cannot define itself except through a process of symbolic negotiation, in which recognition and self-recognition play important roles. Cartography shows some of the forms these processes of symbolic representation take through the treatment of space. The research uses GIS to find significant models of representation. We used the program ArcGIS for the mapping, defining the databases according to an adapted version of the methodology applied by Barbara Piatti and Lorenz Hurni's team in Zurich. First, we designed maps that emphasize the peripheral position of Compostela from a historical and institutional perspective, using elements found in the texts of our corpus (novels and tourist guides).
Second, other maps delve into the parallels between recurring techniques in the fictional texts and characteristic devices of the guidebooks (the sketching of itineraries, the selection and indexing of zones), such as a foreigner's visit guided by someone who knows the city, or the description of one's first entrance into the city. Last, we offer a cartography that demonstrates the connection between the best known of the novels in our corpus (Alejandro Pérez Lugín's 1915 novel La casa de la Troya) and the first attempt to create package tours with Galicia as a destination, a joint venture of Galician and British business owners in the years immediately preceding the Great War. Literary cartography becomes a crucial instrument for digging deeply into the methods of cultural production of places. Through maps, the interaction between discursive forms seemingly as far removed from each other as novels and tourist guides becomes obvious and suggests the need to go deeper into the complex process through which a city like Compostela becomes visible on the contemporary cultural horizon.
Keywords: Compostela, literary geography, literary cartography, tourism
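As a toy illustration of the itinerary-sketching device mentioned above (the project itself works in ArcGIS over the georeferenced corpus), the following computes the length of a short fictional walk between hand-picked, approximate Compostela coordinates; the landmarks and their positions are illustrative only.

```python
import math

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

itinerary = [            # rough, hand-picked coordinates in the old town
    (42.8782, -8.5448),  # near the Porta do Camino
    (42.8806, -8.5449),  # Cathedral / Praza do Obradoiro area
    (42.8770, -8.5440),  # university quarter
]
total = sum(haversine_km(a, b) for a, b in zip(itinerary, itinerary[1:]))
print(f"itinerary length ~ {total:.2f} km")
```

A GIS workflow would additionally snap such itineraries to the street network and attach each leg to the passage of the novel or guidebook that evokes it.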
Procedia PDF Downloads 393