Search results for: single-phase models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6480


390 Soybean Seed Composition Prediction From Standing Crops Using Planet Scope Satellite Imagery and Machine Learning

Authors: Supria Sarkar, Vasit Sagan, Sourav Bhadra, Meghnath Pokharel, Felix B. Fritschi

Abstract:

Soybeans and their derivatives are important agricultural commodities around the world because of their wide applicability in human food, animal feed, biofuel, and industry. However, the value of soybean production depends on the quality of the soybean seeds rather than on yield alone. Seed composition depends heavily on plant physiological properties, aerobic and anaerobic environmental conditions, nutrient content, and plant phenological characteristics, which can be captured by remote sensing datasets with high temporal resolution. PlanetScope (PS) satellite images are well suited to capturing sequential information on crop growth because of their frequent worldwide revisit. In this study, we estimate soybean seed composition while the plants are still in the field by using PS satellite images and several machine learning algorithms. Several experimental fields were established with varying genotypes, and seed composition was measured from the samples as ground-truth data. The PS images were processed to extract 462 hand-crafted vegetative and textural features. Four machine learning algorithms, i.e., partial least squares regression (PLSR), random forest regression (RFR), gradient boosting machine (GBM), and support vector regression (SVR), and two recurrent neural network architectures, i.e., long short-term memory (LSTM) and gated recurrent unit (GRU), were used to predict the oil, protein, sucrose, ash, starch, and fiber content of soybean seed samples. The GRU and LSTM architectures had two separate branches, one for vegetative features and the other for texture features, which were later concatenated to predict seed composition. The results show that sucrose, ash, protein, and oil yielded comparable prediction results. The machine learning algorithm that best predicted each of the six seed composition traits differed.
GRU worked well for oil (R-squared = 0.53) and protein (R-squared = 0.36), whereas SVR and PLSR gave the best results for sucrose (R-squared = 0.74) and ash (R-squared = 0.60), respectively. Although RFR and GBM provided comparable performance, these models tended to overfit severely. Among the features, vegetative features proved more important than texture features. We suggest training machine learning models on many vegetation indices and selecting the best ones with feature selection methods. Overall, the study demonstrates the feasibility and efficiency of PS images and machine learning for plot-level seed composition estimation. However, special care should be taken when designing the plot size in such experiments to avoid mixed-pixel issues.

Keywords: agriculture, computer vision, data science, geospatial technology

Procedia PDF Downloads 106
389 Study of the Impact of Quality Management System on Chinese Baby Dairy Product Industries

Authors: Qingxin Chen, Liben Jiang, Andrew Smith, Karim Hadjri

Abstract:

Since 2007, the Chinese food industry has suffered serious food contamination incidents in the baby dairy sector, especially milk powder contamination. One milk powder product was found to contain melamine, and a significant number of babies (294,000) were affected by kidney stones. Owing to growing consumer concern about food safety and high pressure from central government, companies must take decisive action to ensure food quality through an appropriate quality management system. Although researchers have previously investigated the health and safety aspects of food industries and products, quality issues concerning food products in China have been largely overlooked, and issues associated with baby dairy products have not been discussed in depth. This paper investigates the impact of quality management systems on the Chinese baby dairy product industry. A literature review was carried out to analyse the use of quality management systems within the Chinese milk powder market. Moreover, quality concepts, relevant standards, laws, regulations, and specific contamination issues (such as melamine and aflatoxin M1) have been analysed in detail. A qualitative research approach was employed: preliminary analysis was conducted by interview, and data analysis was based on interview responses from four selected Chinese baby dairy product companies. The literature review and data findings reveal that practitioners have proposed many theories, models, concepts, and systems for quality management. These standards and procedures should be followed in order to provide quality products to consumers, but their implementation is lacking in the Chinese baby dairy industry. The selected companies have applied quality management systems, but implementation still needs improvement.
For instance, the companies need to improve their processes and procedures in line with the relevant standards, and the government needs to intervene more and take a greater supervisory role in the production process. In general, this research presents implications for regulatory bodies, the Chinese Government, and dairy food companies. Food safety laws exist in China but have not been widely complied with by companies. Regulatory bodies must take a greater role in ensuring compliance with laws and regulations, and the Chinese government must urge companies to implement relevant quality control processes. The baby dairy companies not only have to accept interventions from regulatory bodies and government; they also need to ensure that production, storage, distribution, and other processes follow the relevant rules and standards.

Keywords: baby dairy product, food quality, milk powder contamination, quality management system

Procedia PDF Downloads 449
388 Hybrid Solutions in Physicochemical Processes for the Removal of Turbidity in Andean Reservoirs

Authors: María Cárdenas Gaudry, Gonzalo Ramces Fano Miranda

Abstract:

Sediment removal is very important in the purification of water, not only for reasons of visual perception but also because of its association with odor and taste problems. The Cuchoquesera reservoir, located in the Andean region of Ayacucho (Peru) at an altitude of 3,740 meters above sea level, visibly contains suspended particles and organic impurities, indicating water of dubious quality that is unsuitable for direct human consumption. To quantify these impurities, water quality monitoring was carried out from February to August 2018 at four sampling stations established in the reservoir. The measured parameters were electrical conductivity, total dissolved solids, pH, color, turbidity, and sludge volume. All of the studied parameters exceeded the permissible limits except electrical conductivity (190 μS/cm) and total dissolved solids (255 mg/L). In this investigation, the best combination and optimal doses of reagents were determined for removing sediments from the waters of the Cuchoquesera reservoir through the physicochemical process of coagulation-flocculation. To improve this process during the rainy season, six combinations of reagents were evaluated, made up of three coagulants (ferric chloride, ferrous sulfate, and aluminum sulfate) and two natural flocculants: prickly pear powder (Opuntia ficus-indica) and tara gum (Caesalpinia spinoza). For each combination of reagents, jar tests were conducted following a central composite experimental design (CCED), in which the design factors were the coagulant dose, the flocculant dose, and the initial turbidity.
The results of the jar tests were fitted to mathematical models, showing that to treat water from the Cuchoquesera reservoir with a turbidity of 150 NTU and a color of 137 Pt-Co units, 27.9 mg/L of the coagulant aluminum sulfate combined with 3 mg/L of the natural flocculant tara gum produced purified water with a turbidity of 1.7 NTU and an apparent color of 3.2 Pt-Co units. The estimated cost of this dose of coagulant and flocculant was 0.22 USD/m³. In this way, "grey-green" technologies can be combined in nature-based solutions for water treatment, in this case to achieve potability, making treatment more sustainable, especially economically, when green technology is available at the application site. This research demonstrates the compatibility of natural coagulants/flocculants with other treatment technologies in integrated or hybrid treatment processes, such as hybridizing natural coagulants with other types of coagulants.
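The central composite experimental design mentioned above can be sketched in code. The run structure (2^k factorial points, 2k axial points, center replicates) is standard CCED; the factor ranges below are illustrative assumptions, not the study's actual jar-test levels.

```python
# Sketch of a central composite experimental design (CCED) for the three
# jar-test factors: coagulant dose, flocculant dose, and initial turbidity.
import itertools
import numpy as np

def central_composite(n_factors, alpha=None, n_center=4):
    """Return coded design points: 2^k factorial + 2k axial + center runs."""
    if alpha is None:
        alpha = (2 ** n_factors) ** 0.25      # rotatable design
    factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=n_factors)))
    axial = np.zeros((2 * n_factors, n_factors))
    for i in range(n_factors):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((n_center, n_factors))
    return np.vstack([factorial, axial, center])

design = central_composite(3)
# map coded levels to illustrative real ranges: coagulant dose (mg/L),
# flocculant dose (mg/L), initial turbidity (NTU)
lows = np.array([10.0, 1.0, 50.0])
highs = np.array([40.0, 5.0, 250.0])
mid, half = (lows + highs) / 2, (highs - lows) / 2
runs = mid + design * half
print(len(runs), "jar-test runs")    # 8 factorial + 6 axial + 4 center = 18
```

Each row of `runs` is one jar-test setting; fitting a quadratic response surface to the measured residual turbidity over these runs yields the kind of dose optimum the abstract reports.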

Keywords: prickly pear powder, tara gum, nature-based solutions, aluminum sulfate, jar test, turbidity, coagulation, flocculation

Procedia PDF Downloads 82
387 Early Childhood Education for Bilingual Children: A Cross-Cultural Examination

Authors: Dina C. Castro, Rossana Boyd, Eugenia Papadaki

Abstract:

Immigration within and across continents is currently a global reality. The number of people leaving their communities in search of a better life for themselves and their families has increased dramatically during the last twenty years. As a result, young children of the 21st century around the world are growing up in diverse communities, exposed to many languages and cultures. One consequence of these migration movements is increased linguistic diversity in school settings. Depending on the linguistic history and the status of languages in the communities (i.e., minority-majority; majority-majority), instructional approaches will differ. This session will discuss how bilingualism is addressed in early education programs in both minority-majority and majority-majority language communities, analyzing experiences in three countries with very distinct societal and demographic characteristics: Peru (South America), the United States (North America), and Italy (European Union). The ultimate goal is to identify commonalities and differences across the three experiences that could inform a discussion of bilingualism in early education from a global perspective. From Peru, we will discuss current national language and educational policies that have led to the design and implementation of bilingual and intercultural education for children in indigenous communities, as well as how those policies are being implemented in preschool programs, the progress made, and the challenges encountered. From the United States, we will discuss the early education of Spanish-English bilingual preschoolers, including the national policy environment and variations in the language-of-instruction approaches currently used with these children. From Italy, we will describe early education practices at the Bilingual School of Monza in northern Italy, a school that has spent 20 years promoting bilingualism and multilingualism in education.
Whereas the presentations from Peru and the United States discuss bilingualism in majority-minority language environments, this presentation will open a discussion on the opportunities and challenges of promoting bilingualism in a majority-majority language environment. It is evident that innovative models and policies are necessary to prevent inequality of opportunity for bilingual children beginning in their earliest years. A cross-cultural examination of bilingual education experiences for young children in three parts of the world allows us to learn from our successes and challenges. The session will end with a discussion of the following question: To what extent are early care and education programs effective in promoting positive development and learning among all children, including those from diverse language, ethnic, and cultural backgrounds? Together with session participants, we expect to identify a set of recommendations for policy and program development that could ensure access to high-quality early education for all bilingual children.

Keywords: early education for bilingual children, global perspectives in early education, cross-cultural, language policies

Procedia PDF Downloads 277
386 Bayesian Structural Identification with Systematic Uncertainty Using Multiple Responses

Authors: André Jesus, Yanjie Zhu, Irwanda Laory

Abstract:

Structural health monitoring (SHM) is one of the most promising technologies for averting structural risk and generating economic savings. Analysts often have to deal with a considerable variety of uncertainties that arise during a monitoring process. In particular, the widespread application of numerical (model-based) methods is accompanied by widespread concern about quantifying the uncertainties involved in their use. Some of these uncertainties are related to the deterministic nature of the model (code uncertainty), others to the variability of its inputs (parameter uncertainty) and to the discrepancy between model and experiment (systematic uncertainty). The actual process always exhibits random behaviour (observation error), even when conditions are set identically (residual variation). Bayesian inference assumes that the parameters of a model are random variables with an associated probability density function, which can be inferred from experimental data. However, in many Bayesian methods the determination of systematic uncertainty can be problematic. In this work, systematic uncertainty is associated with a discrepancy function, and the numerical model and discrepancy function are approximated by Gaussian processes (surrogate models). Finally, to avoid the computational burden of a fully Bayesian approach, the parameters that characterise the Gaussian processes were estimated in a four-stage process (the modular Bayesian approach). This methodology has been successfully applied in fields such as geoscience, biomedicine, and particle physics, but never in the SHM context. The approach considerably reduces the computational burden, although the extent of the considered uncertainties is lower (second-order effects are neglected). To successfully identify the considered uncertainties, the formulation was extended to consider multiple responses. The efficiency of the algorithm was tested on a small-scale aluminium bridge structure subjected to thermal expansion induced by infrared heaters.
A comparison of its performance with responses measured at different points of the structure, and the associated degrees of identifiability, is also carried out. A numerical FEM model of the structure was developed, and the stiffness of its supports is treated as the parameter to calibrate. Results show that the modular Bayesian approach performed best when responses of the same type had the lowest spatial correlation. Based on previous literature, using different types of responses (strain, acceleration, and displacement) should also mitigate the identifiability problem. Uncertainties due to parametric variability, observation error, residual variability, code variability, and systematic uncertainty were all recovered. For this example, the algorithm's performance was stable and considerably quicker than that of Bayesian methods that account for the full extent of the uncertainties. Future research with real-life examples is required to fully assess the advantages and limitations of the proposed methodology.
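The modular calibration idea described above (a Gaussian process emulator of the numerical model plus a Gaussian process discrepancy function fitted to the residuals, in the Kennedy-O'Hagan tradition) can be sketched with a toy one-parameter model standing in for the bridge FEM. All functions, data, and parameter ranges here are illustrative assumptions, not the paper's.

```python
# Sketch of modular GP calibration with a discrepancy function.
# Stage 1 emulates the "numerical model"; stage 2 point-estimates the
# calibration parameter; stage 3 fits the discrepancy GP to residuals.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def simulator(x, theta):
    """Toy numerical model: response at location x with stiffness theta."""
    return theta * np.sin(x)

# Stage 1: GP emulator over (x, theta) trained on simulator runs
X_sim = rng.uniform([0.0, 0.5], [np.pi, 2.0], size=(60, 2))
y_sim = simulator(X_sim[:, 0], X_sim[:, 1])
emulator = GaussianProcessRegressor(RBF([1.0, 1.0]) + WhiteKernel(1e-6))
emulator.fit(X_sim, y_sim)

# "Field" observations: true theta = 1.3, plus systematic discrepancy + noise
x_obs = np.linspace(0.2, 3.0, 15)
y_obs = simulator(x_obs, 1.3) + 0.1 * x_obs + rng.normal(0, 0.02, 15)

# Stage 2: point-estimate theta by minimising squared emulator residuals
thetas = np.linspace(0.5, 2.0, 151)
sse = [np.sum((y_obs - emulator.predict(np.c_[x_obs, np.full(15, t)])) ** 2)
       for t in thetas]
theta_hat = thetas[int(np.argmin(sse))]

# Stage 3: fit the discrepancy GP to the remaining residuals
resid = y_obs - emulator.predict(np.c_[x_obs, np.full(15, theta_hat)])
discrepancy = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(1e-4))
discrepancy.fit(x_obs.reshape(-1, 1), resid)
print(f"theta_hat = {theta_hat:.2f}")
```

Note that an unmodelled discrepancy biases the point estimate of theta, which is precisely why the discrepancy function is carried explicitly; a fully Bayesian treatment would place priors on theta and the GP hyperparameters instead of the grid search used here.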

Keywords: bayesian, calibration, numerical model, system identification, systematic uncertainty, Gaussian process

Procedia PDF Downloads 306
385 From Design, Experience and Play Framework to Common Design Thinking Tools: Using Serious Modern Board Games

Authors: Micael Sousa

Abstract:

Board games (BGs) are thriving, as new designs emerge from the hobby community and reach wider audiences all around the world. Although digital games attract most of the attention in the game studies and serious games research fields, the post-digital movement helps to explain why, in a world dominated by digital technologies, analog experiences remain unique and irreplaceable to users, allowing innovation in new hybrid environments. The new BG designs are part of these post-digital and hybrid movements because they result from powerful digital tools that enable production and knowledge sharing about BGs and their unique face-to-face social experiences. These new BGs, described as modern by many authors, provide innovative designs and game mechanics that have not yet been fully explored by the main serious game (SG) approaches. Even the most established SG frameworks, which treat SGs as fun games implemented to achieve predefined goals, need further development, especially when considering modern BGs. Despite many anecdotal perceptions, researchers are only now starting to rediscover BGs and demonstrate their potential. They are proving that BGs are easy to adapt and easy for non-expert players to grasp in experimental approaches, and that they can readily be adapted to players' profiles and serious objectives even during gameplay. Although there are many design thinking (DT) models and practices, their relations with SG frameworks are also underdeveloped, mostly because this is a new research field lacking theoretical development and systematization of experimental practice. Using BGs as case studies promises to help develop these frameworks.
Departing from the Design, Experience, and Play (DPE) framework and considering Common Design Thinking Tools (CDST), this paper proposes a new experimental framework for adapting modern BG design to DT: the Design, Experience, and Play for Think (DPET) experimental framework. This is done through a systematization of the DPE and CDST approaches applied in two case studies, in which two different sequences of adapted BGs were employed to establish a collaborative DT process. The two sessions took place with different participants, in different contexts, and with different sequences of games for the same DT approach. The first session took place at the Faculty of Economics of the University of Coimbra, in a training session on serious games for project development; the second took place at Casa do Impacto during The Great Village Design Jam Light. Both sessions had the same duration and were designed to progressively achieve DT goals, using BGs as SGs in a collaborative process. The results show that a sequence of BGs, when properly adapted to the DPET framework, can generate a viable and innovative collaborative DT process that is productive, fun, and engaging. The proposed DPET framework is intended to help define new SG solutions for new goals through flexible DT. Applications in other areas of research and development can also benefit from these findings.

Keywords: board games, design thinking, methodology, serious games

Procedia PDF Downloads 91
384 An Integrated HCV Testing Model as a Method to Improve Identification and Linkage to Care in a Network of Community Health Centers in Philadelphia, PA

Authors: Catelyn Coyle, Helena Kwakwa

Abstract:

Objective: As novel and better-tolerated therapies become available, effective HCV testing and care models become increasingly necessary, not only to identify individuals with active infection but also to link them to HCV providers for medical evaluation and treatment. Our aim is to describe an effective HCV testing and linkage-to-care model piloted in a network of five community health centers located in Philadelphia, PA. Methods: In October 2012, the National Nursing Centers Consortium piloted a routine opt-out HCV testing model in a network of community health centers, one of which treats HCV, HIV, and co-infected patients. Key aspects of the model were medical-assistant-initiated testing, the use of laboratory-based reflex testing, and electronic medical record modifications to prompt, track, report, and facilitate payment of test costs. Universal testing of all adult patients was implemented at health centers serving patients at high risk for HCV. The other sites integrated risk-based testing, in which patients who met one or more of the CDC testing-recommendation risk factors or had a history of homelessness were eligible for HCV testing. Mid-course adjustments included the integration of dual HIV testing, the creation of a linkage-to-care coordinator position to facilitate the transition of HIV- and/or HCV-positive patients from primary to specialist care, and the transition to universal HCV testing across all sites. Results: From October 2012 to June 2015, the health centers performed 7,730 HCV tests and identified 886 (11.5%) patients with a positive HCV-antibody test. Of those, 838 (94.6%) had an HCV-RNA confirmatory test, of whom 590 (70.4%) had current HCV infection (overall prevalence = 7.6%); of these 590, 524 (88.8%) received their RNA-positive test result, 429 (72.7%) were referred to an HCV care specialist, and 271 (45.9%) were seen by the HCV care specialist.
The best linkage-to-care results were seen at the test-and-treat site, where, of 333 patients with current HCV infection, 175 (52.6%) were seen by an HCV care specialist. Of the patients with active HCV infection, 349 (59.2%) were unaware of their HCV-positive status at the time of diagnosis. Since the integration of dual HCV/HIV testing in September 2013, 9,506 HIV tests were performed; 85 (0.9%) patients had positive HIV tests, 81 (95.3%) received their confirmed HIV test result, and 77 (90.6%) were linked to HIV care. Dual HCV/HIV testing increased the number of HCV tests performed by 362 between the 9 months preceding dual testing and the first 9 months after its integration, a 23.7% increase. Conclusion: Our HCV testing model shows that integrated routine testing and linkage to care is feasible and improved detection and linkage to care in a primary care setting. We found that the prevalence of current HCV infection was higher than that seen locally in Philadelphia and nationwide. Intensive linkage services can increase the number of patients who successfully navigate the HCV treatment cascade. The linkage-to-care coordinator acts as a trusted intermediary for patients being linked to care.
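The care-cascade percentages reported above follow directly from the raw counts in the abstract. The short sketch below recomputes each step with its denominator made explicit, which is useful for checking which figures are fractions of all tests versus fractions of current infections.

```python
# Recomputing the HCV care cascade from the abstract's counts.
# Each tuple is (step, numerator, denominator); counts come from the text.
cascade = [
    ("antibody-positive",  886, 7730),   # of all HCV tests
    ("RNA tested",         838,  886),   # of antibody-positive patients
    ("RNA-positive",       590,  838),   # current infection
    ("received result",    524,  590),   # of current infections
    ("referred",           429,  590),
    ("seen by specialist", 271,  590),
]
for step, n, denom in cascade:
    print(f"{step}: {n}/{denom} = {100 * n / denom:.1f}%")

# overall prevalence of current infection among all tests
print(f"prevalence: {100 * 590 / 7730:.1f}%")
```

Running this reproduces the abstract's figures (11.5%, 94.6%, 70.4%, 88.8%, 72.7%, 45.9%, and 7.6% prevalence), confirming that the later percentages are all computed against the 590 current infections.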

Keywords: HCV, routine testing, linkage to care, community health centers

Procedia PDF Downloads 335
383 Human Beta Defensin 1 as Potential Antimycobacterial Agent against Active and Dormant Tubercle Bacilli

Authors: Richa Sharma, Uma Nahar, Sadhna Sharma, Indu Verma

Abstract:

Counteracting the deadly pathogen Mycobacterium tuberculosis (M. tb) effectively is still a global challenge. Scrutinizing alternative weapons such as antimicrobial peptides to strengthen the existing tuberculosis arsenal is urgently required. Considering the antimycobacterial potential of Human Beta Defensin 1 (HBD-1) alongside isoniazid, the present study was designed to explore the ability of HBD-1 to act against active and dormant M. tb. HBD-1 was screened in silico using antimicrobial peptide prediction servers to identify a short antimicrobial motif. The activity of both HBD-1 and its selected motif (Pep B) was determined at different concentrations against actively growing M. tb in vitro and ex vivo in monocyte-derived macrophages (MDMs). Log-phase M. tb was grown with HBD-1 or Pep B for 7 days, and M. tb-infected MDMs were treated with HBD-1 or Pep B for 72 hours; colony forming unit (CFU) enumeration was then performed to determine the activity of both peptides against actively growing in vitro and intracellular M. tb. Dormant M. tb models were prepared by two approaches and treated with different concentrations of HBD-1 and Pep B. In the first approach, 20-22-day-old M. tb H37Rv was grown in potassium-deficient Sauton medium for 35 days, and the presence of dormant bacilli was confirmed by Nile red staining. The dormant bacilli were then treated with rifampicin, isoniazid, HBD-1, or Pep B for 7 days, and the effect of both peptides on latent bacilli was assessed by CFU and most probable number (MPN) enumeration. In the second approach, a human PBMC granuloma model was prepared by infecting PBMCs seeded on a collagen matrix with M. tb (MOI 0.1) for 10 days. Histopathology was performed to confirm granuloma formation. The granulomas thus formed were incubated for 72 hours with rifampicin, HBD-1, or Pep B individually, and the difference in bacillary load was determined by CFU enumeration.
The minimum inhibitory concentrations of HBD-1 and Pep B restricting mycobacterial growth in vitro were 2 μg/ml and 20 μg/ml, respectively. The intracellular mycobacterial load was reduced significantly by HBD-1 and Pep B at 1 μg/ml and 5 μg/ml, respectively. A Nile red-positive bacterial population, a high MPN/low CFU count, and tolerance to isoniazid confirmed the establishment of the potassium deficiency-based dormancy model. HBD-1 (8 μg/ml) showed 96% and 99% killing, and Pep B (40 μg/ml) lowered the dormant bacillary load by 68.89% and 92.49%, based on CFU and MPN enumeration, respectively. Further, H&E-stained aggregates of macrophages and lymphocytes, acid-fast bacilli surrounded by cellular aggregates, and rifampicin resistance indicated the establishment of the human granuloma dormancy model. HBD-1 (8 μg/ml) led to an 81.3% reduction in CFU, whereas the Pep B motif (40 μg/ml) produced only a 54.66% decrease in bacterial load inside granulomas. Thus, the present study indicates that HBD-1 and its motif are effective antimicrobial agents against both actively growing and dormant M. tb, and they should be further explored as candidates in the fight against tuberculosis.

Keywords: antimicrobial peptides, dormant, human beta defensin 1, tuberculosis

Procedia PDF Downloads 242
382 The Misuse of Free Cash and Earnings Management: An Analysis of the Extent to Which Board Tenure Mitigates Earnings Management

Authors: Michael McCann

Abstract:

Managerial theories propose that, in joint stock companies, executives may be tempted to waste excess free cash on unprofitable projects in order to keep control of resources. To conceal their projects' poor performance, they may engage in earnings management. On the one hand, managers may manipulate earnings upwards in order to post 'good' performances and safeguard their positions. On the other hand, since managers' pursuit of unrewarding investments is likely to lead to low long-term profitability, managers may use negative accruals to reduce the current year's earnings, smoothing earnings over time in order to conceal the negative effects. Agency models argue that boards of directors are delegated by shareholders to ensure that companies are governed properly; part of that responsibility is ensuring the reliability of financial information. Analyses of the impact of board characteristics, particularly board independence, on the misuse of free cash flow and on earnings management find conflicting evidence. However, existing characterizations of board independence do not account for such directors gaining firm-specific knowledge over time, which influences their monitoring ability. Further, there is little analysis of the influence of the relative experience of independent directors and executives on decisions surrounding the use of free cash. This paper contributes to the literature on the heterogeneous characteristics of boards by investigating the influence on earnings management of independent director tenure and of the relative tenures of independent directors and chief executives. A balanced panel dataset comprising 51 companies across 11 annual periods, from 2005 to 2015, is used for the analysis. In each annual period, firms were classified as conducting earnings management if their discretionary accruals fell in the bottom quartile (downwards) or top quartile (upwards) of the distribution of values for the sample.
Logistic regressions were conducted to determine the marginal impact of independent board tenure and a number of control variables on the probability of conducting earnings management. The findings indicate that neither absolute nor relative measures of board independence and experience have a significant impact on the likelihood of earnings management. It is the level of free cash flow that is the major influence: higher free cash flow increases the probability of earnings management significantly. The research also investigates whether board monitoring of earnings management is contingent on the level of free cash flow; however, the results suggest that board monitoring is not amplified when free cash flow is higher. This suggests that the extent of earnings management in companies is determined by a range of company-, industry-, and situation-specific factors.
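The classification-then-logit design described above can be sketched as follows: flag firm-years whose discretionary accruals fall in the top or bottom quartile of the sample distribution, then regress that flag on free cash flow and controls. All data below are simulated, and the variable names and coefficients are illustrative assumptions, not the paper's estimates.

```python
# Sketch: quartile-based earnings-management flag and a logistic regression
# of the flag on free cash flow and board tenure (simulated panel data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 51 * 11                                    # balanced panel: 51 firms x 11 years
free_cash = rng.normal(0.05, 0.03, n)          # free cash flow scaled by assets
board_tenure = rng.normal(6.0, 2.0, n)         # mean independent-director tenure
accruals = 0.8 * free_cash + rng.normal(0, 0.03, n)   # discretionary accruals

# flag firm-years in the bottom (downward) or top (upward) quartile
lo, hi = np.quantile(accruals, [0.25, 0.75])
earnings_mgmt = ((accruals <= lo) | (accruals >= hi)).astype(int)

X = np.column_stack([free_cash, board_tenure])
logit = LogisticRegression().fit(X, earnings_mgmt)
print("coefficients (free cash, tenure):", logit.coef_[0].round(2))
```

In the actual study the marginal effects of these coefficients are what is interpreted; a significant positive free-cash coefficient would correspond to the paper's finding that higher free cash flow raises the probability of earnings management.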

Keywords: corporate governance, boards of directors, agency theory, earnings management

Procedia PDF Downloads 202
381 Analysis of Digital Transformation in Banking: The Hungarian Case

Authors: Éva Pintér, Péter Bagó, Nikolett Deutsch, Miklós Hetényi

Abstract:

The process of digital transformation has a profound influence on all sectors of the worldwide economy and the business environment. The influence of blockchain technology can be observed in the digital economy and e-government, rendering it an essential element of a nation's growth strategy. The banking industry is experiencing significant expansion and development of financial technology firms. Utilizing emerging technologies such as artificial intelligence (AI), machine learning (ML), and big data (BD), these entrants offer more streamlined financial solutions, promptly address client demands, and present a challenge to incumbent institutions. The advantages of digital transformation are evident in the corporate realm, and firms that resist its adoption put their survival at risk. The advent of digital technologies has revolutionized the business environment, streamlining processes and creating opportunities for enhanced communication and collaboration. With the aid of digital technologies, businesses can now swiftly retrieve vast quantities of information while accelerating the creation of new and improved products and services. Big data analytics is generally recognized as a transformative force in business, considered the fourth paradigm of science and seen as the next frontier for innovation, competition, and productivity. Big data, an emerging technology shaping the future of the banking sector, offers numerous advantages to banks: it enables them to track consumer behavior effectively and make informed decisions, thereby enhancing their operational efficiency. Banks may embrace big data technologies to identify fraud promptly and efficiently, as well as to gain insights into client preferences, which can then be leveraged to create better-tailored products and services.
Moreover, big data technology empowers banks to develop more intelligent and streamlined models for accurately identifying and targeting the right clientele with relevant offers. There is a scarcity of research on big data analytics in the banking industry, with the majority of existing studies examining only the advantages and prospects associated with big data. Although big data technologies are crucial, there is a dearth of empirical evidence about the role of big data analytics (BDA) capabilities in bank performance. This research addresses a gap in the existing literature by introducing a model that combines the resource-based view (RBV), the technology-organization-environment (TOE) framework, and dynamic capability (DC) theory. The study investigates the influence of BDA utilization on the performance of market and risk management, supported by a comparative examination of Hungarian mobile banking services.

Keywords: big data, digital transformation, dynamic capabilities, mobile banking

Procedia PDF Downloads 25
380 Modeling the Relation between Discretionary Accrual Earnings Management, International Financial Reporting Standards and Corporate Governance

Authors: Ikechukwu Ndu

Abstract:

This study examines the econometric modeling of the relation between discretionary accrual earnings management, International Financial Reporting Standards (IFRS), and certain corporate governance factors with regard to listed Nigerian non-financial firms. Although discretionary accrual earnings management is a well-known and global problem that has an adverse impact on users of financial statements, its relationship with IFRS and corporate governance is neither adequately researched nor systematically investigated in Nigeria. This dearth of research has made it difficult for academics, practitioners, governments, standard-setting bodies, regulators, and international bodies to achieve a clearer understanding of how discretionary accrual earnings management relates to IFRS and certain corporate governance characteristics. To the author's best knowledge, this is the first study to date that makes research contributions that significantly add to the literature on discretionary accrual earnings management and its relation with corporate governance and IFRS in the Nigerian context. A comprehensive review is undertaken of the literature on discretionary total accrual earnings management, IFRS, and certain corporate governance characteristics, as well as of the data, models, methodologies, and different estimators used in the study. Secondary financial statement, IFRS, and corporate governance data are sourced from the Bloomberg database and the published financial statements of Nigerian non-financial firms for the period 2004 to 2016. The methodology uses both the total accrual and working capital accrual bases. This study has a number of interesting preliminary findings. First, there is a negative relationship between the level of discretionary accrual earnings management and the adoption of IFRS. 
However, this relationship does not appear to be statistically significant. Second, there is a significant negative relationship between the size of the board of directors and discretionary accrual earnings management. Third, separation of the CEO and Chairman roles does not constrain earnings management, suggesting a tendency to preserve relationships, personal connections, and close friendships between the CEO, Chairman, and executive directors. Fourth, there is a significant negative relationship between discretionary accrual earnings management and the use of a Big Four firm as auditor. Fifth, including shareholders in the audit committee leads to a reduction in discretionary accrual earnings management. Sixth, the debt and return on assets (ROA) variables are significant and positively related to discretionary accrual earnings management. Finally, the company size variable, indicated by the log of assets, is surprisingly not statistically significant, which indicates that Nigerian companies of all sizes engage in discretionary accrual earnings management. In conclusion, this study provides key insights that enable a better understanding of the relationship between discretionary accrual earnings management, IFRS, and corporate governance in the Nigerian context. It is expected that the results of this study will be of interest to academics, practitioners, regulators, governments, international bodies, and other parties involved in policy setting and economic development in the areas of financial reporting, securities regulation, accounting harmonization, and corporate governance.
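The abstract does not name the exact accrual specification; a common choice for extracting discretionary accruals is a Jones-type regression, in which total accruals (scaled by lagged assets) are regressed on non-discretionary drivers and the OLS residual is taken as the discretionary component. The sketch below assumes that specification; the firm-year numbers are invented for illustration and are not the study's data.

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols_residuals(X, y):
    """OLS residuals of y on the columns of X (via the normal equations)."""
    n, k = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    beta = gauss_solve(XtX, Xty)
    return [y[i] - sum(beta[j] * X[i][j] for j in range(k)) for i in range(n)]

# Illustrative firm-year rows: [1/lagged assets (scaled), (dREV - dREC)/assets, PPE/assets].
# In Jones-type models the 1/assets column stands in for an intercept.
X = [[0.5, 0.10, 0.40], [0.8, 0.05, 0.35], [0.6, 0.08, 0.50],
     [1.2, 0.12, 0.30], [1.0, 0.02, 0.45]]
total_accruals = [0.06, 0.03, 0.05, 0.08, 0.02]   # TA / lagged assets
discretionary = ols_residuals(X, total_accruals)  # residual = discretionary accrual
```

Because the inverse-assets term replaces an intercept, the residuals are orthogonal to the regressors but need not sum to zero.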

Keywords: discretionary accrual earnings management, earnings manipulation, IFRS, corporate governance

Procedia PDF Downloads 112
379 Multi-Objective Optimization (Pareto Sets) and Multi-Response Optimization (Desirability Function) of Microencapsulation of Emamectin

Authors: Victoria Molina, Wendy Franco, Sergio Benavides, José M. Troncoso, Ricardo Luna, José R. Pérez-Correa

Abstract:

Emamectin benzoate (EB) is a crystalline antiparasitic that belongs to the avermectin family. It is one of the most common treatments used in Chile to control Caligus rogercresseyi in Atlantic salmon. However, sea lice acquire resistance to EB when exposed to sublethal doses. The low solubility rate of EB and its degradation at the acidic pH of the fish digestive tract are the causes of the slow absorption of EB in the intestine. To protect EB from degradation and enhance its absorption, specific microencapsulation technologies must be developed. Amorphous solid dispersion techniques such as spray drying (SD) and ionic gelation (IG) seem adequate for this purpose. Recently, Soluplus® (SOL) has been used to increase the solubility rate of several drugs with characteristics similar to those of EB. In addition, alginate (ALG) is a polymer widely used in IG for biomedical applications. Regardless of the encapsulation technique, the quality of the obtained microparticles is evaluated with the following responses: yield (Y%), encapsulation efficiency (EE%), and loading capacity (LC%). In addition, it is important to know the percentage of EB released from the microparticles in gastric (GD%) and intestinal (ID%) digestions. In this work, we microencapsulated EB with SOL (EB-SD) and with ALG (EB-IG) using SD and IG, respectively. Quality microencapsulation responses and in vitro gastric and intestinal digestions at pH 3.35 and 7.8, respectively, were obtained. A central composite design was used to find the optimum microencapsulation variables (amount of EB, amount of polymer, and feed flow). In each formulation, the behavior of these variables was predicted with statistical models. Then, response surface methodology was used to find the combination of factors that allowed a lower EB release under gastric conditions while permitting a greater release during intestinal digestion. Two approaches were used to determine this. 
These approaches were the desirability approach (DA) and multi-objective optimization (MOO) with multi-criteria decision-making (MCDM). Both microencapsulation techniques maintained the integrity of EB at acidic pH, given the small amount of EB released in the gastric medium, while EB-IG microparticles showed greater EB release during intestinal digestion. For EB-SD, the optimal conditions obtained with MOO plus MCDM yielded a good compromise among the microencapsulation responses. In addition, under these conditions it is possible to reduce microparticle costs, owing to a 60% reduction in EB with respect to the optimum proposed by the DA. For EB-IG, the optimization techniques used (DA and MOO) yielded solutions with different advantages and limitations. Applying the DA, costs can be reduced by 21%, while Y, GD, and ID were 9.5%, 84.8%, and 2.6% lower than in the best condition. In turn, MOO yielded better microencapsulation responses, but at a higher cost. Overall, EB-SD with operating conditions selected by MOO seems the best option, since a good compromise between costs and encapsulation responses was obtained.
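The desirability approach named above can be sketched as follows: each response is mapped onto a [0, 1] desirability (smaller-the-better for gastric release, larger-the-better for yield and intestinal release) and candidate formulations are ranked by the geometric mean of the individual desirabilities. The target and limit values here are illustrative assumptions, not the paper's measurements.

```python
from math import prod

def d_larger(y, low, target, s=1.0):
    """Larger-the-better desirability (e.g. yield Y% or intestinal release ID%)."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** s

def d_smaller(y, target, high, s=1.0):
    """Smaller-the-better desirability (e.g. gastric release GD%)."""
    if y >= high:
        return 0.0
    if y <= target:
        return 1.0
    return ((high - y) / (high - target)) ** s

def overall(ds):
    """Overall desirability: geometric mean of the individual desirabilities."""
    return prod(ds) ** (1.0 / len(ds))

# One candidate formulation: yield 80%, gastric release 10%, intestinal release 70%
D = overall([d_larger(80, 50, 95),   # Y%  (low/target limits assumed)
             d_smaller(10, 5, 40),   # GD% (target/high limits assumed)
             d_larger(70, 30, 90)])  # ID% (low/target limits assumed)
```

Because the geometric mean is zero whenever any single desirability is zero, a formulation that completely fails one response is rejected no matter how good the others are.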

Keywords: microencapsulation, multiple decision-making criteria, multi-objective optimization, Soluplus®

Procedia PDF Downloads 100
378 Influence of Glass Plates Different Boundary Conditions on Human Impact Resistance

Authors: Alberto Sanchidrián, José A. Parra, Jesús Alonso, Julián Pecharromán, Antonia Pacios, Consuelo Huerta

Abstract:

Glass is a commonly used building material; there is no unique design solution, as plates with different numbers of layers and interlayers may be used. In most façades, security glazing has to be used according to its performance in the pendulum impact test. The European Standard EN 12600 establishes an impact test procedure for classifying flat plates of different thicknesses from the point of view of human safety, using a pendulum of two tires and 50 kg mass that impacts the plate from different heights. However, this test does not replicate the actual dimensions and border conditions used in building configurations, so the real stress distribution is not determined with this test. The influence of different boundary conditions, such as the ones employed on construction sites, is not well taken into account when testing the behaviour of safety glazing, and there is no detailed procedure or criterion to determine glass resistance against human impact. To reproduce the actual boundary conditions on site, when needed, the pendulum test is arranged to be used "in situ", with no load control or stiffness control and without a standard procedure. Fracture stresses of small and large glass plates fit a Weibull distribution with considerable dispersion, so conservative values are adopted for the admissible fracture stress under static loads. In fact, tests performed for human impact give a fracture strength two or three times higher, and often without total fracture of the glass plate. Newer standards, such as DIN 18008-4, state an admissible fracture stress 2.5 times higher than the one used for static and wind loads. Two working areas are therefore open: a) to define a standard for the ‘in situ’ test; b) to prepare a laboratory procedure that allows testing with a more realistic stress distribution. 
To work on both research lines, a laboratory setup that allows testing of medium-size specimens with different border conditions has been developed. A special steel frame allows reproducing the stiffness of the glass support substructure, including a rigid condition used as reference. The dynamic behaviour of the glass plate and its support substructure has been characterized with finite element models updated with modal test results. In addition, a new portable impact machine is being used to obtain sufficient force and direction control during the impact test. An impact energy of 100 J is used. To avoid problems with broken glass plates, the tests have been done using an aluminium plate of 1000 mm x 700 mm size and 10 mm thickness supported on four sides; three different substructure stiffness conditions are used. A detailed check of the dynamic stiffness and behaviour of the plate is done with modal tests. Repeatability of the test and reproducibility of the results prove that a procedure to control both the stiffness of the plate and the impact level is necessary.
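The Weibull treatment of fracture stress mentioned above can be sketched with a two-parameter model: the admissible (design) stress for a chosen failure probability is the corresponding fractile of the distribution. The scale and shape values below are illustrative, not fitted to the authors' tests.

```python
from math import exp, log

def fracture_probability(stress, scale, shape):
    """Weibull CDF: probability that the plate fails at or below `stress` (MPa)."""
    return 1.0 - exp(-((stress / scale) ** shape))

def design_stress(p_fail, scale, shape):
    """Stress whose failure probability equals `p_fail` (inverse Weibull CDF)."""
    return scale * (-log(1.0 - p_fail)) ** (1.0 / shape)

# Admissible stress at the 5% fracture fractile for an assumed population
sigma_adm = design_stress(0.05, scale=60.0, shape=6.0)
```

A small shape parameter (large dispersion) pushes the low fractile far below the characteristic strength, which is why static design values are so conservative relative to the strengths observed in impact tests.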

Keywords: glass plates, human impact test, modal test, plate boundary conditions

Procedia PDF Downloads 283
377 Hygro-Thermal Modelling of Timber Decks

Authors: Stefania Fortino, Petr Hradil, Timo Avikainen

Abstract:

Timber bridges have an excellent environmental performance, are economical, relatively easy to build, and can have a long service life. However, the durability of these bridges is the main problem because of their exposure to outdoor climate conditions. The moisture content accumulated in wood for long periods, in combination with certain temperatures, may cause conditions suitable for timber decay. In addition, moisture content variations affect the structural integrity, serviceability, and loading capacity of timber bridges. Therefore, the monitoring of the moisture content in wood is important for the durability of the material but also for the whole superstructure. The measurements obtained by the usual sensor-based techniques provide hygro-thermal data only at specific locations of the wood components. In this context, the monitoring can be assisted by numerical modelling to obtain more information on the hygro-thermal response of the bridges. This work presents a hygro-thermal model based on a multi-phase moisture transport theory to predict the distribution of moisture content, relative humidity, and temperature in wood. Below the fibre saturation point, the multi-phase theory simulates three phenomena in cellular wood during moisture transfer, i.e., the diffusion of water vapour in the pores, the sorption of bound water, and the diffusion of bound water in the cell walls. In the multi-phase model, the two water phases are separated, and the coupling between them is defined through a sorption rate. Furthermore, an average between the temperature-dependent adsorption and desorption isotherms is used. In previous works by some of the authors, this approach was found to be well suited to studying the moisture transport in uncoated and coated stress-laminated timber decks. 
Compared to previous works, the hygro-thermal fluxes on the external surfaces include the influence of the absorbed solar radiation over time; consequently, the temperatures on the surfaces exposed to the sun are higher. This affects the whole hygro-thermal response of the timber component. The multi-phase model, implemented in a user subroutine of the Abaqus FEM code, provides the distribution of the moisture content, the temperature, and the relative humidity in a volume of the timber deck. As a case study, the hygro-thermal data in wood are collected from the ongoing monitoring of the stress-laminated timber deck of Tapiola Bridge in Finland, based on integrated humidity-temperature sensors, and the numerical results are found to be in good agreement with the measurements. The proposed model, used to assist the monitoring, can contribute to reducing the maintenance costs of bridges as well as the cost of instrumentation, and to increasing safety.
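The two-phase coupling described above (vapour diffusion in the pores, bound-water diffusion in the cell walls, mass exchange through a sorption rate) can be sketched with an explicit 1D finite-difference scheme. This is a minimal toy version, not the authors' Abaqus user subroutine: the sorption driving force is simplified to a plain concentration difference rather than an isotherm-based equilibrium, and the grid and coefficients are illustrative.

```python
def step(cv, cb, Dv, Db, k, dx, dt):
    """One explicit step of the coupled vapour / bound-water transport equations."""
    n = len(cv)

    def lap(u, i):  # 1D Laplacian with zero-flux (mirrored) boundaries
        left = u[i - 1] if i > 0 else u[i]
        right = u[i + 1] if i < n - 1 else u[i]
        return (left - 2.0 * u[i] + right) / dx**2

    # Sorption rate: vapour -> bound water (simplified equilibrium, see lead-in)
    sorption = [k * (cv[i] - cb[i]) for i in range(n)]
    new_cv = [cv[i] + dt * (Dv * lap(cv, i) - sorption[i]) for i in range(n)]
    new_cb = [cb[i] + dt * (Db * lap(cb, i) + sorption[i]) for i in range(n)]
    return new_cv, new_cb

# Humid air at the left face of a small slab, dry interior
cv = [1.0] + [0.0] * 9   # water vapour concentration in the pores
cb = [0.0] * 10          # bound water concentration in the cell walls
for _ in range(200):
    cv, cb = step(cv, cb, Dv=1e-2, Db=1e-4, k=0.05, dx=0.01, dt=0.001)
```

With zero-flux boundaries the scheme conserves total moisture exactly, which is a convenient sanity check on any implementation of the coupled equations.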

Keywords: moisture content, multi-phase models, solar radiation, timber decks, FEM

Procedia PDF Downloads 143
376 Budgetary Performance Model for Managing Pavement Maintenance

Authors: Vivek Hokam, Vishrut Landge

Abstract:

An ideal maintenance program for an industrial road network is one that would maintain all sections at a sufficiently high level of functional and structural condition. However, due to various constraints such as budget, manpower, and equipment, it is not possible to carry out maintenance on all needy industrial road sections within a given planning period. A rational and systematic priority scheme needs to be employed to select and schedule industrial road sections for maintenance. Priority analysis is a multi-criteria process that determines the best ranking list of sections for maintenance based on several factors. In priority setting, difficult decisions are required for the selection of sections: is it more important to repair a section in poor functional condition (e.g., uncomfortable ride quality) or one in poor structural condition, i.e., a section in danger of becoming structurally unsound? It would seem, therefore, that any rational priority-setting approach must consider the relative importance of the functional and structural condition of the section. Existing maintenance priority indices and pavement performance models tend to focus mainly on pavement condition, traffic criteria, etc. There is a need to develop a model that can be used with the limited budget provisions for pavement maintenance. Linear programming is one of the most popular and widely used quantitative techniques. A linear programming model provides an efficient method for determining an optimal decision chosen from a large number of possible decisions. The optimum decision is one that meets a specified objective of management, subject to various constraints and restrictions. The objective here is mainly the minimization of the maintenance cost of roads in an industrial area. In order to determine the objective function for the analysis of the distress model, it is necessary to fit realistic data into the formulation. 
Each type of repair is quantified in a number of stretches, considering 1000 m as one stretch; the road section under study is 3750 m long. The quantities are put into an objective function that maximizes the number of repairs in a stretch. The distresses observed in this section are potholes, surface cracks, rutting, and ravelling. The distress data are measured manually by observing each distress level on a stretch of 1000 m. The maintenance and rehabilitation measures currently followed are based on subjective judgments. Hence, there is a need to adopt a scientific approach in order to use the limited resources effectively. It is also necessary to determine the pavement performance and deterioration prediction relationships more accurately, together with the economic benefits to road networks with respect to vehicle operating cost. The road network infrastructure should deliver the best results expected from the available funds. In this paper, the objective function for the distress model is determined by linear programming, and a deterioration model considering overloading is discussed.
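The budget-constrained objective described above, maximizing the number of repairs subject to a cost limit and to the observed distress quantities, can be sketched as a small bounded-integer program. The costs, distress quantities, and budget below are invented for illustration, and this toy instance is solved by exhaustive search over the small decision space rather than by the simplex method the authors would apply to the continuous LP.

```python
from itertools import product as cartesian

# Assumed unit repair costs and observed distress quantities per stretch
cost = {"potholes": 12, "surface cracks": 8, "rutting": 20, "ravelling": 10}
demand = {"potholes": 5, "surface cracks": 7, "rutting": 3, "ravelling": 6}
budget = 120

names = list(cost)
best, best_plan = -1, None
# Enumerate every feasible integer plan 0 <= x_d <= demand_d
for plan in cartesian(*(range(demand[n] + 1) for n in names)):
    total_cost = sum(q * cost[n] for q, n in zip(plan, names))
    if total_cost <= budget and sum(plan) > best:
        best, best_plan = sum(plan), dict(zip(names, plan))
```

The search confirms the intuition behind the LP: within a fixed budget, the cheapest repair types (here surface cracks and ravelling) dominate the optimal plan.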

Keywords: budget, maintenance, deterioration, priority

Procedia PDF Downloads 173
375 Increasing Recoverable Oil in Northern Afghanistan Kashkari Oil Field by Low-Salinity Water Flooding

Authors: Zabihullah Mahdi, Khwaja Naweed Seddiqi

Abstract:

Afghanistan is located in a tectonically complex and dynamic area, surrounded by rocks that originated on the mother continent of Gondwanaland. The northern Afghanistan basin, which runs along the country's northern border, has the potential for petroleum generation and accumulation. The Amu Darya basin has the largest petroleum potential in the region. Sedimentation occurred in the Amu Darya basin from the Jurassic to the Eocene epochs. The Kashkari oil field is located in northern Afghanistan's Amu Darya basin. The field structure consists of a narrow northeast-southwest (NE-SW) anticline with two structural highs, the northwest limb being mild and the southeast limb being steep. The first oil production well in the Kashkari oil field was drilled in 1976, and a total of ten wells were drilled in the area between 1976 and 1979. The amount of original oil in place (OOIP) in the Kashkari oil field, based on the results of surveys and calculations conducted by research institutions, is estimated to be around 140 MMbbls. The objective of this study is to increase the recoverable oil reserves of the Kashkari oil field through the implementation of the low-salinity water flooding (LSWF) enhanced oil recovery (EOR) technique. The LSWF study involved conducting a core-flooding laboratory test consisting of four sequential steps with varying salinities. The test commenced with formation water (FW) as the initial salinity, which was subsequently reduced to a salinity level of 0.1%. Afterward, a numerical simulation model of core-scale oil recovery by LSWF was designed with Computer Modelling Group’s General Equation Modeler (CMG-GEM) software to evaluate the applicability of the technology to the field scale. Next, the Kashkari oil field simulation model was designed, and the LSWF method was applied to it. 
To obtain reasonable results, the laboratory settings (temperature, pressure, rock, and oil characteristics) were designed as far as possible based on the conditions of the Kashkari oil field, and several injection and production patterns were investigated. The relative permeabilities of oil and water in this study were obtained using Corey's equation. In the Kashkari oilfield simulation model, three models were considered for the evaluation of the LSWF effect on oil recovery: 1. a base model (with no water injection), 2. an FW injection model, and 3. an LSW injection model. Based on the results of the LSWF laboratory experiment and computer simulation analysis, the oil recovery increased rapidly after the FW was injected into the core. Subsequently, by injecting 1% salinity water, a gradual increase of 4% in oil recovery can be observed. About 6.4% of the field's oil is produced by the application of the LSWF technique. The results of LSWF (salinity 0.1%) on the Kashkari oil field suggest that this technology can be a successful method for developing Kashkari oil production.
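Corey's equation, used above for the oil and water relative permeabilities, can be sketched directly; the endpoint permeabilities, residual saturations, and exponents below are illustrative assumptions, not values calibrated to Kashkari cores.

```python
def corey_krw_kro(sw, swc=0.2, sor=0.25, krw_max=0.3, kro_max=0.8, nw=2.0, no=2.0):
    """Water/oil relative permeabilities at water saturation `sw` (Corey model).

    swc: connate water saturation; sor: residual oil saturation;
    krw_max/kro_max: endpoint permeabilities; nw/no: Corey exponents.
    """
    se = (sw - swc) / (1.0 - swc - sor)   # normalized (effective) saturation
    se = min(max(se, 0.0), 1.0)           # clamp outside the mobile range
    krw = krw_max * se ** nw
    kro = kro_max * (1.0 - se) ** no
    return krw, kro

# Mid-range saturation: both phases mobile
krw, kro = corey_krw_kro(0.5)
```

In a simulator such as CMG-GEM the low-salinity effect is typically represented by shifting these curves (e.g. lower sor and higher kro at a given saturation) as salinity drops, which is what produces the incremental recovery.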

Keywords: low-salinity water flooding, immiscible displacement, Kashkari oil field, two-phase flow, numerical reservoir simulation model

Procedia PDF Downloads 4
374 Development of a Finite Element Model of the Upper Cervical Spine to Evaluate the Atlantoaxial Fixation Techniques

Authors: Iman Zafarparandeh, Muzammil Mumtaz, Paniz Taherzadeh, Deniz Erbulut

Abstract:

Instability in the atlantoaxial joint may occur due to cervical surgery, congenital anomalies, and trauma. Different types of fixation techniques have been proposed for restoring stability and preventing harmful neurological deterioration. The application of screw constructs has become a popular alternative to older techniques for stabilizing the joint. The main difference between the various screw constructs is the type of screw, which can be a lateral mass screw, pedicle screw, transarticular screw, or translaminar screw. The aim of this paper is to study the effect of three popular screw-construct fixation techniques on the biomechanics of the atlantoaxial joint using the finite element (FE) method. A three-dimensional FE model of the upper cervical spine, including the skull, the C1 and C2 vertebrae, and groups of the existing ligaments, was developed. The accurate geometry of the model was obtained from the CT data of a 35-year-old male. Three screw constructs were designed for comparison: the Magerl transarticular screw (TA-Screw), the Goel-Harms lateral mass screw and pedicle screw (LM-Screw and Pedicle-Screw), and the Wright lateral mass screw and translaminar screw (LM-Screw and TL-Screw). Pure moments were applied to the model in the three main planes: flexion (Flex), extension (Ext), axial rotation (AR), and lateral bending (LB). The ranges of motion (ROM) of the C0-C1 and C1-C2 segments for the implanted FE models are compared to the intact FE model and to the in vitro study of Panjabi (1988). The Magerl technique showed less effect on the ROM of C0-C1 than the other two techniques in the sagittal plane. In lateral bending and axial rotation, the Goel-Harms and Wright techniques showed less effect on the ROM of C0-C1 than the Magerl technique. The Magerl technique has the highest fusion rate, 99%, in all loading directions for the C1-C2 segment. The Wright technique has the lowest fusion rate, 79%, in LB. 
The three techniques resulted in the same fusion rate, 99%, under extension loading. The maximum stress for the Magerl technique is the lowest in all load directions compared to the other two techniques. The maximum stress across all directions was 234 MPa and occurred in flexion with the Wright technique. The maximum stress for the Goel-Harms and Wright techniques occurred in the lateral mass screw. The ROM obtained from the FE results supports the idea that the fusion rate of the Magerl technique is more than 99%. Moreover, the maximum stress occurring in each screw construct indicates a lower failure possibility for the Magerl technique. Another advantage of the Magerl technique is the smaller number of components compared to the other techniques using screw constructs. Despite the benefits of the Magerl technique, there are drawbacks to using this method, such as the required reduction of C1 and C2 before screw placement. Therefore, other fixation methods such as the Goel-Harms and Wright techniques address the drawbacks of the Magerl technique by adding screws separately to C1 and C2. The FE model implanted with the Wright technique showed the highest maximum stress in almost all load directions.
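A fusion rate as quoted above can be read off FE results as the percentage reduction of the implanted segment's range of motion relative to the intact model; a minimal sketch, with illustrative ROM values rather than the paper's numbers:

```python
def fusion_rate(rom_intact, rom_implanted):
    """Percent reduction in segmental range of motion after fixation."""
    return 100.0 * (1.0 - rom_implanted / rom_intact)

# Hypothetical C1-C2 axial rotation ROM in degrees: intact vs. instrumented
rate = fusion_rate(rom_intact=38.9, rom_implanted=0.39)
```

A rate near 100% means the construct has almost eliminated motion at the segment, which is the biomechanical surrogate for a high likelihood of bony fusion.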

Keywords: cervical spine, finite element model, atlantoaxial, fixation technique

Procedia PDF Downloads 360
373 The Role of Macroeconomic Condition and Volatility in Credit Risk: An Empirical Analysis of Credit Default Swap Index Spread on Structural Models in U.S. Market during Post-Crisis Period

Authors: Xu Wang

Abstract:

This research builds linear regressions of the investment-grade and high-yield Credit Default Swap index spreads on U.S. macroeconomic condition and volatility measures, using monthly data from March 2009 to July 2016, to study the relationship between different dimensions of the macroeconomy and overall credit quality. The most significant contribution of this research is systematically examining the individual and joint effects of macroeconomic condition and volatility on CDX spreads by including macroeconomic time series that capture different dimensions of the U.S. economy. Industrial production index growth, non-farm payroll growth, consumer price index growth, the 3-month treasury rate, and consumer sentiment are introduced to capture the condition of real economic activity, employment, inflation, monetary policy, and risk aversion, respectively. The conditional variance of each macroeconomic series is constructed using an ARMA-GARCH model and is used to measure macroeconomic volatility. A linear regression model is used to capture the relationships between monthly average CDX spreads and the macroeconomic variables. The Newey-West estimator is used to control for autocorrelation and heteroskedasticity in the error terms. Furthermore, sensitivity factor analysis and standardized coefficients analysis are conducted to compare the sensitivity of CDX spreads to different macroeconomic variables and to compare the relative effects of macroeconomic condition versus macroeconomic uncertainty, respectively. This research shows that macroeconomic condition has a negative effect on the CDX spread, while macroeconomic volatility has a positive effect. Together, the macroeconomic condition and volatility variables can explain more than 70% of the variation of the CDX spread. In addition, the sensitivity factor analysis shows that the CDX spread is most sensitive to the Consumer Sentiment index. 
Finally, the standardized coefficients analysis shows that both the macroeconomic condition and volatility variables are important in determining the CDX spread, but the macroeconomic condition variables have more relative importance than the macroeconomic volatility variables. This research shows that the CDX spread reflects the individual and joint effects of macroeconomic condition and volatility, which suggests that individual investors and governments should regard the CDX spread carefully as a measure of overall credit risk, because the CDX spread is influenced by the macroeconomy. In addition, the significance of macroeconomic condition and volatility variables such as the non-farm payroll growth rate and industrial production index growth volatility suggests that the government should pay more attention to the overall credit quality in the market when the macroeconomy is weak or volatile.
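The conditional-variance construction described above can be sketched with the GARCH(1,1) recursion applied to ARMA residuals; the parameter values and the residual series below are illustrative, not estimates from the study's macroeconomic data.

```python
def garch11_variance(residuals, omega, alpha, beta):
    """Conditional variances h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1}."""
    # Initialize at the unconditional variance omega / (1 - alpha - beta)
    h = [omega / (1.0 - alpha - beta)]
    for e in residuals[:-1]:
        h.append(omega + alpha * e * e + beta * h[-1])
    return h

# Synthetic ARMA residuals standing in for a macro series' innovations
eps = [0.1, -0.5, 0.2, 0.8, -0.1, 0.05]
h = garch11_variance(eps, omega=0.02, alpha=0.1, beta=0.85)
```

The recursion makes the volatility measure persistent (through beta) yet reactive to shocks (through alpha): the large innovation at t = 3 raises the conditional variance at t = 4, which is exactly the behaviour the study exploits when relating macroeconomic uncertainty to CDX spreads.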

Keywords: autoregressive moving average model, credit spread puzzle, credit default swap spread, generalized autoregressive conditional heteroskedasticity model, macroeconomic conditions, macroeconomic uncertainty

Procedia PDF Downloads 145
372 Applying Biculturalism in Studying Tourism Host Community Cultural Integrity and Individual Member Stress

Authors: Shawn P. Daly

Abstract:

Communities heavily engaged in the tourism industry discover that their values intersect, meld, and conflict with those of visitors. Maintaining cultural integrity in the face of powerful external pressures causes stress among society members. This effect represents a less studied aspect of sustainable tourism. The present paper brings a perspective unique to the tourism literature: biculturalism. The grounded theories, coherent hypotheses, and validated constructs and indicators of biculturalism represent a sound base from which to consider sociocultural issues in sustainable tourism. Five models describe the psychological state of individuals operating at cultural crossroads: assimilation (joining the new culture), acculturation (grasping the new culture but remaining of the original culture), alternation (varying behavior to cultural context), multicultural (maintaining distinct cultures), and fusion (blending cultures). These five processes divide into two units of analysis (individual and society), permitting research questions at levels important for considering sociocultural sustainability. Acculturation modelling has morphed into the dual processes of acculturation (new-culture adaptation) and enculturation (original-culture adaptation). This dichotomy divides sustainability research questions into human impacts from assimilation (acquiring the new culture, discarding the original), separation (rejecting the new culture, keeping the original), integration (acquiring the new culture, keeping the original), and marginalization (rejecting the new culture, discarding the original). Biculturalism is often cast in terms of its emotional, behavioral, and cognitive dimensions. Required cultural adjustments and varying levels of cultural competence lead to physical, psychological, and emotional outcomes, including depression, lowered life satisfaction and self-esteem, headaches, and back pain, or to enhanced career success, social skills, and lifestyles. 
Numerous studies provide empirical scales and research hypotheses for sustainability research into tourism’s causality and effect on local well-being. One key issue in applying biculturalism to sustainability scholarship concerns the identification and specification of the alternative new culture that contacts the local culture. Evidence exists for a tourism-industry culture, a universal tourist culture, and location- or event-specific tourist cultures. The biculturalism paradigm holds promise for researchers examining evolving cultural identity and integrity in response to mass tourism. In particular, confirmed constructs and scales simplify the operationalization of tourism sustainability studies in terms of human impact and adjustment.

Keywords: biculturalism, cultural integrity, psychological and sociocultural adjustment, tourist culture

Procedia PDF Downloads 382
371 Influence of Structured Capillary-Porous Coatings on Cryogenic Quenching Efficiency

Authors: Irina P. Starodubtseva, Aleksandr N. Pavlenko

Abstract:

Quenching is the generally accepted term for the process of rapid cooling of a solid that is overheated above the thermodynamic limit of liquid superheat. The main objective of many previous studies on quenching is to find a way to reduce the total time of the transient process. Computational experiments were performed to simulate quenching by a falling liquid nitrogen film of an extremely overheated vertical copper plate with a structured capillary-porous coating. The coating was produced by directed plasma spraying. Due to the complexity of the physical pattern of quenching, from chaotic processes to phase transition, the mechanism of heat transfer during quenching is still not sufficiently understood. To the best of our knowledge, no information exists on when and how the first stable liquid-solid contact occurs and how the local contact area begins to expand. Here we have more models and hypotheses than firmly established facts. The peculiarities of the quench front dynamics and heat transfer in the transient process are studied. The numerical model created here determines the quench front velocity and the temperature fields in the heater, varying in space and time. The dynamic pattern of the running quench front obtained numerically correlates satisfactorily with the pattern observed in experiments. Capillary-porous coatings with straight and reverse orientations of the crests are investigated. The results show that the cooling rate is influenced by the thermal properties of the coating as well as by the structure and geometry of the protrusions. The presence of a capillary-porous coating significantly affects the dynamics of quenching and reduces the total quenching time more than threefold. This effect is due to the fact that the initialization of a quench front on a plate with a capillary-porous coating occurs at a temperature significantly higher than the thermodynamic limit of liquid superheat, at which a stable solid-liquid contact is thermodynamically impossible. 
Waves present on the liquid-vapor interface and protrusions on the complex micro-structured surface destabilize the vapor film and cause local liquid-solid micro-contacts to appear, even though the average integral surface temperature is much higher than the liquid superheat limit. The reliability of the results is confirmed by direct comparison with experimental data on the quench front velocity, the quench front geometry, and the change of the surface temperature over time. Knowledge of the quench front velocity and of the total time of the transient process is required for solving practically important problems of nuclear reactor safety.
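The quench front tracking described above can be sketched with a 1D conduction model in which nodes below a rewetting temperature are cooled strongly (wetted) and nodes above it only weakly (film boiling); the front is simply the boundary between the two regimes, and its advance over time gives the front velocity. This is a toy illustration with invented coefficients, not the authors' calibrated model.

```python
def quench_step(T, t_rewet, t_liquid, a, dx, dt, h_wet, h_dry):
    """One explicit step: axial conduction plus wetted/film-boiling cooling."""
    n = len(T)
    new = []
    for i in range(n):
        left = T[i - 1] if i > 0 else T[i]
        right = T[i + 1] if i < n - 1 else T[i]
        cond = a * (left - 2.0 * T[i] + right) / dx**2   # conduction along the plate
        h = h_wet if T[i] < t_rewet else h_dry           # wetted vs dry (vapor film)
        new.append(T[i] + dt * (cond - h * (T[i] - t_liquid)))
    return new

T = [120.0] * 30   # overheated wall temperature profile (arbitrary units)
T[0] = 20.0        # first node already rewetted by the falling film
front = []         # number of quenched nodes after each step
for _ in range(400):
    T = quench_step(T, t_rewet=60.0, t_liquid=-196.0, a=1e-4,
                    dx=0.005, dt=0.01, h_wet=5.0, h_dry=0.02)
    front.append(sum(1 for x in T if x < 60.0))
```

Axial conduction from the cold wetted region is what drags the next node below the rewetting temperature, so the front advances node by node; a coating changes the effective rewetting temperature and thereby the front speed.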

Keywords: capillary-porous coating, heat transfer, Leidenfrost phenomenon, numerical simulation, quenching

Procedia PDF Downloads 109
370 Students’ Opinions Related to Virtual Classrooms within the Online Distance Education Graduate Program

Authors: Secil Kaya Gulen

Abstract:

Face-to-face and virtual classrooms emerged under different conditions and in different environments, but with similar purposes; they nevertheless have different characteristics. Although virtual classrooms share some elements with face-to-face classes, such as a program, students, and administrators, they have no walls or corridors. Therefore, students can attend courses from a distance and can control their own learning spaces. Virtual classrooms are defined as simultaneous online environments where students in different places come together at the same time under the guidance of a teacher. Distance education and virtual classes require different intellectual and managerial skills and models. Therefore, for effective use of virtual classrooms, their virtual nature should be taken into consideration. One of the most important factors affecting the spread and effective use of virtual classrooms is the perceptions and opinions of students, as one of the main participants. Student opinions and recommendations are important in terms of providing information about the fulfillment of expectations; this will help to improve the applications and contribute to more efficient implementations. In this context, the ideas and perceptions of students related to virtual classrooms in general were determined in this study. The advantages and disadvantages of virtual classrooms, their expected contributions to the educational system, and the expected characteristics of virtual classrooms were examined. Students of an online distance education graduate program in which all courses are offered through virtual classrooms were asked for their opinions. The Online Distance Education Graduate Program has a total of 19 students. A questionnaire consisting of open-ended and multiple-choice questions was sent to these 19 students, and 12 of them answered it. The analysis of the data is presented as frequencies and percentages for each item.
SPSS was used to analyze the multiple-choice questions and NVivo the open-ended questions. According to the results of the analysis, participants stated that they did not receive any training on virtual classes before the courses, but they emphasized that newly enrolled students should be educated about virtual classrooms. In addition, all participants mentioned that virtual classrooms contribute to their personal development and that they want to improve their skills by gaining more experience. The participants, who mainly emphasized the advantages of virtual classrooms, expressed that the dissemination of virtual classrooms will contribute to the Turkish education system. Among the advantages of virtual classrooms, 'recordable and repeatable lessons' and 'eliminating access and transportation costs' were the most commonly cited, while 'technological features and keyboard usage skills affect attendance' was the most common disadvantage. The participants' most evident problem during virtual lectures was a 'lack of technical support'. Finally, 'ease of use', 'support possibilities', 'communication level' and 'flexibility' came to the forefront among the expected features of virtual classrooms. Overall, students' opinions about virtual classrooms seem to be generally positive. Designing and managing virtual classrooms according to these prioritized features will increase student satisfaction and contribute to more effective applications.

Keywords: distance education, virtual classrooms, higher education, e-learning

Procedia PDF Downloads 241
369 Enterprises and Social Impact: A Review of the Changing Landscape

Authors: Suzhou Wei, Isobel Cunningham, Laura Bradley McCauley

Abstract:

Social enterprises play a significant role in resolving social issues in the modern world. In contrast to traditional commercial businesses, their main goal is to address social concerns rather than primarily to maximize profits. This phenomenon in entrepreneurship is presenting new opportunities and different operating models, and resulting in modified approaches to measuring success beyond traditional market share and margins. This paper explores social enterprises to clarify their roles and approaches in addressing grand challenges related to social issues. In doing so, it analyses the key differences between traditional businesses and social enterprises, such as their operating models and value propositions, to understand their contributions to society. The research presented in this paper responds to calls for research to better understand social enterprises and entrepreneurship, but also to explore the dynamics between profit-driven and socially oriented entities in delivering mutual benefits. The paper's examination of the features of commercial business suggests their primary focus is profit generation, economic growth and innovation. Beyond the pursuit of profit, it highlights the critical role of innovation typical of successful businesses. This, in turn, promotes economic growth, creates job opportunities and makes a major positive impact on people's lives. In contrast, the motivations upon which social enterprises are founded relate to a commitment to addressing social problems rather than maximizing profits. These entities combine entrepreneurial principles with commitments to deliver social impact and change on grand challenges, creating a distinctive category within the broader enterprise and entrepreneurship landscape. The motivations for establishing a social enterprise are diverse, encompassing personal fulfillment, a genuine desire to contribute to society, and a focus on achieving impactful accomplishments.
The paper also discusses the collaboration between commercial businesses and social enterprises, which is viewed as a strategic approach to addressing grand challenges more comprehensively and effectively. Finally, this paper highlights the evolving and diverse expectations placed on all businesses to actively contribute to society beyond profit-making. We conclude that there is an unrealized and underdeveloped potential for collaboration between commercial businesses and social enterprises to produce greater and long-lasting social impacts. Overall, the aim of this research is to encourage more investigation of the complex relationship between economic and social objectives and contributions through a better understanding of how and why businesses might address social issues. Ultimately, the paper positions itself as a tool for understanding the evolving landscape of business engagement with social issues and advocates for collaborative efforts to achieve sustainable and impactful outcomes.

Keywords: business, social enterprises, collaboration, social issues, motivations

Procedia PDF Downloads 20
368 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees

Authors: Alexandru-Ion Marinescu

Abstract:

There exist a plethora of methods in the scientific literature which tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc. and must output a binary response variable (i.e. “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence among financial institution databases, with the majority being classified as “GOOD” clients (clients that respect the loan return calendar) alongside a small percentage of “BAD” clients. But it is the “BAD” clients we are interested in since accurately predicting their behavior is crucial in preventing unwanted loss for loan providers. We add to this whole context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism – LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the title implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. 
A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of eight, among them the well-known Australian and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under the curve, partial Gini index, H-measure, Brier score, and Kolmogorov-Smirnov statistic. Finally, we obtain encouraging results which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
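As a concrete, necessarily hypothetical illustration of this representation, here is a small Python sketch of a scoring formula stored as a pre-order flattened tree, with evaluation and the subtree-swapping crossover that the flattening enables. The paper's actual implementation uses C# LINQ expression trees; the node set and toy formula below are illustrative only:

```python
import math
import random

# Operator arities and semantics for the flattened pre-order representation.
ARITY = {'add': 2, 'sub': 2, 'mul': 2, 'div': 2, 'sin': 1, 'abs': 1}
OPS = {'add': lambda a, b: a + b, 'sub': lambda a, b: a - b,
       'mul': lambda a, b: a * b,
       'div': lambda a, b: a / b if b else 1.0,   # protected division
       'sin': math.sin, 'abs': abs}

def evaluate(flat, variables):
    """Evaluate a pre-order flattened tree against applicant variables."""
    def rec(pos):
        node = flat[pos]
        if node in ARITY:
            args, nxt = [], pos + 1
            for _ in range(ARITY[node]):
                val, nxt = rec(nxt)
                args.append(val)
            return OPS[node](*args), nxt
        value = variables.get(node, node)  # variable name or numeric constant
        return float(value), pos + 1
    return rec(0)[0]

def subtree_span(flat, pos):
    """End index (exclusive) of the subtree rooted at pos; used by crossover."""
    end, remaining = pos, 1
    while remaining:
        remaining += ARITY.get(flat[end], 0) - 1
        end += 1
    return end

def crossover(a, b, rng=random):
    """Swap random complete subtrees between two flattened parents."""
    i, j = rng.randrange(len(a)), rng.randrange(len(b))
    ia, jb = subtree_span(a, i), subtree_span(b, j)
    return a[:i] + b[j:jb] + a[ia:], b[:j] + a[i:ia] + b[jb:]
```

For instance, `evaluate(['add', 'mul', 'age', 0.1, 'loan_duration'], {'age': 30, 'loan_duration': 12})` computes 0.1*age + loan_duration = 15.0; a variable that drops out of every surviving formula is what yields the dimensionality-reduction effect the authors describe.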

Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution

Procedia PDF Downloads 95
367 A Method Intensive Top-down Approach for Generating Guidelines for an Energy-Efficient Neighbourhood: A Case of Amaravati, Andhra Pradesh, India

Authors: Rituparna Pal, Faiz Ahmed

Abstract:

Neighbourhood energy efficiency is a newly emerged term addressing the quality of the urban strata of the built environment in terms of various covariates of sustainability. The sustainability paradigm in developed nations has encouraged policymakers in developing urban-scale cities to envision plans under the aegis of urban-scale sustainability. The importance of neighbourhood energy efficiency has been realized only lately, just as the cities, towns and other areas comprising this massive global urban strata have started facing a strong blow from climate change, the energy crisis, cost hikes, and an alarming shortfall in the justice that urban areas require. This step towards urban sustainability can therefore be described more as a 'retrofit action', covering up an already affected urban structure. Even if energy efficiency measures are started for existing cities and urban areas, the initial layer remains, for which a complete model of urban sustainability still lacks definition. Urban sustainability is a broadly discussed term with countless parameters and policies through which the loop can be closed. Neighbourhood energy efficiency can be an integral part of it, where neighbourhood-scale indicators, block-level indicators and building physics parameters can be understood, analyzed and synthesized to help derive guidelines for urban-scale sustainability. The future of neighbourhood energy efficiency lies not only in energy efficiency but also in important parameters like quality of life, access to green space, access to daylight, outdoor comfort, and natural ventilation. So, apart from designing less energy-hungry buildings, it is necessary to create a built environment that puts less stress on buildings to consume more energy. Considerable literature analysis has been done in Western countries, prominently in Spain and Paris, and also in Hong Kong, leaving a distinct gap in the Indian scenario in exploring sustainability at the urban strata.
The site for the study has been selected in the upcoming capital city of Amaravati, and the approach can be replicated for similar neighbourhood typologies in the area. The paper suggests a method-intensive approach to quantify energy and sustainability indices in detail by involving several macro-, meso- and micro-level covariates and parameters. Several iterations have been made at both the macro and micro levels and have been subjected to simulation, computation and mathematical models, and finally to comparative analysis. Parameters at all levels are analyzed to identify the best-case scenarios, which are in turn extrapolated to the macro level, finally yielding a proposed model for an energy-efficient neighbourhood and worked-out guidelines, with their significance and correlations derived.

Keywords: energy quantification, macro scale parameters, meso scale parameters, micro scale parameters

Procedia PDF Downloads 153
366 A Quantitative Analysis of Rural to Urban Migration in Morocco

Authors: Donald Wright

Abstract:

The ultimate goal of this study is to reinvigorate the philosophical underpinnings of the study of urbanization with scientific data, with the goal of circumventing what seems an inevitable future clash between rural and urban populations. To that end, urban infrastructure must be sustainable economically, politically and ecologically over the course of several generations as cities continue to grow with the incorporation of climate refugees. Our research will provide data concerning the projected increase in population over the coming two decades in Morocco and how the population will shift from rural areas to urban centers during that period. As a result, urban infrastructure will need to be adapted, developed or built to fit the demand of future internal migrations from rural areas to urban centers in Morocco. This paper will also examine how past experiences of internally displaced people give insight into the challenges faced by future migrants and, beyond the gathering of data, how people react to internal migration. This study employs four different sets of research tools. First, a large part of this study is archival, which involves compiling the relevant literature on the topic and its complex history. This step also includes gathering data about migrations in Morocco from public data sources. Once the datasets are collected, the next part of the project involves populating the attribute fields and preprocessing the data to make it understandable and usable by machine learning algorithms. In tandem with the mathematical interpretation of data and projected migrations, this study benefits from a theoretical understanding of the critical apparatus surrounding urban development in the 20th and 21st centuries, which gives us insight into past infrastructure development and the rationale behind it.
Once the data is ready to be analyzed, different machine learning algorithms will be tested (k-means clustering, support vector regression, random forest analysis) and the results compared and visualized. The final computational part of this study involves analyzing the data and determining what we can learn from it. This paper helps us to understand future trends of population movements within and between regions of North Africa, which will have an impact on various sectors such as urban development, food distribution and water purification, not to mention the creation of public policy in the countries of this region. One of the strengths of this project is its multi-pronged and cross-disciplinary methodology, which enables an interchange of knowledge and experiences to facilitate innovative solutions to this complex problem. Multiple and diverse intersecting viewpoints allow an exchange of methodological models that provide fresh and informed interpretations of otherwise objective data.
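Of the algorithms listed, the clustering step is the simplest to sketch. The following is a minimal pure-Python k-means, initialized with the first k points for reproducibility; the two-dimensional toy points stand in for whatever migration features the study ultimately uses:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means on tuples of equal length; returns the centroids.

    Illustrative sketch: deterministic first-k initialization rather than
    the random restarts a production implementation would use.
    """
    centroids = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        centroids = [tuple(sum(vals) / len(vals) for vals in zip(*cl))
                     if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids
```

On two well-separated toy groups of points, the two centroids converge to the group means within a few iterations.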

Keywords: climate change, machine learning, migration, Morocco, urban development

Procedia PDF Downloads 117
365 Spatial Accessibility Analysis of Kabul City Public Transport

Authors: Mohammad Idrees Yusofzai, Hirobata Yasuhiro, Matsuo Kojiro

Abstract:

Kabul is the capital of Afghanistan and its educational and industrial focal point. The population of Kabul has grown recently and will increase further because of the return of refugees and the movement of people from other provinces to Kabul city. With this increase in population, however, issues of urban congestion and other urban transportation problems arise in Kabul city. One of these problems is the public transport (large bus) service, which needs to be modified and enhanced, especially the large bus routes operating in each of the 22 zones of Kabul City. To achieve the above-mentioned goal of improving public transport, spatial accessibility analysis is an important means of assessing the effectiveness of the transportation system and the urban transport policy of a city, because accessibility indicators serve as an alternative tool to support public policy aimed at reinforcing sustainable urban space. The case study of this research compares the present model (present bus routes) and a modified model of public transport. In the present model, bus routes in most zones are active, but with low frequency and unpublished schedules, and the accessibility results are analyzed in four cases based on the accessibility variables. In the modified model, all zones in Kabul are taken into consideration, with specified origins and high frequency. The frequency is kept high; however, its value is based on the number of buses the Millie Bus Enterprise Authority (MBEA) owns. The same set of cases is applied to the modified model to determine its best accessibility. Indeed, the modified model has a positive impact on the congestion level in Kabul city. In addition, person trips and trip distribution were analyzed to understand how people move in the study area by each mode of transportation.
The general aims of this research are thus to assess the present movement of people, identify zones in need of public transport, and assess the equity level of accessibility in Kabul city. The methodological framework used in this research is based on the gravity model of accessibility; in addition, the generalized cost (time) of travel by each travel mode is calculated. The main data come from the person trip survey, socio-economic characteristics and demographic data of the Japan International Cooperation Agency's 2008 study of Kabul city, as well as from previous research on travel patterns; the remaining data regarding present bus lines and routes were obtained from MBEA. In conclusion, this research identifies zones where public transport accessibility is high and where it is low. It was found that in both models the downtown area, i.e., the central zones of Kabul city, has a high level of accessibility. Moreover, the present model is the least favorable compared with the modified model based on the accessibility analysis.
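The gravity model of accessibility referred to here is commonly written as A_i = Σ_j O_j · f(c_ij), with O_j the opportunities in zone j and c_ij the generalized travel cost from zone i to zone j. A minimal sketch with a negative-exponential impedance function follows; the zone names, costs and β value are placeholders, not the study's calibrated values:

```python
import math

def gravity_accessibility(opportunities, cost, beta=0.05):
    """Accessibility of each zone: A_i = sum_j O_j * exp(-beta * c_ij).

    opportunities: dict zone -> number of opportunities (jobs, seats, ...)
    cost: dict (origin, destination) -> generalized travel cost (e.g. minutes)
    beta: impedance parameter (placeholder value, normally calibrated)
    """
    zones = list(opportunities)
    return {i: sum(opportunities[j] * math.exp(-beta * cost[(i, j)])
                   for j in zones)
            for i in zones}
```

With a central zone holding most opportunities and short internal travel times, this function reproduces the pattern reported above: central zones score markedly higher than peripheral ones.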

Keywords: accessibility, bus generalized cost, gravity model, public transportation network

Procedia PDF Downloads 162
364 Bank Failures: A Question of Leadership

Authors: Alison L. Miles

Abstract:

Almost all major financial institutions in the world suffered losses due to the financial crisis of 2007, but the extent varied widely. The causes of the crash of 2007 are well documented and predominantly focus on the role and complexity of the financial markets. The dominant theme of the literature suggests the causes of the crash were a combination of globalization, financial sector innovation, moribund regulation and short-termism. While these arguments are undoubtedly true, they do not tell the whole story. A key weakness in the current analysis is the lack of consideration of those leading the banks before and during times of crisis. The purpose of this study is to examine the possible link between the leadership styles and characteristics of the CEO, CFO and chairman and the financial institutions that failed or needed recapitalization. As such, it contributes to the literature and debate on international financial crises and systemic risk, and also to the debate on risk management and regulatory reform in the banking sector. In order first to test the proposition (P1) that there are prevalent leadership characteristics or traits in financial institutions, an initial study was conducted using a sample of the 65 largest global banks and financial institutions according to The Banker Top 1000 Banks 2014. Secondary data from publicly available and official documents, annual reports, treasury and parliamentary reports, together with a selection of press articles and analyst meeting transcripts, was collected longitudinally for the period 1998 to 2013. A computer-aided keyword search was used to identify the leadership styles and characteristics of the chairman, CEO and CFO. The results were then compared with the leadership models to form a picture of leadership in the sector during the research period.
As this resulted in separate results that needed combining, the SPSS data editor was used to aggregate the results across the studies using the variables 'leadership style' and 'company financial performance', together with the size of the company. In order to test the proposition (P2) that there was a prevalent leadership style in the banks that failed, and the proposition (P3) that this was different from those that did not, further quantitative analysis was carried out on the leadership styles of the chair, CEO and CFO of banks that needed recapitalization, were taken over, or required government bail-out assistance during 2007-8. These included Lehman Bros, Merrill Lynch, Royal Bank of Scotland, HBOS, Barclays, Northern Rock, Fortis and Allied Irish. The findings show that although regulatory reform has been a key mechanism of control of behavior in the banking sector, consideration of the leadership characteristics of those running the board is a key factor. They add weight to the argument that if each crisis is met with the same pattern of popular fury with the financier, increased regulation, followed by a return to business as usual, the cycle of failure will always be repeated, and show that through a different lens, new paradigms can be formed and future clashes avoided.
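The abstract does not specify the coding scheme behind the computer-aided keyword search. Purely as an illustration of the mechanism, the sketch below counts style-indicative terms in a document and labels it with the dominant leadership style; the keyword lists are invented placeholders, not the study's coding scheme:

```python
import re

# Placeholder keyword lists; a real study would use a validated coding scheme.
STYLE_KEYWORDS = {
    'transformational': ['vision', 'inspire', 'transform', 'innovate'],
    'transactional': ['target', 'reward', 'performance', 'control'],
    'charismatic': ['charisma', 'bold', 'decisive'],
}

def classify_leadership(text):
    """Label a document with the leadership style whose keywords occur most.

    Prefix matching ('target' also counts 'targets') stands in for the
    stemming a real content analysis would apply.
    """
    tokens = re.findall(r"[a-z]+", text.lower())
    scores = {style: sum(1 for t in tokens for k in kws if t.startswith(k))
              for style, kws in STYLE_KEYWORDS.items()}
    return max(scores, key=scores.get), scores
```

For example, an annual-report sentence about targets and rewards would be scored as predominantly transactional; aggregating such labels per executive over 1998-2013 yields the longitudinal picture the study describes.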

Keywords: banking, financial crisis, leadership, risk

Procedia PDF Downloads 298
363 A Finite Element Analysis of Hexagonal Double-Arrowhead Auxetic Structure with Enhanced Energy Absorption Characteristics and Stiffness

Authors: Keda Li, Hong Hu

Abstract:

Auxetic materials, an emerging class of artificially designed metamaterials, have attracted growing attention due to their promising negative Poisson's ratio behaviors and tunable properties. Conventional auxetic lattice structures, in which the deformation process is governed by a bending-dominated mechanism, have faced the limitation of poor mechanical performance for many potential engineering applications. Recently, both load-bearing and energy absorption capabilities have become crucial considerations in auxetic structure design. This study reports the finite element analysis of a class of hexagonal double-arrowhead auxetic structures with enhanced stiffness and energy absorption performance. The structural design was developed by extending the traditional double-arrowhead honeycomb to a hexagonal frame; the stretching-dominated deformation mechanism was confirmed according to Maxwell's stability criterion. The finite element (FE) models of 2D lattice structures, established with stainless steel material, were analyzed in ABAQUS/Standard to predict the in-plane structural deformation mechanism, failure process, and compressive elastic properties. Based on the computational simulation, a parametric analysis was conducted to investigate the effect of the structural parameters on Poisson's ratio and mechanical properties. Geometrical optimization was then implemented to achieve the optimal Poisson's ratio for the maximum specific energy absorption. In addition, the optimized 2D lattice structure was converted into a 3D geometric configuration using the orthogonal splicing method. The numerical results for the 2D and 3D structures under quasi-static compressive loading conditions were compared separately with those for the traditional double-arrowhead re-entrant honeycomb in terms of specific Young's moduli, Poisson's ratios, and specific energy absorption.
As a result, the energy absorption capability and stiffness are significantly reinforced over a wide range of Poisson's ratios compared to the traditional double-arrowhead re-entrant honeycomb. The auxetic behavior, energy absorption capability, and yield strength of the proposed structure are adjustable through different combinations of joint angle, strut thickness, and the length-width ratio of the representative unit cell. The numerical predictions in this study suggest the proposed hexagonal double-arrowhead structure could be a suitable candidate for energy absorption applications with a simultaneous requirement for load-bearing capacity. For future research, experimental analysis is required to validate the numerical simulation.
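Maxwell's stability criterion, used above to establish the stretching-dominated mechanism, can be stated compactly: for a 2D pin-jointed frame with b struts and j joints, M = b - 2j + 3, where M >= 0 indicates a stretching-dominated (stiff) structure and M < 0 a bending-dominated mechanism. A sketch with illustrative strut and joint counts, not the paper's unit cell:

```python
def maxwell_number_2d(struts, joints):
    """Maxwell's stability criterion for a 2D pin-jointed frame.

    M = b - 2j + 3; M >= 0 suggests a stretching-dominated (stiff)
    structure, M < 0 a bending-dominated mechanism.
    """
    return struts - 2 * joints + 3
```

For example, a triangle (3 struts, 3 joints) gives M = 0 (just rigid), while a regular hexagon (6 struts, 6 joints) gives M = -3, a mechanism, which is why conventional hexagonal honeycombs deform by strut bending.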

Keywords: auxetic, energy absorption capacity, finite element analysis, negative Poisson's ratio, re-entrant hexagonal honeycomb

Procedia PDF Downloads 66
362 The Food and Nutritional Effects of Smallholders’ Participation in Milk Value Chain in Ethiopia

Authors: Geday Elias, Montaigne Etienne, Padilla Martine, Tollossa Degefa

Abstract:

Smallholder farmers' participation in agricultural value chains has been identified as a pathway out of the poverty trap in Ethiopia. Smallholder dairy activities have huge potential for poverty reduction by enhancing income and achieving food and nutritional security in the country. However, much less is known about the effects of smallholders' participation in the milk value chain on household food security and nutrition. This paper, therefore, aims to evaluate the effects of smallholders' participation in the milk value chain on household food security, taking into account the four pillars of food security measurement (availability, access, utilization and stability). Using a semi-structured interview, cross-sectional farm household data were collected from a randomly selected sample of 333 households (170 in Amhara and 163 in Oromia regions). Binary logit and propensity score matching (PSM) models are employed to examine the mechanisms through which smallholders' participation in the milk value chain affects household food security, where crop production, per capita calorie intake, diet diversity score, and the food insecurity access scale are used to measure food availability, access, utilization and stability, respectively. Our findings reveal that of the 333 households, only 34.5% of smallholder farmers participated in the milk value chain. Limited access to inputs and services, limited access to input markets and high transaction costs are key constraints on smallholders' access to the milk value chain. To estimate the true average participation effects of the milk value chain for participating households, the outcome variables (food security) of farm households who participated in the milk value chain are compared with the outcome variables had the farm households not participated. The PSM analysis reveals that smallholders' participation in the milk value chain has a significant positive effect on household income, food security and nutrition.
Smallholder farmers who participated in the milk value chain were better off than non-participants by 15 quintals of crop production (food availability) and 73 percent of per capita calorie intake (food access). Similarly, participating households had 112 percent better dietary quality than non-participating households. Finally, smallholders who participated in the milk value chain reduced household vulnerability to food insecurity by an average of 130 percent relative to non-participating households. The results also show that income earned from milk value chain participation helped ease the capital constraints of participating households, raising farm income and total household income by 5,164 ETB and 14,265 ETB, respectively. This study, therefore, confirms the potential role of smallholders' participation in food value chains in escaping the poverty trap by improving rural household income, food security and nutrition. The identified determinants of smallholder participation in the milk value chain and the participation effects on food security in the study areas are therefore worth considering by policymakers and development agents seeking to tackle the poverty trap in the study area in particular, and in the country in general.
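The matching step of a PSM analysis of this kind can be sketched in a few lines: propensity scores from a logistic model (here with made-up coefficients; the paper estimates them with a binary logit) and nearest-neighbour matching on those scores to estimate the average treatment effect on the treated (ATT). All numbers below are illustrative, not the study's data:

```python
import math

def propensity(x, coef, intercept):
    """Participation probability from a logistic model (toy coefficients)."""
    z = intercept + sum(c * v for c, v in zip(coef, x))
    return 1.0 / (1.0 + math.exp(-z))

def att_nearest_neighbour(treated, control):
    """ATT via 1-nearest-neighbour matching on the propensity score.

    treated/control: lists of (propensity_score, outcome) pairs.
    """
    effects = []
    for score, outcome in treated:
        _, matched_outcome = min(control, key=lambda cu: abs(cu[0] - score))
        effects.append(outcome - matched_outcome)
    return sum(effects) / len(effects)
```

Each participating household is compared with the non-participant whose estimated participation probability is closest, so the outcome gap is attributed to participation rather than to observable differences between the groups.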

Keywords: effects, food security and nutrition, milk, participation, smallholders, value chain

Procedia PDF Downloads 313
361 Modelling of Groundwater Resources for Al-Najaf City, Iraq

Authors: Hayder H. Kareem, Shunqi Pan

Abstract:

Groundwater is a vital water resource in many areas of the world, particularly in the Middle East region, where water resources are scarce and depleting. Sustainable management and planning of groundwater resources have become essential and urgent given the impact of global climate change. In recent years, numerical models have been widely used to predict flow patterns and assess water resource security, as well as groundwater quality as affected by transported contaminants. In this study, MODFLOW is used to study the current status of groundwater resources and the risk to water resource security in the region centred on Al-Najaf City, which is located in the mid-west of Iraq, adjacent to the Euphrates River. A conceptual model is built using the geologic and hydrogeologic data collected for the region, together with Digital Elevation Model (DEM) data obtained from the Global Land Cover Facility (GLCF) and the United States Geological Survey (USGS) for the study area. The computer model also incorporates the distribution of 69 wells in the area, with steady pre-defined hydraulic heads along its boundaries. The model is then applied with a recharge rate (from precipitation) of 7.55 mm/year, derived from the analysis of field data in the study area for the period 1980-2014. The hydraulic conductivity from measurements at the well locations is interpolated for model use. The model is calibrated against the measured hydraulic heads at 50 of the 69 wells in the domain, and the results show good agreement: the standard error of estimate (SEE), root-mean-square error (RMSE), normalized RMSE and correlation coefficient are 0.297 m, 2.087 m, 6.899% and 0.971, respectively. A sensitivity analysis is also carried out, and it is found that the model is sensitive to recharge, particularly when the rate is greater than 15 mm/year.
Hydraulic conductivity is found to be another parameter that can affect the results significantly, and it therefore requires high-quality field data. The results show a general flow pattern from the west to the east of the study area, which agrees well with the observations and the gradient of the ground surface. It is found that with the current operational pumping rates of the wells in the area, a dry area results in Al-Najaf City due to the large quantity of groundwater withdrawn. The computed water balance under the current operational pumping shows that the Euphrates River supplies approximately 11,759 m3/day to the groundwater, instead of gaining 11,178 m3/day from the groundwater as it would with no pumping from the wells. The results obtained from this study are expected to provide important information for the sustainable and effective planning and management of the regional groundwater resources of Al-Najaf City.
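The calibration statistics quoted above (RMSE, normalized RMSE as a percentage of the observed range, and the correlation coefficient) are straightforward to compute from paired observed and simulated heads. A minimal sketch follows; the example values are illustrative, not the study's data:

```python
import math

def calibration_metrics(observed, simulated):
    """Head-calibration statistics of the kind reported for MODFLOW models:
    RMSE (same units as the heads), normalized RMSE (% of observed range),
    and the Pearson correlation coefficient."""
    n = len(observed)
    resid = [s - o for o, s in zip(observed, simulated)]
    rmse = math.sqrt(sum(r * r for r in resid) / n)
    nrmse = 100.0 * rmse / (max(observed) - min(observed))
    mo = sum(observed) / n
    ms = sum(simulated) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(observed, simulated))
    corr = cov / math.sqrt(sum((o - mo) ** 2 for o in observed) *
                           sum((s - ms) ** 2 for s in simulated))
    return rmse, nrmse, corr
```

Applied to the 50 calibration wells, such a routine would reproduce the reported 2.087 m RMSE, 6.899% normalized RMSE and 0.971 correlation.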

Keywords: Al-Najaf city, conceptual modelling, groundwater, unconfined aquifer, visual MODFLOW

Procedia PDF Downloads 178