Search results for: statistical approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16647

15237 Design for Safety: Safety Consideration in Planning and Design of Airport Airsides

Authors: Maithem Al-Saadi, Min An

Abstract:

During airport planning and design stages, the major issues of capacity and safety in the construction and operation of an airport need to be taken into consideration. The airside of an airport is a major and critical infrastructure that usually consists of runway(s), a taxiway system, and apron(s), etc., which have to be designed according to international standards and recommendations, and local limitations, to accommodate the forecast demand. However, in many cases, airport airsides suffer from unexpected risks that occur during airport operations. Therefore, safety risk assessment should be applied in the planning and design of airsides to cope with the probability of risks and their consequences, and to make decisions that reduce the risks to as low as reasonably practicable (ALARP). This paper presents a combined approach of Failure Modes, Effects, and Criticality Analysis (FMECA), the Fuzzy Reasoning Approach (FRA), and the Fuzzy Analytic Hierarchy Process (FAHP) to develop a risk analysis model for safety risk assessment. An illustrative example is used to demonstrate the risk assessment process and how the design of an airside in an airport can be analysed by using the proposed safety design risk assessment model.
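
The abstract does not spell out its FAHP formulation, so the sketch below shows one standard variant (Buckley's geometric-mean method) for deriving criterion weights from triangular fuzzy pairwise comparisons; the matrix values and the three airside risk criteria are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Fuzzy-AHP sketch (Buckley's geometric-mean method) with triangular
# fuzzy numbers (l, m, u). Pairwise judgments below are illustrative:
# three hypothetical airside risk criteria compared against each other.
pairwise = [
    [(1, 1, 1),       (2, 3, 4),     (4, 5, 6)],
    [(1/4, 1/3, 1/2), (1, 1, 1),     (1, 2, 3)],
    [(1/6, 1/5, 1/4), (1/3, 1/2, 1), (1, 1, 1)],
]
M = np.array(pairwise)                        # shape (3, 3, 3): i, j, (l, m, u)

# Fuzzy geometric mean of each row, component-wise.
g = np.prod(M, axis=1) ** (1.0 / M.shape[0])  # shape (3, 3): criterion, (l, m, u)

# Fuzzy weight of criterion i: g_i divided by the sum of all g_k;
# dividing by a TFN swaps its lower and upper bounds.
total = g.sum(axis=0)
w_fuzzy = g / total[::-1]

# Defuzzify by the centroid (mean of l, m, u) and normalize.
w = w_fuzzy.mean(axis=1)
w /= w.sum()
print("criterion weights:", np.round(w, 3))
```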

Keywords: airport airside planning and design, design for safety, fuzzy reasoning approach, fuzzy AHP, risk assessment

Procedia PDF Downloads 349
15236 Adversarial Disentanglement Using Latent Classifier for Pose-Independent Representation

Authors: Hamed Alqahtani, Manolya Kavakli-Thorne

Abstract:

The large pose discrepancy is one of the critical challenges in face recognition during video surveillance. Due to the entanglement of pose attributes with identity information, conventional approaches to pose-independent representation fall short in recognizing largely posed faces. In this paper, we propose a practical approach to disentangle the pose attribute from the identity information, followed by synthesis of a face using a classifier network in latent space. The proposed approach employs a modified generative adversarial network framework consisting of an encoder-decoder structure embedded with a classifier in manifold space for carrying out factorization on the latent encoding. It can be further generalized to other face and non-face attributes for real-life video frames containing faces with significant attribute variations. Experimental results and comparison with the state of the art in the field show that the learned representation of the proposed approach synthesizes more compelling perceptual images through a combination of adversarial and classification losses.
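
The paper's exact architecture is not given in the abstract; the sketch below only illustrates the core idea of a latent-space classifier trained adversarially against the encoder, with placeholder multilayer perceptrons and dimensions (a schematic, not the authors' network).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

D_IMG, D_ID, N_POSES = 1024, 128, 9   # placeholder dimensions

enc = nn.Sequential(nn.Linear(D_IMG, 256), nn.ReLU(), nn.Linear(256, D_ID))
dec = nn.Sequential(nn.Linear(D_ID + N_POSES, 256), nn.ReLU(), nn.Linear(256, D_IMG))
cls = nn.Sequential(nn.Linear(D_ID, 64), nn.ReLU(), nn.Linear(64, N_POSES))

opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)
opt_cls = torch.optim.Adam(cls.parameters(), lr=1e-4)

def train_step(x, pose):  # x: (B, D_IMG) flattened faces, pose: (B,) labels
    # 1) The latent classifier learns to read the pose from the identity code.
    z = enc(x).detach()
    loss_cls = F.cross_entropy(cls(z), pose)
    opt_cls.zero_grad(); loss_cls.backward(); opt_cls.step()

    # 2) Encoder/decoder: reconstruct under an explicit pose code while
    #    pushing the classifier toward a uniform output, i.e. removing
    #    pose information from the identity code (adversarial loss).
    z = enc(x)
    recon = dec(torch.cat([z, F.one_hot(pose, N_POSES).float()], dim=1))
    log_p = F.log_softmax(cls(z), dim=1)
    loss_adv = -log_p.mean()              # cross-entropy to the uniform distribution
    loss = F.mse_loss(recon, x) + 0.1 * loss_adv
    opt_ae.zero_grad(); loss.backward(); opt_ae.step()
    return loss.item()
```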

Keywords: disentanglement, face detection, generative adversarial networks, video surveillance

Procedia PDF Downloads 107
15235 The Role of the Injured Party's Fault in the Apportionment of Damages in Tort Law: A Comparative-Historical Study between Common Law and Islamic Law

Authors: Alireza Tavakoli Nia

Abstract:

In order to understand the role of the injured party's fault in dividing liability, we studied its historical background. In common law, the traditional contributory negligence rule was a complete defense; the legislature and judicial practice later modified that rule to one of apportionment. In Islamic law, too, the Action rule was at first used only when the injured party was the sole cause, but jurists expanded the scope of this rule, so it came to be used in cases where both the injured party's fault and that of the other party are involved. There are several popular approaches to the apportionment of damages. Some common law countries, such as Britain, have chosen 'the causal potency approach' and 'fixed apportionment'; Islamic countries such as Iran have chosen both 'the relative blameworthiness' and 'equal apportionment' approaches. The article concludes that both common law and Islamic law believe in the division of responsibility between a wrongdoer claimant and the defendant. As for the apportionment of responsibility, Islamic law mostly favors equal apportionment, which is simpler and saves time and money, whereas common law legal systems have chosen the causal potency approach, which is more complicated than the rival approach but fairer.

Keywords: contributory negligence, tort law, damage apportionment, common law, Islamic law

Procedia PDF Downloads 129
15234 Redox-labeled Electrochemical Aptasensor Array for Single-cell Detection

Authors: Shuo Li, Yannick Coffinier, Chann Lagadec, Fabrizio Cleri, Katsuhiko Nishiguchi, Akira Fujiwara, Soo Hyeon Kim, Nicolas Clément

Abstract:

The need for single-cell detection and analysis techniques has increased in the past decades because of the heterogeneity of individual living cells, which increases the complexity of the pathogenesis of malignant tumors. In the search for early cancer detection, high-precision medicine, and therapy, the technologies most used today for sensitive detection of target analytes and for monitoring the variation of these species mainly fall into two types. One is based on the identification of molecular differences at the single-cell level, such as flow cytometry, fluorescence-activated cell sorting, next-generation proteomics, and lipidomic studies; the other is based on capturing or detecting single tumor cells from fresh or fixed primary tumors and metastatic tissues, and rare circulating tumor cells (CTCs) from blood or bone marrow, for example, dielectrophoresis techniques, microfluidic micropost-based chips, and electrochemical (EC) approaches. Compared to other methods, EC sensors have the merits of easy operation, high sensitivity, and portability. However, despite various demonstrations of low limits of detection (LOD), including aptamer sensors, arrayed EC sensors for detecting single cells have not been demonstrated. In this work, a new technique is presented, based on a 20-nm-thick nanopillar array that supports cells and keeps them at the ideal recognition distance for redox-labeled aptamers grafted on the surface. The key advantages of this technology are not only to suppress the false positive signal arising from the pressure exerted by all (including non-target) cells pushing down on the aptamers, but also to stabilize the aptamers in the ideal hairpin configuration thanks to a confinement effect. With the first implementation of this technique, a LOD of 13 cells (with 5.4 μL of cell suspension) was estimated. The nano-supported cell technology using redox-labeled aptasensors has since been pushed forward and fully integrated into a single-cell electrochemical aptasensor array. To reach this goal, the LOD was reduced by more than one order of magnitude by suppressing parasitic capacitive electrochemical signals, minimizing the sensor area, and localizing the cells. Statistical analysis at the single-cell level is demonstrated for the recognition of cancer cells. The future of this technology is discussed, and the potential for scaling over millions of electrodes, thus pushing integration further to the sub-cellular level, is highlighted. Despite several demonstrations of electrochemical devices with a LOD of 1 cell/mL, the implementation of single-cell bioelectrochemical sensor arrays has remained elusive due to their challenging implementation at a large scale. Here, the introduced nanopillar array technology combined with redox-labeled aptamers targeting the epithelial cell adhesion molecule (EpCAM) is perfectly suited for such implementation. Combining nanopillar arrays with microwells designed for single-cell trapping directly on the sensor surface, single target cells are successfully detected and analyzed. This first implementation of a single-cell electrochemical aptasensor array based on Brownian-fluctuating redox species opens new opportunities for large-scale implementation and statistical analysis of early cancer diagnosis and cancer therapy in clinical settings.

Keywords: bioelectrochemistry, aptasensors, single-cell, nanopillars

Procedia PDF Downloads 92
15233 Discursive Legitimation Strategies in ISIS’ Online Magazine, Dabiq: A Discourse Historical Approach

Authors: Sahar Rasoulikolamaki

Abstract:

ISIS (also known as DAASH) is an Islamic fundamentalist group that has come to be seen as a global threat for its radicalizing approach and its use of online platforms as tools to portray its activities, disseminate its ideology, and recruit. This study carries out a critical discourse analysis of the argumentative devices by which ISIS legitimizes or delegitimizes positive or negative constructions of social practices in Dabiq. It sheds light on how texts in Dabiq, as linguistic elements at the micro level of analysis, relate to ISIS' ideology at the macro level; in other words, how local structures contribute to the construction and transference of a global structure or ideology, and vice versa. Following the relevant analytical frameworks, the study therefore focuses on both the micro-level analysis of arguments (topoi) and the macro-structure of legitimation and delegitimation in Dabiq. This purpose is achieved using the analytical categories and tools provided by Wodak's Discourse Historical Approach (DHA), such as argumentation strategies (topoi), through which the coded language of legitimation/delegitimation and persuasion used in Dabiq is explored. The findings demonstrate that Dabiq relies rigorously on the positive representation of the in-group course of action, justifying its violence, and, at the same time, on the negative representation of the out-group behavior, implementing various topoi to achieve its desired outcome: ideological manipulation and powerful self-depiction, as well as supporter recruitment.

Keywords: argumentation, discourse-historical approach, ideology, legitimation and delegitimation, topoi

Procedia PDF Downloads 123
15232 Arabic as a Foreign Language in the Curriculum of Higher Education in Nigeria: Problems, Solutions, and Prospects

Authors: Kazeem Oluwatoyin Ajape

Abstract:

The study is concerned with the problem of how to improve the teaching of Arabic as a foreign language in the Nigerian higher education system. The paper traces the historical background of Arabic education in Nigeria and outlines the problems facing the language in Nigerian institutions. It lays down some of the essential foundation work necessary for bringing about systematic and constructive improvements in the Teaching of Arabic as a Foreign Language (TAFL) by answering the following research questions: What is the appropriate medium of instruction in teaching a foreign or second language? What is the position of the English language in the teaching and learning of Arabic/Islamic education? What is the relevance of the present curriculum of Arabic/Islamic education in Nigerian institutions to contemporary society? A survey of the literature indicates that a revolution is currently taking place in FL teaching and that a new approach, known as the Communicative Approach (CA), has begun to emerge and influence the teaching of FLs in general over the last decade or so. Since the CA is currently being adapted to the teaching of most major FLs, and since this revolution has not yet had much impact on TAFL, the study explores the possibility of applying the CA to the teaching of Arabic as a living language and makes recommendations towards the development of the language in Nigerian institutions of higher learning.

Keywords: Arabic Language, foreign language, Nigerian institutions, curriculum, communicative approach

Procedia PDF Downloads 588
15231 Social Identification among Employees: A System Dynamic Approach

Authors: Muhammad Abdullah, Salman Iqbal, Mamoona Rasheed

Abstract:

Social identity is an important source of pride and self-esteem; consequently, people struggle to preserve a positive perception of their groups and collectives. The purpose of this paper is to explain the process of social identification and to highlight the underlying causal factors of social identity among employees. There is little research on how the social identity of employees is shaped in Pakistan's organizational culture. This study is based on social identity theory and uses a systems approach as its research methodology. The feedback loop approach is applied to explain the underlying key elements of employee behavior that collectively form social identity among social groups in the corporate arena. The findings of this study reveal that the affective, evaluative, and cognitive components of an individual's personality are associated with social identification. The system dynamics feedback loop approach revealed the underlying structure associated with social identity and social group formation, with the affective component proving to be the most strongly associated factor. This may also help in understanding how social groups become stable and how individuals act according to group requirements. The value of this paper lies in the understanding gained about the underlying key factors that play a crucial role in social group formation in organizations. It may help explain the rationale behind how employees socially categorize themselves within organizations, help design effective and more cohesive teams for better operations and long-term results, and encourage knowledge sharing among employees. The underlying structure behind social identification is highlighted with the help of system modeling.
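
The abstract does not give the model's equations; the toy stock-and-flow sketch below, with made-up coefficients, only illustrates the kind of reinforcing feedback loop described, in which the three components drive identification and identification feeds back on the affective component.

```python
# Toy system-dynamics sketch (coefficients are illustrative, not estimated).
dt, steps = 0.1, 200
affective, evaluative, cognitive, identification = 0.2, 0.5, 0.5, 0.1
for _ in range(steps):   # simple Euler integration of the stock-flow model
    inflow = 0.5 * affective + 0.3 * evaluative + 0.2 * cognitive
    d_ident = inflow * (1 - identification) - 0.05 * identification  # saturating stock
    d_affect = 0.4 * identification * (1 - affective)                # reinforcing loop
    identification += dt * d_ident
    affective += dt * d_affect
print(f"equilibrium social identification ~ {identification:.2f}")
```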

Keywords: affective commitment, cognitive commitment, evaluative commitment, systems thinking

Procedia PDF Downloads 117
15230 Cognitive Function and Coping Behavior in the Elderly: A Population-Based Cross-Sectional Study

Authors: Ryo Shikimoto, Hidehito Niimura, Hisashi Kida, Kota Suzuki, Yukiko Miyasaka, Masaru Mimura

Abstract:

Introduction: In Japan, the most aged country in the world, it is important to explore predictive factors of cognitive function among the elderly. Coping behavior relieves chronic stress and improves lifestyle, and consequently may reduce the risk of cognitive impairment. One of the most widely investigated frameworks in previous studies distinguishes approach-oriented from avoidance-oriented coping strategies. The purpose of this study is to investigate the relationship between cognitive function and coping strategies among elderly residents of urban areas of Japan. Method: This is part of the cross-sectional Arakawa geriatric cohort study of 1,099 residents (aged 65 to 86 years; mean [SD] = 72.9 [5.2]). Participants were assessed for cognitive function using the Mini-Mental State Examination (MMSE) and diagnosed by psychiatrists in face-to-face interviews. They were then assessed for their coping behaviors and coping strategies (approach- and avoidance-oriented coping) using a stress and coping inventory. Multiple regression analysis was used to investigate the relationship between MMSE score and each coping strategy. Results: Of the 1,099 participants, the mean MMSE score was 27.2 (SD = 2.7), and the numbers diagnosed as normal, mild cognitive impairment (MCI), and dementia were 815 (74.2%), 248 (22.6%), and 14 (1.3%), respectively. The approach-oriented coping score was significantly associated with MMSE score (B [partial regression coefficient] = 0.12, 95% confidence interval = 0.05 to 0.19) after adjusting for confounding factors including age, sex, and education. Avoidance-oriented coping did not show a significant association with MMSE score (B = -0.02, 95% confidence interval = -0.09 to 0.06). Conclusion: Approach-oriented coping was clearly associated with neurocognitive function in the Japanese population. A future longitudinal trial is warranted to investigate the protective effects of coping behavior on cognitive function.
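
The adjusted regression reported above is straightforward to reproduce; the sketch below assumes a per-participant table with hypothetical file and column names (the study's actual variable names and data are not available here).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file: one row per participant; column names are assumed.
df = pd.read_csv("arakawa_cohort.csv")

# MMSE regressed on each coping strategy, adjusting for age, sex, education.
m_approach = smf.ols("mmse ~ approach_coping + age + C(sex) + education", data=df).fit()
m_avoid = smf.ols("mmse ~ avoidance_coping + age + C(sex) + education", data=df).fit()

for term, m in [("approach_coping", m_approach), ("avoidance_coping", m_avoid)]:
    b = m.params[term]
    lo, hi = m.conf_int().loc[term]
    print(f"{term}: B = {b:.2f}, 95% CI = {lo:.2f} to {hi:.2f}")
```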

Keywords: approach-oriented coping, cognitive impairment, coping behavior, dementia

Procedia PDF Downloads 120
15229 Experimental Study on Performance of a Planar Membrane Humidifier for a Proton Exchange Membrane Fuel Cell Stack

Authors: Chen-Yu Chen, Wei-Mon Yan, Chi-Nan Lai, Jian-Hao Su

Abstract:

The proton exchange membrane fuel cell (PEMFC) has recently become more important as an alternative energy source. Maintaining proper water content in the membrane is one of the key requirements for optimizing PEMFC performance. The planar membrane humidifier has the advantages of simple structure, low cost, low pressure drop, light weight, reliable performance, and good gas separability; thus, it is a common external humidifier for PEMFCs. In this work, a planar membrane humidifier for kW-scale PEMFCs is developed successfully. The heat and mass transfer of the humidifier is discussed, and its performance is analyzed in terms of dew point approach temperature (DPAT), water vapor transfer rate (WVTR), and water recovery ratio (WRR). The DPAT of the humidifier with the counter-flow approach reaches about 6°C under inlet dry air of 50°C and 60% RH and inlet humid air of 70°C and 100% RH. The rate of pressure loss of the humidifier is 5.0×10² Pa/min at a torque of 7 N-m, which meets the standard of commercial planar membrane humidifiers. From the tests, it is found that increasing the air flow rate increases the WVTR. However, the DPAT and the WRR are not improved by increasing the WVTR once the air flow rate exceeds the optimal value. In addition, increasing the inlet temperature or the humidity of the dry air decreases the WVTR and the WRR. Nevertheless, the DPAT is improved at elevated inlet temperatures or humidities of the dry air. Furthermore, the performance of the humidifier with the counter-flow approach is better than that with the parallel-flow approach; the DPAT difference between the two flow approaches reaches up to 8°C.
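
The three performance metrics can be computed from standard psychrometric relations; the sketch below uses the Magnus approximation for saturation vapor pressure and the abstract's inlet conditions, while the dry-side outlet state and the mass flows are assumptions for illustration (the paper's measured values are not reproduced here).

```python
import math

P = 101325.0  # total pressure, Pa

def p_sat(T):          # saturation vapor pressure, Pa (Magnus approximation, T in degC)
    return 610.94 * math.exp(17.625 * T / (T + 243.04))

def dew_point(p_v):    # inverse Magnus, degC
    g = math.log(p_v / 610.94)
    return 243.04 * g / (17.625 - g)

def humidity_ratio(T, rh):   # kg water per kg dry air
    p_v = rh * p_sat(T)
    return 0.622 * p_v / (P - p_v)

# Inlet dry air 50 degC / 60% RH, inlet humid air 70 degC / 100% RH (from the abstract).
w_dry_in = humidity_ratio(50.0, 0.60)
w_wet_in = humidity_ratio(70.0, 1.00)

# Assumed dry-side outlet state and dry-air mass flows (illustrative only).
T_out, rh_out, m_dry, m_wet = 64.0, 1.00, 1e-3, 1e-3   # degC, -, kg/s, kg/s

w_dry_out = humidity_ratio(T_out, rh_out)
dpat = dew_point(p_sat(70.0)) - dew_point(rh_out * p_sat(T_out))  # degC, lower is better
wvtr = m_dry * (w_dry_out - w_dry_in)   # kg/s of water transferred to the dry stream
wrr = wvtr / (m_wet * w_wet_in)         # fraction of wet-side water recovered
print(f"DPAT = {dpat:.1f} degC, WVTR = {wvtr * 1000:.3f} g/s, WRR = {wrr:.2f}")
```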

Keywords: heat and mass transfer, humidifier performance, PEM fuel cell, planar membrane humidifier

Procedia PDF Downloads 293
15228 Investigating the Relationship Between the Auditor’s Personality Type and the Quality of Financial Reporting in Companies Listed on the Tehran Stock Exchange

Authors: Seyedmohsen Mortazavi

Abstract:

The purpose of this research is to investigate the effect of internal auditors' personality types on the quality of financial reporting in companies listed on the Tehran Stock Exchange. Personality type is one of the issues emphasized in the study of auditors' behavior, and this field has attracted the attention of shareholders and listed companies because auditors' personalities can affect the type of financial reporting and its quality. The research is applied in terms of purpose and descriptive-correlational in terms of method, and a researcher-made questionnaire was used to test the research hypotheses. The statistical population of the research is all auditors, accountants, and financial managers of companies listed on the Tehran Stock Exchange; owing to their large number and the uncertainty about their exact count, 384 people were taken as the statistical sample using Morgan's table. The researcher-made questionnaire was approved by experts in the field, and its validity and reliability were then assessed using software. For validity, confirmatory factor analysis was examined first; then, using divergent and convergent validity (the Fornell-Larcker criterion and cross-loading tests), the validity of the questionnaire was confirmed. The reliability of the questionnaire was then examined using Cronbach's alpha and composite reliability, and the results of these two tests showed the questionnaire's reliability to be adequate. After checking validity and reliability, PLS software was used to test the hypotheses. The results showed that the personalities of internal auditors can affect the quality of financial reporting. The personality traits investigated in this research are neuroticism, extroversion, flexibility, agreeableness, and conscientiousness, and all of them can affect the quality of financial reporting.
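
Of the reliability measures mentioned, Cronbach's alpha is simple to compute directly; the sketch below shows the standard formula, with an assumed survey file and item column names (the questionnaire's actual items are not available here).

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a block of questionnaire items (rows = respondents)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical usage: columns n1..n5 hold the neuroticism items of the
# 384-respondent questionnaire (file and column names are assumed).
df = pd.read_csv("auditor_survey.csv")
print(cronbach_alpha(df[["n1", "n2", "n3", "n4", "n5"]]))
```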

Keywords: flexibility, quality of financial reporting, agreeableness, conscientiousness

Procedia PDF Downloads 84
15227 Evaluation of QSRR Models by Sum of Ranking Differences Approach: A Case Study of Prediction of Chromatographic Behavior of Pesticides

Authors: Lidija R. Jevrić, Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević

Abstract:

The present study deals with the selection of the most suitable quantitative structure-retention relationship (QSRR) models to be used in predicting the retention behavior of basic, neutral, acidic, and phenolic pesticides belonging to different classes: fungicides, herbicides, metabolites, insecticides, and plant growth regulators. The sum of ranking differences (SRD) approach can give a different point of view on the selection of the most consistent QSRR model. The SRD approach can be applied not only for ranking QSRR models but also for detecting similarity or dissimilarity among them: applying SRD analysis, the most similar models can be found easily. In this study, the selection of the best model was carried out on the basis of a reference ranking ('golden standard') defined as the row-average values of the logarithm of retention time (logtr) determined by high-performance liquid chromatography (HPLC). SRD analysis based on experimental logtr values as the reference ranking also revealed a grouping of the established QSRR models similar to that already obtained by hierarchical cluster analysis (HCA).
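
The SRD statistic itself is compact: rank the objects by each model and by the reference, then sum the absolute rank differences. The sketch below uses placeholder predicted values; only the row-average reference construction follows the study.

```python
import numpy as np
from scipy.stats import rankdata

def srd(model_values: np.ndarray, reference_values: np.ndarray) -> float:
    """Sum of ranking differences between a model's ranking and the reference."""
    return np.abs(rankdata(model_values) - rankdata(reference_values)).sum()

# Illustrative data: rows = pesticides, columns = QSRR models' predicted log tr;
# the reference ranking is the row average, as in the study.
predicted = np.random.default_rng(1).normal(size=(20, 5))   # placeholder values
reference = predicted.mean(axis=1)
scores = [srd(predicted[:, j], reference) for j in range(predicted.shape[1])]
print(scores, "-> most consistent model:", int(np.argmin(scores)))  # smaller is better
```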

Keywords: chemometrics, chromatography, pesticides, sum of ranking differences

Procedia PDF Downloads 362
15226 Artificial Intelligence Aided Improvement in Canada's Supply Chain Management

Authors: Mohammad Talebi

Abstract:

Supply chain management is a concern for all countries in the world, yet there is no single approach towards sustainability. Over roughly the last decade, artificial intelligence applications in smart supply chains have come to play a key part. In this paper, applications of artificial intelligence in supply chain management are clarified, and a few suggestions are made towards Canadian plans for smart supply chain management (SCM). A hierarchical framework for smart SCM could provide a useful roadmap for decision-makers to find the most appropriate approach toward smart SCM. Within this decision-making framework, all the levels involved in the achievement of smart SCM are included. In any case, more consideration needs to be paid to both available and required infrastructure.

Keywords: smart SCM, AI, SSCM, procurement

Procedia PDF Downloads 75
15225 A Neural Network Approach to Evaluate Supplier Efficiency in a Supply Chain

Authors: Kishore K. Pochampally

Abstract:

The success of a supply chain relies heavily on the efficiency of the suppliers involved. In this paper, we propose a neural network approach to evaluate the efficiency of a supplier being considered for inclusion in a supply chain, using the available linguistic (fuzzy) data of suppliers that already exist in the supply chain. The approach is carried out in three phases: in phase one, we identify criteria for evaluating the supplier of interest; in phase two, we use performance measures of existing suppliers to construct a neural network that gives the weights (importance values) of the criteria identified in phase one; and in phase three, we calculate the overall rating of the supplier of interest. The major findings of the research are: (i) linguistic (fuzzy) ratings of suppliers such as 'good' and 'bad' can be converted (defuzzified) to numerical ratings (on a 1-10 scale) using fuzzy logic, so that those ratings can be used for further quantitative analysis; (ii) it is possible to construct and train a multi-level neural network to determine the weights of the criteria used to evaluate a supplier; and (iii) Borda's rule can be used to combine the weighted ratings and calculate the overall efficiency of the supplier.
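
The sketch below illustrates phases two and three with two simplifications that are mine, not the paper's: linguistic ratings are defuzzified as centroids of assumed triangular fuzzy numbers, and a single linear neuron trained by gradient descent stands in for the multi-level network; the Borda aggregation step is omitted. All ratings and scores are invented for illustration.

```python
import numpy as np

FUZZY = {  # linguistic rating -> assumed triangular fuzzy number on a 1-10 scale
    "bad": (1, 2, 4), "fair": (3, 5, 7), "good": (6, 8, 9), "excellent": (8, 9, 10),
}

def defuzzify(label: str) -> float:
    l, m, u = FUZZY[label]
    return (l + m + u) / 3.0                      # centroid of the TFN

# Existing suppliers rated on 3 criteria, with known overall performance.
ratings = [["good", "fair", "excellent"], ["bad", "good", "fair"],
           ["excellent", "good", "good"], ["fair", "bad", "good"]]
X = np.array([[defuzzify(r) for r in row] for row in ratings])
y = np.array([8.0, 5.0, 9.0, 4.5])                # historical overall ratings

w = np.zeros(3)                                    # criterion weights to learn
for _ in range(5000):                              # plain gradient descent on squared error
    w -= 0.001 * 2 * X.T @ (X @ w - y) / len(y)
w = np.clip(w, 0, None); w /= w.sum()              # normalized importance values

# Score the candidate supplier with the learned weights.
candidate = np.array([defuzzify(r) for r in ["good", "good", "fair"]])
print("weights:", np.round(w, 2), "candidate rating:", round(float(w @ candidate), 2))
```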

Keywords: fuzzy data, neural network, supplier, supply chain

Procedia PDF Downloads 101
15224 New Chances of Reforming Pedagogical Approach In Secondary English Class in China under the New English Curriculum and National College Entrance Examination Reform

Authors: Yue Wang

Abstract:

Five years have passed since the newest English curriculum reform policy was published in China, and hand-wringing has spread among teachers who charge that this is another 'wearing new shoes to walk the old road' policy. This paper provides a thorough philosophical policy analysis of the serious efforts that have been made to support this reform and reveals the hindrances that kept the reform from yielding the desired effect. Blame could easily be put on teachers for insufficient pedagogical content knowledge, conservative resistance, and the handicaps of large class sizes, limited teaching time, and so on. However, the underlying causes of this implementation failure are interrelated factors in the NCEE-centred education system, such as reluctance from students, lack of school and education bureau support, and insufficient teacher training. A further discussion of the 2017-2020 NCEE reform of English prompts new possibilities for authentic pedagogical reform in secondary English classes. In all, pedagogical reform at the secondary level is heading towards a brighter future with the initiation of the new NCEE reform.

Keywords: English curriculum, failure, NCEE, new possibilities, pedagogical, policy analysis, reform

Procedia PDF Downloads 126
15223 An Information-Based Approach for Preference Method in Multi-Attribute Decision Making

Authors: Serhat Tuzun, Tufan Demirel

Abstract:

Multi-Criteria Decision Making (MCDM) is the modelling of real life to solve the problems we encounter. It is a discipline that aids decision makers faced with conflicting alternatives in making an optimal decision. MCDM problems can be classified into two main categories, Multi-Attribute Decision Making (MADM) and Multi-Objective Decision Making (MODM), based on their different purposes and data types. Although various MADM techniques have been developed for the problems encountered, their methodology is limited in modelling real life; objective results are hard to obtain, and findings are generally derived from subjective data. New and modified techniques have been developed around approaches such as fuzzy logic, but these comprehensive techniques, even though they model real life better, have not found a place in real-world applications because their complex structure makes them hard to apply. These constraints restrict the development of MADM. This study aims to conduct a comprehensive analysis of preference methods in MADM and to propose an information-based approach. For this purpose, a detailed literature review has been conducted, and current approaches, with their advantages and disadvantages, have been analyzed. The proposed approach is then introduced: performance values of the criteria are calculated in two steps, first by determining the distribution of each attribute and standardizing the values, then by calculating the information of each attribute as informational energy.
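
The informational-energy step can be made concrete; the sketch below uses Onicescu's informational energy on a histogram discretization of each standardized attribute (the paper's exact distribution-determination step may differ), with an invented decision matrix.

```python
import numpy as np

def informational_energy(values: np.ndarray, bins: int = 10) -> float:
    """Step 1: standardize the attribute; step 2: discretize it into a
    distribution p and return Onicescu's informational energy sum(p^2)."""
    z = (values - values.mean()) / values.std(ddof=1)
    p, _ = np.histogram(z, bins=bins)
    p = p / p.sum()
    return float((p ** 2).sum())

# Illustrative decision matrix: rows = alternatives, columns = attributes.
decision_matrix = np.random.default_rng(0).normal(size=(50, 4))
energies = np.array([informational_energy(decision_matrix[:, j]) for j in range(4)])
weights = energies / energies.sum()   # more concentrated attributes weigh more
print(np.round(weights, 3))
```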

Keywords: literature review, multi-attribute decision making, operations research, preference method, informational energy

Procedia PDF Downloads 204
15222 A Collaborative Problem Driven Approach to Design an HR Analytics Application

Authors: L. Atif, C. Rosenthal-Sabroux, M. Grundstein

Abstract:

The requirements engineering process is a crucial phase in the design of complex systems. The purpose of our research is to present a collaborative, problem-driven requirements engineering approach that aims at improving the design of a Decision Support System (DSS) as an analytics application. This approach has been adopted to design a human resource management DSS. The requirements engineering process is presented as a series of guidelines for activities that must be implemented to ensure that the final product satisfies end-user requirements and takes into account the limitations identified. We know that a well-posed statement of the problem is 'a problem whose crucial character arises from collectively produced estimation and a formulation found to be acceptable by all the parties', and that DSSs were developed to help decision-makers solve their unstructured problems. We thus base our research on the assumption that developing a DSS, particularly for supporting poorly structured or unstructured decisions, cannot be done without considering end-user decision problems, how to represent them collectively, the content of decisions, their meaning, and the decision-making process; the field's issues thus arise in a multidisciplinary perspective. Our approach is problem-driven and collaborative: common end-user problems are reflected in the upstream design phase, and in the downstream phase these problems determine the design choices and potential technical solutions. We rely on a categorization of HR's problems for a development mirroring the analytics solution, which brings out a data-driven DSS typology: descriptive analytics, explicative or diagnostic analytics, predictive analytics, and prescriptive analytics. In our research, identifying the problem takes place along with the design of the solution, so we resort to significant transformations of the representations associated with the HR analytics application to build an increasingly detailed representation of the goal to be achieved. Here, collective cognition is reflected in the establishment of transfer functions of representations throughout the design process.

Keywords: DSS, collaborative design, problem-driven requirements, analytics application, HR decision making

Procedia PDF Downloads 280
15221 Shaping Traditional Chinese Culture in Contemporary Fashion: ‘Guochao’ as a Rising Aesthetic and the Case Study of the Designer Brand Angel Chen

Authors: Zhe Ginnie Wang

Abstract:

Recent cultural design studies have begun to shed light on the discussion of Western-Eastern cultural and aesthetic hybridization, especially in the field of fashion. With the unprecedented spread of culturally Chinese fashion design in the global fashion system, the under-identified 'Guochao' aesthetic that has emerged in the global market needs to be academically emphasized, with a methodological approach looking at the Western-Eastern cultural hybridization present in fashion visualization. Through an in-depth and comprehensive investigation of the internationally based Chinese designer Angel Chen's representative fashion show 'Madam Qing', this paper provides a methodological approach to how a form of traditional culture can be effectively extracted and applied to modern design using the most effective techniques. The central approach examined in this study involves creating aesthetic revolutions by addressing Chinese cultural identity through re-creating and modernizing traditional Chinese culture in design.

Keywords: style modernization, Chinese culture, guochao, design identity, fashion show, Angel Chen

Procedia PDF Downloads 339
15220 Evaluation of the Efficiency of French Language Educational Software for Learners in Semnan Province, Iran

Authors: Alireza Hashemi

Abstract:

In recent decades, language teaching methodology has undergone significant changes due to the advent of computers and the growth of educational software. French language education has also benefited from these developments, and various software has been produced to facilitate the learning of this language. However, the question arises whether these software programs meet the educational needs of Iranian learners, particularly in Semnan Province. The aim of this study is to evaluate the efficiency and effectiveness of French language educational software for learners in Semnan Province, considering educational, cultural, and technical criteria. In this study, content analysis and performance evaluation methods were used to examine the educational software ‘Français Facile’. This software was evaluated based on criteria such as teaching methods, cultural compatibility, and technical features. To collect data, standardized questionnaires and semi-structured interviews with learners in Semnan Province were used. Additionally, the SPSS statistical software was employed for quantitative data analysis, and the thematic analysis method was used for qualitative data. The results indicated that the ‘Français Facile’ software has strengths such as providing diverse educational content and an interactive learning environment. However, some weaknesses include the lack of alignment of educational content with the learning culture of learners in Semnan Province and technical issues in software execution. Statistical data showed that 65% of learners were satisfied with the educational content, but 55% reported issues related to cultural alignment with their needs. This study indicates that to enhance the efficiency of French language educational software, there is a need to localize educational content and improve technical infrastructure. Producing locally adapted educational software can improve the quality of language learning and increase the motivation of learners in Semnan Province. This research emphasizes the importance of understanding the cultural and educational needs of learners in the development of educational software and recommends that developers of educational software pay special attention to these aspects.

Keywords: educational software, French language, Iran, learners in Semnan province

Procedia PDF Downloads 16
15219 Integrating Practice-Based Learning in Accounting Education: Bolstering Students Engagement and Learning

Authors: Humayun Murshed, Shibly Abdullah

Abstract:

This paper shares the experience gained through a pilot project undertaken to teach an introductory accounting subject by linking real-life ground realities with the fundamental concepts of accounting. In view of the practical dimensions of accounting, it has been observed that adopting a teaching approach based on practical illustrations helps motivate students and generates interest in taking up the accounting profession as a career. The paper reports that students' perception of accounting as 'dreary' changed to 'interesting' due to the adoption of a practice-based approach to teaching. The authors argue that 'concept mapping' can play a vital role in facilitating practice-based education in accounting, which promotes a rewarding learning experience among students. The paper considers generic skills development, student-centric learning, the development of innovative assessment tasks, making students aware of the potential benefits of practice-based education (primarily through concept mapping), and engaging them both inside and outside the classroom as critical for ensuring the success of this approach.

Keywords: accounting education, pedagogy, practice-based education, concept mapping

Procedia PDF Downloads 328
15218 Sweden’s SARS-CoV-2 Mitigation Failure as a Science and Solutions Principle Case Study

Authors: Dany I. Doughan, Nizam S. Najd

Abstract:

Different governments in today's global pandemic are approaching the challenging and complex issue of mitigating the spread of the SARS-CoV-2 virus differently while simultaneously considering their national economic and operational bottom lines. One of the most notable successes has been Taiwan's multifaceted virus containment approach, which resulted in a substantially lower incidence rate compared to Sweden's chief mitigation tactic of herd immunity. From a classic Swiss Cheese Model perspective, integrating more fail-safe layers of defense against the virus in Taiwan's approach compared to Sweden's meant that in Taiwan, the government did not have to resort to extreme measures like the national lockdown Sweden is currently contemplating. From the standpoint of developing an optimized virus-spread mitigation solution using the Solutions Principle, the Taiwanese and Swedish solutions were desirable economically by businesses that remained open and non-economically or socially by individuals who enjoyed fewer disruptions from what they considered normal before the pandemic. Of the two, the Taiwanese approach was more feasible long-term from a workforce management and quality control perspective for healthcare facilities and their professionals, who were able to provide better, longer, more attentive care to the fewer new positive COVID-19 cases. Furthermore, the Taiwanese approach was more applicable as an overall model to emulate, thanks in part to its short-term and long-term multilayered approach, which allows for the kind of flexibility needed by other governments to fully or partially adapt or adopt said model. The Swedish approach, on the other hand, ignored the biochemical nature of the virus and relied heavily on short-term personal behavioral adjustments and conduct modifications, which are not as reliable as establishing required societal norms and awareness programs. The available international data on COVID-19 cases and the published governmental approaches to control the spread of the coronavirus support a better fit with the Solutions Principle for Taiwan's Swiss Cheese Model success story than for Sweden's.

Keywords: coronavirus containment and mitigation, solutions principle, Swiss Cheese Model, viral mutation

Procedia PDF Downloads 117
15217 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data

Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L. Duan

Abstract:

The conditional density characterizes the distribution of a response variable y given a predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, the authors extend NF neural networks to the case where an external x is present. Specifically, they use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zₚ, zₙ]. The zₚ component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zₙ component is a high-dimensional independent Gaussian vector, which explains the variations in y not related, or less related, to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework while significantly improving the interpretation of the latent component, since zₚ represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject id, from the other random variations. Further, the experiments show that an unconditional NF neural network based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.
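
A minimal sketch of the idea follows, under simplifications that are mine: a single affine-coupling block (real flows stack several with permutations), a linear layer standing in for the elementary predictive model of x, and small made-up dimensions. Here zₚ is taken as the last coordinates of z, inside the transformed half, so the flow can shape it.

```python
import torch
import torch.nn as nn

D_Y, D_P = 8, 2            # response dim and supervised latent dim (illustrative)

class Coupling(nn.Module):  # one RealNVP-style invertible block
    def __init__(self, dim):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Linear(self.d, 2 * (dim - self.d))
    def forward(self, y):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        s, t = self.net(y1).chunk(2, dim=1)
        s = torch.tanh(s)                       # stabilize the scales
        z2 = y2 * torch.exp(s) + t
        return torch.cat([y1, z2], dim=1), s.sum(dim=1)   # z and log|det J|

flow = Coupling(D_Y)
head = nn.Linear(3, D_P)   # x -> mean of z_p (stand-in for logistic/linear regression)
opt = torch.optim.Adam(list(flow.parameters()) + list(head.parameters()), lr=1e-3)

def nll_step(x, y):        # x: (B, 3) predictors, y: (B, D_Y) responses
    z, logdet = flow(y)
    # Target latent: z_n ~ N(0, I), z_p ~ N(head(x), I); z_p sits in the
    # last D_P coordinates so it lies in the transformed half of z.
    mu = torch.cat([torch.zeros(len(x), D_Y - D_P), head(x)], dim=1)
    nll = 0.5 * ((z - mu) ** 2).sum(dim=1) - logdet   # Gaussian NLL up to a constant
    loss = nll.mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```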

Keywords: conditional density estimation, image generation, normalizing flow, supervised dimension reduction

Procedia PDF Downloads 75
15216 The Right to Development as Constitutive and Prescriptive Right: The Lower Omo Valley Case of Ethiopia

Authors: Kebene K. Wodajo

Abstract:

The right to development (RTD) has gone through different phases of metamorphosis, from a right to economic growth to a right to full human development. Despite the fact that Africa has taken the lead in articulating and recognizing the RTD in a binding multilateral human rights treaty, realization of the right poses a challenge at the operational level. The challenge is worse in Sub-Saharan Africa, mainly because governments often tend to set economic growth as their ultimate goal, with very little consideration for the welfare of the local people in their territory. Ethiopia is no exception: while recording fast economic growth, it has seen increasing severity of multidimensional poverty. This paper explores the place of the 'people' in the development trajectory Ethiopia is pursuing, and whether and how a rights-based approach to development could be brought into practice beyond the rhetoric. By inquiring into the place of the 'people', the paper attempts to show whether the people are at the center or the periphery, beneficiaries or victims, of the ongoing development; in doing so, it reveals the gulf between the rhetoric and the reality of development practice. By asking whether and how a rights-based approach to development could bridge the gap, the paper shows how this approach could translate 'people's' needs into rights and recognize them as active subjects and stakeholders of the development process. As an instance of this gap, the paper takes the Lower Omo valley sugar plantation project as a case in point. Through analysis, the paper demonstrates that the development trajectory being followed by Ethiopia falls short of fitting into the human development discourse of the UN Declaration on the Right to Development (DRD), the African Charter on Human and Peoples' Rights (the Charter), and the Ethiopian constitution. The paper argues that Ethiopia's development efforts must take account of both the constitutive and prescriptive nature of the RTD if social equity is to be met.

Keywords: development, Ethiopia, lower Omo valley, right-based approach, right to development, people, people’s right

Procedia PDF Downloads 305
15215 Geometric Imperfections in Lattice Structures: A Simulation Strategy to Predict Strength Variability

Authors: Xavier Lorang, Ahmadali Tahmasebimoradi, Chetra Mang, Sylvain Girard

Abstract:

Additive manufacturing processes (e.g., selective laser melting) allow us to produce lattice structures which have lower weight, higher impact absorption capacity, and better thermal exchange properties compared to classical structures. Unfortunately, geometric imperfections (defects) in the lattice structures are by-products of the manufacturing process. These imperfections decrease the lifetime and the strength of the lattice structures and alter their mechanical responses. The objective of the paper is to present a simulation strategy which allows us to take into account the effect of the geometric imperfections on the mechanical response of the lattice structure. In the first part, an identification method for the geometric imperfection parameters of the lattice structure, based on point clouds, is presented. These point clouds are based on tomography measurements. The point clouds are fed into the platform LATANA (LATtice ANAlysis), developed by IRT-SystemX, to characterize the geometric imperfections. This is done by projecting the point cloud of each microbeam along the beam axis onto a 2D surface. Then, by fitting an ellipse to the 2D projections of the points, the geometric imperfections are characterized by three parameters of an ellipse: the semi-major and semi-minor axes and the angle of rotation. From the calculated parameters of the microbeam geometric imperfections, a statistical analysis is carried out to determine a probability density law based on a statistical hypothesis. Microbeam samples are randomly drawn from the density law and are used to generate lattice structures. In the second part, a finite element model of the lattice structure with the simplified geometric imperfections (ellipse parameters) is presented. This numerical model is used to simulate the generated lattice structures. The propagation of the uncertainties of the geometric imperfections is shown through the distribution of the computed mechanical responses of the lattice structures.
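
The ellipse characterization step can be sketched compactly; the version below uses a moment-based approximation (covariance of the projected points) rather than a least-squares ellipse fit, and simulated point clouds stand in for the tomography data.

```python
import numpy as np

def ellipse_params(points_2d: np.ndarray):
    """Moment-based ellipse of one microbeam cross-section: the covariance
    of the projected points gives semi-axes and a rotation angle."""
    c = points_2d - points_2d.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(c.T))     # ascending eigenvalues
    semi_minor, semi_major = 2 * np.sqrt(eigvals)      # ~95%-mass ellipse
    angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])   # major-axis rotation
    return semi_major, semi_minor, angle

# Propagation step: fit normal laws to the per-beam parameters, then draw
# imperfect beams for new lattice realizations (inputs to the FE model).
rng = np.random.default_rng(0)
params = np.array([ellipse_params(rng.normal(scale=(0.11, 0.09), size=(200, 2)))
                   for _ in range(500)])               # stand-in for tomography data
mean, std = params.mean(axis=0), params.std(axis=0, ddof=1)
sampled_beams = rng.normal(mean, std, size=(1000, 3))
print("a, b, theta means:", np.round(mean, 3))
```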

Keywords: additive manufacturing, finite element model, geometric imperfections, lattice structures, propagation of uncertainty

Procedia PDF Downloads 170
15214 An Optimized Approach to Generate the Possible States of Football Tournaments Final Table

Authors: Mouslem Damkhi

Abstract:

This paper focuses on the possible states of a football tournament final table according to the number of participating teams. Each team holds a position in the table, from which it is possible to determine the highest and lowest points attainable by that team. This paper proposes an optimized search space, based on the minimum and maximum number of points that can be gained by each team, to produce and enumerate the possible states of a football tournament final table. The proposed search space minimizes the production of invalid states, which cannot occur during a football tournament. The generated states are filtered by a validity-checking algorithm which seeks to reach a tournament graph based on each generated state. Thus, the algorithm provides a way to determine which combinations of wins, draws, and losses guarantee a particular table position. The paper also presents and discusses experimental results of the approach on tournaments with up to eight teams. Compared with a blind search algorithm, the proposed approach reduces the generation of invalid states by up to 99.99%, which results in considerable optimization in terms of execution time.
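
The bounded enumeration can be sketched as follows. The pruning rules below (per-team caps, a running total kept within the all-draws and no-draws bounds, non-increasing order) are necessary conditions only, so survivors would still go to the paper's validity-checking step; the exact bounds used by the paper may differ.

```python
def final_tables(n_teams: int):
    """Enumerate candidate final tables (non-increasing point sequences)
    under 3-1-0 scoring, pruning by per-team and total-point bounds."""
    games = n_teams * (n_teams - 1) // 2
    lo_total, hi_total = 2 * games, 3 * games          # all draws .. no draws

    def extend(prefix, remaining, total):
        if remaining == 0:
            if lo_total <= total <= hi_total:
                yield tuple(prefix)
            return
        cap = min(prefix[-1] if prefix else 3 * (n_teams - 1),
                  hi_total - total)                    # stay within the points budget
        for pts in range(cap, -1, -1):                 # keep the sequence non-increasing
            if total + pts * remaining < lo_total:
                break                                  # even repeating pts cannot reach the minimum
            yield from extend(prefix + [pts], remaining - 1, total + pts)

    yield from extend([], n_teams, 0)

tables = list(final_tables(4))
print(len(tables), "candidate tables for 4 teams; e.g.", tables[0])
```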

Keywords: combinatorics, enumeration, graph, tournament

Procedia PDF Downloads 106
15213 Mechanical and Physical Properties of Various Types of Dental Floss

Authors: Supanitayanon Lalita, Dechkunakorn Surachai, Anuwongnukroh Niwat, Srikhirin Toemsak, Roongrujimek Pitchaya, Tua-Ngam Peerapong

Abstract:

Objective: To compare the maximum load, percentage of elongation, and physical characteristics of 4 types of dental floss: (1) Thai Silk Floss (silk, waxed), (2) Oral B® Essential Floss (nylon, waxed), (3) Experimental Floss Xu (nylon, unwaxed), and (4) Experimental Floss Xw (nylon, waxed). Materials & method: The four types of floss were tested (n=30) with a Universal Testing Machine (Instron®). Each sample (30 cm long, 5 cm test segment) was fixed and pulled apart with a load cell of 100 N and a test speed of 100 mm/min. Physical characteristics were investigated by digital microscope under 2.5×10 magnification and by scanning electron microscope under 1×100 and 5×100 magnification. The size of the filaments was measured in microns (μm) and their fineness in denier. Statistical analysis: The maximum load and the percentage of elongation are presented as mean ± SD. The distribution of the data was checked by the Kolmogorov-Smirnov test. One-way ANOVA and multiple comparisons (Tukey HSD) were used to analyze the differences among the groups, with statistical significance set at p < 0.05. Results: The maximum loads of Floss Xu, Floss Xw, Oral B, and Thai Silk were 47.39, 46.46, 25.38, and 23.70 N, respectively. The percentages of elongation of Oral B, Floss Xw, Floss Xu, and Thai Silk were 72.43, 44.62, 31.25, and 16.44%, respectively. All 4 types of dental floss showed statistically significant differences in both maximum load and percentage of elongation at p < 0.05, except for the maximum load between Floss Xw and Floss Xu, which showed no statistically significant difference. The physical characterization revealed that Thai Silk had the most disintegrated, smallest, and least fine filaments. Conclusion: Floss Xu had the highest maximum load, and Oral B the highest percentage of elongation. Wax coating on Floss X increased the elongation but had no significant effect on the maximum load. The physical characteristics of Thai Silk account for its lowest mechanical property values.
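
The statistical pipeline is standard and easy to reproduce; the sketch below simulates values around the reported group means (the measured data and group SDs are not available here, so the SD is assumed) and runs the normality check, one-way ANOVA, and Tukey HSD.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Simulated maximum-load data around the reported means (SD is assumed).
rng = np.random.default_rng(7)
means = {"ThaiSilk": 23.70, "OralB": 25.38, "FlossXu": 47.39, "FlossXw": 46.46}
data = {k: rng.normal(m, 3.0, size=30) for k, m in means.items()}   # n = 30 each

# Normality per group (Kolmogorov-Smirnov on standardized values), then ANOVA.
for name, x in data.items():
    z = (x - x.mean()) / x.std(ddof=1)
    print(name, "normal:", stats.kstest(z, "norm").pvalue > 0.05)
print(stats.f_oneway(*data.values()))

# Tukey HSD multiple comparison at alpha = 0.05.
y = np.concatenate(list(data.values()))
labels = np.repeat(list(data.keys()), 30)
print(pairwise_tukeyhsd(y, labels, alpha=0.05))
```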

Keywords: dental floss, maximum load, mechanical property, percentage of elongation, physical property

Procedia PDF Downloads 257
15212 Efficient Filtering of Graph Based Data Using Graph Partitioning

Authors: Nileshkumar Vaishnav, Aditya Tatu

Abstract:

An algebraic framework for processing graph signals axiomatically designates the graph adjacency matrix as the shift operator. In this setup, we often encounter a problem wherein we know the filtered output and the filter coefficients and need to find the input graph signal. Solving this problem with a direct approach requires O(N³) operations, where N is the number of vertices in the graph. In this paper, we adapt the spectral graph partitioning method and use it to reduce the computational cost of the filtering problem. We use the example of denoising temperature data to illustrate the efficacy of the approach.
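
A sketch of the problem setup and of the cost argument follows; partitioning by the sign of the Fiedler vector and solving each block independently is a block-diagonal approximation that ignores cut edges, so it only illustrates why partitioning shrinks the O(N³) solve (the paper's actual scheme is not reproduced here).

```python
import numpy as np

def filter_matrix(A, h):          # H(A) = sum_k h_k A^k
    H, P = np.zeros_like(A), np.eye(len(A))
    for hk in h:
        H += hk * P
        P = P @ A
    return H

rng = np.random.default_rng(3)
A = (rng.random((60, 60)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T                      # random undirected adjacency
s_in = rng.normal(size=60)
h = [1.0, 0.5, 0.25]                                # filter taps (illustrative)
s_out = filter_matrix(A, h) @ s_in                  # known output, unknown input

# Spectral bipartition: sign of the Fiedler vector of the graph Laplacian.
L = np.diag(A.sum(1)) - A
fiedler = np.linalg.eigh(L)[1][:, 1]
part = fiedler >= 0

# Solve the two subproblems independently: O((N/2)^3) each vs O(N^3) overall.
s_rec = np.empty(60)
for idx in (np.where(part)[0], np.where(~part)[0]):
    Hb = filter_matrix(A[np.ix_(idx, idx)], h)
    s_rec[idx] = np.linalg.solve(Hb, s_out[idx])
print("relative error:", np.linalg.norm(s_rec - s_in) / np.linalg.norm(s_in))
```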

Keywords: graph signal processing, graph partitioning, inverse filtering on graphs, algebraic signal processing

Procedia PDF Downloads 296
15211 Reliability Analysis of Construction Schedule Plan Based on Building Information Modelling

Authors: Lu Ren, You-Liang Fang, Yan-Gang Zhao

Abstract:

In recent years, the application of BIM (Building Information Modelling) to construction schedule plans has been the focus of more and more researchers. In order to assess the reasonableness of a BIM-based construction schedule plan, that is, whether the schedule can be completed on time, some researchers have introduced reliability theory for the evaluation. In this evaluation, the uncertain factors affecting the construction schedule plan are regarded as random variables, and the probability distributions of the random variables are assumed to be normal, determined by two parameters evaluated from the mean and standard deviation of statistical data. However, in practical engineering, most of the uncertain influencing factors are not normal random variables, so the evaluation results will be unreasonable under the assumption that the random variables follow normal distributions. In order to obtain a more reasonable evaluation, it is therefore necessary to describe the distributions of the random variables more comprehensively. For this purpose, the cubic normal distribution is introduced in this paper to describe the distribution of an arbitrary random variable; it is determined by the first four moments (mean, standard deviation, skewness, and kurtosis). In this paper, the BIM model is first built according to the design information of the structure, and the construction schedule plan is made based on BIM; the cubic normal distribution is then used to describe the distributions of the random variables, based on the collected statistical data on the random factors influencing the construction schedule plan. Next, the reliability analysis of the BIM-based construction schedule plan can be carried out more reasonably. Finally, more accurate evaluation results can be given, providing a reference for the implementation of the actual construction schedule plan. In the last part of this paper, the improved efficiency and accuracy of the proposed methodology for the reliability analysis of BIM-based construction schedule plans are demonstrated through a practical engineering case.
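
One common way to realize a four-moment ("cubic normal") distribution is Fleishman's third-order polynomial of a standard normal variable; the paper's parameterization may differ in detail, and the duration sample below is a stand-in for collected schedule data.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import skew, kurtosis

def fleishman(g1, g2):          # g1 = skewness, g2 = excess kurtosis
    """Coefficients of X = a + b*U + c*U^2 + d*U^3, U ~ N(0,1), matching
    zero mean, unit variance, skewness g1, and excess kurtosis g2."""
    def eqs(p):
        b, c, d = p
        return (b**2 + 6*b*d + 2*c**2 + 15*d**2 - 1,
                2*c*(b**2 + 24*b*d + 105*d**2 + 2) - g1,
                24*(b*d + c**2*(1 + b**2 + 28*b*d)
                    + d**2*(12 + 48*b*d + 141*c**2 + 225*d**2)) - g2)
    b, c, d = fsolve(eqs, (1.0, 0.1, 0.0))
    return -c, b, c, d          # a = -c keeps the mean at zero

durations = np.random.default_rng(5).gamma(2.0, 3.0, size=500)  # stand-in activity data
a, b, c, d = fleishman(skew(durations), kurtosis(durations))
u = np.random.default_rng(6).normal(size=100_000)
x = durations.mean() + durations.std(ddof=1) * (a + b*u + c*u**2 + d*u**3)
print("P(activity > 12 days):", (x > 12).mean())   # exceedance from the fitted law
```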

Keywords: BIM, construction schedule plan, cubic normal distribution, reliability analysis

Procedia PDF Downloads 125
15210 A Qualitative Study Examining the Process of EFL Course Design from the Perspectives of Teachers

Authors: Iman Al Khalidi

Abstract:

Recently, English has become the language of globalization and technology. In turn, this has resulted in a seemingly bewildering array of influences and trends in the domain of TESOL curriculum. In light of these changes, higher education has to provide a new and more powerful kind of education: it should prepare students to be more engaged citizens, more capable of solving complex problems at work, and well prepared to lead a meaningful life. In response, universities, colleges, schools, and departments have to operate in light of the requirements and challenges of the global and technological era. Consequently, they have to focus on the adoption of a contemporary curriculum in line with the pedagogical shift from a teaching-centered to a learning-centered approach. There has accordingly been noticeable emphasis on the crucial importance of developing and professionalizing teachers in order to engage them in the processes of curriculum development and action research. This is a qualitative study that aims at understanding and exploring the process of designing EFL courses by teachers at the tertiary level, from the perspectives of the participants, in a professional TESOL context: the Department of English of a private college in Oman. It is a case study grounded in the philosophy of the qualitative approach. It employs multiple methods for collecting qualitative data: semi-structured interviews with teachers, focus group discussions with students, and document analysis. The collected data have been analyzed qualitatively by adopting Miles and Huberman's approach, using procedures of reduction, coding, display, and conclusion drawing and verification.

Keywords: course design, components of course design, case study, data analysis

Procedia PDF Downloads 525
15209 A Qualitative Study Examining the Process of Course Design from the Perspectives of Teachers

Authors: Iman Al Khalidi

Abstract:

Recently, English has become the language of globalization and technology. In turn, this has resulted in a seemingly bewildering array of influences and trends in the domain of TESOL curriculum. In light of these changes, higher education has to provide a new and more powerful kind of education: it should prepare students to be more engaged citizens, more capable of solving complex problems at work, and well prepared to lead a meaningful life. In response, universities, colleges, schools, and departments have to operate in light of the requirements and challenges of the global and technological era. Consequently, they have to focus on the adoption of a contemporary curriculum in line with the pedagogical shift from a teaching-centered to a learning-centered approach. There has accordingly been noticeable emphasis on the crucial importance of developing and professionalizing teachers in order to engage them in the processes of curriculum development and action research. This is a qualitative study that aims at understanding and exploring the process of designing EFL courses by teachers at the tertiary level, from the perspectives of the participants, in a professional TESOL context: the Department of English of a private college in Oman. It is a case study grounded in the philosophy of the qualitative approach. It employs multiple methods for collecting qualitative data: semi-structured interviews with teachers, focus group discussions with students, and document analysis. The collected data have been analyzed qualitatively by adopting Miles and Huberman's approach, using procedures of reduction, coding, display, and conclusion drawing and verification.

Keywords: course design, components of course design, case study, data analysis

Procedia PDF Downloads 430
15208 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms can support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability of mean signals, extracted from ICA components corresponding to 15 well-known networks, to distinguish between controls and patients. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. The estimated total lesion load (ml) and number of lesions were calculated with the LST toolbox from the corrected T1 and FLAIR images. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-timing correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance or the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out of the time series. We applied independent component analysis (ICA) with the GIFT toolbox, using the Infomax approach with the number of components set to 21. Fifteen meaningful components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in each network), with the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM, to obtain a ranking of the most predictive variables. We then built two new classifiers on only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable in both cases was the sensorimotor network I. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensorimotor I network; similar importance values were obtained for the sensorimotor II, cerebellum, and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
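
The pipeline translates directly to scikit-learn; the study used R, and since sklearn's RFE needs coefficients, the sketch below wraps a linear SVM for the elimination step while keeping the RBF kernel for the final classifier. The data are random placeholders with the study's dimensions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(37, 15))          # 37 subjects x 15 network mean signals
y = rng.integers(0, 2, size=37)        # 0 = control, 1 = early MS (placeholder)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75,
                                          random_state=1, stratify=y)

# Random forest: intrinsic, Gini-based feature ranking.
rf = RandomForestClassifier(n_estimators=500, random_state=1).fit(X_tr, y_tr)
top_rf = np.argsort(rf.feature_importances_)[::-1][:1]   # keep the best network

# Recursive feature elimination for the SVM (linear kernel for coefficients).
rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X_tr, y_tr)
top_svm = np.where(rfe.support_)[0]

# Retrain on the selected feature(s) only and score on the held-out set.
rf1 = RandomForestClassifier(n_estimators=500, random_state=1).fit(X_tr[:, top_rf], y_tr)
svm1 = SVC(kernel="rbf").fit(X_tr[:, top_svm], y_tr)
print("RF:", rf1.score(X_te[:, top_rf], y_te), "SVM:", svm1.score(X_te[:, top_svm], y_te))
```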

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 227