Search results for: constructivist theoretical approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16039

10519 Contextualizing Communication through Culture and Social Structure: An Exploration of Media Life

Authors: Jyoti Ranjan Sahoo

Abstract:

Communication is a social phenomenon that mediates everyday life: it creates, maintains, circulates, and propagates a common identity for society. Symbolic forms of communication such as sounds, oral expressions, signs, and language are used in everyday life and help to identify the construction of social reality. These symbolic forms are treated as social processes in everyday life; there is therefore an intrinsic relationship between communication and culture in understanding the media life of village communities. Similarly, the interface of communication with social life is reflected in its formulation of the notions of social structure and culture. An overlapping and new phenomenal change in media life has been observed among marginalized communities in general and village communities in particular. This paper is therefore the outcome of a decadal stock of literature and an empirical investigation into the understanding of communication in a tribal village in India. It examines the idea of the American anthropologist Edward T. Hall that "culture is communication, and communication is culture" in village society in order to understand media life. Harold Innis's theoretical idea of "communication" is also critically examined in this context, since the author tries to explore the inter-disciplinarity of understanding media life through communication and culture as embedded in socio-cultural life. The paper traces the inter-disciplinary and historical trajectories of communication as embedded in other social science disciplines, and maps these studies towards future directions and engagements that would have bearing on epistemological and ontological implications in the field of media and communication.

Keywords: culture, communication, history, media, oral, tradition

Procedia PDF Downloads 347
10518 FRATSAN: A New Software for Fractal Analysis of Signals

Authors: Hamidreza Namazi

Abstract:

Fractal analysis assesses the fractal characteristics of data. It consists of several methods for assigning fractal characteristics to a dataset, which may be a theoretical dataset or a pattern or signal extracted from phenomena including natural geometric objects, sound, market fluctuations, heart rates, digital images, molecular motion, networks, etc. Fractal analysis is now widely used in all areas of science. An important limitation of fractal analysis is that arriving at an empirically determined fractal dimension does not necessarily prove that a pattern is fractal; other essential characteristics have to be considered. For this purpose, a Visual C++ based software package called FRATSAN (FRActal Time Series ANalyser) was developed which extracts information from signals through three measures: the fractal dimension, Jeffrey's measure, and the Hurst exponent. After computing these measures, the software plots the graphs for each measure. Besides computing the three measures, the software can classify whether or not the signal is fractal. The software uses a dynamic method of analysis for all the measures: a sliding window is selected with a length equal to 10% of the total number of data entries, and this window is moved one data entry at a time to obtain all the measures. This makes the computation very sensitive to slight changes in the data, thereby giving the user an acute analysis of the data. In order to test the performance of the software, a set of EEG signals was given as input, and the results were computed and plotted. The software is useful not only for fundamental fractal analysis of signals but also for other purposes. For instance, by analyzing the Hurst exponent plot of a given EEG signal in patients with epilepsy, the onset of seizure can be predicted by noticing sudden changes in the plot.
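
A minimal illustration of the sliding-window analysis described above, assuming a rescaled-range (R/S) estimator for the Hurst exponent (the exact algorithms implemented in FRATSAN for its three measures are not specified here); the window length is 10% of the signal and the window advances one sample at a time:

```python
import numpy as np

def hurst_rs(x):
    """Estimate the Hurst exponent of a 1-D signal with a simple rescaled-range (R/S) fit."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # several sub-series lengths; fit log(R/S) against log(length)
    sizes = np.unique(np.logspace(np.log10(8), np.log10(n // 2), 10).astype(int))
    rs = []
    for s in sizes:
        segs = x[: (n // s) * s].reshape(-1, s)
        dev = np.cumsum(segs - segs.mean(axis=1, keepdims=True), axis=1)
        r = dev.max(axis=1) - dev.min(axis=1)          # range of cumulative deviations
        sd = segs.std(axis=1)
        rs.append(np.mean(r[sd > 0] / sd[sd > 0]))     # average R/S over segments
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

def sliding_hurst(signal):
    """Sliding window of 10% of the data, moved one sample at a time (as in FRATSAN)."""
    signal = np.asarray(signal, dtype=float)
    w = max(int(0.1 * len(signal)), 32)
    return np.array([hurst_rs(signal[i:i + w]) for i in range(len(signal) - w + 1)])

# toy usage: random noise plus a drifting component stands in for an EEG channel
eeg = np.cumsum(np.random.randn(2000)) * 0.01 + np.random.randn(2000)
h = sliding_hurst(eeg)
print(h.min(), h.max())
```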

Keywords: EEG signals, fractal analysis, fractal dimension, hurst exponent, Jeffrey’s measure

Procedia PDF Downloads 452
10517 Inconsistent Safety Leadership as a Predictor of Employee Safety Behavior

Authors: Jane Mullen, Ann Rheaume, Kevin Kelloway

Abstract:

Research on the effects of inconsistent safety leadership is limited, particularly regarding employee safety behavior in organizations. Inconsistent safety leadership occurs when organizational leaders display both effective and ineffective styles of safety leadership (i.e., transformational vs. laissez-faire). In this study, we examine the effect of inconsistent safety leadership style on employee safety participation. Defined as the interaction of S.A.F.E.R (Speak, Act, Focus, Engage and Recognize) leadership style and passive leadership style, inconsistent safety leadership was found to be a significant predictor of safety participation in a sample of 307 nurses in Eastern Canada. Results of the moderated regression analysis also showed a significant main effect for S.A.F.E.R leadership, but not for passive leadership. To further explore the significant interaction, the simple slopes for S.A.F.E.R leadership at high and low levels (1 SD above and below the mean) of passive leadership were plotted. As predicted, the positive effects of S.A.F.E.R leadership behavior were attenuated when leaders were perceived by employees as also displaying high levels of passive leadership (i.e., inconsistent leadership styles). The research makes important theoretical and practical contributions to the occupational health and safety literature. The results demonstrate that leadership behavior characteristic of the S.A.F.E.R model is positively associated with employee safety participation. This finding is particularly important as researchers continue to explore what leaders can do to engage employees in work-related safety activities. The results also demonstrate how passive leadership may undermine the positive outcomes associated with safety leadership behavior in organizations. The data suggest that employee safety behavior is highest when leaders engage in effective safety leadership behavior on a consistent basis, rather than periodically.
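
A sketch of the moderated regression and simple-slopes probing described above, assuming standardized predictor scores and the statsmodels OLS formula interface; the variable names and simulated data are illustrative, not the study's actual dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 307  # sample size reported in the abstract
df = pd.DataFrame({
    "safer": rng.normal(size=n),     # S.A.F.E.R leadership (standardized)
    "passive": rng.normal(size=n),   # passive leadership (standardized)
})
df["participation"] = 0.4 * df.safer - 0.3 * df.safer * df.passive + rng.normal(size=n)

# moderated regression: main effects plus interaction term
model = smf.ols("participation ~ safer * passive", data=df).fit()
print(model.summary().tables[1])

# simple slopes of S.A.F.E.R leadership at +/- 1 SD of passive leadership
b = model.params
for level, z in [("low passive (-1 SD)", -1.0), ("high passive (+1 SD)", 1.0)]:
    slope = b["safer"] + b["safer:passive"] * z
    print(level, "slope of SAFER:", round(slope, 3))
```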

Keywords: employee safety behavior, leadership, participation, safety training

Procedia PDF Downloads 347
10516 Geostatistical Models to Correct Salinity of Soils from Landsat Satellite Sensor: Application to the Oran Region, Algeria

Authors: Dehni Abdellatif, Lounis Mourad

Abstract:

Applied spatial geostatistics in materials science, precision agriculture, and agricultural statistics offers a new approach to managing and monitoring water and groundwater quality in relation to salt-affected soil. Previous experience with data acquisition and spatial-preparation studies on optical and multispectral data has facilitated the integration of models that correct electrical conductivity in relation to soil temperature (soil horizons). For tomography purposes, this physical parameter was extracted by calibrating the thermal band (LANDSAT ETM+6) with a radiometric correction. Our study area is the Oran region (north-west Algeria). Different spectral indices are determined, such as salinity and sodicity indices, the Combined Spectral Reflectance Index (CSRI), the Normalized Difference Vegetation Index (NDVI), emissivity, albedo, and the Sodium Adsorption Ratio (SAR). Geostatistical modeling of electrical conductivity (salinity) appears to be a useful decision-support system for estimating corrected electrical resistivity in relation to the temperature of surface soils, using conversion models that substitute the reference temperature of 25°C (the constraint under which the hydrochemical data are collected). The brightness temperatures extracted from satellite reflectance (LANDSAT ETM+) are used in consistency models to estimate electrical resistivity. The confusion that arises from the effects of salt stress and water stress is removed, followed by a seasonal application of geostatistical analysis with Geographic Information System (GIS) techniques to investigate and monitor the variation of electrical conductivity in the alluvial aquifer of Es-Sénia for salt-affected soil.
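
A small sketch of the temperature-correction step mentioned above, converting electrical conductivity measured at the soil surface temperature (e.g., derived from the Landsat thermal band) to the 25°C reference, assuming the commonly used linear compensation coefficient of about 2% per °C; the paper's exact conversion model may differ:

```python
def ec_at_25c(ec_t, temp_c, alpha=0.02):
    """Convert electrical conductivity measured at temp_c (degC) to the 25 degC reference value.

    ec_t  : conductivity measured at temperature temp_c (e.g., dS/m)
    alpha : temperature compensation coefficient (~0.02 per degC is a common assumption)
    """
    return ec_t / (1.0 + alpha * (temp_c - 25.0))

# example: EC measured at a brightness-temperature-derived soil temperature of 34 degC
print(round(ec_at_25c(4.2, 34.0), 3), "dS/m at 25 degC")
```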

Keywords: geostatistical modelling, landsat, brightness temperature, conductivity

Procedia PDF Downloads 428
10515 A Hierarchical Bayesian Calibration of Data-Driven Models for Composite Laminate Consolidation

Authors: Nikolaos Papadimas, Joanna Bennett, Amir Sakhaei, Timothy Dodwell

Abstract:

Composite modeling of consolidation processes plays an important role in process and part design by indicating the formation of possible unwanted defects prior to expensive experimental iterative trial and development programs. Composite materials in their uncured state display complex constitutive behavior, which has received much academic interest and for which different models have been proposed. Errors from modeling and from the statistical fitting of these models will propagate through any simulation in which the material model is used. A general hyperelastic polynomial representation is proposed, which can be readily implemented in various nonlinear finite element packages; in our case, FEniCS was chosen. The coefficients are assumed uncertain, and therefore the distribution of parameters is learned using Markov Chain Monte Carlo (MCMC) methods. In engineering, the approach often followed is to select a single set of model parameters which, on average, best fits a set of experiments. There are good statistical reasons why this is not a rigorous approach to take. To overcome these challenges, a hierarchical Bayesian framework is proposed in which the population distribution of model parameters is inferred from an ensemble of experimental tests. The resulting sampled distribution of hyperparameters is approximated using Maximum Entropy methods so that it can be readily sampled when embedded within a stochastic finite element simulation. The methodology is validated and demonstrated on a set of consolidation experiments of AS4/8852 with various stacking sequences. The resulting distributions are then applied to stochastic finite element simulations of the consolidation of curved parts, leading to a distribution of possible model outputs. With this, the paper, as far as the authors are aware, presents the first stochastic finite element implementation in composite process modelling.
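
A minimal sketch of the hierarchical idea described above, assuming a single scalar material parameter per experiment drawn from a population distribution and a plain random-walk Metropolis sampler; the actual hyperelastic polynomial model, FEniCS simulations, and Maximum Entropy approximation are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "ensemble of experiments": each test yields noisy observations of its own parameter,
# and the per-test parameters are drawn from a population (hyper) distribution
true_mu, true_tau, noise = 2.0, 0.3, 0.1
theta_true = rng.normal(true_mu, true_tau, size=8)
data = [rng.normal(t, noise, size=20) for t in theta_true]

def log_post(mu, log_tau, thetas):
    tau = np.exp(log_tau)
    lp = -0.5 * (mu / 10.0) ** 2 - 0.5 * (log_tau / 2.0) ** 2      # weak priors on hyperparameters
    lp += np.sum(-0.5 * ((thetas - mu) / tau) ** 2 - np.log(tau))  # population level
    for th, y in zip(thetas, data):                                # experiment level
        lp += np.sum(-0.5 * ((y - th) / noise) ** 2)
    return lp

# random-walk Metropolis over (mu, log_tau, theta_1..theta_8)
state = np.concatenate([[1.0, 0.0], np.ones(len(data))])
lp = log_post(state[0], state[1], state[2:])
samples = []
for it in range(20000):
    prop = state + rng.normal(scale=0.05, size=state.size)
    lp_prop = log_post(prop[0], prop[1], prop[2:])
    if np.log(rng.uniform()) < lp_prop - lp:
        state, lp = prop, lp_prop
    samples.append(state.copy())

samples = np.array(samples[5000:])
print("posterior mean of population mu :", samples[:, 0].mean())
print("posterior mean of population tau:", np.exp(samples[:, 1]).mean())
```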

Keywords: data-driven, material consolidation, stochastic finite elements, surrogate models

Procedia PDF Downloads 132
10514 Fire Protection Performance of Different Industrial Intumescent Coatings for Steel Beams

Authors: Serkan Kocapinar, Gülay Altay

Abstract:

This study investigates the fire protection performance of two industrial intumescent coatings with different types of certification, applied to steel beams exposed to the ISO 834 fire for 2 hours. A better understanding of industrial intumescent coatings, which assure structural integrity and prevent the collapse of steel structures, is needed to minimize fire risks in steel structures. Two fire-protective intumescent coatings, Product A and Product B, each used as a thermal barrier between the steel components and the fire, are compared. Product A is tested according to EN 13381-8 and BS 476-20,22 and is certified to ISO standards. Product B is tested according to EN 13381-8 and ASTM UL-94 and is certified by the Turkish Standards Institute (TSE). Fire tests to evaluate the fire performance of steel components are generally done numerically with commercial software instead of experiments, due to the high cost of an ISO 834 furnace test. Hence, there is a gap in the literature regarding comparisons of differently certified intumescent coatings under ISO 834 fire in a 2-hour furnace experiment. The experiment was carried out using two 1-meter UPN 200 steel sections, each coated with a different industrial intumescent coating. A furnace of the Turkish Standards Institute (TSE) was used for the experiment. The temperatures of the protected steels and of the furnace interior were measured over the two hours with 24 thermocouples, installed before the intumescent coatings were applied, in order to obtain the temperature-time curves of the steel components and assess the performance of the coatings. FIN EC software was used to determine the critical temperatures of the protected steels, and Abaqus was used for thermal analysis to obtain theoretical results for comparison with the experimental results.
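
For reference, the ISO 834 standard fire curve used in the two-hour test can be written as θ_g = 20 + 345·log₁₀(8t + 1) with t in minutes; a short sketch for generating the gas-temperature history against which the measured steel temperatures are compared:

```python
import numpy as np

def iso834_gas_temperature(t_min):
    """Standard ISO 834 fire curve: gas temperature (degC) at time t_min (minutes)."""
    return 20.0 + 345.0 * np.log10(8.0 * np.asarray(t_min, dtype=float) + 1.0)

t = np.arange(0, 121)               # two-hour exposure, one-minute steps
theta_g = iso834_gas_temperature(t)
print(round(theta_g[30], 1), round(theta_g[60], 1), round(theta_g[120], 1))  # at 30, 60, 120 min
```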

Keywords: fire safety, structural steel, ABAQUS, thermal analysis, FIN EC, intumescent coatings

Procedia PDF Downloads 87
10513 Developing Dynamic Capabilities: The Case of Western Subsidiaries in Emerging Market

Authors: O. A. Adeyemi, M. O. Idris, W. A. Oke, O. T. Olorode, S. O. Alayande, A. E. Adeoye

Abstract:

The purpose of this paper is to investigate the process of capability building at the subsidiary level and the challenges to that process. The relevance of external factors for capability development has not been explicitly addressed in empirical studies, whereas internal factors, acting as enablers, have been studied more extensively. With reference to external factors, subsidiaries are actively influenced by specific characteristics of the host country, implying a need to become fully immersed in local culture and practices. Specifically, in MNCs there has been a widespread trend in management practice to increase subsidiary autonomy, with subsidiary managers being encouraged to act entrepreneurially and to take advantage of host-country specificity. As such, it could be proposed that: P1: The degree to which subsidiary management is connected to the host country will positively influence the capability development process. Dynamic capabilities reside to a large measure with the subsidiary management team, but are affected by the organizational processes, systems and structures that the MNC headquarters has designed to manage its business. At the subsidiary level, the weight of the subsidiary in the network, its initiative-taking and its profile building increase the supportive attention of the HQ and are relevant to the success of the capability-building process. Therefore, our second proposition is that: P2: Subsidiary role and HQ support are relevant elements in capability development at the subsidiary level. Design/Methodology/Approach: The present study will adopt the multiple case study approach, because case study research is relevant when addressing issues without known empirical evidence or with little developed prior theory. The key definitions and literature sources directly connected with the operations of western subsidiaries in emerging markets, such as China, are well established. A qualitative approach, i.e., case studies of three western subsidiaries, will be adopted. The companies have similar products, they have operations in China, and they are mature in their internationalization process. Interviews with key informants, annual reports, press releases, media materials, presentation material to customers and stakeholders, and other company documents will be used as data sources. Findings: Western subsidiaries in emerging markets operate in a way substantially different from those in the West. What are the conditions initiating the outsourcing of operations? The paper will discuss and present two relevant propositions guiding that process. Practical Implications: MNC headquarters should be aware of the potential for capability development at the subsidiary level. This increased awareness could prompt headquarters to consider possible ways of encouraging such capability development and of leveraging these capabilities for better MNC headquarters and/or subsidiary performance. Originality/Value: The paper is expected to contribute to the theme of drivers of subsidiary performance with a focus on emerging markets. In particular, it will show how certain external conditions could promote a capability-building process within subsidiaries.

Keywords: case studies, dynamic capability, emerging market, subsidiary

Procedia PDF Downloads 111
10512 Offline High Voltage Diagnostic Test Findings on 15MVA Generator of Basochhu Hydropower Plant

Authors: Suprit Pradhan, Tshering Yangzom

Abstract:

Even with the availability of modern online insulation diagnostic technologies such as partial discharge monitoring, measurements such as Dissipation Factor (tanδ), DC high voltage insulation current, Polarization Index (PI) and insulation resistance are still widely used as diagnostic tools to assess the condition of stator insulation in hydropower plants. To evaluate the condition of the stator winding insulation in one of the generators that has been operated since 1999, diagnostic tests were performed on the stator bars of the 15 MVA generators of the Basochhu Hydropower Plant. This paper presents a diagnostic study of data gathered from measurements performed in 2015 and 2016 as part of regular maintenance, since no proper ageing data had been maintained since commissioning. Measurement results of Dissipation Factor, DC high potential tests and Polarization Index are discussed with regard to their effectiveness in assessing the ageing condition of the stator insulation. After a brief review of the theoretical background, the strengths of each diagnostic method in detecting symptoms of insulation deterioration are identified. The results observed at the Basochhu Hydropower Plant lead to the conclusion that Polarization Index and DC high voltage insulation current measurements are best suited to detecting humidity and contamination problems, while the Dissipation Factor measurement is a robust indicator of long-term ageing caused by oxidative degradation.
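
A small sketch of the two time-ratio indices discussed above, assuming the usual definitions of the Polarization Index (10-minute to 1-minute insulation resistance) and the dielectric absorption ratio (60-second to 30-second insulation resistance); the readings used below are illustrative, not the Basochhu measurements:

```python
def polarization_index(ir_10min, ir_1min):
    """PI = insulation resistance at 10 min / insulation resistance at 1 min."""
    return ir_10min / ir_1min

def dielectric_absorption_ratio(ir_60s, ir_30s):
    """DAR = insulation resistance at 60 s / insulation resistance at 30 s."""
    return ir_60s / ir_30s

# illustrative readings in megaohms
pi = polarization_index(ir_10min=4200.0, ir_1min=1900.0)
dar = dielectric_absorption_ratio(ir_60s=2100.0, ir_30s=1600.0)
print(f"PI  = {pi:.2f}  (>= 2.0 is the commonly used acceptance guideline for class B/F insulation)")
print(f"DAR = {dar:.2f}")
```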

Keywords: dissipation factor (tanδ), polarization index (PI), DC high voltage insulation current, insulation resistance (IR), tan delta tip-up, dielectric absorption ratio

Procedia PDF Downloads 296
10511 The Log S-fbm Nested Factor Model

Authors: Othmane Zarhali, Cécilia Aubrun, Emmanuel Bacry, Jean-Philippe Bouchaud, Jean-François Muzy

Abstract:

The nested factor model was introduced by Bouchaud et al., where asset return fluctuations are explained by common factors representing the market economic sectors and by residuals (noises) that share with the factors a common dominant volatility mode, in addition to the idiosyncratic mode proper to each residual. This construction implies that the factor and residual log-volatilities are correlated. Here, we consider the case of a single factor in which the only dominant common mode is an S-fbm process (introduced by Peng, Bacry and Muzy) with Hurst exponent H around 0.11, while the residuals have, in addition to this common mode, idiosyncratic components with Hurst exponents H around 0. The reason for considering this configuration is twofold: it preserves the characteristics of the nested factor model introduced by Bouchaud et al., and it provides a framework that reproduces the stylized fact reported by Peng et al., namely that the Hurst exponents of stock indices are large compared to those of individual stocks. In this work, we show that the construction of the log S-fbm nested factor model leads to the Hurst exponent of single stocks being that of the idiosyncratic volatility modes, and the Hurst exponent of the index being that of the common volatility mode. Furthermore, we propose a statistical procedure to estimate the factor Hurst exponent from the stock return dynamics, together with theoretical guarantees and good results in the limit where the number of stocks N goes to infinity. Last but not least, we show that the factor can be seen as an index constructed from the single stocks weighted by specific coefficients.
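
For intuition about the quantity being estimated, a minimal generic estimator of the Hurst exponent from the scaling of increment variance, Var[x(t+L) − x(t)] ∝ L^{2H}; this is for illustration only and is not the statistical procedure proposed in the paper:

```python
import numpy as np

def hurst_from_increments(x, lags=range(2, 50)):
    """Estimate H from the scaling of increment variance: Var[x(t+L) - x(t)] ~ L^(2H)."""
    x = np.asarray(x, dtype=float)
    lags = np.array(list(lags))
    v = np.array([np.var(x[L:] - x[:-L]) for L in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
    return 0.5 * slope

# sanity check on ordinary Brownian motion, whose Hurst exponent is 0.5
rng = np.random.default_rng(0)
bm = np.cumsum(rng.normal(size=100_000))
print(round(hurst_from_increments(bm), 3))   # ~0.5
```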

Keywords: hurst exponent, log S-fbm model, nested factor model, small intermittency approximation

Procedia PDF Downloads 27
10510 Virtual Learning during the Period of COVID-19 Pandemic at a Saudi University

Authors: Ahmed Mohammed Omer Alghamdi

Abstract:

Since the COVID-19 pandemic started, a rapid, unexpected transition from face-to-face to virtual classroom (VC) teaching has involved several challenges and obstacles. However, there are also opportunities and thoughts that need to be examined and discussed. The entire world has witnessed the interruption of teaching systems and, more particularly, of higher education institutes. To maintain learning and teaching practices as usual, countries were forced to transition from traditional to virtual classes using various technology-based devices. In this regard, the Kingdom of Saudi Arabia (KSA) is no exception. Focusing on how the current situation has forced many higher education institutes to change to virtual classes may provide a clear insight into the adopted practices and their implications. The main purpose of this study, therefore, was to investigate how Saudi English as a foreign language (EFL) teachers and students perceived the implementation of virtual classes as a key factor in a useful language teaching and learning process during the COVID-19 pandemic period at a Saudi university. The impetus for the research was the need to find ways of identifying the deficiencies in this application and to suggest possible solutions that might rectify those deficiencies. This study seeks to answer the following overarching research question: "How do Saudi EFL instructors and students perceive the use of virtual classes during the COVID-19 pandemic period in their language teaching and learning context?" The following sub-questions are also used to guide the design of the study: (1) To what extent are virtual classes important intra-pandemic from Saudi EFL instructors' and students' perspectives? (2) How effective are virtual classes for fostering English language students' achievement? (3) What challenges and obstacles may instructors and students face during the implementation of virtual teaching? A mixed methods approach was employed in this study; the questionnaire data represented the quantitative component, whereas the transcripts of recorded interviews represented the qualitative component. The participants included EFL teachers (N = 4) and male and female EFL students (N = 36). Based on the findings of this study, various aspects of teachers' and students' perspectives were examined to determine how the virtual classroom applications fulfil students' English language learning needs. The major findings revealed that virtual classroom applications during the current pandemic encountered three major challenges: lack of technology and internet connection, large numbers of students in a virtual classroom, and lack of student-teacher interaction during virtual classes. Finally, the findings indicated that although Saudi EFL students and teachers view virtual classrooms in a positive light during the pandemic period, they reported that for the long term and the post-pandemic period they preferred the traditional face-to-face teaching procedure.

Keywords: virtual classes, English as a foreign language, COVID-19, Internet, pandemic

Procedia PDF Downloads 76
10509 A Systematic Review of Patient-Reported Outcomes and Return to Work after Surgical vs. Non-surgical Midshaft Humerus Fracture

Authors: Jamal Alasiri, Naif Hakeem, Saoud Almaslmani

Abstract:

Background: Patients with humeral shaft fractures have two different treatment options. Surgical therapy has lower risks of non-union, mal-union, and re-intervention than non-surgical therapy. These positive clinical outcomes make the surgical approach a preferable treatment option despite the risks of radial nerve palsy and additional surgery-related risks. We aimed to evaluate patient-reported outcomes and return to work after surgical vs. non-surgical management of shaft humeral fracture. Methods: We searched databases including PubMed, Medline, and the Cochrane Register of Controlled Trials, from 2010 to January 2022, for randomised controlled trials (RCTs) and cohort studies comparing patient-reported outcome measures and return to work between surgical and non-surgical management of humerus fracture. Results: After carefully evaluating 1352 articles, we included three RCTs (232 patients) and one cohort study (39 patients). The surgical intervention used plate/nail fixation, while the non-surgical intervention used a splint or brace to manage the shaft humeral fracture. The pooled DASH effects of all three RCTs at six months (MD: -7.5 [-13.20, -1.89], p = 0.009, I² = 44%) and 12 months (MD: -1.32 [-3.82, 1.17], p = 0.29, I² = 0%) were higher in patients treated surgically than non-surgically. The pooled Constant-Murley scores at six months (MD: 7.945 [2.77, 13.10], p = 0.003, I² = 0%) and 12 months (MD: 1.78 [-1.52, 5.09], p = 0.29, I² = 0%) were higher in patients who received non-surgical than surgical therapy. However, the pooled analysis of return to work for both groups remained inconclusive. Conclusion: Altogether, we found no significant evidence supporting the clinical benefits of surgical over non-surgical therapy. Thus, the non-surgical approach remains the preferred therapeutic choice for managing shaft humeral fractures due to its fewer side effects.
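
A short sketch of the pooling step behind the reported mean differences, assuming a standard inverse-variance fixed-effect model with study variances recovered from the 95% confidence intervals; the review's actual software and random-effects settings are not specified here, and the study-level numbers below are hypothetical:

```python
import numpy as np

def pool_fixed_effect(md, ci_low, ci_high):
    """Inverse-variance fixed-effect pooling of mean differences given 95% CIs."""
    md, ci_low, ci_high = map(np.asarray, (md, ci_low, ci_high))
    se = (ci_high - ci_low) / (2 * 1.96)           # standard error from the CI width
    w = 1.0 / se ** 2                              # inverse-variance weights
    pooled = np.sum(w * md) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (md - pooled) ** 2)             # Cochran's Q for heterogeneity
    i2 = max(0.0, (q - (len(md) - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se), i2

# illustrative study-level DASH mean differences (not the included trials' data)
pooled, ci, i2 = pool_fixed_effect(md=[-9.0, -4.5, -8.2],
                                   ci_low=[-16.0, -11.0, -15.5],
                                   ci_high=[-2.0, 2.0, -0.9])
print(f"pooled MD = {pooled:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, I2 = {i2:.0f}%")
```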

Keywords: shaft humeral fracture, surgical treatment, Patient-related outcomes, return to work, DASH

Procedia PDF Downloads 87
10508 Learning Communities and Collaborative Reflection for Teaching Improvement

Authors: Mariana Paz Sajon, Paula Cecilia Primogerio, Mariana Albarracin

Abstract:

This study recovers an experience of teacher training carried out at the undergraduate business school of a private university in Buenos Aires, Argentina. The purpose of the project was to provide teachers with an opportunity to reflect on their teaching practices at the university. The aim of the study is to systematize the lessons and challenges that emerge from this teacher training experience. A group of teachers who showed a willingness to learn teaching abilities was selected. They completed a formative journey working in learning communities, starting with immersion in different aspects of teaching and learning, followed by class observations and a systematic exercise of individual and collaborative reflection among colleagues. In this study, the productions of the eight teachers who are members of the learning communities are analyzed, framed in an e-portfolio that they prepared during the training journey. The analysis shows that, after the process of shared reflection, traits related to powerful teaching and meaningful learning appeared in their classes. For their part, the teachers reported having reached an awareness of their own practices, identifying strengths and opportunities for improvement, and they valued the experience of sharing their own ways of teaching and learning about the successes and failures of others. It is an educational journey of pedagogical transformation of the teachers, infrequent in business education, which could lead to a change in teaching practices for the entire business school. The study addresses theoretical and pedagogical aspects of education in a business school in Argentina and its flow-on implications for the workplace, which may be transferred to other educational contexts.

Keywords: Argentina, learning community, meaningful learning, powerful teaching, reflective practice

Procedia PDF Downloads 201
10507 Right Solution of Geodesic Equation in Schwarzschild Metric and Overall Examination of Physical Laws

Authors: Kwan U. Kim, Jin Sim, Ryong Jin Jang, Sung Duk Kim

Abstract:

108 years have passed since a great number of physicists began explaining astronomical and physical phenomena by solving geodesic equations in the Schwarzschild metric. However, when solving the geodesic equations in the Schwarzschild metric, they did not correctly solve one branch of the spatial component among the spatial and temporal components of the four-dimensional force, and they did not arrive at the physical laws correctly by means of a physical analysis of the results obtained from the geodesic equations. In addition, they did not treat the astronomical and physical phenomena in a physical way based on the correct physical laws obtained from the solution of the geodesic equations in the Schwarzschild metric. Therefore, some former scholars mentioned that Einstein's theoretical basis for the general theory of relativity was obscure and incorrect, but they have not given a correct physical solution to the problems. Furthermore, since the general theory of relativity has not given a quantitative solution to these obscure and incorrect problems, the generalization of gravitational theory has not yet been successfully completed, although former scholars considered and attempted it. In order to solve these problems, it is necessary to examine the obscure and incorrect aspects of the general theory of relativity on the basis of physical laws and to find a methodology for resolving them. Therefore, as the first step towards this purpose, the correct solution of the geodesic equation in the Schwarzschild metric is presented. Next, the correct physical laws found by making a physical analysis of the results are presented, the obscure and incorrect problems are identified, and an analysis of them is made on the basis of the physical laws. In addition, an experimental verification of the physical laws found by the authors is given.
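
For reference, the Schwarzschild line element and the geodesic equation under discussion, written in their standard textbook form (the paper's own re-derivation is not reproduced here):

```latex
% Schwarzschild line element (r_s = 2GM/c^2) and the geodesic equation
ds^2 = -\left(1 - \frac{r_s}{r}\right) c^2\, dt^2
     + \left(1 - \frac{r_s}{r}\right)^{-1} dr^2
     + r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right),
\qquad
\frac{d^2 x^{\mu}}{d\tau^2}
  + \Gamma^{\mu}_{\;\alpha\beta}\,
    \frac{dx^{\alpha}}{d\tau}\frac{dx^{\beta}}{d\tau} = 0 .
```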

Keywords: equivalence principle, general relativity, geometrodynamics, Schwarzschild, Poincaré

Procedia PDF Downloads 57
10506 Generating Synthetic Chest X-ray Images for Improved COVID-19 Detection Using Generative Adversarial Networks

Authors: Muneeb Ullah, Daishihan, Xiadong Young

Abstract:

Deep learning plays a crucial role in identifying COVID-19 and preventing its spread. To improve the accuracy of COVID-19 diagnoses, it is important to have access to a sufficient number of training images of CXRs (chest X-rays) depicting the disease. However, there is currently a shortage of such images. To address this issue, this paper introduces COVID-19 GAN, a model that uses generative adversarial networks (GANs) to generate realistic CXR images of COVID-19, which can be used to train identification models. Initially, a generator model is created that uses digressive channels to generate images of CXR scans for COVID-19. To differentiate between real and fake disease images, an efficient discriminator is developed by combining a dense connectivity strategy with instance normalization, making use of their feature extraction capabilities on hazy CXR areas. Lastly, the deep regret gradient penalty technique is utilized to ensure stable training of the model. Using 4,062 CXR images, the COVID-19 GAN model successfully produces 8,124 COVID-19 CXR images. The COVID-19 GAN model produces COVID-19 CXR images that outperform DCGAN and WGAN in terms of the Fréchet inception distance. Experimental findings suggest that the COVID-19 GAN-generated CXR images possess noticeable haziness, offering a promising approach to address the limited training data available for COVID-19 model training. When the dataset was expanded, CNN-based classification models outperformed other models, yielding higher accuracy rates than those obtained with the initial dataset and with other augmentation techniques. Among these models, ImagNet exhibited the best recognition accuracy of 99.70% on the testing set. These findings suggest that the proposed augmentation method is a solution to address overfitting issues in disease identification and can effectively enhance identification accuracy.
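
A compact sketch of the adversarial setup described above, assuming a plain fully-connected generator and discriminator in PyTorch; the paper's dense-connectivity discriminator, instance normalization, and gradient-penalty training are omitted for brevity, and the "real" batch below is random stand-in data:

```python
import torch
import torch.nn as nn

IMG = 64 * 64      # flattened CXR patch size used for illustration
Z = 100            # latent dimension

generator = nn.Sequential(          # maps noise z to a synthetic image
    nn.Linear(Z, 256), nn.ReLU(),
    nn.Linear(256, 512), nn.ReLU(),
    nn.Linear(512, IMG), nn.Tanh(),
)
discriminator = nn.Sequential(      # scores an image as real (1) or generated (0)
    nn.Linear(IMG, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_batch = torch.rand(32, IMG) * 2 - 1        # stand-in for a batch of real CXR images
for step in range(100):
    # discriminator update: real images labelled 1, generated images labelled 0
    z = torch.randn(32, Z)
    fake = generator(z).detach()
    d_loss = bce(discriminator(real_batch), torch.ones(32, 1)) + \
             bce(discriminator(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # generator update: try to make the discriminator label generated images as real
    fake = generator(torch.randn(32, Z))
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```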

Keywords: classification, deep learning, medical images, CXR, GAN

Procedia PDF Downloads 71
10505 A Resource-Based Perspective on Job Crafting Consequences: An Empirical Study from China

Authors: Eko Liao, Cheryl Zhang

Abstract:

Employee job crafting refers to employees' proactive behaviors of making customized changes to their jobs at the cognitive, relationship, and task levels. Previous studies have investigated different situations that trigger employees' job crafting. However, much less is known about the consequences for both the employees themselves and their work groups. Guided by conservation of resources (COR) theory, this study investigates how employee job crafting increases objective task performance and promotive voice behaviors at work. It is argued that employees gain more resources when they actively craft their job tasks, which in turn increases their job performance and encourages them to engage in more constructive speak-up behaviors. Specifically, employees' psychological resources (i.e., job engagement) and relational resources (i.e., leader-member relationships) are enhanced by effective crafting behaviors, because employees are more likely to regard their job tasks as meaningful, and their leaders are more likely to notice and recognize their dedication at work when employees craft their jobs frequently. To test this research model, around 400 employees from various organizations in mainland China joined a two-wave data collection. Employees' job crafting behaviors in three aspects were measured at Time 1. Perceptions of resource gain (job engagement and leader-member exchange), voice, and job performance were measured at Time 2. The research model is generally supported. This study contributes to the job crafting literature by broadening the theoretical lens to a resource-based perspective. It also has the practical implication that organizations should pay more attention to employee crafting behaviors, because these are closely related to employees' in-role performance and constructive voice behaviors.

Keywords: job crafting, resource-based perspective, voice, job performance

Procedia PDF Downloads 153
10504 Developing a Test Specifications for an Internationalization Course: Environment for Health in Thai Context

Authors: Rungrawee Samawathdana, Aim-Utcha Wattanaburanon

Abstract:

Test specifications for open-book or open-notes exams provide the essential information for identifying the types of test items and ensuring the validity of the evaluation process. This article explains the purpose of test specifications and illustrates how to use them to help construct open-book or open-notes exams. The complexity of the course objectives makes the test design challenging.

Keywords: course curriculum, environment for health, internationalization, test specifications

Procedia PDF Downloads 554
10503 DeepLig: A de-novo Computational Drug Design Approach to Generate Multi-Targeted Drugs

Authors: Anika Chebrolu

Abstract:

Mono-targeted drugs can be of limited efficacy against complex diseases. Recently, multi-target drug design has been approached as a promising tool to fight against these challenging diseases. However, the scope of current computational approaches for multi-target drug design is limited. DeepLig presents a de-novo drug discovery platform that uses reinforcement learning to generate and optimize novel, potent, and multitargeted drug candidates against protein targets. DeepLig’s model consists of two networks in interplay: a generative network and a predictive network. The generative network, a Stack-Augmented Recurrent Neural Network, utilizes a stack memory unit to remember and recognize molecular patterns when generating novel ligands from scratch. The generative network passes each newly created ligand to the predictive network, which then uses multiple Graph Attention Networks simultaneously to forecast the average binding affinity of the generated ligand towards multiple target proteins. With each iteration, given feedback from the predictive network, the generative network learns to optimize itself to create molecules with a higher average binding affinity towards multiple proteins. DeepLig was evaluated based on its ability to generate multi-target ligands against two distinct proteins, multi-target ligands against three distinct proteins, and multi-target ligands against two distinct binding pockets on the same protein. With each test case, DeepLig was able to create a library of valid, synthetically accessible, and novel molecules with optimal and equipotent binding energies. We propose that DeepLig provides an effective approach to design multi-targeted drug therapies that can potentially show higher success rates during in-vitro trials.
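
The generate-predict-optimize loop can be summarized in a toy sketch, assuming a simple categorical policy over tokens trained with a REINFORCE-style update against a stubbed multi-target affinity predictor; the stack-augmented RNN generator and Graph Attention Network predictors of DeepLig are not reproduced here, and all functions below are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, LENGTH = 8, 12                 # toy token alphabet and ligand length
logits = np.zeros((LENGTH, VOCAB))    # the "generator": independent token distributions

def predicted_affinity(tokens, target):
    """Stub standing in for a per-target binding-affinity predictor (hypothetical)."""
    preferred = (target + np.arange(LENGTH)) % VOCAB
    return np.mean(tokens == preferred)

def reward(tokens, n_targets=2):
    """Average predicted affinity across all targets (the multi-target objective)."""
    return np.mean([predicted_affinity(tokens, t) for t in range(n_targets)])

lr = 0.5
for it in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    tokens = np.array([rng.choice(VOCAB, p=p) for p in probs])
    r = reward(tokens)
    # REINFORCE: push up the log-probability of sampled tokens, scaled by the reward
    grad = -probs
    grad[np.arange(LENGTH), tokens] += 1.0
    logits += lr * r * grad

print("final average multi-target reward:", round(reward(probs.argmax(axis=1)), 2))
```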

Keywords: drug design, multitargeticity, de-novo, reinforcement learning

Procedia PDF Downloads 72
10502 Multiscale Simulation of Absolute Permeability in Carbonate Samples Using 3D X-Ray Micro Computed Tomography Images Textures

Authors: M. S. Jouini, A. Al-Sumaiti, M. Tembely, K. Rahimov

Abstract:

Characterizing the rock properties of carbonate reservoirs is highly challenging because of rock heterogeneities revealed at several length scales. In the last two decades, the Digital Rock Physics (DRP) approach has been implemented successfully in sandstone reservoirs in order to understand rock property behaviour at the pore scale. This approach uses 3D X-ray microtomography images to characterize the pore network and to simulate rock properties from these images. Even though DRP is able to predict realistic rock properties in sandstone reservoirs, it still suffers from the lack of a clear workflow for carbonate rocks. The main challenge is the integration of properties simulated at different scales in order to obtain the effective rock property of core plugs. In this paper, we propose several approaches to characterize absolute permeability in carbonate core plug samples using a multi-scale numerical simulation workflow. We propose a procedure to simulate the porosity and absolute permeability of a carbonate rock sample using textures of micro-computed tomography images. First, we discretize the X-ray micro-CT image into a regular grid. Then, we classify each cell of the grid using a textural parametric model and supervised classification. The main parameters are first- and second-order statistics such as mean, variance, range and autocorrelations, computed from sub-bands obtained after wavelet decomposition. Furthermore, we assign a permeability value to each cell using two strategies based on numerical simulation values obtained locally on subsets. Finally, we simulate the effective permeability numerically using a Darcy's law simulator. Results obtained for the studied carbonate sample show good agreement with the experimental property.
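
A minimal sketch of the texture-feature step described above, assuming PyWavelets for the 2-D wavelet decomposition and simple first/second-order statistics per sub-band; the grid discretization, supervised classifier, and Darcy-flow simulator are not included, and the input image is a random stand-in:

```python
import numpy as np
import pywt

def subband_texture_features(cell_image, wavelet="db2", levels=2):
    """First/second-order statistics (mean, variance, range) of each wavelet sub-band."""
    coeffs = pywt.wavedec2(cell_image, wavelet=wavelet, level=levels)
    feats = []
    for band in [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]:
        band = np.asarray(band)
        feats += [band.mean(), band.var(), band.max() - band.min()]
    return np.array(feats)

# toy usage: one grid cell cropped from a micro-CT slice
cell = np.random.rand(64, 64)
features = subband_texture_features(cell)
print(features.shape)   # 3 statistics per sub-band: approximation + 3 detail bands per level
```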

Keywords: multiscale modeling, permeability, texture, micro-tomography images

Procedia PDF Downloads 172
10501 Moving toward Language Acquisition: A Case Study Adapting and Applying Laban Movement Analysis in the International English as an Additional Language Classroom

Authors: Andra Yount

Abstract:

The purpose of this research project is to understand how focusing on movement can help English language learners acquire better reading, writing, and speaking skills. More specifically, this case study tests how Laban movement analysis, a tool often used in dance and physical education classes, contributes to advanced-level high school students’ English language acquisition at an international Swiss boarding school. This article shares theoretical bases for and findings from a teaching experiment in which LMA categories (body, effort, space, and shape) were adapted and introduced to students to encourage basic language acquisition and also cultural awareness and sensitivity. As part of the participatory action research process, data collection included pseudonym-protected questionnaires and written/video-taped responses to LMA language and task prompts. Responses from 43 participants were evaluated to determine the efficacy of using this system. Participants (ages 16-19) were enrolled in advanced English as an Additional Language (EAL) courses at a private, co-educational Swiss international boarding school. Final data analysis revealed that drawing attention to movement using LMA language as a stimulus creates better self-awareness and understanding/retention of key literary concepts and vocabulary but does not necessarily contribute to greater cultural sensitivity or eliminate the use of problematic (sexist, racist, or classist) language. Possibilities for future exploration and development are also explored.

Keywords: dance, English, Laban, pedagogy

Procedia PDF Downloads 129
10500 Application of Mathematical Models for Conducting Long-Term Metal Fume Exposure Assessments for Workers in a Shipbuilding Factory

Authors: Shu-Yu Chung, Ying-Fang Wang, Shih-Min Wang

Abstract:

Conducting long-term exposure assessments is important for workers exposed to chemicals with chronic effects. However, such assessments usually encounter several constraints, including cost, workers' willingness, and interference with work practice, leading to inadequate long-term exposure data in the real world. In this study, an integrated approach was developed for conducting long-term exposure assessments of welding workers in a shipbuilding factory. A laboratory study was conducted to yield the fume generation rates under various operating conditions. The results and the measured environmental conditions were applied to the near-field/far-field (NF/FF) model to predict long-term fume exposures via Monte Carlo simulation. The predicted long-term concentrations were then used to determine the prior distribution in Bayesian decision analysis (BDA). Finally, the resultant posterior distributions were used to assess long-term exposures and serve as a basis for initiating control strategies for shipbuilding workers. Results show that the NF/FF model was suitable for predicting exposures to the metal contents of welding fume. The resultant posterior distributions could effectively assess the long-term exposures of shipbuilding welders. Welders' long-term Fe, Mn and Pb exposures were found to have a high probability of exceeding the action level, indicating that preventive measures should be taken immediately to reduce welders' exposures. Though the resultant posterior distribution can only be regarded as the best solution based on the currently available predicting and monitoring data, the proposed integrated approach can be regarded as a practical solution for conducting long-term exposure assessments in the field.
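
A small sketch of the two-zone near-field/far-field model with Monte Carlo sampling over uncertain inputs, assuming the standard steady-state form C_FF = G/Q and C_NF = G(1/β + 1/Q); the input distributions below are illustrative, not the study's measured generation rates or ventilation conditions, and the Bayesian decision analysis step is not shown:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# uncertain inputs (illustrative lognormal/uniform ranges)
G = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=n)    # fume generation rate, mg/min
Q = rng.uniform(20.0, 60.0, size=n)                       # room ventilation rate, m3/min
beta = rng.uniform(2.0, 8.0, size=n)                      # near-field/far-field airflow, m3/min

# steady-state two-zone model
C_ff = G / Q                          # far-field concentration, mg/m3
C_nf = G * (1.0 / beta + 1.0 / Q)     # near-field (breathing-zone) concentration, mg/m3

print("near-field concentration, mg/m3:")
print("  median:", round(np.median(C_nf), 2),
      " 95th percentile:", round(np.percentile(C_nf, 95), 2))
```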

Keywords: Bayesian decision analysis, exposure assessment, near field and far field model, shipbuilding industry, welding fume

Procedia PDF Downloads 125
10499 Quantitative Analysis of Camera Setup for Optical Motion Capture Systems

Authors: J. T. Pitale, S. Ghassab, H. Ay, N. Berme

Abstract:

Biomechanics researchers commonly use marker-based optical motion capture (MoCap) systems to extract human body kinematic data. These systems use cameras to detect passive or active markers placed on the subject. The cameras use triangulation methods to form images of the markers, which typically requires each marker to be visible to at least two cameras simultaneously. Cameras in a conventional optical MoCap system are mounted at a distance from the subject, typically on walls, the ceiling, or fixed or adjustable frame structures. To accommodate space constraints, and as portable force measurement systems become popular, there is a need for smaller and smaller capture volumes. When the efficacy of a MoCap system is investigated, it is important to consider the tradeoff among camera distance from the subject, pixel density, and field of view (FOV). If cameras are mounted relatively close to a subject, the area corresponding to each pixel decreases, thus increasing the image resolution. However, the cross section of the capture volume also decreases, reducing the visible area; due to this reduction, additional cameras may be required in such applications. On the other hand, mounting cameras relatively far from the subject increases the visible area but reduces the image quality. The goal of this study was to develop a quantitative methodology to investigate marker occlusions and optimize camera placement for a given capture volume and subject postures using three-dimensional computer-aided design (CAD) tools. We modeled a 4.9 m x 3.7 m x 2.4 m (L x W x H) MoCap volume and designed a mounting structure for cameras using SOLIDWORKS (Dassault Systemes, MA, USA). The FOV was used to generate the capture volume for each camera placed on the structure. A human body model with configurable posture was placed at the center of the capture volume in the CAD environment. We studied three postures: initial contact, mid-stance, and early swing. The human body CAD model was adjusted for each posture based on the range of joint angles. Markers were attached to the model to enable a full body capture. The cameras were placed around the capture volume at a maximum distance of 2.7 m from the subject. We used the Camera View feature in SOLIDWORKS to generate images of the subject as seen by each camera, and the number of markers visible to each camera was tabulated. The approach presented in this study provides a quantitative method to investigate the efficacy and efficiency of a MoCap camera setup. This approach enables the optimization of a camera setup by adjusting the position and orientation of cameras in the CAD environment and quantifying marker visibility. It is also possible to compare different camera setup options on the same quantitative basis. The flexibility of the CAD environment enables accurate representation of the capture volume, including any objects that may cause obstructions between the subject and the cameras. With this approach, it is possible to compare different camera placement options with each other, as well as to optimize a given camera setup based on quantitative results.
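
The pixel-density/field-of-view tradeoff discussed above can be quantified with simple geometry; a sketch assuming a pinhole camera model, an illustrative sensor resolution, and the 2.7 m maximum camera-to-subject distance mentioned in the abstract:

```python
import math

def pixel_footprint_mm(distance_m, fov_deg, pixels_across):
    """Approximate width (mm) of the area covered by one pixel at a given distance."""
    visible_width_m = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return visible_width_m / pixels_across * 1000.0

# example: a 70-degree horizontal FOV camera with 2048 pixels across, at 1.5 m and 2.7 m
for d in (1.5, 2.7):
    print(f"{d} m -> {pixel_footprint_mm(d, 70.0, 2048):.2f} mm per pixel")
```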

Keywords: motion capture, cameras, biomechanics, gait analysis

Procedia PDF Downloads 299
10498 Arabic Light Word Analyser: Roles with Deep Learning Approach

Authors: Mohammed Abu Shquier

Abstract:

This paper introduces a word segmentation method using the novel BP-LSTM-CRF architecture for processing semantic output training. The objective of web morphological analysis tools is to link a formal morpho-syntactic description to a lemma, along with morpho-syntactic information, a vocalized form, a vocalized analysis with morpho-syntactic information, and a list of paradigms. A key objective is to continuously enhance the proposed system through an inductive learning approach that considers semantic influences. The system is currently under construction and development based on data-driven learning. To evaluate the tool, an experiment on homograph analysis was conducted. The tool also encompasses the assumption of deep binary segmentation hypotheses, the arbitrary choice of trigram or n-gram continuation probabilities, language limitations, and morphology for both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), which provide justification for updating this system. Most Arabic word analysis systems are based on the phonotactic morpho-syntactic analysis of a word transmitted using lexical rules, which are mainly used in MENA language technology tools, without taking into account contextual or semantic morphological implications. Therefore, it is necessary to have an automatic analysis tool that takes into account the word sense and not only the morpho-syntactic category. Moreover, these systems are also based on statistical/stochastic models. Such stochastic models, for example HMMs, have shown their effectiveness in different NLP applications: part-of-speech tagging, machine translation, speech recognition, etc. As an extension, we focus on language modeling using Recurrent Neural Networks (RNNs); given that morphological analysis coverage is very low for dialectal Arabic, it is important to investigate deeply how dialect data influence the accuracy of these approaches by developing dialectal morphological processing tools, showing that dialectal variability can help to improve analysis.

Keywords: NLP, DL, ML, analyser, MSA, RNN, CNN

Procedia PDF Downloads 25
10497 Aerodynamic Heating Analysis of Hypersonic Flow over Blunt-Nosed Bodies Using Computational Fluid Dynamics

Authors: Aakash Chhunchha, Assma Begum

Abstract:

The qualitative aspects of hypersonic flow over a range of blunt bodies have been extensively analyzed in the past. It is well known that the curvature of a body's geometry in the sonic region predominantly dictates the bow shock shape and its standoff distance from the body, while the surface pressure distribution depends on both the sonic region and the local body shape. The present study extends this analysis of hypersonic flow characteristics over several blunt-nosed bodies using modern Computational Fluid Dynamics (CFD) tools to determine the shock shape and its effect on the heat flux around the body. Four blunt-nosed models with cylindrical afterbodies were analyzed for a flow at a Mach number of 10, corresponding to standard atmospheric conditions at an altitude of 50 km. The nose radii of curvature of the models range from a hemispherical nose to a flat nose. The numerical models and the supplementary convergence techniques implemented for the CFD analysis are thoroughly described. The flow contours are presented, highlighting the key characteristics of shock wave shape, shock standoff distance, and the shift of the sonic point on the shock. The variation of heat flux due to the different shock detachments of the various models is comprehensively discussed. It is observed that the blunter the nose radius, the farther the shock stands from the body and, consequently, the lower the surface heating at the nose. The results obtained from the CFD analyses are compared with approximate theoretical engineering correlations. Overall, a satisfactory agreement is observed between the two.
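
As a point of comparison for the CFD surface-heating trends, a stagnation-point estimate can be obtained from the Sutton-Graves correlation q ≈ k √(ρ/Rₙ) V³; a sketch assuming the commonly quoted Earth-atmosphere constant k ≈ 1.74×10⁻⁴ (SI units) and illustrative free-stream conditions near 50 km altitude (this correlation is not necessarily the one used in the paper):

```python
import math

K_SG = 1.7415e-4   # Sutton-Graves constant for Earth air, SI units (q in W/m^2)

def stagnation_heat_flux(rho, velocity, nose_radius):
    """Approximate stagnation-point convective heat flux, W/m^2."""
    return K_SG * math.sqrt(rho / nose_radius) * velocity ** 3

rho_50km = 1.03e-3          # kg/m^3, approximate density at 50 km altitude
a_50km = 330.0              # m/s, approximate speed of sound at 50 km
v = 10.0 * a_50km           # Mach 10 free stream

for r_n in (0.05, 0.10, 0.20):     # increasing nose bluntness
    q = stagnation_heat_flux(rho_50km, v, r_n)
    print(f"nose radius {r_n:.2f} m -> q ~ {q / 1e4:.1f} W/cm^2")   # blunter nose, lower heating
```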

Keywords: aero-thermodynamics, blunt-nosed bodies, computational fluid dynamics (CFD), hypersonic flow

Procedia PDF Downloads 127
10496 Simulating Studies on Phosphate Removal from Laundry Wastewater Using Biochar: Dudinin Approach

Authors: Eric York, James Tadio, Silas Owusu Antwi

Abstract:

Laundry wastewater contains a diverse range of chemical pollutants that can have detrimental effects on human health and the environment. In this study, simulation studies were conducted with the Spyder Python software v3.2 to assess the efficacy of biochar in removing PO₄³⁻ from wastewater. Through modeling and simulation, the mechanisms involved in the adsorption of phosphate by biochar were studied using the Dudinin Approach (DA), by altering variables specific to the phosphate from common laundry phosphate detergents, such as aqueous solubility, initial concentration, and temperature. Results showed that the concentrations equilibrate near the highest concentrations for Sugar beet-120 mgL⁻¹, Tailing-85 mgL⁻¹, CaO-rich-50 mgL⁻¹, Eggshell and rice straw-48 mgL⁻¹, Undaria Pinnatifida Roots-190 mgL⁻¹, Ca-Alginate Granular Beads-240 mgL⁻¹, Laminaria Japonica Powder-900 mgL⁻¹, Pine sawdust-57 mgL⁻¹, Rice hull-190 mgL⁻¹, Sesame straw-470 mgL⁻¹, Sugar Bagasse-380 mgL⁻¹, Miscanthus Giganteus-240 mgL⁻¹, Wood BC-130 mgL⁻¹, Pine-25 mgL⁻¹, Sawdust-6.8 mgL⁻¹, Sewage Sludge, Rice husk-12 mgL⁻¹, Corncob-117 mgL⁻¹, and Maize straw-1800 mgL⁻¹, while Peanut, Eucalyptus polybractea, and Crawfish equilibrated at near concentration. CO₂-activated Thalia, sewage sludge biochar, and Broussonetia Papyrifera Leaves equilibrated just at the lower concentration. Only soybean stover exhibited a sharp rise-and-fall peak at mid-concentration at a 2 mgL⁻¹ volume. The modelling results were consistent with experimental findings from the literature, supporting the accuracy, repeatability, and reliability of the simulation study. The simulation provided insights into the adsorption of PO₄³⁻ from wastewater by biochar in terms of the concentration per volume that can ideally be adsorbed under the given conditions. The study shows that applying the principle experimentally in real wastewater, with all its complexity, is warranted and not far-fetched.
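
A minimal sketch of the isotherm presumably behind the "DA" label, assuming the Dubinin-Astakhov form q = q₀ exp[−(A/E)ⁿ] with adsorption potential A = RT ln(Cₛ/Cₑ), which uses exactly the variables varied in the study (aqueous solubility, concentration, temperature); the parameter values below are illustrative, not fitted biochar data:

```python
import numpy as np

R = 8.314  # J/(mol K)

def dubinin_astakhov(c_eq, c_sat, temp_k, q0, E, n=2.0):
    """Adsorbed amount (mg/g) from the Dubinin-Astakhov isotherm.

    c_eq  : equilibrium phosphate concentration in solution (mg/L)
    c_sat : aqueous solubility of the phosphate species (mg/L)
    q0    : limiting adsorption capacity of the biochar (mg/g)
    E     : characteristic adsorption energy (J/mol)
    """
    A = R * temp_k * np.log(c_sat / np.asarray(c_eq, dtype=float))  # adsorption potential
    return q0 * np.exp(-(A / E) ** n)

# illustrative run: sweep equilibrium concentration at 298 K for a hypothetical biochar
c = np.array([1.0, 10.0, 50.0, 120.0])
print(dubinin_astakhov(c, c_sat=500.0, temp_k=298.0, q0=40.0, E=8000.0).round(2))
```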

Keywords: simulation studies, phosphate removal, biochar, adsorption, wastewater treatment

Procedia PDF Downloads 102
10494 Multi Biometric Personal Identification System Based on Hybrid Intelligence Method

Authors: Laheeb M. Ibrahim, Ibrahim A. Salih

Abstract:

Biometrics is a technology that has been widely used in many official and commercial identification applications. The increased concern about security in recent years (especially during the last decades) has essentially resulted in more attention being given to biometric-based verification techniques. Here, a novel fusion approach of palmprint and dental traits is suggested. These authentication traits have been employed in a range of biometric applications and can identify persons both postmortem (PM) and antemortem (AM). Besides improving accuracy, the fusion of biometrics has several advantages, such as deterring spoofing activities and reducing enrolment failure. In this paper, unimodal biometric systems were first built using the palmprint and dental traits, applying for each an artificial neural network classifier and a hybrid technique that combines swarm intelligence with a neural network; an attempt was then made to combine the palmprint and dental biometrics. Principally, the fusion of palmprint and dental biometrics and their potential application as biometric identifiers have been explored. To address this issue, investigations were carried out into the relative performance of several statistical data fusion techniques for integrating the information in both unimodal and multimodal biometrics. The results of the multimodal approach were also compared with each of the two single-trait authentication approaches. This paper studies the feature and decision fusion levels in multimodal biometrics. To determine the accuracy, the genuine acceptance rate (GAR) of parallel decision-level fusion, including the AND, OR, and majority voting rules, was used. The backpropagation method used for classification achieved GARs of 92%, 99%, and 97% respectively, while the hybrid technique used for classification achieved 95%, 99%, and 98% respectively. For feature-level fusion of the multibiometric system, the same classification methods were used; the results were 98% and 99% respectively, while different methods used to determine the GAR at the feature level yielded 98%.
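
A short sketch of the parallel decision-level fusion rules (AND, OR, majority voting) used to combine the unimodal matchers, with illustrative per-trait accept/reject decisions:

```python
from collections import Counter

def fuse_decisions(decisions, rule="majority"):
    """Combine per-trait accept (True) / reject (False) decisions at the decision level."""
    if rule == "AND":          # accept only if every matcher accepts
        return all(decisions)
    if rule == "OR":           # accept if any matcher accepts
        return any(decisions)
    if rule == "majority":     # accept if most matchers accept (a tie rejects)
        votes = Counter(decisions)
        return votes[True] > votes[False]
    raise ValueError(f"unknown fusion rule: {rule}")

# example: palmprint matcher accepts, dental matcher rejects
per_trait = [True, False]
for rule in ("AND", "OR", "majority"):
    print(rule, "->", fuse_decisions(per_trait, rule))
```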

Keywords: backpropagation neural network (BP ANN), multibiometric system, parallel system decision-fusion, particle swarm optimization (PSO)

Procedia PDF Downloads 522
10494 Adding a Few Language-Level Constructs to Improve OOP Verifiability of Semantic Correctness

Authors: Lian Yang

Abstract:

Object-oriented programming (OOP) is the dominant programming paradigm in today's software industry, and it has enabled average software developers to build millions of commercial-strength software applications in the era of the Internet revolution over the past three decades. On the other hand, the lack of a strict mathematical model and of domain constraint features at the language level has long perplexed the computer science academia and the OOP engineering community. This situation has resulted in inconsistent system quality and hard-to-understand designs in some OOP projects. The difficulties of fixing the current situation are also well known. Although the power of OOP lies in its unbridled flexibility and enormously rich data modeling capability, we argue that the ambiguity and the implicit facade surrounding the conceptual model of a class and an object should be eliminated as much as possible. We list the five major usages of a class and propose to separate them through new language constructs. Using the well-established theories of sets and finite state machines (FSMs), we propose applying certain simple, generic, yet effective constraints at the OOP language level in an attempt to find a possible solution to the above-mentioned issues. The goal is to make OOP more theoretically sound, as well as to help programmers uncover warning signs of irregularities and domain-specific issues early in the development stage and catch semantic mistakes at runtime, improving the verifiability of program correctness. On the other hand, the aim of this paper is more practical than theoretical.

Keywords: new language constructs, set theory, FSM theory, user defined value type, function groups, membership qualification attribute (MQA), check-constraint (CC)

Procedia PDF Downloads 228
10493 Kantian Epistemology in Examination of the Axiomatic Principles of Economics: The Synthetic a Priori in the Economic Structure of Society

Authors: Mirza Adil Ahmad Mughal

Abstract:

Transcendental analytics, in the Critique of Pure Reason, combines space and time, as conditions of the possibility of the phenomenon from the transcendental aesthetic, with the pure notion of magnitude-intuition. The property of continuity, as a qualitative result of additive magnitude, opens the possibility of connecting with experience, even if only potentially, because of the a priori necessity of the assumption; the syntheticity of the a priori is the task of the scientific method of philosophy given by Kant, which precludes applying categories to anything not empirically reducible to the content of a category's corresponding possible object. This continuity, as the qualitative result of the a priori constructed notion of magnitude, lies as a fundamental assumption and property of what microeconomic theory calls 'choice rules', which combine the potentially empirical and practical budget-price pairs with preference relations. This latter result is the purely qualitative side of the choice rules' otherwise autonomously quantitative nature. The theoretical, as opposed to empirical, character of this qualitative result is a synthetic a priori truth, which it should be if the axiomatic structure of economic theory is held to be correct. It has potentially verifiable content as its possible object in the form of quantitative price-budget pairs, yet the object that serves the respective Kantian category is itself qualitative, namely utility. This article explores the validity of the Kantian qualifications for this application of 'categories' to the economic structure of society.
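
For reference only, and not as the author's own formulation, the choice rules and revealed-preference relation the abstract invokes are standardly written as follows (textbook notation; the symbols X, B, C are not taken from the abstract).

% Standard textbook formulation of a choice rule over budget sets and the
% revealed-preference relation it induces.
\[
  C : \mathcal{B} \to 2^{X},\qquad \emptyset \neq C(B) \subseteq B
  \quad\text{for every budget set } B \in \mathcal{B},
\]
\[
  x \succsim^{*} y \;\iff\; \exists\, B \in \mathcal{B}:\; x, y \in B
  \text{ and } x \in C(B),
\]
where, for price--wealth pairs $(p,w)$, the budget sets take the form
$B(p,w) = \{\, x \in X : p \cdot x \le w \,\}$.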

Keywords: categories of understanding, continuity, convexity, psyche, revealed preferences, synthetic a priori

Procedia PDF Downloads 87
10492 Intelligent Process and Model Applied for E-Learning Systems

Authors: Mafawez Alharbi, Mahdi Jemmali

Abstract:

E-learning is a developing area, especially in education, and it can provide several benefits to learners. An intelligent system that collects all the components satisfying user preferences is therefore important. This research presents an approach capable of personalizing e-information and delivering to users the content that matches their needs and preferences. The proposal builds up knowledge as the user supplies more evaluations and, in addition, learns from the user's habits. Finally, we present a walk-through to demonstrate how the intelligent process works.
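
The abstract describes a system that learns a user's preferences from successive evaluations and habits and personalizes content accordingly. The following is a minimal sketch of one way such a loop could look, assuming a simple per-topic preference profile; it is purely illustrative and not the authors' architecture, and all names are hypothetical.

# Minimal sketch (not the authors' system): keep a per-topic preference profile,
# update it from the user's evaluations, and rank e-learning items accordingly.
from collections import defaultdict

class PreferenceProfile:
    def __init__(self, learning_rate=0.3):
        self.scores = defaultdict(float)   # topic -> learned preference weight
        self.learning_rate = learning_rate

    def record_evaluation(self, topic, rating):
        """Move the stored weight toward the user's rating (e.g. 0..1)."""
        current = self.scores[topic]
        self.scores[topic] = current + self.learning_rate * (rating - current)

    def rank(self, items):
        """items: list of (title, topic); return them ordered by learned preference."""
        return sorted(items, key=lambda it: self.scores[it[1]], reverse=True)

profile = PreferenceProfile()
profile.record_evaluation("video lectures", 0.9)   # user liked a video lecture
profile.record_evaluation("quizzes", 0.2)          # user disliked a quiz
print(profile.rank([("Intro quiz", "quizzes"), ("Lecture 1", "video lectures")]))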

Keywords: artificial intelligence, architecture, e-learning, software engineering, processing

Procedia PDF Downloads 175
10491 Effect of Sintering Time and Porosity on Microstructure, Mechanical and Corrosion Properties of Ti6Al15Mo Alloy for Implant Applications

Authors: Jyotsna Gupta, S. Ghosh, S. Aravindan

Abstract:

The requirement for artificial prostheses (such as hip and knee joints) has increased with time. Many researchers are working to develop new implants with improved properties, such as excellent biocompatibility with no tissue reactions, corrosion resistance in body fluid, high yield strength, and low elastic modulus. Further, the morphological properties of an artificial implant should match those of human bone so that cell adhesion, proliferation, and the transport of minerals and nutrients through body fluid can be achieved. The present study attempts to make porous Ti-6Al-15Mo alloys through the powder metallurgy route using the space holder technique. The alloy consists of 6 wt.% Al, taken as the α-phase stabilizer, and 15 wt.% Mo, taken as the β-phase stabilizer, with a theoretical density of 4.708 g/cm³. Ammonium hydrogen carbonate is used as the space holder in order to generate the porosity, which is controlled by adding 0, 50, and 70 vol.% space holder content. Three phases were found in the microstructure: the α, α₂, and β phases of titanium. Kirkendall pores are observed to decrease with increasing holding time during sintering, while the compressive strength and elastic modulus increase slightly. The compressive strength and elastic modulus of the porous Ti-6Al-15Mo alloy with a density of 1.17 g/cm³ are found to be suitable for cancellous bone. The ions released from the Ti-6Al-15Mo alloy are far below the permissible limits for the human body.
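
The abstract gives a theoretical density of 4.708 g/cm³ and a sintered density of 1.17 g/cm³; the standard relative-density relation (not spelled out in the abstract itself) connects these to the total porosity, as sketched below.

# Quick worked check: total porosity = 1 - (measured density / theoretical density).
theoretical_density = 4.708   # g/cm^3, fully dense Ti-6Al-15Mo (from the abstract)
measured_density = 1.17       # g/cm^3, sintered porous sample (from the abstract)

porosity = 1 - measured_density / theoretical_density
print(f"relative density = {measured_density / theoretical_density:.2%}")
print(f"total porosity   = {porosity:.2%}")   # roughly 75% porous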

Keywords: bone implant, powder metallurgy, sintering time, Ti-6Al-15Mo

Procedia PDF Downloads 133
10490 Comparative Literature, Postcolonialism and the “African World” in Wole Soyinka’s Myth, Literature and the African World

Authors: Karen de Andrade

Abstract:

Literature is generally understood as an aesthetic creation, an artistic object that relates to the history and sociocultural paradigms of a given era, and through it we can dwell on the deepest reflections on the human condition. It can also be used to propagate projects of domination, as Edward Said points out in his book Culture and Imperialism, connecting narrative, history, and land conquest. With that in mind, the aim of this paper is to analyse how Wole Soyinka elaborated his main theoretical work, Myth, Literature and the African World, a collection of essays published in 1976, by comparing the philosophical, ideological, and aesthetic practices of African, diasporic, and European writers from the point of view of the Yoruba tradition, to which he belongs. Soyinka believes that (literary) art has an important function in the formation of a people, in the construction of its political identity, and in cultural regeneration, especially after independence. The author's critical endeavour is that of attempting to construct a past. For him, the "African World" is not a mere allegory of the continent, and to understand it in this way would be to perpetuate a colonialist vision that denies the subjectivities that cross black cultures, history, and bodies. For him, comparative literature can be used not to "equate" local African texts with the European canon, but rather to recognise that they have aesthetic value and socio-cultural importance. Looking at the local, the particular, and what is specific to each culture is, according to Soyinka, appropriate for dealing with African cultures, as opposed to the abstractions of dialectical materialism or capitalist nationalism. In view of this, in his essays, the author creates a possibility for artistic and social reflection beyond the logic of Western politics.

Keywords: comparative literature, African literature, literary theory, Yoruba mythology, Wole Soyinka, Afrodiaspora

Procedia PDF Downloads 50