Search results for: software tools
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8261

1241 The Ethical Imperative of Corporate Social Responsibility Practice and Disclosure by Firms in Nigeria Delta Swamplands: A Qualitative Analysis

Authors: Augustar Omoze Ehighalua, Itotenaan Henry Ogiri

Abstract:

As a mono-product economy, Nigeria relies largely on oil revenues for its foreign exchange earnings, and the exploration activities of firms operating in the Niger Delta region have left in their wake tales of environmental degradation, poverty and misery. This, no doubt, has created corporate social responsibility issues in the region. The focus of this research is the critical evaluation of the ethical response to Corporate Social Responsibility (CSR) practice by firms operating in the Niger Delta swamplands. While CSR is becoming more popular in developed societies, with effective practice guidelines and reporting benchmarks, there is a relatively low level of awareness and only selective applicability of existing international guidelines to effectively support CSR practice in Nigeria. Having identified the lack of a CSR institutional framework, this study attempts to develop an ethically driven CSR transparency benchmark embedded within a regulatory framework based on international best practices. The research adopts a qualitative methodology and makes use of primary data collected through semi-structured interviews conducted across the six core states of the Niger Delta region. More importantly, the study adopts an inductive, interpretivist philosophical paradigm that reveals deep phenomenological insights into what local communities, civil society and government officials consider a good ethical benchmark for responsible CSR practice by organizations. Institutional theory provides the main theoretical foundation, complemented by stakeholder and legitimacy theories. The NVivo software was used to analyze the data collected. This study shows that ethical responsibility is lacking in CSR practice by firms in the Niger Delta region of Nigeria. Furthermore, findings of the study indicate key issues of environment, health and safety, human rights, and labour as fundamental in developing an effective CSR practice guideline for Nigeria.
The study has implications for public policy formulation as well as for managerial practice.

Keywords: corporate social responsibility, CSR, ethics, firms, Niger-Delta Swampland, Nigeria

Procedia PDF Downloads 105
1240 Governance in the Age of Artificial Intelligence and E-Government

Authors: Mernoosh Abouzari, Shahrokh Sahraei

Abstract:

Electronic government is a way for governments to use new technology to provide people with convenient access to government information and services, to improve the quality of those services, and to offer broad opportunities to participate in democratic processes and institutions. It enables information technology to deliver government services to citizens around the clock, which increases people's satisfaction and their participation in political and economic activities. The expansion of e-government services, and their movement towards intelligent systems, has the potential to re-establish the relationship between the government and its citizens, and among the elements and components of the government itself. Electronic government is the result of applying information and communication technology (ICT) at the level of the state; its implementation brings far-reaching changes in the efficiency and effectiveness of government systems and in the way services are delivered, which in turn raises public satisfaction on a wide scale. The core of electronic government services has today become tangible through artificial intelligence systems, whose recent advances represent a revolution in the use of machines to support predictive decision-making and the classification of data. With deep learning tools, artificial intelligence can bring a significant improvement in the delivery of services to citizens and uplift the work of public service professionals, while also inspiring a new generation of technocrats to enter government.
This smart revolution may set aside some functions of government and change its components; concepts such as governance, policymaking and democracy will be transformed in the face of artificial intelligence technology, and the top-down position in governance may undergo serious change. If governments delay in adopting artificial intelligence, the balance of power will shift: private companies, as pioneers in this field, will monopolize the technology, the world order will come to depend on rich multinational companies, and algorithmic systems will in effect become the ruling systems of the world. It can be said that the current revolution in information technology and biotechnology has been started by engineers, large commercial companies, and scientists who are rarely aware of the political implications of their decisions and who certainly do not represent anyone. It therefore seems that if liberalism, nationalism, or any other ideology wants to organize the world of 2050, it must not only rationalize the concepts of artificial intelligence and complex data algorithms but also weave them into a new and meaningful narrative. The changes brought by artificial intelligence to the political and economic order will thus lead to a major shift in how all countries deal with the phenomenon of digital globalization. In this paper, while debating the role and performance of e-government, we discuss the efficiency and application of artificial intelligence in e-government and consider the resulting developments in the new world and in the concepts of governance.

Keywords: electronic government, artificial intelligence, information and communication technology, system

Procedia PDF Downloads 93
1239 PbLi Activation Due to Corrosion Products in WCLL BB (EU-DEMO) and Its Impact on Reactor Design and Recycling

Authors: Nicole Virgili, Marco Utili

Abstract:

The design of the breeding blanket in tokamak fusion energy systems has to guarantee sufficient availability in addition to its functions, which are tritium breeding self-sufficiency, power extraction, and shielding (of the magnets and the vacuum vessel). All of these functions must be fulfilled under extremely harsh operating conditions, in terms of heat flux and neutron dose as well as the chemical environment of the coolant and breeder, that challenge the structural materials (structural resistance and corrosion resistance). The movement and activation of fluids from the BB to the ex-vessel components of a fusion power plant is an important radiological consideration, because flowing material can carry radioactivity to safety-critical areas. This includes gamma-ray emission from the activated fluid and activated corrosion products, and secondary activation resulting from neutron emission, with implications for the safety of maintenance personnel and for damage to electrical and electronic equipment. In addition to the activation of the PbLi breeder itself, it is important to evaluate the contribution of the activated corrosion products (ACPs) dissolved in the lead-lithium eutectic alloy at different concentration levels. The purpose of this study is therefore to evaluate the PbLi activity using the FISPACT-II inventory code. Emphasis is given to how the design of the EU-DEMO WCLL, and the potential recycling of the breeder material, are affected by the activation of PbLi and the associated ACPs. For this scope, the following computational tools, data and geometry have been considered:
• Neutron source: EU-DEMO neutron flux < 10^14 /cm²/s.
• Neutron flux distribution in the equatorial breeding blanket module (BBM) #13 in the WCLL BB outboard central zone, which is the most activated zone, computed with MCNP6 so as to introduce a conservative assumption.
• The recommended geometry model: the 2017 EU-DEMO CAD model.
• Blanket module material specifications (composition).
• Activation calculations for different ACP concentration levels in the PbLi breeder, with a given chemistry in stationary equilibrium conditions, using the FISPACT-II code.
Results suggest that a waiting time of about 10 years from shut-down (SD) is needed before the PbLi can be safely manipulated for recycling operations with simple shielding requirements. The dose rate is dominated by the PbLi itself, and the ACP concentration (×1 or ×100) does not shift the result. In conclusion, the results show that ACP levels have no significant impact on PbLi activation.
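The roughly 10-year cooling time before the PbLi can be safely handled follows from the exponential decay of the activated inventory. As an illustration only (a single-isotope toy model, not the FISPACT-II inventory calculation; the initial activity and half-life below are invented for the example), the decay law can be sketched as:

```python
import math

def activity(a0_bq, half_life_years, t_years):
    """Activity after t_years of cooling: A(t) = A0 * exp(-lambda * t)."""
    decay_const = math.log(2) / half_life_years  # lambda = ln(2) / T_half
    return a0_bq * math.exp(-decay_const * t_years)

# Hypothetical activated inventory: 1 GBq with a 1-year half-life.
a0 = 1.0e9
for t in (0, 1, 5, 10):
    print(f"t = {t:2d} y: A = {activity(a0, 1.0, t):.3e} Bq")
```

A real assessment sums such terms over the full nuclide inventory and folds in dose-rate conversion, which is what the FISPACT-II calculation provides.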

Keywords: activation, corrosion products, recycling, WCLL BB, PbLi

Procedia PDF Downloads 124
1238 Accomplishing Mathematical Tasks in Bilingual Primary Classrooms

Authors: Gabriela Steffen

Abstract:

Learning in a bilingual classroom not only implies learning in two languages or in an L2; it also means learning content subjects through bilingual or plurilingual resources, which is of a qualitatively different nature than ‘monolingual’ learning. These resources form elements of a didactics of plurilingualism, aiming not only at the development of a plurilingual competence but also at drawing on plurilingual resources for non-linguistic subject learning. Applying a didactics of plurilingualism makes it possible to take account of the specificities of content subject learning in bilingual education classrooms. Bilingual education is used here as an umbrella term for different programs, such as bilingual education, immersion, CLIL, and bilingual modules, in which one or several non-linguistic subjects are taught partly or completely in an L2. This paper discusses first results of a study on pupil group work in bilingual classrooms in several Swiss primary schools. For instance, it analyses two bilingual classes in two primary schools in a French-speaking region of Switzerland that follow part of their school program in German in addition to French, the language of instruction in this region. More precisely, it analyses videotaped classroom interaction and in situ classroom practices of pupil group work in mathematics lessons. The ethnographic observation of pupils’ group work and the analysis of their interaction (using the analytical tools of conversation analysis, discourse analysis and plurilingual interaction analysis) complement the description of whole-class interaction carried out in the same (and several other) classes. While the latter are teacher-student interactions, the former are student-student interactions, giving more space to, and insight into, pupils’ talk. This study aims at describing the linguistic and multimodal resources (in German L2 and/or French L1) that pupils mobilize while carrying out a mathematical task.
The analysis shows that the mathematical task is accomplished in a bilingual mode, whether the whole-class interactions are conducted in a bilingual mode (German L2-French L1) or in a monolingual mode in the L2 (German). The pupils make plenty of use of German L2 in a setting that would lend itself to the use of French L1 (peer groups with French as the dominant language, in the absence of the teacher, and a task with a mathematical aim). They switch from French to German and back ‘naturally’, as is typical of bilingual speakers. Their linguistic resources in German L2 are not sufficient to allow them to (inter-)act well enough to accomplish the task entirely in German L2, despite their efforts to do so. However, this does not stop them from carrying out the mathematical task adequately, which is the main objective, by drawing on the bilingual resources at hand.

Keywords: bilingual content subject learning, bilingual primary education, bilingual pupil group work, bilingual teaching/learning resources, didactics of plurilingualism

Procedia PDF Downloads 156
1237 Role of Alternative Dispute Resolution (ADR) in Advancing UN-SDG 16 and Pathways to Justice in Kenya: Opportunities and Challenges

Authors: Thomas Njuguna Kibutu

Abstract:

The ability to access justice is an important facet of securing peaceful, just, and inclusive societies, as recognized by Goal 16 of the 2030 Agenda for Sustainable Development. Goal 16 calls for peace, justice, and strong institutions to promote the rule of law and access to justice at a global level. More specifically, Target 16.3 aims to promote the rule of law at the national and international levels and to ensure equal access to justice for all. On the other hand, it is now widely recognized that Alternative Dispute Resolution (hereafter, ADR) represents an efficient mechanism for resolving disputes outside the adversarial conventional court system of litigation or prosecution. ADR processes include, but are not limited to, negotiation, reconciliation, mediation, arbitration, and traditional conflict resolution. ADR has a number of advantages: it is flexible, cost-efficient, time-effective and confidential, and it gives the parties more control over the process and the results, thus promoting restorative justice. The methodology of this paper is a desktop review of books, journal articles, reports and government documents, among others. The paper recognizes that ADR represents a cornerstone of Africa’s, and more specifically Kenya’s, efforts to promote inclusive, accountable, and effective institutions and to achieve the objectives of Goal 16. In Kenya, as in many African countries, there has been an outcry over the backlog of cases yet to be resolved in the courts, and statistics show that the numbers keep rising. While ADR mechanisms have played a major role in reducing these numbers, access to justice in the country remains a big challenge, especially for the subaltern.
There is, therefore, a need to analyze the opportunities and challenges facing the application of ADR mechanisms as tools for accessing justice in Kenya and further discuss various ways in which we can overcome these challenges to make ADR an effective alternative to dispute resolution. The paper argues that by embracing ADR across various sectors and addressing existing shortcomings, Kenya can, over time, realize its vision of a more just and equitable society. This paper discusses the opportunities and challenges of the application of ADR in Kenya with a view to sharing the lessons and challenges with the wider African continent. The paper concludes that ADR mechanisms can provide critical pathways to justice in Kenya and the African continent in general but come with distinct challenges. The paper thus calls for concerted efforts of respective stakeholders to overcome these challenges.

Keywords: mediation, arbitration, negotiation, reconciliation, traditional conflict resolution, sustainable development

Procedia PDF Downloads 25
1236 Uncertainty Evaluation of Erosion Volume Measurement Using Coordinate Measuring Machine

Authors: Mohamed Dhouibi, Bogdan Stirbu, Chabotier André, Marc Pirlot

Abstract:

Internal barrel wear is a major factor affecting the performance of small caliber guns throughout their life phases. Wear analysis is, therefore, a very important process for understanding how wear occurs, where it takes place, and how it spreads, with the aim of improving the accuracy and effectiveness of small caliber weapons. This paper discusses the measurement and analysis of combustion chamber wear for small-caliber guns using a Coordinate Measuring Machine (CMM). Two different NATO small caliber guns are considered: 5.56x45mm and 7.62x51mm. A Zeiss Micura CMM equipped with the VAST XTR gold high-end sensor is used to measure the inner profile of the two guns every 300-shot cycle. The CMM parameters, such as (i) the measuring force, (ii) the measured points, (iii) the time of masking, and (iv) the scanning velocity, are investigated. In order to ensure minimum measurement error, a statistical analysis is adopted to select a reliable combination of CMM parameters. Next, two measurement strategies are developed to capture the shape and the volume of each gun chamber. A task-specific measurement uncertainty (TSMU) analysis is then carried out for each measurement plan. Different approaches to TSMU evaluation have been proposed in the literature; this paper discusses two. The first is the substitution method described in ISO 15530 part 3, which is based on the use of calibrated workpieces with a shape and size similar to the measured part. The second is the Monte Carlo simulation method presented in ISO 15530 part 4; uncertainty evaluation software (UES), also known as the Virtual Coordinate Measuring Machine (VCMM), is used in this technique to perform a point-by-point simulation of the measurements. To conclude, the two approaches are compared.
Finally, the results of the measurements are verified through calibrated gauges of several dimensions specially designed for the two barrels. On this basis, an experimental database is developed for further analysis aiming to quantify the relationship between the volume of wear and the muzzle velocity of small caliber guns.
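The ISO 15530 part 4 approach can be illustrated with a minimal Monte Carlo sketch: re-simulate a measurement many times with randomly drawn probing errors and take the spread of the results as the task-specific uncertainty. The geometry (a circle probed at discrete points) and the error magnitude below are assumptions for illustration, not the VCMM model used in the study:

```python
import random
import statistics

def measure_diameter(true_d, probe_sigma, n_points=36):
    """Simulate one CMM measurement of a circular bore:
    each probed radius gets Gaussian probing noise."""
    radii = [true_d / 2 + random.gauss(0.0, probe_sigma) for _ in range(n_points)]
    return 2 * statistics.mean(radii)

def monte_carlo_uncertainty(true_d, probe_sigma, runs=5000):
    """Repeat the simulated measurement; the standard deviation of the
    results approximates the standard measurement uncertainty."""
    results = [measure_diameter(true_d, probe_sigma) for _ in range(runs)]
    return statistics.mean(results), statistics.stdev(results)

random.seed(42)
mean_d, u = monte_carlo_uncertainty(true_d=5.56, probe_sigma=0.002)  # mm
print(f"mean = {mean_d:.4f} mm, u = {u:.4f} mm, U(k=2) = {2 * u:.4f} mm")
```

A full VCMM simulation would add systematic geometry errors, temperature effects and probe calibration terms to the same resampling loop.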

Keywords: coordinate measuring machine, measurement uncertainty, erosion and wear volume, small caliber guns

Procedia PDF Downloads 146
1235 Challenges for Competency-Based Learning Design in Primary School Mathematics in Mozambique

Authors: Satoshi Kusaka

Abstract:

The term ‘competency’ is attracting considerable scholarly attention worldwide with the advance of globalization in the 21st century and with the arrival of a knowledge-based society. In the current world environment, familiarity with varied disciplines is regarded to be vital for personal success. The idea of a competency-based educational system was mooted by the ‘Definition and Selection of Competencies (DeSeCo)’ project that was conducted by the Organization for Economic Cooperation and Development (OECD). Further, attention to this topic is not limited to developed countries; it can also be observed in developing countries. For instance, the importance of a competency-based curriculum was mentioned in the ‘2013 Harmonized Curriculum Framework for the East African Community’, which recommends key competencies that should be developed in primary schools. The introduction of such curricula and the reviews of programs are actively being executed, primarily in the East African Community but also in neighboring nations. Taking Mozambique as a case in point, the present paper examines the conception of ‘competency’ as a target of frontline education in developing countries. It also aims to discover the manner in which the syllabus, textbooks and lessons, among other things, in primary-level math education are developed and to determine the challenges faced in the process. This study employs the perspective of competency-based education design to analyze how the term ‘competency’ is defined in the primary-level math syllabus, how it is reflected in the textbooks, and how the lessons are actually developed. ‘Practical competency’ is mentioned in the syllabus, and the description of the term lays emphasis on learners' ability to interactively apply socio-cultural and technical tools, which is one of the key competencies that are advocated in OECD's ‘Definition and Selection of Competencies’ project. 
However, most of the content of the textbooks pertains to ‘basic academic ability’, and in actual classroom practice, teachers often impart lessons straight from the textbooks. It is clear that the aptitude of teachers and their classroom routines are greatly dependent on the cultivation of their own ‘practical competency’ as it is defined in the syllabus. In other words, there is great divergence between the ‘syllabus’, which is the intended curriculum, and the content of the ‘textbooks’. In fact, the material in the textbooks should serve as the bridge between the syllabus, which forms the guideline, and the lessons, which represent the ‘implemented curriculum’. Moreover, the results obtained from this investigation reveal that the problem can only be resolved through the cultivation of ‘practical competency’ in teachers, which is currently not sufficient.

Keywords: competency, curriculum, mathematics education, Mozambique

Procedia PDF Downloads 186
1234 Limiting Freedom of Expression to Fight Radicalization: The 'Silencing' of Terrorists Does Not Always Allow Rights to 'Speak Loudly'

Authors: Arianna Vedaschi

Abstract:

This paper addresses the relationship between freedom of expression, national security and radicalization. Is it still possible to talk about a balance between the first two elements? Or, due to the intrusion of the third, is it more appropriate to consider freedom of expression as “permanently disfigured” by securitarian concerns? In this study, both the legislative and the judicial level are taken into account, and the comparative method is employed in order to provide the reader with a complete framework of relevant issues and a workable set of solutions. The analysis starts from the finding that the tension between free speech and national security has become a major issue in democratic countries, whose very essence is continuously endangered by the ever-changing and multi-faceted threat of international terrorism. In particular, a change in terrorist groups’ recruiting pattern, attracting more and more people by way of a cutting-edge communicative strategy that often employs sophisticated technology as a radicalization tool, has called on law-makers to modify their approach to dangerous speech. While traditional constitutional and criminal law used to punish speech only if it explicitly and directly incited the commission of a criminal action (the “cause-effect” model), so-called glorification offences, punishing mere ideological support for terrorism, often on the web, are becoming commonplace in the comparative scenario. Although this is a direct, and even somewhat understandable, consequence of the impending terrorist menace, this research shows many problematic issues connected to such a preventive approach. First, from a predominantly theoretical point of view, this trend negatively impacts the already blurred line between permissible and prohibited speech. Second, from a pragmatic point of view, such legislative tools are not always able to keep up with the ongoing developments of both terrorist groups and their use of technology.
In other words, there is a risk that such measures become outdated even before their application. Indeed, it seems hard to still talk about a proper balance: what was previously clearly perceived as a balancing of values (freedom of speech v. public security) has turned, in many cases, into a hierarchy with security at its apex. In light of these findings, this paper concludes that such a complex issue would perhaps be better dealt with through a combination of policies: not only criminalizing ‘terrorist speech,’ which should be relegated to a last resort tool, but acting at an even earlier stage, i.e., trying to prevent dangerous speech itself. This might be done by promoting social cohesion and the inclusion of minorities, so as to reduce the probability of people considering terrorist groups as a “viable option” to deal with the lack of identification within their social contexts.

Keywords: radicalization, free speech, international terrorism, national security

Procedia PDF Downloads 196
1233 Computational Fluid Dynamics Simulation of Turbulent Convective Heat Transfer in Rectangular Mini-Channels for Rocket Cooling Applications

Authors: O. Anwar Beg, Armghan Zubair, Sireetorn Kuharat, Meisam Babaie

Abstract:

In this work, motivated by rocket channel cooling applications, we describe recent CFD simulations of turbulent convective heat transfer in mini-channels at different aspect ratios. The ANSYS FLUENT software has been employed, achieving a mean error of 5.97% relative to Forrest’s MIT cooling channel study (2014) at a Reynolds number of 50,443 and a Prandtl number of 3.01. This suggests that the simulation model created for turbulent flow was a suitable foundation for the study of different aspect ratios in the channel. Multiple aspect ratios were then considered to understand the influence of high aspect ratios and to identify the best-performing cooling channel, which proved to be the highest-aspect-ratio channel: the approximately 28:1 aspect ratio provided the best characteristics for effective cooling. A mesh convergence study was performed to assess the optimum mesh density for accurate results; an element size of 0.05 mm was used, generating 579,120 elements for proper turbulent flow simulation. Deploying a greater bias factor would increase the mesh density towards the far edges of the channel, which would be useful if the focus of the study were a single side of the wall. Since a bulk temperature is involved in the calculations, it is essential to use a suitable bias factor to ensure the reliability of the results; in this study we opted for a bias factor of 5 to allow greater mesh density at both edges of the channel. However, limitations on mesh density and hardware have curtailed the sophistication achievable for the turbulence characteristics. Also, only straight rectangular channels were considered, i.e. curvature was ignored, and only conventional water coolant was used. This CFD study’s variation of aspect ratio provided a deeper appreciation of the effect of small to high aspect ratios on cooling channels.
Hence, when designing a cooling channel for a given application, the aspect ratio of the geometry plays a crucial role in optimizing cooling performance.
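The role of the aspect ratio can be made concrete through the hydraulic diameter, D_h = 4A/P, which feeds the Reynolds number for a rectangular section. A minimal sketch (the channel dimensions and water properties below are illustrative assumptions, not the study's exact geometry):

```python
def hydraulic_diameter(width_m, height_m):
    """D_h = 4*A/P for a rectangular cross-section."""
    area = width_m * height_m
    perimeter = 2 * (width_m + height_m)
    return 4 * area / perimeter

def reynolds(velocity, d_h, rho=998.0, mu=1.0e-3):
    """Re = rho * V * D_h / mu; defaults approximate water near 20 C."""
    return rho * velocity * d_h / mu

# Compare a square mini-channel with a high-aspect-ratio one of equal area.
d_square = hydraulic_diameter(1.0e-3, 1.0e-3)      # 1 mm x 1 mm
d_high_ar = hydraulic_diameter(5.29e-3, 0.189e-3)  # ~28:1 aspect ratio
print(d_square, d_high_ar, reynolds(10.0, d_square))
```

For equal cross-sectional area, the high-aspect-ratio channel has the smaller hydraulic diameter and the larger wetted perimeter, which is one reason it transfers heat more effectively.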

Keywords: rocket channel cooling, ANSYS FLUENT CFD, turbulence, convection heat transfer

Procedia PDF Downloads 145
1232 Age Estimation from Upper Anterior Teeth by Pulp/Tooth Ratio Using Peri-Apical X-Rays among Egyptians

Authors: Fatma Mohamed Magdy Badr El Dine, Amr Mohamed Abd Allah

Abstract:

Introduction: Age estimation of individuals is one of the crucial steps in forensic practice. Traditional methods rely on the length of the diaphysis of the long bones of the limbs, epiphyseal-diaphyseal union, fusion of the primary ossification centers, as well as dental eruption. However, there is a growing need for precise and reliable methods to estimate age, especially in cases where dismembered corpses, burnt bodies, or putrefied or fragmented parts are recovered. Teeth are the hardest and most indestructible structures in the human body. In recent years, assessment of the pulp/tooth area ratio, as an indirect quantification of secondary dentine deposition, has received considerable attention. However, little work has been done in Egypt on the applicability of the pulp/tooth ratio for age estimation. Aim of the Work: The present work was designed to assess Cameriere’s method for age estimation from the pulp/tooth ratio of maxillary canines, central and lateral incisors in a sample from the Egyptian population, and to formulate regression equations to be used as population-based standards for age determination. Material and Methods: The present study was conducted on 270 peri-apical X-rays of maxillary canines, central and lateral incisors (collected from 131 males and 139 females aged between 19 and 52 years). The pulp and tooth areas were measured using the Adobe Photoshop software, and the pulp/tooth area ratio was computed. Linear regression equations were determined separately for canines, central and lateral incisors. Results: A significant correlation was recorded between the pulp/tooth area ratio and chronological age. The linear regression analysis yielded coefficients of determination of R² = 0.824 for canines, 0.588 for central incisors and 0.737 for lateral incisors. Three regression equations were derived. Conclusion: The pulp/tooth ratio is a useful technique for estimating age among Egyptians.
Additionally, the regression equation derived from the canines gave better results than those from the incisors.
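The regression step is an ordinary least-squares fit of chronological age on the pulp/tooth area ratio. A minimal sketch with invented example data (not the study's measurements or its published coefficients):

```python
def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Synthetic example: pulp/tooth ratio shrinks with age as secondary
# dentine is deposited, so the fitted slope should be negative.
ratios = [0.28, 0.25, 0.22, 0.18, 0.15, 0.12]
ages   = [20,   25,   30,   38,   44,   50]
a, b = ols_fit(ratios, ages)
print(f"age ~ {a:.1f} + {b:.1f} * ratio")
```

In practice one such equation is fitted per tooth type, and R² indicates which tooth (here the canine) predicts age best.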

Keywords: age determination, canines, central incisors, Egypt, lateral incisors, pulp/tooth ratio

Procedia PDF Downloads 181
1231 The Markers -mm and dämmo in Amharic: Developmental Approach

Authors: Hayat Omar

Abstract:

Languages provide speakers with a wide range of linguistic units to organize and deliver information. There are several ways to verbally express the mental representations of events; according to the linguistic tools they have acquired, speakers select the one that produces the greatest communicative effect. Our study focuses on two markers, -mm and dämmo, in Amharic (an Ethiopian Semitic language). Our aim is to examine, from a developmental perspective, how they are used by speakers, and to distinguish the communicative and pragmatic functions indicated by means of these markers. To do so, we created a corpus of sixty narrative productions by children aged 5-6, 7-8 and 10-12 years and by adult Amharic speakers. The experimental material we used to collect our data is the textless picture book ‘Frog, Where Are You?’. Although -mm and dämmo are each used in specific contexts, they are sometimes analyzed as being interchangeable. The suffix -mm is complex and multifunctional: it marks the end of the negative verbal structure, it is found in the relative structure of the imperfect, it creates new words such as adverbials or pronouns, and it also serves to coordinate words and sentences and to mark the link between macro-propositions within a larger textual unit. -mm has been analyzed as a marker of insistence, a topic-shift marker, an element of concatenation, a contrastive focus marker, and a ‘bisyndetic’ coordinator. dämmo, on the other hand, has a more limited function and has not attracted the attention of many authors; the only approach we could find analyzes it as a ‘monosyndetic’ coordinator. Setting these two elements side by side made it possible to understand their distinctive functions and to refine their description. When it comes to marking a referent, the choice of -mm or dämmo is not neutral: it depends on whether the tagged argument is newly introduced, maintained, promoted or reintroduced. The presence of these morphemes underpins the inter-phrastic link.
The information is picked up by anaphora or presupposition: -mm points upstream while dämmo points downstream, the latter calling for new information. The speaker uses -mm or dämmo according to what he assumes to be known to his interlocutors. The results show that although all speakers use both -mm and dämmo, the two markers do not always have the same scope, and their use varies with age. dämmo is mainly used to mark a contrastive topic signaling the concomitance of events, and it is more common in young children’s narratives (F(3,56) = 3.82, p < .01). Some values of -mm (e.g., the additive one) are acquired very early, while others appear rather late and increase with age (F(3,56) = 3.2, p < .03). The difficulty is due not only to its synthetic structure but primarily to its multi-purpose nature, which requires memory work. It highlights the constituent on which it operates to clarify how the message should be interpreted.

Keywords: acquisition, cohesion, connection, contrastive topic, contrastive focus, discourse marker, pragmatics

Procedia PDF Downloads 132
1230 Recent Volatility in Islamic Banking Sector of Bangladesh: Nexus Between Economy, Religion and Politics

Authors: Abdul Kader

Abstract:

This paper attempts to investigate several factors contributing to the recent volatility in the Islamic banking sector of Bangladesh. In particular, the study explores corporate governance, credit management, credit regulations, inept boards of directors, the use of religious sentiment as a means to deceive ordinary people, and the degree of political interference as potential contributory factors. To find the correlations among the different variables, semi-structured questionnaires were distributed among clients, bank managers, banking scholars and ex-members of the boards of directors of three Islamic banks in Bangladesh. Later, ten interviews were conducted with key informants to gain in-depth information about the present mismanagement of Islamic banks in Bangladesh. The data were then analyzed using statistical software and substantiated by secondary sources such as newspapers, reports and investigative reports aired in screen media. The paper found a correlation between almost all the contributory factors and the recent unstable conditions in the Islamic banking sector. Regression analysis showed that some of the contributory factors have a more significant relationship with banking volatility than others. For instance, credit management, inept boards of directors, depriving customers by declaring no profit in the name of business (and hence no interest), and political interference have a strong, significant positive correlation with the present poor condition of Islamic banking. This paper concludes that while internal management is important in recovering the losses, the government needs to frame a better policy for the Islamic banking system, the central bank needs to supervise and monitor all Islamic banks meticulously, and loan applicants must go through an impartial evaluation and be approved by representatives of the Central Shariah Board.
This paper also recommends strengthening the auditing system and improving regulatory oversight of Islamic banks in Bangladesh. The policy recommendations this paper puts forward could provide an outline for addressing the current challenging condition of Islamic banks, and they could be applied to similar problems in other countries where the Islamic banking model exists.
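The abstract names its procedures (correlation, then regression) but not the software or data; as an illustration only, that workflow can be sketched in pure Python on hypothetical Likert-scale survey scores. The variable names and values below are our assumptions, not the study's data:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ols_fit(x, y):
    """Least-squares fit y = a + b*x (simple linear regression)."""
    mx, my = mean(x), mean(y)
    b = sum((p - mx) * (q - my) for p, q in zip(x, y)) / sum((p - mx) ** 2 for p in x)
    return my - b * mx, b  # intercept, slope

# Hypothetical 1-5 Likert scores: perceived political interference vs. volatility
interference = [1, 2, 2, 3, 4, 4, 5, 5]
volatility = [2, 2, 3, 3, 4, 5, 4, 5]

r = pearson_r(interference, volatility)
a, b = ols_fit(interference, volatility)
print(round(r, 3), round(b, 3))
```

A strong positive r with a positive slope is the pattern the abstract reports for political interference; a real analysis would of course use the survey data and report significance tests as well.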

Keywords: Islamic bank, volatility in banking sector, shariah law, credit management, political interference

Procedia PDF Downloads 75
1229 Method of Complex Estimation of Text Perusal and Indicators of Reading Quality in Different Types of Commercials

Authors: Victor N. Anisimov, Lyubov A. Boyko, Yazgul R. Almukhametova, Natalia V. Galkina, Alexander V. Latanov

Abstract:

Modern commercials presented on billboards, on TV and on the Internet contain a lot of information about the product or service in text form. However, this information cannot always be perceived and understood by consumers. Typical sociological focus-group studies often cannot reveal important features of how text messages that have been read are interpreted and understood. In addition, there is no reliable method to determine the degree of understanding of the information contained in a text. The mere fact of viewing a text does not mean that the consumer has perceived and understood its meaning. At the same time, tools based on marketing analysis allow only an indirect estimate of the process of reading and understanding a text. Therefore, the aim of this work is to develop a valid method of recording objective indicators in real time for assessing the fact of reading and the degree of text comprehension. Psychophysiological parameters recorded during reading can form the basis for this objective method. We studied the relationship between multimodal psychophysiological parameters and the process of text comprehension during reading using correlation analysis. We used eye-tracking technology to record eye-movement parameters as an estimate of visual attention, electroencephalography (EEG) to assess cognitive load, and polygraphic indicators (skin-galvanic reaction, SGR) that reflect the emotional state of the respondent during reading. We revealed reliable interrelations between perceiving the information and the dynamics of psychophysiological parameters while reading the text in commercials. Eye-movement parameters reflected the difficulties respondents experienced in perceiving ambiguous parts of the text. EEG dynamics in the alpha band were related to the cumulative effect of cognitive load. SGR dynamics were related to the emotional state of the respondent and to the meaning of the text and the type of commercial.
EEG and polygraph parameters together also reflected the mental difficulties of respondents in understanding the text and showed significant differences between cases of low and high text comprehension. We also revealed differences in psychophysiological parameters for different types of commercials (static vs. video; financial vs. cinema vs. pharmaceutics vs. mobile communication, etc.). Conclusions: Our methodology allows a multimodal evaluation of text perusal and of the quality of text reading in commercials. In general, our results indicate the possibility of designing an integral model that estimates the comprehension of a commercial text on a percentage scale based on all of the markers noted above.

Keywords: reading, commercials, eye movements, EEG, polygraphic indicators

Procedia PDF Downloads 161
1228 Reconstruction Paleogeomorphological Map of the Nile River in Upper Egypt by Using Some Geomorphological and Geoarchaeological Indicators

Authors: Magdy Torab

Abstract:

Ancient Egyptians built their temples purposefully close to the River Nile in order to transport construction stones in river-boats from faraway quarries to the building sites. Most temples, therefore, have river-harbors associated with their geometric designs. The paleo-river channel was remapped using this idea, together with other geomorphological and geoarchaeological indicators located between the cities of Aswan and Luxor. In this sense, this paper defines the characteristics of this ancient course and its associated landforms using paleochannel morphology, paleomeandering, and ancient river dynamics during historic and prehistoric times. Both geomorphological and geoarchaeological approaches were used to reconstruct the paleomorphology of the river course. The ancient river morphology was investigated using the following techniques: comparison and interpretation of multi-date satellite images and historical maps between 1943 and 2004. The results were illustrated on maps using GIS (ArcGIS v.10 software), together with field data collected from the western bank of the Nile at the Luxor area and the Karnak, Edfu, Esna and Kom Ombo temples. Both current and paleogeomorphological maps were created based on the results of geoarchaeological surveying and of soil analysis and dating: surface and subsurface soil sampling by hand auger, and laser diffraction analysis of 7 soil samples collected from some mounds and the Malkata channel on the western bank of the Nile near Luxor. Paleo-current directions were determined with a standard Brunton compass as an indicator of the direction of flow of the Nile during deposition of some accumulated mounds on the western part of the floodplain near Luxor city. C-14 dating was used for two samples collected from these mounds, and geographic information system (GIS) techniques were used for mapping.
The geomorphological and geoarchaeological evidence shows that the Nile course in the Luxor area was around 4.5 km wide and contained many islands and sandbars separated inside the river channel, which now appear as scattered mounds on the floodplain. During historic times, the course in Upper Egypt has migrated eastward by up to five kilometers, moving far away from the ancient temples, quarries, and harbors. It has also become more meandering and narrower than before.

Keywords: Nile River, ancient harbours, Luxor, paleogeomorphology, geoarchaeology

Procedia PDF Downloads 148
1227 Ischemic Stroke Detection in Computed Tomography Examinations

Authors: Allan F. F. Alves, Fernando A. Bacchim Neto, Guilherme Giacomini, Marcela de Oliveira, Ana L. M. Pavan, Maria E. D. Rosa, Diana R. Pina

Abstract:

Stroke is a worldwide concern; in Brazil alone it accounts for 10% of all registered deaths. There are two stroke types, ischemic (87%) and hemorrhagic (13%). Early diagnosis is essential to avoid irreversible cerebral damage. Non-enhanced computed tomography (NECT) is one of the main diagnostic techniques used due to its wide availability and rapid diagnosis. Detection depends on the size and severity of the lesions and the time elapsed between the first symptoms and the examination. The Alberta Stroke Program Early CT Score (ASPECTS) is a subjective method that increases the detection rate. The aim of this work was to implement an image segmentation system to enhance ischemic stroke and to quantify the area of ischemic and hemorrhagic stroke lesions in CT scans. We evaluated 10 patients with NECT examinations diagnosed with ischemic stroke. Analyses were performed in two axial slices, one at the level of the thalamus and basal ganglia and one adjacent to the top edge of the ganglionic structures, with window widths between 80 and 100 Hounsfield units. We used different image processing techniques such as morphological filters, the discrete wavelet transform and Fuzzy C-means clustering. Subjective analyses were performed by a neuroradiologist according to the ASPECTS scale to quantify ischemic areas in the middle cerebral artery region. These subjective results were compared with the objective analyses performed by the computational algorithm. Preliminary results indicate that the morphological filters actually enhance the ischemic areas for subjective evaluation. The comparison between the ischemic region contoured by the neuroradiologist and the area defined by the computational algorithm showed no deviations greater than 12% in any of the 10 examinations, although the areas contoured by the neuroradiologist tended to be smaller than those obtained by the algorithm.
These results show the importance of computer-aided diagnosis software to assist neuroradiology decisions, especially in critical situations such as the choice of treatment for ischemic stroke.
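Fuzzy C-means, one of the clustering techniques the authors list, can be illustrated with a minimal one-dimensional two-cluster sketch. The Hounsfield-unit values below are invented for illustration (ischemic tissue typically appears slightly hypodense relative to healthy parenchyma); this is not the authors' implementation:

```python
def fuzzy_c_means(data, m=2.0, iters=50):
    """Minimal 1-D, two-cluster fuzzy C-means: returns centers and memberships."""
    c = 2
    centers = [float(min(data)), float(max(data))]  # deterministic initialization
    for _ in range(iters):
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        u = []
        for x in data:
            d = [abs(x - v) or 1e-12 for v in centers]
            u.append([1.0 / sum((d[j] / d[k]) ** (2 / (m - 1)) for k in range(c))
                      for j in range(c)])
        # Center update: mean of the data weighted by membership^m
        centers = [sum(u[i][j] ** m * data[i] for i in range(len(data)))
                   / sum(u[i][j] ** m for i in range(len(data)))
                   for j in range(c)]
    return centers, u

# Invented Hounsfield-unit samples: hypodense (ischemic-like) vs. normal tissue
hu = [24, 25, 26, 25, 24, 34, 35, 36, 35, 34]
centers, memberships = fuzzy_c_means(hu)
print([round(v, 1) for v in centers])
```

On real scans the same update rules run over many thousands of pixel intensities, and the soft memberships (rather than hard labels) are what make the method tolerant of the faint density differences typical of early ischemia.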

Keywords: ischemic stroke, image processing, CT scans, Fuzzy C-means

Procedia PDF Downloads 365
1226 Effectiveness of Traditional Chinese Medicine in the Treatment of Eczema: A Systematic Review and Meta-Analysis Based on Eczema Area and Severity Index Score

Authors: Oliver Chunho Ma, Tszying Chang

Abstract:

Background: Traditional Chinese Medicine (TCM) has been widely used in the treatment of eczema. However, there is currently a lack of comprehensive research on the overall effectiveness of TCM in treating eczema, particularly using the Eczema Area and Severity Index (EASI) score as an evaluation tool. Meta-analysis can integrate the results of multiple studies to provide more convincing evidence. Objective: To conduct a systematic review and meta-analysis based on the EASI score to evaluate the overall effectiveness of TCM in the treatment of eczema. Specifically, the study will review and analyze published clinical studies that investigate TCM treatments for eczema and use the EASI score as an outcome measure, comparing the differences in improving the severity of eczema between TCM and other treatment modalities, such as conventional Western medicine treatments. Methods: Relevant studies, including randomized controlled trials (RCTs) and non-randomized controlled trials, that involve TCM treatment for eczema and use the EASI score as an outcome measure will be searched in medical literature databases such as PubMed, CNKI, etc. Relevant data will be extracted from the selected studies, including study design, sample size, treatment methods, improvement in EASI score, etc. The methodological quality and risk of bias of the included studies will be assessed using appropriate evaluation tools (such as the Cochrane Handbook). The results of the selected studies will be statistically analyzed, including pooling effect sizes (such as standardized mean differences, relative risks, etc.), subgroup analysis (e.g., different TCM syndromes, different treatment modalities), and sensitivity analysis (e.g., excluding low-quality studies). Based on the results of the statistical analysis and quality assessment, the overall effectiveness of TCM in improving the severity of eczema will be interpreted. 
Expected outcomes: By integrating the results of multiple studies, we expect to provide more convincing evidence regarding the specific effects of TCM in improving the severity of eczema. Additionally, subgroup analysis and sensitivity analysis can further elucidate whether the effectiveness of TCM treatment is influenced by different factors. We will also compare the results of the meta-analysis with the clinical data from our clinic: for both the clinical data and the meta-analysis results, we will compute descriptive statistics such as means, standard deviations and percentages, and compare the two using statistical tests such as the independent-samples t-test or non-parametric tests to assess the statistical differences between them.
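The pooling step described above (standardized mean differences combined by inverse-variance weighting) can be sketched as follows. The three "studies" are invented numbers purely for illustration, not results of the review, and only a fixed-effect model is shown:

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference (Hedges' g) with small-sample correction."""
    sp = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp
    return d * (1 - 3 / (4 * (n_t + n_c) - 9))  # Hedges' correction factor J

def g_variance(g, n_t, n_c):
    """Approximate sampling variance of g."""
    return (n_t + n_c) / (n_t * n_c) + g ** 2 / (2 * (n_t + n_c))

def pool_fixed(effects, variances):
    """Fixed-effect inverse-variance pooling: pooled estimate and standard error."""
    w = [1 / v for v in variances]
    est = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return est, math.sqrt(1 / sum(w))

# Invented EASI-score reductions (TCM arm vs. control) from three hypothetical trials:
# (mean_t, mean_c, sd_t, sd_c, n_t, n_c)
studies = [(12.1, 9.0, 4.0, 4.2, 30, 30),
           (10.5, 8.1, 3.5, 3.9, 25, 26),
           (11.8, 9.9, 4.4, 4.1, 40, 41)]
gs = [hedges_g(*s) for s in studies]
vs = [g_variance(g, s[4], s[5]) for g, s in zip(gs, studies)]
print(pool_fixed(gs, vs))
```

A real synthesis would add a random-effects model and heterogeneity statistics before the subgroup and sensitivity analyses the abstract plans.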

Keywords: eczema, traditional Chinese medicine, EASI, systematic review, meta-analysis

Procedia PDF Downloads 51
1225 Prognosis of Patients with COVID-19 and Hematologic Malignancies

Authors: Elizabeth Behrens, Anne Timmermann, Alexander Yerkan, Joshua Thomas, Deborah Katz, Agne Paner, Melissa Larson, Shivi Jain, Seo-Hyun Kim, Celalettin Ustun, Ankur Varma, Parameswaran Venugopal, Jamile Shammo

Abstract:

Coronavirus Disease-2019 (COVID-19) causes persistent concern for poor outcomes in vulnerable populations. Patients with hematologic malignancies (HM) have been found to have higher COVID-19 case fatality rates compared to those without malignancy. While cytopenias are common in patients with HM, especially in those undergoing chemotherapy, hemoglobin (Hgb) and platelet count have not yet been studied, to our best knowledge, as potential prognostic indicators for patients with HM and COVID-19. The goal of this study is to identify factors that may increase the risk of mortality in patients with HM and COVID-19. In this single-center, retrospective, observational study, 65 patients with HM and laboratory-confirmed COVID-19 were identified between March 2020 and January 2021. Information on demographics, laboratory data on the day of COVID-19 diagnosis, and prognosis was extracted from the electronic medical record (EMR), chart-reviewed, and analyzed using the statistical software SAS version 9.4. Chi-square testing was used for categorical variable analyses. Risk factors associated with mortality were established by logistic regression models. Non-Hodgkin lymphoma (37%), chronic lymphocytic leukemia (20%), and plasma cell dyscrasia (15%) were the most common HM. A higher Hgb level upon COVID-19 diagnosis was related to decreased mortality, odds ratio = 0.704 (95% confidence interval [CI]: 0.511-0.969; P = .0263). Platelet count on the day of COVID-19 diagnosis was lower in patients who ultimately died (mean 127 ± 72 K/uL, n = 10) compared to patients who survived (mean 197 ± 92 K/uL, n = 55) (P = .0258). Female sex was related to decreased mortality, odds ratio = 0.143 (95% CI: 0.026-0.785; P = .0353). There was no mortality difference between patients who were on treatment for HM on the day of COVID-19 diagnosis and those who were not (P = 1.000).
Lower Hgb and male sex are independent risk factors associated with increased mortality of HM patients with COVID-19. Clinicians should be especially attentive to patients with HM and COVID-19 who present with cytopenias. Larger multi-center studies are urgently needed to further investigate the impact of anemia, thrombocytopenia, and demographics on outcomes of patients with hematologic malignancies diagnosed with COVID-19.
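The odds ratios the study reports (e.g., OR = 0.704 per unit of Hgb) come from exponentiating a fitted logistic-regression coefficient. A minimal gradient-descent sketch on invented data (not the study's patients; the learning rate and epoch count are arbitrary choices for this toy set) shows the mechanics:

```python
import math

def fit_logistic(xs, ys, lr=0.01, epochs=20000):
    """Single-predictor logistic regression by gradient descent.
    exp(coefficient) is the odds ratio per one-unit increase in the predictor."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))  # predicted P(death)
            g0 += (p - y) / n
            g1 += (p - y) * x / n
        b0 -= lr * g0
        b1 -= lr * g1
    return b0, b1

# Invented data: hemoglobin (g/dL) at diagnosis vs. mortality (1 = died)
hgb = [7, 8, 8, 9, 10, 11, 12, 13, 13, 14]
died = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
b0, b1 = fit_logistic(hgb, died)
print(f"odds ratio per 1 g/dL increase in Hgb: {math.exp(b1):.2f}")
```

With mortality concentrated at low Hgb, the fitted coefficient is negative, so the odds ratio falls below 1, the same protective direction the abstract reports for hemoglobin.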

Keywords: anemia, COVID-19, hematologic malignancy, prognosis

Procedia PDF Downloads 145
1224 The Role Played by Awareness and Complexity through the Use of a Logistic Regression Analysis

Authors: Yari Vecchio, Margherita Masi, Jorgelina Di Pasquale

Abstract:

Adoption of Precision Agriculture (PA) takes place in a multidimensional and complex scenario. The process of adopting innovations is inherently complex and social, influenced by other producers, change agents, social norms and organizational pressure. Complexity depends on factors that interact and influence the decision to adopt. Farm and operator characteristics, as well as the organizational, informational and agro-ecological context, directly affect adoption. This influence has been studied to measure drivers and to clarify 'bottlenecks' in the adoption of agricultural innovation. The decision-making process is a multistage procedure in which an individual passes from first hearing about the technology to final adoption. Awareness is the initial stage and represents the moment in which an individual learns about the existence of the technology; the 'static' concept of adoption has thus been superseded, and awareness is treated as a precondition for adoption. Treating it as such avoids the erroneous evaluations that arise from carrying out analyses on a population that is only partly aware of the technologies. In support of this, the present study puts forward an empirical analysis among Italian farmers, considering awareness as a prerequisite for adoption. The purpose of the present work is to analyze both the factors that affect the probability of adoption and the determinants that drive an aware individual not to adopt. Data were collected through a questionnaire submitted in November 2017. A preliminary descriptive analysis showed high levels of adoption among younger, better-educated farmers with high information intensity and large, highly labor-intensive farms, whose perception of the complexity of the adoption process is lower. The use of a logit model permits an assessment of the weight played by labor intensity and by the complexity perceived by the potential adopter in the PA adoption process.
All these findings suggest important policy implications: measures dedicated to promoting innovation will need to be more specific for each phase of the adoption process. Specifically, they should increase awareness of PA tools and foster the dissemination of information to reduce the perceived complexity of the adoption process. These implications are particularly important in Europe, where a reform of the Common Agricultural Policy oriented toward innovation has been announced. In this context, they suggest that measures supporting innovation should consider the relationship between the various organizational and structural dimensions of European agriculture and innovation approaches.

Keywords: adoption, awareness, complexity, precision agriculture

Procedia PDF Downloads 136
1223 The Residual Efficacy of Etofenprox WP on Different Surfaces for Malaria Control in the Brazilian Legal Amazon

Authors: Ana Paula S. A. Correa, Allan K. R. Galardo, Luana A. Lima, Talita F. Sobral, Josiane N. Muller, Jessica F. S. Barroso, Nercy V. R. Furtado, Ednaldo C. Rêgo., Jose B. P. Lima

Abstract:

Malaria is a public health problem in the Brazilian Legal Amazon. Among the integrated approaches for anopheline control, Indoor Residual Spraying (IRS) remains one of the main tools in the basic strategy applied in the Amazonian states, where the National Malaria Control Program currently uses one of the insecticides from the pyrethroid class, Etofenprox WP. Understanding the residual efficacy of insecticides on different surfaces is essential to determine the spray cycles, in order to maintain rational use and to avoid product waste. The aim of this study was to evaluate the residual efficacy of Etofenprox (VECTRON® 20 WP) on surfaces of unplastered cement (UC) and unpainted wood (UW) in panel, field, and semi-field evaluations in Brazil's Amapá State. The evaluation criterion used was the cone bioassay test, following the method recommended by the World Health Organization (WHO), using plastic cones and female Anopheles sp. mosquitoes. The tests were carried out on laboratory panels, in a semi-field evaluation in a 'test house' built in the municipality of Macapá, and in the field in 20 houses, ten per surface type (UC and UW), in an endemic malaria area in the municipality of Mazagão. The residual efficacy was measured from March to September 2017, starting one day after spraying and repeated monthly for a period of six months. The UW surface presented higher residual efficacy than the UC surface. In fact, UW presented residual efficacy of the insecticide throughout the study period, with mortality rates above 80%: 95% on the panels, 86% in the 'test house' and 87% in the field houses. On the UC surface a decrease in mortality was observed in all the tests performed, with mortality rates of 45%, 47% and 29% on panels, in semi-field and in field evaluations, respectively; residual efficacy ≥ 80% occurred only in the first evaluation, in the bioassay 24 hours after spraying in the 'test house'.
Thus, only the UW surface meets the specifications of the World Health Organization Pesticide Evaluation Scheme (WHOPES) regarding the duration of effective action (three to six months). In summary, the residual efficacy of the insecticide varied across the different surfaces on which it was sprayed. Although IRS with Etofenprox WP was efficient on UW surfaces and can be used in spraying cycles at 4-month intervals, it is important to consider the diversity of houses in the Brazilian Legal Amazon in order to implement alternatives for vector control, including the evaluation of new products or different insecticide formulations.

Keywords: Anopheles, vector control, insecticide, bioassay

Procedia PDF Downloads 161
1222 The Effects of Computer Game-Based Pedagogy on Graduate Students Statistics Performance

Authors: Clement Yeboah, Eva Laryea

Abstract:

A pretest-posttest within-subjects experimental design was employed to examine the effects of a computerized basic statistics learning game on the achievement and statistics-related anxiety of students enrolled in an introductory graduate statistics course. Participants (N = 34) were graduate students in a variety of programs at a state-funded research university in the Southeastern United States. We analyzed pretest-posttest differences using paired-samples t-tests for achievement and for statistics anxiety. The results of the t-test for knowledge in statistics were statistically significant, indicating significant mean gains in statistical knowledge as a function of the game-based intervention. Likewise, the results of the t-test for statistics-related anxiety were also statistically significant, indicating a decrease in anxiety from pretest to posttest. The implications of the present study are significant for both teachers and students. For teachers, using computer games developed by the researchers can help to create a more dynamic and engaging classroom environment, as well as improve student learning outcomes. For students, playing these educational games can help to develop important skills such as problem solving, critical thinking, and collaboration. Students can develop an interest in the subject matter and spend quality time learning the course material as they play, without even realizing that they are studying a course presumed to be hard. The future directions of the present study are promising as technology continues to advance and become more widely available. Some potential future developments include the integration of virtual and augmented reality into educational games, the use of machine learning and artificial intelligence to create personalized learning experiences, and the development of new and innovative game-based assessment tools.
It is also important to consider the ethical implications of computer game-based pedagogy, such as the potential for games to perpetuate harmful stereotypes and biases. As the field continues to evolve, it will be crucial to address these issues and work towards creating inclusive and equitable learning experiences for all students. This study has the potential to revolutionize the way basic statistics graduate students learn and offers exciting opportunities for future development and research. It is an important area of inquiry for educators, researchers, and policymakers and will continue to be a dynamic and rapidly evolving field for years to come.
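The paired-samples t-test used for the pretest-posttest comparison has a simple closed form over the within-subject differences; the scores below are invented for illustration and are not the study's data:

```python
from statistics import mean, stdev
import math

def paired_t(pre, post):
    """Paired-samples t statistic and degrees of freedom for within-subject gains."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))  # mean gain / standard error
    return t, n - 1

# Invented statistics-knowledge scores before and after playing the game
pre = [10, 12, 11, 13, 9]
post = [14, 15, 13, 16, 12]
t, df = paired_t(pre, post)
print(round(t, 2), df)
```

The resulting t is then compared against the t distribution with n - 1 degrees of freedom to obtain the p-value reported in such analyses.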

Keywords: pretest-posttest within subjects, computer game-based learning, statistics achievement, statistics anxiety

Procedia PDF Downloads 73
1221 Informed Urban Design: Minimizing Urban Heat Island Intensity via Stochastic Optimization

Authors: Luis Guilherme Resende Santos, Ido Nevat, Leslie Norford

Abstract:

The Urban Heat Island (UHI) is characterized by increased air temperatures in urban areas compared to the undeveloped rural surrounding environment. With urbanization and densification, the intensity of the UHI increases, bringing negative impacts on livability, health and the economy. In order to reduce those effects, design factors must be taken into consideration when planning future developments. Given design constraints such as population size and the availability of area for development, non-trivial decisions regarding the buildings' dimensions and their spatial distribution are required. We develop a framework for the optimization of urban design that jointly minimizes UHI intensity and buildings' energy consumption. First, the design constraints are defined according to spatial and population limits in order to establish realistic boundaries that would be applicable to real-life decisions. Second, the tools Urban Weather Generator (UWG) and EnergyPlus are used to generate outputs of UHI intensity and total buildings' energy consumption, respectively. Those outputs change based on a set of variable inputs related to urban morphology, such as building height, urban canyon width and population density. Lastly, an optimization problem is cast in which the utility function quantifies the performance of each design candidate (e.g., minimizing a linear combination of UHI intensity and energy consumption) and a set of constraints to be met is specified. Solving this optimization problem is difficult, since there is no simple analytic form representing the UWG and EnergyPlus models. We therefore cannot use any direct optimization techniques but instead develop an indirect 'black box' optimization algorithm. To this end we develop a solution based on a stochastic optimization method known as the Cross-Entropy method (CEM).
The CEM translates the deterministic optimization problem into an associated stochastic optimization problem which is simple to solve analytically. We illustrate our model on a typical residential area in Singapore. Due to fast growth in population and built area, and the land availability generated by land reclamation, urban planning decisions are of the utmost importance for the country. Furthermore, its hot and humid climate raises concern about the impact of the UHI. The problem presented is highly relevant to early urban design stages, and the objective of such a framework is to guide decision makers and assist them in including and evaluating urban microclimate and energy aspects in the process of urban planning.
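A minimal Cross-Entropy method loop for a black-box objective looks as follows. The quadratic objective is only a stand-in for the UWG/EnergyPlus pipeline, and the "optimal" building height and canyon width it encodes are arbitrary assumptions for the demo:

```python
import random

def cem_minimize(f, mu, sigma, n_samples=100, n_elite=20, iters=40, seed=1):
    """Cross-Entropy method for black-box minimization.
    Samples candidates from an independent Gaussian, keeps the elite fraction,
    and refits the Gaussian to the elites until it concentrates on a minimum."""
    rng = random.Random(seed)
    mu, sigma = list(mu), list(sigma)
    dims = len(mu)
    for _ in range(iters):
        pop = [[rng.gauss(m, s) for m, s in zip(mu, sigma)] for _ in range(n_samples)]
        pop.sort(key=f)                 # only f's ranking is needed: a black box
        elite = pop[:n_elite]
        mu = [sum(e[d] for e in elite) / n_elite for d in range(dims)]
        sigma = [max(1e-6, (sum((e[d] - mu[d]) ** 2 for e in elite) / n_elite) ** 0.5)
                 for d in range(dims)]
    return mu

# Stand-in objective: penalize deviation from a fictitious "best" design of
# 30 m building height and 12 m canyon width (values assumed for the demo).
def objective(x):
    height, width = x
    return (height - 30.0) ** 2 + 2.0 * (width - 12.0) ** 2

best = cem_minimize(objective, mu=[50.0, 50.0], sigma=[20.0, 20.0])
print([round(v, 1) for v in best])
```

In the framework the abstract describes, each call to `objective` would instead run the simulators and return the weighted combination of UHI intensity and energy consumption, which is exactly why a ranking-only method like CEM fits.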

Keywords: building energy consumption, stochastic optimization, urban design, urban heat island, urban weather generator

Procedia PDF Downloads 128
1220 Leptospira Lipl32-Specific Antibodies: Therapeutic Property, Epitopes Characterization and Molecular Mechanisms of Neutralization

Authors: Santi Maneewatchararangsri, Wanpen Chaicumpa, Patcharin Saengjaruk, Urai Chaisri

Abstract:

Leptospirosis is a globally neglected disease that continues to be a significant public health and veterinary burden, with millions of cases reported each year. Early and accurate differential diagnosis of leptospirosis from other febrile illnesses and the development of broad-spectrum leptospirosis vaccines are needed. The LipL32 outer membrane lipoprotein is a member of the Leptospira adhesive matrices and has been found to exert hemolytic activity on erythrocytes in vitro. Therefore, LipL32 is regarded as a potential target for diagnosis, for broad-spectrum leptospirosis vaccines, and for passive immunotherapy. In this study, we established LipL32-specific mouse monoclonal antibodies, mAbLPF1 and mAbLPF2, and their respective mouse-engineered and humanized-engineered single-chain variable fragments (scFv). The antibodies' neutralizing activities against Leptospira-mediated hemolysis in vitro, and the therapeutic efficacy of the mAbs in hamsters infected with heterologous Leptospira, were demonstrated. The epitope peptide of mAbLPF1 was mapped to a non-contiguous carboxy-terminal β-turn and an amphipathic α-helix of the LipL32 structure contributing to phospholipid/host-cell adhesion and membrane insertion. We found that the mAbLPF2 epitope was located on the interacting loop of the peptide-binding groove of the LipL32 molecule responsible for interactions with host constituents. The epitope sequences are highly conserved among Leptospira spp. and are absent from the LipL32 superfamily of other microorganisms. Both epitopes are surface-exposed, readily accessible by mAbs, and immunogenic; however, they are less dominant when probed by LipL32-specific immunoglobulins from leptospirosis-patient sera and rabbit hyperimmune serum raised against whole Leptospira. Our study also demonstrated mAb-mediated inhibition of LipL32 adhesion to host membrane components and cells, as well as the anti-hemolytic activity of the respective antibodies.
The therapeutic antibodies, particularly the humanized scFv, have potential for further development as non-drug therapeutic agents for human leptospirosis, especially in subjects allergic to antibiotics. The epitope peptides recognized by the two therapeutic mAbs have potential use as tools for structure-function studies. Finally, the protective peptides may be used as targets for epitope-based vaccines for the control of leptospirosis.

Keywords: leptospira lipl32-specific antibodies, therapeutic epitopes, epitopes characterization, immunotherapy

Procedia PDF Downloads 295
1219 Exploring the History of Chinese Music Acoustic Technology through Data Fluctuations

Authors: Yang Yang, Lu Xin

Abstract:

The study of extant musical sites can provide a complementary picture of historical ethnomusicological information. In their data collection on Chinese opera houses, researchers found that one Ming Dynasty opera house reached a width of nearly 18 meters, while all opera houses of the same period and later were significantly narrower than 18 meters. This transient historical fluctuation in width, occurring in the absence of construction-scale constraints, piqued the researchers' interest: why does the width data vary, and what factors prevented the width of theatres from expanding further? To address this question, this study used a comparative approach, conducting a venue experiment between this theatre stage and another stage used for intangible-cultural-heritage opera performances, collecting the subjective perceptions of performers and audiences at the different stages, and measuring data such as echo and delay with the BK Connect platform software. From the subjective and objective results, it is inferred that the ancient Chinese, by exploring the effect of stage width on musical performance and on the listening state of the audience, discovered and understood the acoustical phenomenon now known as the Haas effect during the Ming Dynasty and utilized this discovery to serve music in subsequent stage construction. This discovery marks a node in the evolution of Chinese architectural acoustics technology driven by musical demands. It is also instructive to note that, in contrast to many of the world's 'unsuccessful civilizations,' China can use a combination of heritage and intangible-cultural research to chart a clear, demand-driven course for the evolution of human music technology, and the findings of such research will complete the course of human exploration of musical acoustics.
This practical experience can be applied to the exploration and understanding of other musical heritage base data.
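As a back-of-envelope check of why stage width matters acoustically (our illustration, not a computation from the paper): the Haas, or precedence, effect fuses a lagging sound with a leading one only for delays up to roughly 30-40 ms, and the arrival-time difference scales directly with the extra path length. The worst-case listener geometry below is an assumption:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def delay_ms(extra_path_m):
    """Arrival-time difference produced by an extra propagation path."""
    return extra_path_m / SPEED_OF_SOUND * 1000.0

# Worst case: a listener almost in line with the stage front, so sound from the
# far edge travels up to the full stage width further than from the near edge.
print(round(delay_ms(18.0), 1))  # the ~18 m Ming Dynasty outlier
print(round(delay_ms(12.0), 1))  # a narrower stage stays near the fusion limit
```

Within these assumptions, an 18 m width allows worst-case delays well past the fusion window while widths around 12 m sit at its edge, which is consistent with the abstract's inference that later builders stopped short of the 18 m outlier.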

Keywords: Haas effect, musical acoustics, history of acoustical technology, Chinese opera stage, structure

Procedia PDF Downloads 180
1218 Non-Perturbative Vacuum Polarization Effects in One- and Two-Dimensional Supercritical Dirac-Coulomb System

Authors: Andrey Davydov, Konstantin Sveshnikov, Yulia Voronina

Abstract:

There is now considerable interest in the non-perturbative QED effects caused by the diving of discrete levels into the negative continuum in supercritical static or adiabatically slowly varying Coulomb fields created by localized extended sources with Z > Z_cr. Such effects have attracted a considerable amount of theoretical and experimental activity, since in 3+1D QED, for Z > Z_cr,1 ≈ 170, a non-perturbative reconstruction of the vacuum state is predicted, which should be accompanied by a number of nontrivial effects, including vacuum positron emission. Essentially similar effects should also be expected in both 2+1D (planar graphene-based heterostructures) and 1+1D (the one-dimensional 'hydrogen ion'). This report is devoted to the study of such essentially non-perturbative vacuum effects for supercritical Dirac-Coulomb systems in 1+1D and 2+1D, with the main attention drawn to the vacuum polarization energy. Although most works consider the vacuum charge density as the main polarization observable, the vacuum energy turns out to be no less informative and in many respects complementary to the vacuum density. Moreover, the main non-perturbative effects, which appear in vacuum polarization for supercritical fields due to the diving of levels into the lower continuum, show up in the behavior of the vacuum energy even more clearly, demonstrating explicitly their possible role in the supercritical region. Both in 1+1D and 2+1D, we first explore the renormalized vacuum density in the supercritical region using the Wichmann-Kroll method. Thereafter, taking into account the results for the vacuum density, we formulate the renormalization procedure for the vacuum energy. To evaluate the latter explicitly, an original technique, based on a special combination of analytical methods, computer algebra tools and numerical calculations, is applied.
It is shown that, for a wide range of the external source parameters (the charge Z and size R), in the supercritical region the renormalized vacuum energy can deviate significantly from the perturbative quadratic growth, up to a pronouncedly decreasing behavior with jumps of (-2 x mc^2), which occur each time the next discrete level dives into the negative continuum. In the considered range of variation of Z and R, the vacuum energy behaves like ~ -Z^2/R in 1+1 D and ~ -Z^3/R in 2+1 D, reaching deeply negative values. Such behavior confirms the assumption of the transmutation of the neutral vacuum into a charged one, and thereby of the spontaneous positron emission that accompanies the emergence of each new vacuum shell, as required by total charge conservation. Finally, we note that the methods developed for the vacuum energy evaluation in 2+1 D could, with minimal complements, be carried over to the three-dimensional case, where the vacuum energy is expected to be ~ -Z^4/R and so could be competitive with the classical electrostatic energy of the Coulomb source.
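Schematically, the supercritical behavior reported above can be summarized as follows (notation assumed from the abstract; E_vac denotes the renormalized vacuum energy):

```latex
\Delta E_{\mathrm{vac}} = -2\,mc^2 \ \text{per level diving into the lower continuum}, \qquad
E_{\mathrm{vac}} \sim -\frac{Z^2}{R}\ (1{+}1\,\mathrm{D}), \quad
E_{\mathrm{vac}} \sim -\frac{Z^3}{R}\ (2{+}1\,\mathrm{D}), \quad
E_{\mathrm{vac}} \sim -\frac{Z^4}{R}\ (3{+}1\,\mathrm{D},\ \text{expected}).
```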

Keywords: non-perturbative QED-effects, one- and two-dimensional Dirac-Coulomb systems, supercritical fields, vacuum polarization

Procedia PDF Downloads 198
1217 The Relationship between Mobile Phone Usage and Secondary School Students’ Academic Performance: Work Experience at an International School

Authors: L. N. P. Wedikandage, Mohamed Razmi Zahir

Abstract:

Technology is a global imperative because of its contributions to human existence and because it has improved global socioeconomic relations. As a result, the mobile phone has become the most important mode of communication today. Smartphones, Internet-enabled devices with built-in computer software and applications, are one of the most significant inventions of the twenty-first century. Technology is advantageous to many people, especially those involved in education. It is an important learning tool for today's schoolchildren, enabling students to access online learning platforms and course resources and to interact digitally. Senior secondary students, in particular, have some of the most expensive and sophisticated mobile phones, tablets, and iPads, capable of connecting to the Internet, various social media platforms, other websites, and so on. At present, interest in the potential of mobile phones for effective teaching and learning is growing. This is due to the benefits of mobile learning, including the ability to share knowledge without limits of space or time and the capacity to facilitate critical thinking, participatory learning, problem-solving, and the development of lifelong communication skills. However, it is still unclear how mobile devices may affect education and opportunities for learning. The purpose of this research was therefore to ascertain the relationship between mobile phone usage and the academic performance of secondary-level students at an international school in Sri Lanka. The study's sample consisted of 523 secondary-level students from an international school, ranging from Form 1 to Upper 6. A survey research design and questionnaires were used, and the students' survey was created with Google Forms. Three hypotheses were tested to examine the relationship between mobile phone usage and academic performance.
The findings show a positive relationship between mobile phone usage and academic performance among secondary school students (the number of students obtaining simple passes is significantly higher when mobile phones are used for more than 7 hours), no relationship between mobile phone usage and academic performance across students of different parental occupations, and a relationship between the frequency of mobile phone usage and academic performance among secondary school students.
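The abstract does not specify which statistical test was used for the three hypotheses; as an illustration only, a relationship between usage category and pass category in survey data of this kind could be checked with a chi-square test of independence. All counts below are invented, not the study's data:

```python
# Chi-square test of independence on a hypothetical 2x2 contingency table:
# rows = daily phone usage (< 7 h, >= 7 h), columns = result (simple pass, credit or higher).
observed = [[30, 70],
            [55, 45]]  # invented counts for illustration

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Expected counts under independence: E_ij = (row_i * col_j) / N
expected = [[r * c / grand_total for c in col_totals] for r in row_totals]

# Chi-square statistic: sum over all cells of (O - E)^2 / E
chi2 = sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
           for i in range(2) for j in range(2))
print(round(chi2, 2))
```

A statistic this large (compared against the chi-square distribution with 1 degree of freedom) would indicate a significant association between usage and results in the invented table.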

Keywords: mobile phone, academic performance, secondary level, international schools

Procedia PDF Downloads 79
1216 Building Information Modeling-Based Information Exchange to Support Facilities Management Systems

Authors: Sandra T. Matarneh, Mark Danso-Amoako, Salam Al-Bizri, Mark Gaterell

Abstract:

Today’s facilities are ever more sophisticated, and the need for available and reliable information for operation and maintenance activities is vital. The key challenge for facilities managers is to have real-time, accurate, and complete information to perform their day-to-day activities and to provide their senior management with accurate information for the decision-making process. Currently, various technology platforms, data repositories, or database systems, such as Computer-Aided Facility Management (CAFM) systems, are used for these purposes in different facilities. In most current practice, the data is extracted from paper construction documents and re-entered manually into one of these computerized information systems. Construction Operations Building information exchange (COBie) is a non-proprietary data format that contains the non-geometric asset data captured and collected during the design and construction phases for use by owners and facility managers. Recently, software vendors have developed add-in applications to generate the COBie spreadsheet automatically. However, most of these add-in applications can generate only a limited amount of COBie data, so considerable time is still required to enter the remaining data manually to complete the COBie spreadsheet. Some of the data that cannot be generated by these COBie add-ins is essential for facility managers’ day-to-day activities, such as job sheets, which include preventive maintenance schedules. To facilitate seamless data transfer between BIM models and facilities management systems, we developed a framework that automatically transfers data extracted directly from BIM models to an external web database and then enables different stakeholders to access that database and enter the required asset data directly, generating a rich COBie spreadsheet that contains most of the asset data required for efficient facilities management operations.
The proposed framework is part of ongoing research and will be demonstrated and validated on a typical university building. Moreover, it supplements the existing body of knowledge in the facilities management domain by providing a novel approach that facilitates seamless data transfer between BIM models and facilities management systems.
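As an illustration of the kind of export such a framework targets, the following minimal sketch writes asset records, as they might be extracted from a BIM model, to a COBie-style Component sheet in CSV form. The records and the column subset are hypothetical; the full COBie schema contains many more worksheets and fields:

```python
import csv
import io

# Hypothetical asset records, as might be extracted from a BIM model.
components = [
    {"Name": "AHU-01", "TypeName": "Air Handling Unit", "Space": "Plant Room 1",
     "SerialNumber": "SN-1001", "InstallationDate": "2020-05-12"},
    {"Name": "P-03", "TypeName": "Circulation Pump", "Space": "Plant Room 1",
     "SerialNumber": "SN-2045", "InstallationDate": "2020-06-02"},
]

# Illustrative column subset of the COBie 'Component' worksheet.
fieldnames = ["Name", "TypeName", "Space", "SerialNumber", "InstallationDate"]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(components)

csv_text = buffer.getvalue()
print(csv_text.splitlines()[0])  # header row
```

In the framework described above, rows of this kind would be populated partly from the BIM model and partly by stakeholders through the external web database.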

Keywords: building information modeling, BIM, facilities management systems, interoperability, information management

Procedia PDF Downloads 111
1215 The Future of the Architect's Profession in France with the Emergence of Building Information Modelling

Authors: L. Mercier, D. Beladjine, K. Beddiar

Abstract:

The digital transition of the building sector in France brings many changes, which some professionals have been able to face very quickly, while others are struggling to find their place and to see the value BIM can bring to their profession. BIM has already been adopted or initiated by many construction professionals. However, this change, which can be drastic for some, prevents them from integrating it definitively. This is the case for architects: the profession is divided over the use of BIM in its practice. The risk of not adopting this new working method now, and of refusing its new digital tools, leads us to question the future of the profession in view of the gap that is likely to open up within project management. To deal with the subject efficiently, our work was based on a literature review on BIM and then on the profession of architect, which allowed us to establish links between the two subjects. Observing the economic model toward which agencies tend, and the trend in sought-after profiles, made it possible to identify the opportunities and obstacles likely to affect the future of the architect's profession. The research converges on the conclusion that the model currently implemented by firms does not allow BIM to be integrated within their structure. A hypothesis was then formulated, focusing on the development of agencies through a diversity of profiles and skills to be integrated internally, with the aim of diversifying their competencies and business practices. To address this hypothesis of a multidisciplinary agency model, we conducted a survey of architectural firms. The model is built on that of Anglo-Saxon countries, which operate differently from the French model. The results showed a risk of gradual disappearance from the market of small agencies, in favor of those that have adopted, or could adopt, this BIM working method.
This is why the architectural profession must first look at what is happening within its training before seeking to diversify the profiles integrated into its structures. This directs the study toward the training of architects. French architecture schools generally lag behind engineering schools in this respect, although the situation is currently improving slightly with the emergence of master's programs and BIM options during the university course. If the training of architects develops toward learning BIM, and agencies are willing to integrate different but complementary profiles, then they will develop their skills internally and thereby open the profession to new functions. Taking on BIM management roles on projects will allow architects to remain in control of the project thanks to their overall vision of it. In addition, the integration of BIM, and more generally of life-cycle analysis of the structure, will make it possible to guarantee eco-design or eco-construction by addressing the sustainable development constraints that are now omnipresent.

Keywords: building information modelling, BIM, BIM management, BIM manager, BIM architect

Procedia PDF Downloads 110
1214 Crack Growth Life Prediction of a Fighter Aircraft Wing Splice Joint Under Spectrum Loading Using Random Forest Regression and Artificial Neural Networks with Hyperparameter Optimization

Authors: Zafer Yüce, Paşa Yayla, Alev Taşkın

Abstract:

There are many analytical methods for estimating the crack growth life of a component, and soft computing methods are increasingly used to predict fatigue life. Their ability to model complex relationships and to handle huge amounts of data motivates researchers and industry professionals to employ them for challenging problems. This study focuses on soft computing methods, especially random forest regressors and artificial neural networks with hyperparameter optimization algorithms such as grid search and random grid search, to estimate the crack growth life of an aircraft wing splice joint under variable amplitude loading. The TensorFlow and Scikit-learn libraries of Python are used to build the machine learning models for this study. The material considered in this work is 7050-T7451 aluminum, which is commonly preferred as a structural element in the aerospace industry, and regarding the crack type, a corner crack is considered. A finite element model is built for the joint to calculate fastener loads and stresses on the structure. After the finite element model results are validated with analytical calculations, they are fed to the AFGROW software to calculate analytical crack growth lives. Based on the Fighter Aircraft Loading Standard for Fatigue (FALSTAFF), 90 unique fatigue loading spectra are developed for various load levels, and these spectra are then used as inputs to the artificial neural network and random forest regression models for predicting crack growth life. Finally, the crack growth life predictions of the machine learning models are compared with the analytical calculations. According to the findings, a good correlation is observed between the analytical and predicted crack growth lives.
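The random forest branch of the pipeline described above can be sketched with Scikit-learn as follows. The data are synthetic stand-ins (the study's actual spectrum features, hyperparameter grids, and AFGROW-derived targets are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the 90 FALSTAFF-based spectra: each row is a feature
# vector describing one loading spectrum; the target mimics a crack growth life.
X = rng.uniform(0.5, 1.5, size=(90, 5))
y = 1e5 / (X[:, 0] ** 3) + 1e3 * X[:, 1] + rng.normal(0, 50, size=90)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Grid search over random forest hyperparameters (illustrative grid only).
grid = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [None, 5]},
    cv=3,
)
grid.fit(X_train, y_train)

predictions = grid.best_estimator_.predict(X_test)
print(predictions.shape)
```

In the study itself, the predicted lives would then be compared against the analytical AFGROW results rather than a held-out synthetic target.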

Keywords: aircraft, fatigue, joint, life, optimization, prediction

Procedia PDF Downloads 170
1213 Study the Difference Between the Mohr-Coulomb and the Barton-Bandis Joint Constitutive Models: A Case Study from the Iron Open Pit Mine, Canada

Authors: Abbas Kamalibandpey, Alain Beland, Joseph Mukendi Kabuya

Abstract:

Since a rock mass is a discontinuum medium, its behaviour is governed by discontinuities such as faults, joint sets, lithological contacts, and bedding planes. Thus, rock slope stability analysis in jointed rock masses depends largely on the constitutive equations used for discontinuities. This paper studies the difference between the Mohr-Coulomb (MC) and the Barton-Bandis (BB) joint constitutive models for lithological contacts and joint sets. For the rock in these models, the generalized Hoek-Brown criterion has been considered. The joint roughness coefficient (JRC) and the joint wall compressive strength (JCS) are vital parameters in the BB model. The numerical models are applied to rock slope stability analysis in the Mont-Wright (MW) mine, which is owned and operated by ArcelorMittal Mining Canada (AMMC) and is one of the largest iron-ore open pit operations in Canada. One of the high walls of the mine has been selected for slope stability analysis with the RS2D software, using the finite element method. Three piezometers have been installed in this zone to record pore water pressure, and the zone is monitored by radar. In this zone, the AMP-IF and QRMS-IF contacts and very persistent, altered joint sets in IF control the rock slope behaviour. The height of the slope is more than 250 m, and it consists of different lithologies such as AMP, IF, GN, QRMS, and QR. To apply the B-B model, the joint sets and geological contacts were scanned with Maptek, and their JRC was calculated by different methods. The numerical studies reveal that the JRC of the geological contacts (AMP-IF and QRMS-IF) and of the joint sets in IF has a significant influence on the safety factor. After evaluating the results of the rock slope stability analysis and the radar data, the B-B constitutive equation for discontinuities gave results in good agreement with the real conditions in the mine.
It should be noted that the difference in safety factors between the MC and BB joint constitutive models exceeds 30% in some cases.
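The two joint models compared above differ in their shear strength formulations: Mohr-Coulomb uses a constant friction angle and cohesion, while Barton-Bandis makes the effective friction angle depend on JRC, JCS, and the normal stress. A minimal sketch of both peak shear strength criteria, with illustrative parameter values (not the mine's actual joint properties):

```python
import math

def mohr_coulomb_shear(sigma_n, cohesion, phi_deg):
    """Mohr-Coulomb: tau = c + sigma_n * tan(phi)."""
    return cohesion + sigma_n * math.tan(math.radians(phi_deg))

def barton_bandis_shear(sigma_n, jrc, jcs, phi_r_deg):
    """Barton-Bandis: tau = sigma_n * tan(phi_r + JRC * log10(JCS / sigma_n))."""
    return sigma_n * math.tan(
        math.radians(phi_r_deg + jrc * math.log10(jcs / sigma_n))
    )

# Illustrative values only: normal stress 1.0 MPa, JCS 50 MPa.
sigma_n = 1.0
tau_mc = mohr_coulomb_shear(sigma_n, cohesion=0.1, phi_deg=30.0)
tau_bb = barton_bandis_shear(sigma_n, jrc=8.0, jcs=50.0, phi_r_deg=25.0)
print(round(tau_mc, 3), round(tau_bb, 3))
```

Because the Barton-Bandis friction term varies with normal stress, the two criteria diverge along the slope profile, which is consistent with the safety factor differences reported above.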

Keywords: Barton-Bandis criterion, Hoek-Brown and Mohr-Coulomb criteria, open pit, slope stability

Procedia PDF Downloads 99
1212 Computational Study on Traumatic Brain Injury Using Magnetic Resonance Imaging-Based 3D Viscoelastic Model

Authors: Tanu Khanuja, Harikrishnan N. Unni

Abstract:

The head is the most vulnerable part of the human body, and head impacts may cause severe, life-threatening injuries. Since the in vivo brain response cannot be recorded during injury, computational investigation of a head model can be very helpful for understanding the injury mechanism. The majority of physical damage to living tissue is caused by relative motion within the tissue due to tensile and shearing structural failures. The present finite element study focuses on investigating intracranial pressure and stress/strain distributions resulting from impact loads on various sites of the human head. This is performed by developing a 3D model of a human head with major segments such as the cerebrum, cerebellum, brain stem, CSF (cerebrospinal fluid), and skull from patient-specific MRI (magnetic resonance imaging) data. Semi-automatic segmentation of the head is performed using the AMIRA software to extract the finer grooves of the brain. Maintaining high accuracy requires a large number of mesh elements and, consequently, long computational times; therefore, mesh optimization has also been performed using tetrahedral elements. In addition, the model is validated against the experimental literature. Hard tissue such as the skull is modeled as elastic, whereas soft tissue such as the brain is modeled with a viscoelastic Prony series material model. This paper intends to obtain insights into the severity of brain injury by analyzing impacts on the frontal, top, back, and temporal sites of the head. The yield stress (based on the von Mises stress criterion for tissues) and intracranial pressure distributions due to impacts on different sites (frontal, parietal, etc.) are compared, and the extent of damage to cerebral tissues is discussed in detail. The study finds that a back impact is more injurious to the head overall than impacts at the other sites. The present work should help in understanding the injury mechanism of traumatic brain injury more effectively.
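In the Prony series representation mentioned above, the shear relaxation modulus decays from an instantaneous value to a long-term value as a sum of exponentials. A minimal sketch with invented coefficients (not the paper's fitted brain properties):

```python
import math

def prony_shear_modulus(t, g_inf, terms):
    """Prony series relaxation modulus:
    G(t) = G_inf + sum_i G_i * exp(-t / tau_i)."""
    return g_inf + sum(g_i * math.exp(-t / tau_i) for g_i, tau_i in terms)

# Illustrative coefficients (Pa, s) -- invented for demonstration only.
g_inf = 2.0e3
terms = [(5.0e3, 0.01), (3.0e3, 0.1)]

g0 = prony_shear_modulus(0.0, g_inf, terms)       # instantaneous modulus G(0)
g_late = prony_shear_modulus(10.0, g_inf, terms)  # long-time limit, approaches g_inf
print(g0, round(g_late, 1))
```

This stress-relaxation behavior is what allows the brain tissue in the model to respond stiffly to the short impact pulse while relaxing at longer time scales.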

Keywords: dynamic impact analysis, finite element analysis, intracranial pressure, MRI, traumatic brain injury, von Mises stress

Procedia PDF Downloads 158