Search results for: computational complexity theory
582 Internal Financing Constraints and Corporate Investment: Evidence from Indian Manufacturing Firms
Authors: Gaurav Gupta, Jitendra Mahakud
Abstract:
This study focuses on the significance of internal financing constraints in the determination of corporate fixed investments in the case of Indian manufacturing companies. Financially constrained companies, which have fewer internal funds or retained earnings, face higher transaction and borrowing costs due to imperfections in the capital market. The period of study is 1999-2000 to 2013-2014, and we consider 618 manufacturing companies for which continuous data is available throughout the study period. The data is collected from the PROWESS database maintained by the Centre for Monitoring Indian Economy Pvt. Ltd. Panel data methods like the fixed effect and random effect methods are used for the analysis. The Likelihood Ratio test, Lagrange Multiplier test, and Hausman test results conclude the suitability of the fixed effect model for the estimation. The cash flow and liquidity of the company have been used as proxies for internal financial constraints. In accordance with various theories of corporate investment, we consider other firm-specific variables like firm age, firm size, profitability, sales and leverage as the control variables in the model. From the econometric analysis, we find that internal cash flow and liquidity have a significant and positive impact on corporate investments. Variables like cost of capital, sales growth and growth opportunities are found to significantly determine corporate investments in India, which is consistent with the neoclassical, accelerator and Tobin’s q theories of corporate investment. To check the robustness of results, we divided the sample on the basis of cash flow and liquidity. Firms having cash flow greater than zero are put under one group, and firms with cash flow less than zero are put under another group. Also, the firms are divided on the basis of liquidity following the same approach. We find that the results are robust for both types of companies, those having positive and those having negative cash flow and liquidity. The results for other variables are also in the same line as we find for the whole sample. These findings confirm that internal financing constraints play a significant role in the determination of corporate investment in India. The findings of this study have implications for corporate managers to focus on projects having higher expected cash inflows to avoid the financing constraints. Apart from that, they should also maintain adequate liquidity to minimize external financing costs.
Keywords: cash flow, corporate investment, financing constraints, panel data method
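A minimal sketch of the fixed-effects/random-effects comparison and Hausman test described in this abstract, written with Python's linearmodels package; the file name and column labels (firm, year, investment, cash_flow, etc.) are hypothetical placeholders, not the authors' actual variable names, and the Hausman statistic is computed manually:

```python
import numpy as np
import pandas as pd
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

# Hypothetical firm-year panel; column names are illustrative only.
df = pd.read_csv("manufacturing_panel.csv")
df = df.set_index(["firm", "year"])  # entity-time MultiIndex expected by linearmodels

y = df["investment"]
X = df[["cash_flow", "liquidity", "age", "size", "profitability", "sales", "leverage"]]

fe = PanelOLS(y, X, entity_effects=True).fit()   # fixed-effects (within) estimator
re = RandomEffects(y, X).fit()                   # random-effects (GLS) estimator

# Hausman test: H = (b_FE - b_RE)' [V_FE - V_RE]^{-1} (b_FE - b_RE)
common = [c for c in fe.params.index if c in re.params.index]
b = (fe.params[common] - re.params[common]).to_numpy()
V = (fe.cov.loc[common, common] - re.cov.loc[common, common]).to_numpy()
H = float(b @ np.linalg.inv(V) @ b)
p_value = stats.chi2.sf(H, df=len(common))
print(f"Hausman H = {H:.2f}, p = {p_value:.4f}")  # a small p-value favours the fixed-effects model
```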
Procedia PDF Downloads 243
581 Financial Innovations for Companies Offered by Banks: Polish Experience
Authors: Joanna Błach, Anna Doś, Maria Gorczyńska, Monika Wieczorek-Kosmala
Abstract:
Financial innovations can be regarded as the cause and the effect of the evolution of the financial system. Most financial innovations are created by various financial institutions for their own purposes and needs. However, due to their diversity, financial innovations can also be applied by various business entities (other than financial institutions). This paper focuses on the potential application of financial innovations by non-financial companies. It is assumed that financial innovations may be effectively applied in all fields of corporate financial decisions, integrating financial management with the risk management process. Appropriate application of financial innovations may enhance the development of the company and increase its value by improving its financial situation and reducing the level of risk. On the other hand, misused financial innovations may become the source of extra risk for the company, threatening its further operation. The main objective of the paper is to identify the major types of financial innovations offered to non-financial companies by the banking system in Poland. It also aims at identifying the main factors determining the creation of financial innovations in the banking system in Poland and indicating future directions of their development. This paper consists of a conceptual and an empirical part. The conceptual part, based on theoretical study, is focused on the determinants of the process of financial innovation and its application by non-financial companies. The theoretical study is followed by empirical research based on the analysis of the actual offer of the 20 biggest banks operating in Poland with regard to financial innovations offered to SMEs and large corporations. These innovations are classified according to the main functions of integrated financial management, such as: financing, investment, working capital management and risk management. The empirical study has shown that the biggest banks operating in the Polish market offer their business customers many types and classes of financial innovations. This offer appears vast and adequate to the needs and purposes of Polish non-financial companies. It was observed that financial innovations pertaining to financing decisions dominate in the banks’ offer. However, due to the high diversification of the offered financial innovations, business customers may effectively apply them in all fields and areas of integrated financial management. It should be underlined that the banks’ offer is highly dispersed, which may limit the implementation of financial innovations in corporate finance. It would also be recommended for the banks operating in the Polish market to intensify the education campaign aimed at increasing knowledge about financial innovations among business customers.
Keywords: banking products and services, banking sector in Poland, corporate financial management, financial innovations, theory of innovation
Procedia PDF Downloads 303
580 Inherent Difficulties in Countering Islamophobia
Authors: Imbesat Daudi
Abstract:
Islamophobia, which is a billion-dollar industry, is widespread, especially in the United States, Europe, India, Israel, and countries that have Muslim minorities at odds with their governmental policies. Hatred of Islam in the West did not evolve spontaneously; it was methodically created. Islamophobia's current format has been designed to spread on its own, find a space in the Western psyche, and resist its eradication. Hatred has been sustained by neoconservative ideologues and their allies, who are supported by the mainstream media. Social scientists have evaluated how ideas spread, why any idea can go viral, and where new ideas find space in our brains. This was possible because of the advances in the computational power of software and computers. The spreading of ideas, including Islamophobia, follows a sine curve; it has three phases: an initial exploratory phase with a long lag period, an explosive phase if ideas go viral, and a final phase when ideas find space in the human psyche. In the initial phase, the ideas are quickly examined in a center in the prefrontal lobe. When an idea is deemed relevant, it is sent for evaluation to another center of the prefrontal lobe; there, it is critically examined. Once it takes a final shape, the idea is sent as a final product to a center in the occipital lobe. This center cannot critically evaluate ideas; it can only defend them from its critics. Counterarguments, no matter how scientific, are automatically rejected. Therefore, arguments that could be highly effective in the early phases are counterproductive once ideas are stored in the occipital lobe. Anti-Islamophobic intellectuals have done a very good job of countering Islamophobic arguments. However, they have not been as effective as neoconservative ideologues who have promoted anti-Muslim rhetoric that was based on half-truths, misinformation, or outright lies. The failure is partly due to the support pro-war activists receive from the mainstream media, state institutions, mega-corporations engaged in violent conflicts, and think tanks that provide Islamophobic arguments. However, there are also scientific reasons why anti-Islamophobic thinkers have been less effective. There are different dynamics of spreading ideas once they are stored in the occipital lobe. The human brain is incapable of evaluating ideas further once it accepts them as its own; therefore, a different strategy is required to be effective. This paper examines 1) why anti-Islamophobic intellectuals have failed in changing the minds of non-Muslims and 2) the steps for countering hatred. Simply put, a new strategy is needed that can effectively counteract hatred of Islam and Muslims. Islamophobia is a disease that requires strong measures. Fighting hatred is always a challenge, but if we understand why Islamophobia is taking root in the twenty-first century, one can succeed in challenging Islamophobic arguments. That will need a coordinated effort of intellectuals, writers and the media.
Keywords: islamophobia, Islam and violence, anti-islamophobia, demonization of Islam
Procedia PDF Downloads 48
579 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment
Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane
Abstract:
Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in digital investigation are not keeping up with criminal developments. Therefore, criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence is invaluable in identifying crime. It has been observed that an algorithm based on artificial intelligence (AI) is highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which will assist in developing a plausible theory that can be presented as evidence in court. Researchers and other authorities have used the available data as evidence in court to convict a person. This research paper aims at developing a multiagent framework for digital investigations using specific intelligent software agents (ISA). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent are dependent on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The MADIK framework is implemented using the Java Agent Development Framework within Eclipse, with a Postgres repository and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISA and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5 percent of the time was lost, as the File Path Agent prescribed deleting 1,510 files, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
Keywords: artificial intelligence, computer science, criminal investigation, digital forensics
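As an illustration of the kind of task a Hash Set Agent performs, here is a minimal, generic Python sketch (not the authors' MADIK/JADE code, which is Java-based) that hashes files under an evidence mount point and flags matches against a set of known digests; the mount path and hash values are hypothetical:

```python
import hashlib
from pathlib import Path

# Hypothetical set of known SHA-256 digests (e.g., of contraband or known-relevant files).
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large evidence files do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_evidence(root: str):
    """Yield (path, digest, matched) for every regular file under the evidence root."""
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = sha256_of(path)
            yield path, digest, digest in KNOWN_HASHES

if __name__ == "__main__":
    for path, digest, matched in scan_evidence("./lone_wolf_mount"):  # mount point is hypothetical
        if matched:
            print(f"HIT: {path} -> {digest}")
```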
Procedia PDF Downloads 213
578 Molecular Modeling and Prediction of the Physicochemical Properties of Polyols in Aqueous Solution
Authors: Maria Fontenele, Claude-Gilles Dussap, Vincent Dumouilla, Baptiste Boit
Abstract:
Roquette Frères is a producer of plant-based ingredients that employs many processes to extract relevant molecules and often transforms them through chemical and physical processes to create desired ingredients with specific functionalities. In this context, Roquette encounters numerous multi-component complex systems in its processes, including fibers, proteins, and carbohydrates, in an aqueous environment. To develop, control, and optimize both new and old processes, Roquette aims to develop new in silico tools. Currently, Roquette uses process modelling tools which include specific thermodynamic models and is willing to develop computational methodologies such as molecular dynamics simulations to gain insights into the interactions in such complex media, especially hydrogen bonding interactions. The issue at hand concerns aqueous mixtures of polyols with high dry matter content. The polyols mannitol and sorbitol are diastereoisomers that have nearly identical chemical structures but very different physicochemical properties: for example, the solubility of sorbitol in water is 2.5 kg/kg of water, while mannitol has a solubility of 0.25 kg/kg of water at 25°C. Therefore, predicting liquid-solid equilibrium properties in this case requires sophisticated solution models that cannot be based solely on chemical group contributions, knowing that for mannitol and sorbitol, the chemical constitutive groups are the same. Recognizing the significance of solvation phenomena in polyols, the GePEB (Chemical Engineering, Applied Thermodynamics, and Biosystems) team at Institut Pascal has developed the COSMO-UCA model, which has the structural advantage of using quantum mechanics tools to predict formation and phase equilibrium properties. In this work, we use molecular dynamics simulations to elucidate the behavior of polyols in aqueous solution. Specifically, we employ simulations to compute essential metrics such as radial distribution functions and hydrogen bond autocorrelation functions. Our findings illuminate a fundamental contrast: sorbitol and mannitol exhibit disparate hydrogen bond lifetimes within aqueous environments. This observation serves as a cornerstone in elucidating the divergent physicochemical properties inherent to each compound, shedding light on the nuanced interplay between their molecular structures and water interactions. We also present a methodology to predict the physicochemical properties of complex solutions, taking as sole input the three-dimensional structure of the molecules in the medium. Finally, by developing knowledge models, we represent some physicochemical properties of aqueous solutions of sorbitol and mannitol.
Keywords: COSMO models, hydrogen bond, molecular dynamics, thermodynamics
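A minimal numpy sketch of the hydrogen-bond autocorrelation function mentioned above, assuming the MD trajectory has already been reduced to a boolean matrix h[t, pair] recording whether each donor-acceptor pair is hydrogen-bonded at frame t; the array, its dimensions, and the lifetime criterion are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

def hbond_autocorrelation(h: np.ndarray, max_lag: int) -> np.ndarray:
    """
    Intermittent H-bond autocorrelation C(tau) = <h(0) h(tau)> / <h>,
    averaged over all time origins and all donor-acceptor pairs.
    h: boolean array of shape (n_frames, n_pairs).
    """
    h = h.astype(float)
    n_frames = h.shape[0]
    mean_h = h.mean()
    c = np.empty(max_lag)
    for tau in range(max_lag):
        # products h(t) * h(t + tau), averaged over origins t and over pairs
        c[tau] = (h[: n_frames - tau] * h[tau:]).mean() / mean_h
    return c

# Example with synthetic data: 5000 frames, 200 pairs, ~30% bonded on average.
rng = np.random.default_rng(0)
h = rng.random((5000, 200)) < 0.3
c_t = hbond_autocorrelation(h, max_lag=500)
# Crude lifetime estimate: first lag at which C(tau) drops below 1/e.
lifetime_frames = int(np.argmax(c_t < np.exp(-1.0)))
print(f"C(0) = {c_t[0]:.2f}, approx lifetime = {lifetime_frames} frames")
```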
Procedia PDF Downloads 44
577 Global Modeling of Drill String Dragging and Buckling in 3D Curvilinear Bore-Holes
Authors: Valery Gulyayev, Sergey Glazunov, Elena Andrusenko, Nataliya Shlyun
Abstract:
Enhancement of the technology and techniques for drilling deep directed oil and gas bore-wells is of essential industrial significance because these wells make it possible to increase productivity and output. Generally, they are used for drilling in hard and shale formations, which is why their drivage processes are prone to emergency and failure effects. As is corroborated by practice, the principal drilling drawback occurring in the drivage of long curvilinear bore-wells is conditioned by the need to overcome essential force hindrances caused by the simultaneous action of gravity, contact and friction forces. Primarily, these forces depend on the type of technological regime, the drill string stiffness, the bore-hole tortuosity and its length. They can lead to Eulerian buckling of the drill string and its sticking. To predict and exclude these states, special mathematical models and methods of computer simulation should play a dominant role. At the same time, one might note that these mechanical phenomena are very complex and only simplified approaches (‘soft string drag and torque models’) are used for their analysis. Taking into consideration that the cost of directed wells now increases essentially with the complication of their geometry and the enlargement of their lengths, it can be concluded that the price of mistakes in simulating drill string behavior through the use of simplified approaches can be very high, and so the problem of correct software elaboration is very urgent. This paper deals with the problem of simulating the regimes of drilling deep curvilinear bore-wells with prescribed imperfect geometrical trajectories of their axial lines. On the basis of the theory of curvilinear flexible elastic rods, methods of differential geometry, and numerical analysis methods, the 3D ‘stiff-string drag and torque model’ of drill string bending and the appropriate software are elaborated for the simulation of tripping in and out regimes and drilling operations. It is shown by the computer calculations that the contact and friction forces can be calculated and regulated, providing predesigned trouble-free modes of operation. The elaborated mathematical models and software can be used for the prognostication of emergency situations and their exclusion at the stages of drilling process design and realization.
Keywords: curvilinear drilling, drill string tripping in and out, contact forces, resistance forces
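For contrast with the 3D stiff-string model proposed here, the sketch below illustrates the kind of highly simplified planar 'soft-string' drag calculation the abstract refers to, integrating axial tension from the bit to the surface along a 2D survey; the geometry, unit weight, and friction coefficient are hypothetical example values, and azimuth changes and bending stiffness are deliberately ignored:

```python
import numpy as np

def soft_string_drag(md, inclination_deg, w, mu, direction="tripping_out"):
    """
    Simplified planar soft-string drag model.
    md: measured depth of survey stations [m], increasing downward.
    inclination_deg: inclination at each station [deg].
    w: buoyed weight per unit length of the string [N/m].
    mu: Coulomb friction coefficient [-].
    Returns axial tension at each station, integrated from the bottom (T = 0) upward.
    """
    inc = np.radians(inclination_deg)
    sign = +1.0 if direction == "tripping_out" else -1.0   # friction adds or subtracts
    tension = np.zeros(len(md))
    for i in range(len(md) - 2, -1, -1):                   # bottom -> top
        dL = md[i + 1] - md[i]
        theta = 0.5 * (inc[i] + inc[i + 1])                # mean inclination of the element
        dtheta = inc[i + 1] - inc[i]
        # Side (normal) force: weight component plus tension acting through the curvature.
        normal = abs(tension[i + 1] * dtheta + w * dL * np.sin(theta))
        tension[i] = tension[i + 1] + w * dL * np.cos(theta) + sign * mu * normal
    return tension

# Hypothetical build-and-hold well: vertical to 500 m, building to 60 deg by 1500 m, hold to 3000 m.
md = np.linspace(0.0, 3000.0, 301)
inc = np.interp(md, [0.0, 500.0, 1500.0, 3000.0], [0.0, 0.0, 60.0, 60.0])
hook_load = soft_string_drag(md, inc, w=300.0, mu=0.3)[0]
print(f"Approximate hook load while tripping out: {hook_load / 1000:.0f} kN")
```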
Procedia PDF Downloads 147
576 Optimization for Autonomous Robotic Construction by Visual Guidance through Machine Learning
Authors: Yangzhi Li
Abstract:
Network transfer of information and performance customization is now a viable method of digital industrial production in the era of Industry 4.0. Robot platforms and network platforms have grown more important in digital design and construction. The pressing need for novel building techniques is driven by the growing labor scarcity problem and increased awareness of construction safety. Robotic approaches in construction research are regarded as an extension of operational and production tools. Several technological theories related to robot autonomous recognition, which include high-performance computing, physical system modeling, extensive sensor coordination, and dataset deep learning, have not been fully explored in intelligent construction. Relevant transdisciplinary theory and practice research still has specific gaps. Optimizing high-performance computing and autonomous recognition visual guidance technologies improves the robot's grasp of the scene and capacity for autonomous operation. Intelligent vision guidance technology for industrial robots has a serious issue with camera calibration, and the use of intelligent visual guidance and identification technologies for industrial robots in industrial production has strict accuracy requirements. It can be considered that visual recognition systems face challenges with precision. In such a situation, this directly impacts the effectiveness and standard of industrial production, necessitating a strengthening of the study of positioning precision in visual guidance and recognition technology. To best facilitate the handling of complicated components, an approach for the visual recognition of parts utilizing machine learning algorithms is proposed. This study will identify the position of target components by detecting the information at the boundary and corners of a dense point cloud and determining the aspect ratio in accordance with the guidelines for the modularization of building components. To collect and use components, operational processing systems assign them to the same coordinate system based on their locations and postures. The RGB image's inclination detection and the depth image's verification will be used to determine the component's present posture. Finally, a virtual environment model for the robot's obstacle-avoidance route will be constructed using the point cloud information.
Keywords: robotic construction, robotic assembly, visual guidance, machine learning
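A minimal numpy sketch of one step described above: estimating a part's principal axes, oriented bounding-box extents, and aspect ratio from a dense point cloud via PCA; the synthetic point cloud and the modularization threshold are hypothetical placeholders, not the study's actual data or rules:

```python
import numpy as np

def oriented_bbox_aspect_ratio(points: np.ndarray):
    """
    points: (N, 3) array of XYZ coordinates from a depth sensor / point cloud.
    Returns the box extents along the principal axes and the aspect ratio
    of the two largest extents.
    """
    centered = points - points.mean(axis=0)
    # Principal axes from PCA (via SVD of the centered coordinates).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    aligned = centered @ vt.T                       # rotate into the principal frame
    extents = aligned.max(axis=0) - aligned.min(axis=0)
    extents = np.sort(extents)[::-1]                # longest first
    return extents, extents[0] / extents[1]

# Synthetic "plank" component: 1.2 m x 0.3 m x 0.05 m with sensor noise.
rng = np.random.default_rng(1)
cloud = rng.uniform([0, 0, 0], [1.2, 0.3, 0.05], size=(20000, 3))
cloud += rng.normal(scale=0.002, size=cloud.shape)

extents, ratio = oriented_bbox_aspect_ratio(cloud)
print(f"extents = {np.round(extents, 3)}, aspect ratio = {ratio:.2f}")
if ratio > 3.0:   # hypothetical modularization rule: treat long slender parts as linear members
    print("classified as a linear (beam-like) component")
```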
Procedia PDF Downloads 87
575 Computational Investigation on Structural and Functional Impact of Oncogenes and Tumor Suppressor Genes on Cancer
Authors: Abdoulie K. Ceesay
Abstract:
Within the sequence of the whole genome, it is known that 99.9% of the human genome is identical across individuals, whilst our difference lies in just 0.1%. Among these minor dissimilarities, the most common type of genetic variation that occurs in a population is the SNP, which arises due to nucleotide substitution in a gene sequence and can lead to protein destabilization, alteration in dynamics, and distortion of other physio-chemical properties. While causing variations, SNPs are equally responsible for our differences in the way we respond to a treatment or a disease, including various cancer types. There are two types of SNPs: synonymous single nucleotide polymorphisms (sSNPs) and non-synonymous single nucleotide polymorphisms (nsSNPs). sSNPs occur in the gene coding region without causing a change in the encoded amino acid, while nsSNPs are deleterious because they replace a nucleotide residue in the gene sequence, resulting in a change in the encoded amino acid. Predicting the effects of cancer-related nsSNPs on protein stability, function, and dynamics is important due to the significance of the phenotype-genotype association of cancer. In this study, data on 5 oncogenes (ONGs) (AKT1, ALK, ERBB2, KRAS, BRAF) and 5 tumor suppressor genes (TSGs) (ESR1, CASP8, TET2, PALB2, PTEN) were retrieved from ClinVar. Five common in silico tools, Polyphen, Provean, Mutation Assessor, Suspect, and FATHMM, were used to predict and categorize nsSNPs as deleterious, benign, or neutral. To understand the impact of each variation on the phenotype, the Maestro, PremPS, Cupsat, and mCSM-NA in silico structural prediction tools were used. This study comprises an in-depth analysis of the variants of 10 cancer genes downloaded from ClinVar. Various analyses of the genes were conducted to derive a meaningful conclusion from the data. Our research indicates that pathogenic and destabilizing variants are more common among ONGs than TSGs. Moreover, our data indicate that ALK (409) and BRAF (86) have higher benign counts among ONGs, whilst among TSGs, the PALB2 (1308) and PTEN (318) genes have higher benign counts. Looking at the individual cancer genes' predisposition or frequencies of causing cancer according to our research data, KRAS (76%), BRAF (55%), and ERBB2 (36%) among ONGs, and PTEN (29%) and ESR1 (17%) among TSGs, have higher tendencies of causing cancer. The obtained results can shed light on future research in order to pave new frontiers in cancer therapies.
Keywords: tumor suppressor genes (TSGs), oncogenes (ONGs), non synonymous single nucleotide polymorphism (nsSNP), single nucleotide polymorphism (SNP)
Procedia PDF Downloads 86
574 Measuring the Economic Impact of Cultural Heritage: Comparative Analysis of the Multiplier Approach and the Value Chain Approach
Authors: Nina Ponikvar, Katja Zajc Kejžar
Abstract:
While the positive impacts of heritage on a broad societal spectrum have long been recognized and measured, the economic effects of the heritage sector are often less visible and frequently underestimated. At the macro level, economic effects are usually studied based on one of two mainstream approaches, i.e. either the multiplier approach or the value chain approach. Consequently, there is limited comparability of the empirical results due to the use of different methodological approaches in the literature. Furthermore, it is also often not clear on the basis of which criteria the approach used was selected. Our aim is to bring attention to the difference in the scope of effects that are encompassed by the two most frequent methodological approaches to the valuation of economic effects of cultural heritage at the macroeconomic level, i.e. the multiplier approach and the value chain approach. We show that while the multiplier approach provides a systematic, theory-based view of economic impacts, it requires more data and analysis, whereas the value chain approach has less solid theoretical foundations and depends on the availability of appropriate data to identify the contribution of cultural heritage to other sectors. We conclude that the multiplier approach underestimates the economic impact of cultural heritage, mainly due to the narrow definition of cultural heritage in the statistical classification and the inability to identify the part of the contribution of cultural heritage that is hidden in other sectors. Yet it is not possible to clearly determine whether the value chain method overestimates or underestimates the actual economic impact of cultural heritage, since there is a risk that the direct effects are overestimated and double counted, while not all indirect and induced effects are considered. Accordingly, these two approaches are not substitutes but rather complementary. Consequently, a direct comparison of the estimated impacts is not possible and should not be done due to the different scope. To illustrate the difference in the impact assessment of cultural heritage, we apply both approaches to the case of Slovenia in the 2015-2022 period and measure the economic impact of the cultural heritage sector in terms of turnover, gross value added and employment. The empirical results clearly show that the estimation of the economic impact of a sector using the multiplier approach is more conservative, while the estimates based on value added capture a much broader range of impacts. According to the multiplier approach, each euro in the cultural heritage sector generates an additional 0.14 euros in indirect effects and an additional 0.44 euros in induced effects. Based on the value-added approach, the indirect economic effect of the “narrow” heritage sectors is amplified by the impact of cultural heritage activities on other sectors. Accordingly, every euro of sales and every euro of gross value added in the cultural heritage sector generates approximately 6 euros of sales and 4 to 5 euros of value added in other sectors. In addition, each employee in the cultural heritage sector is linked to 4 to 5 jobs in other sectors.
Keywords: economic value of cultural heritage, multiplier approach, value chain approach, indirect effects, Slovenia
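A short worked computation of the multipliers implied by the figures reported above (0.14 euros of indirect and 0.44 euros of induced effects per euro of direct effect); the Type I/Type II labels are standard input-output terminology rather than the authors' wording:

```python
direct = 1.00          # each euro of direct output in the cultural heritage sector
indirect = 0.14        # additional output generated in supplying sectors
induced = 0.44         # additional output generated through household spending

type_i_multiplier = (direct + indirect) / direct             # 1.14
type_ii_multiplier = (direct + indirect + induced) / direct  # 1.58

print(f"Type I multiplier:  {type_i_multiplier:.2f}")
print(f"Type II multiplier: {type_ii_multiplier:.2f}")

# By contrast, the value chain figures quoted above associate roughly 6 euros of sales
# in other sectors with every euro of sales in the heritage sector.
value_chain_sales_linkage = 6.0
print(f"Sales linked per heritage euro (value chain view): ~{value_chain_sales_linkage:.0f}")
```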
Procedia PDF Downloads 78
573 Department of Social Development/Japan International Cooperation Agency's Journey from South African Community to Southern African Region
Authors: Daisuke Sagiya, Ren Kamioka
Abstract:
South Africa ratified the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD) on 30 November 2007. In line with this, the Department of Social Development (DSD) revised the White Paper on the Rights of Persons with Disabilities (WPRPD), and the Cabinet approved it on 9 December 2015. The South African government is striving towards the elimination of poverty and inequality in line with the UNCRPD and WPRPD. However, there are minimal programmes and services that have been provided to persons with disabilities in rural communities. In order to address current discriminative practices, disunity and limited self-representation in rural communities, DSD in cooperation with the Japan International Cooperation Agency (JICA) is implementing the 'Project for the Promotion of Empowerment of Persons with Disabilities and Disability Mainstreaming' from May 2016 to May 2020. The project targets rural communities as the project sites, namely 1) Collins Chabane municipality, Vhembe district, Limpopo and 2) Maluti-a-Phofung municipality, Thabo Mofutsanyana district, Free State. The project aims at developing good practices on Community-Based Inclusive Development (CBID) at the project sites, which will be documented as a guideline and applied in other provinces in South Africa and neighbouring countries (Lesotho, Swaziland, Botswana, Namibia, Zimbabwe, and Mozambique). In cooperation with provincial and district DSD and local government, the project is currently implementing various community activities, for example the establishment of Self-Help Groups (SHG) of persons with disabilities and Peer Counselling in the villages, and it will conduct Disability Equality Training (DET) and accessibility workshops in order to enhance CBID at the project sites. In order to universalise good practices on CBID, the authors explain lessons learned from the project by utilising theories of disability and development studies and community psychology, such as the social model of disability, the twin-track approach, empowerment theory, sense of community, the helper therapy principle, etc. The authors conclude that in order to realise the social participation of persons with disabilities in rural communities, CBID is a strong tool and persons with disabilities must play central roles in all spheres of CBID activities.
Keywords: community-based inclusive development, disability mainstreaming, empowerment of persons with disabilities, self-help group
Procedia PDF Downloads 242
572 From Vegetarian to Cannibal: A Literary Analysis of a Journey of Innocence in ‘Life of Pi’
Authors: Visvaganthie Moodley
Abstract:
Language use and aesthetic appreciation are integral to meaning-making in prose, as they are in poetry. However, in comparison to poetic analysis, a literary analysis of prose that focuses on linguistics and stylistics is somewhat scarce as it generally requires the study of lengthy texts. Nevertheless, the effect of linguistic and stylistic features in prose as conscious design by authors for creating specific effects and conveying preconceived messages is drawing increasing attention of linguists and literary experts. A close examination of language use in prose can, among a host of literary purposes, convey emotive and cognitive values and contribute to making interpretations about how fictional characters are represented to the imaginative reader. This paper provides a literary analysis of Yann Martel’s narrative of a 14-year-old Indian boy, Pi, who had survived the wreck of a Japanese cargo ship, by focusing on his 227-day journey of tribulations, along with a Bengal tiger, on a lifeboat. The study favours a pluralistic approach blending literary criticism, linguistic analysis and stylistic description. It adopts Leech and Short’s (2007) broad framework of linguistic and stylistic categories (lexical categories, grammatical categories, figures of speech etc. [sic] and context and cohesion) as well as a range of other relevant linguistic phenomena to show how the narrator, Pi, and the author influence the reader’s interpretations of Pi’s character. Such interpretations are made using the lens of Freud’s psychoanalytical theory (which focuses on the interplay of the instinctual id, the ego and the moralistic superego) and Blake’s philosophy of innocence and experience (the two contrary states of the human soul). The paper traces Pi’s transformation from animal-loving, God-fearing vegetarian to brutal animal slayer and cannibal in his journey of survival. By a close examination of the linguistic and stylistic features of the narrative, it argues that, despite evidence of butchery and cannibalism, Pi’s gruesome behaviour is motivated by extreme physiological and psychological duress and not intentional malice. Finally, the paper concludes that the voice of the narrator, Pi, and that of the author, Martel, act as powerful persuasive agents in influencing the reader to respond with a sincere flow of sympathy for Pi and judge him as having retained his innocence in his instinctual need for survival.
Keywords: foregrounding, innocence and experience, lexis, literary analysis, psychoanalytical lens, style
Procedia PDF Downloads 170
571 Slope Stability and Landslides Hazard Analysis, Limitations of Existing Approaches, and a New Direction
Authors: Alisawi Alaa T., Collins P. E. F.
Abstract:
The analysis and evaluation of slope stability and landslide hazards are critically important in civil engineering projects and broader considerations of safety. The level of slope stability risk should be identified due to its significant and direct financial and safety effects. Slope stability hazard analysis is performed considering static and/or dynamic loading circumstances. To reduce and/or prevent the failure hazard caused by landslides, a sophisticated and practical hazard analysis method using advanced constitutive modeling should be developed and linked to an effective solution that corresponds to the specific type of slope stability and landslide failure risk. Previous studies on slope stability analysis methods identify the failure mechanism and its corresponding solution. The commonly used approaches include limit equilibrium methods, empirical approaches for rock slopes (e.g., slope mass rating and Q-slope), finite element or finite difference methods, and distinct element codes. This study presents an overview and evaluation of these analysis techniques. Contemporary source materials are used to examine these various methods on the basis of hypotheses, factor of safety estimation, soil types, load conditions, and analysis conditions and limitations. Limit equilibrium methods play a key role in assessing the level of slope stability hazard. The slope stability safety level can be defined by identifying the equilibrium of the shear stress and shear strength. The slope is considered stable when the movement resistance forces are greater than those that drive the movement, with a factor of safety (the ratio of the resisting to the driving forces) that is greater than 1.00. However, popular and practical methods, including limit equilibrium approaches, are not effective when the slope experiences complex failure mechanisms, such as progressive failure, liquefaction, internal deformation, or creep. The present study represents the first episode of an ongoing project that involves the identification of the types of landslide hazards, assessment of the level of slope stability hazard, development of a sophisticated and practical hazard analysis method, linkage of the failure type of specific landslide conditions to the appropriate solution, and application of an advanced computational method for mapping slope stability properties in the United Kingdom, and elsewhere, through geographical information system (GIS) and inverse distance weighted spatial interpolation (IDW) techniques. This study investigates and assesses the different analysis and solution techniques to enhance knowledge of the mechanism of slope stability and landslide hazard analysis and to determine the available solutions for each potential landslide failure risk.
Keywords: slope stability, finite element analysis, hazard analysis, landslides hazard
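As a concrete illustration of the factor-of-safety definition used above (resisting over driving forces), here is a minimal Python sketch of the classical infinite-slope limit equilibrium check; the soil parameters are hypothetical example values, not data from the study:

```python
import numpy as np

def infinite_slope_fos(c_eff, phi_eff_deg, gamma, z, beta_deg, pore_pressure=0.0):
    """
    Factor of safety of an infinite slope (limit equilibrium).
    c_eff: effective cohesion [kPa]; phi_eff_deg: effective friction angle [deg]
    gamma: unit weight [kN/m^3]; z: depth of slip surface [m]
    beta_deg: slope angle [deg]; pore_pressure: u on the slip surface [kPa]
    """
    beta = np.radians(beta_deg)
    phi = np.radians(phi_eff_deg)
    driving = gamma * z * np.sin(beta) * np.cos(beta)                                  # shear stress
    resisting = c_eff + (gamma * z * np.cos(beta) ** 2 - pore_pressure) * np.tan(phi)  # shear strength
    return resisting / driving

# Hypothetical dry and wet cases for a 30-degree slope with a 2 m deep slip surface.
print(f"dry : FoS = {infinite_slope_fos(5.0, 32.0, 19.0, 2.0, 30.0):.2f}")
print(f"wet : FoS = {infinite_slope_fos(5.0, 32.0, 19.0, 2.0, 30.0, pore_pressure=9.81):.2f}")
```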
Procedia PDF Downloads 101
570 Sequential Mixed Methods Study to Examine the Potentiality of Blackboard-Based Collaborative Writing as a Solution Tool for Saudi Undergraduate EFL Students’ Writing Difficulties
Authors: Norah Alosayl
Abstract:
English is considered the most important foreign language in the Kingdom of Saudi Arabia (KSA) because of the usefulness of English as a global language compared to Arabic. As students’ desire to improve their English language skills has grown, English writing has been identified as the most difficult problem for Saudi students in their language learning. Although English in Saudi Arabia is taught beginning in the seventh grade, many students have problems at the university level, especially in writing, due to a gap between what is taught in secondary and high schools and university expectations: pupils generally study English at school based on one book with few vocabulary and grammar exercises, and there are no specific writing lessons. Moreover, from personal teaching experience at King Saud bin Abdulaziz University, students face real problems with their writing. This paper revolves around Blackboard-based collaborative writing to help first-year undergraduate Saudi EFL students, enrolled in two sections of ENGL 101 in the first semester of 2021 at King Saud bin Abdulaziz University, practise in small groups the skill they find most difficult in their writing. Therefore, a sequential mixed methods design is well suited. The first phase of the study aims to highlight the most difficult skill experienced by students, identified from an official writing exam that is evaluated by their teachers using the official rubric of King Saud bin Abdulaziz University. In the second phase, this study intends to investigate the benefits of social interaction in the process of learning writing. Students will be provided with five collaborative writing tasks via the discussion feature on Blackboard to practice the skill that they found difficult in writing. The tasks will be designed based on social constructivist theory and pedagogic frameworks. The interaction will take place between peers and their teachers. The frequency of students’ participation and the quality of their interaction will be observed through manual counting and screenshotting. This will help the researcher understand how actively students work on the task through the amount of their participation and will also distinguish the type of interaction (on task, about task, or off task). Semi-structured interviews will be conducted with students to understand their perceptions of the Blackboard-based collaborative writing tasks, and questionnaires will be distributed to identify students’ attitudes toward the tasks.
Keywords: writing difficulties, blackboard-based collaborative writing, process of learning writing, interaction, participations
Procedia PDF Downloads 193
569 Design and Implementation of Generative Models for Odor Classification Using Electronic Nose
Authors: Kumar Shashvat, Amol P. Bhondekar
Abstract:
Among the five senses, odor is the most evocative and the least understood. Odor testing has been mysterious, and odor data has seemed fabled to most practitioners. The problem of odor recognition and classification is important to address. The ability to smell and predict whether an artifact is still fit for use or has become undesirable for consumption, and the casting of this problem into a model, is of interest. The general industrial standard for this classification is color-based; however, odor can be a better basis for classification than color, and if incorporated into a machine it will be highly constructive. For cataloguing the odor of peas, trees and cashews, various discriminative approaches have been used. Discriminative approaches offer good predictive performance and have been widely used in many applications but are unable to make effective use of unlabeled information. In such scenarios, generative approaches have better applicability, as they are able to handle problems such as set-ups where the variability in the range of possible input vectors is enormous. Generative models are integrated into machine learning either for modeling data directly or as an intermediate step in forming a probability density function. The algorithms Linear Discriminant Analysis and the Naive Bayes classifier have been used for classification of the odor of cashews. Linear Discriminant Analysis is a method used in data classification, pattern recognition, and machine learning to discover a linear combination of features that characterizes or separates two or more classes of objects or events. The Naive Bayes algorithm is a classification approach based on Bayes' rule and a set of conditional independence assumptions. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. The main advantage of using generative models is that they make stronger assumptions about the data, specifically about the distribution of predictors given the response variables. The electronic instrument used for artificial odor sensing and classification is the electronic nose. This device is designed to imitate the human sense of smell by providing an analysis of individual chemicals or chemical mixtures. The experimental results have been evaluated in terms of the performance measures accuracy, precision and recall. The results show that the overall performance of Linear Discriminant Analysis was better than that of the Naive Bayes classifier on the cashew dataset.
Keywords: odor classification, generative models, naive bayes, linear discriminant analysis
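A minimal scikit-learn sketch of the comparison described above, training Linear Discriminant Analysis and a Gaussian Naive Bayes classifier on electronic-nose readings and reporting accuracy, precision, and recall; the synthetic feature matrix is a stand-in, since the authors' cashew dataset is not reproduced here:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Synthetic stand-in for e-nose data: 600 samples, 8 sensor channels, 3 odor classes.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(loc=mu, scale=1.0, size=(200, 8)) for mu in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

for name, model in [("LDA", LinearDiscriminantAnalysis()), ("Naive Bayes", GaussianNB())]:
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(
        f"{name:12s} accuracy={accuracy_score(y_test, pred):.3f} "
        f"precision={precision_score(y_test, pred, average='macro'):.3f} "
        f"recall={recall_score(y_test, pred, average='macro'):.3f}"
    )
```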
Procedia PDF Downloads 389
568 Using Arellano-Bover/Blundell-Bond Estimator in Dynamic Panel Data Analysis – Case of Finnish Housing Price Dynamics
Authors: Janne Engblom, Elias Oikarinen
Abstract:
A panel dataset is one that follows a given sample of individuals over time, and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models which form a wide range of linear models. A special class of panel data models is dynamic in nature. A complication regarding a dynamic panel data model that includes the lagged dependent variable is the endogeneity bias of estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Arellano-Bover/Blundell-Bond Generalized Method of Moments (GMM) estimator, which is an extension of the Arellano-Bond model in which past values and different transformations of past values of the potentially problematic independent variable are used as instruments together with other instrumental variables. The Arellano–Bover/Blundell–Bond estimator augments Arellano–Bond by making an additional assumption that first differences of instrument variables are uncorrelated with the fixed effects. This allows the introduction of more instruments and can dramatically improve efficiency. It builds a system of two equations, the original equation and the transformed one, and is also known as system GMM. In this study, Finnish housing price dynamics were examined empirically by using the Arellano–Bover/Blundell–Bond estimation technique together with ordinary OLS. The aim of the analysis was to provide a comparison between conventional fixed-effects panel data models and dynamic panel data models. The Arellano–Bover/Blundell–Bond estimator is suitable for this analysis for a number of reasons: it is a general estimator designed for situations with 1) a linear functional relationship; 2) one left-hand-side variable that is dynamic, depending on its own past realizations; 3) independent variables that are not strictly exogenous, meaning they are correlated with past and possibly current realizations of the error; 4) fixed individual effects; and 5) heteroskedasticity and autocorrelation within individuals but not across them. Based on data for 14 Finnish cities over 1988-2012, differences in short-run housing price dynamics estimates were considerable when different models and instruments were used. In particular, the use of different instrumental variables caused variation in the model estimates and their statistical significance. This was particularly clear when comparing estimates of OLS with different dynamic panel data models. Estimates provided by dynamic panel data models were more in line with the theory of housing price dynamics.
Keywords: dynamic model, fixed effects, panel data, price dynamics
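For reference, a compact statement of the dynamic panel model and the moment conditions behind the estimator discussed above, in standard Arellano-Bond / Blundell-Bond notation (the symbols are generic, not the authors' housing-price variable names):

```latex
\begin{aligned}
& y_{it} = \alpha\, y_{i,t-1} + \mathbf{x}_{it}'\boldsymbol{\beta} + \eta_i + \varepsilon_{it}, \\
& \text{Arellano--Bond (difference) moments:} \quad
\mathbb{E}\!\left[\, y_{i,t-s}\,\Delta\varepsilon_{it} \,\right] = 0, \qquad s \ge 2, \\
& \text{Blundell--Bond (level) moments:} \quad
\mathbb{E}\!\left[\, \Delta y_{i,t-1}\,(\eta_i + \varepsilon_{it}) \,\right] = 0 .
\end{aligned}
```

The second set of moments is what the extra assumption (first differences uncorrelated with the fixed effects) buys: lagged differences become valid instruments for the levels equation, which is why the system of the two equations is estimated jointly.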
Procedia PDF Downloads 1510
567 Coulomb-Explosion Driven Proton Focusing in an Arched CH Target
Authors: W. Q. Wang, Y. Yin, D. B. Zou, T. P. Yu, J. M. Ouyang, F. Q. Shao
Abstract:
The high-energy-density state, i.e., matter and radiation at energy densities in excess of 10^11 J/m^3, is relevant to materials science, nuclear physics, astrophysics, and geophysics. Laser-driven particle beams are better suited to heat the matter as a trigger due to their unique properties of ultrashort duration and low emittance. Compared to X-ray and electron sources, it is easier to generate uniformly heated large-volume material with proton and ion beams because of their highly localized energy deposition. With the construction of state-of-the-art high power laser facilities, creating extreme conditions of high temperature and high density in laboratories becomes possible. It has been demonstrated that on a picosecond time scale solid-density material can be isochorically heated to over 20 eV by an ultrafast proton beam generated from spherically shaped targets. For the above-mentioned technique, the proton energy density plays a crucial role in the formation of warm dense matter states. Recently, several methods have been devoted to realizing the focusing of the accelerated protons, involving externally exerted static fields or specially designed targets interacting with single or multiple laser pulses. In previous works, two co-propagating or oppositely directed laser pulses are employed to strike a submicron plasma shell. However, ultra-high pulse intensities, accurate temporal synchronization, and undesirable transverse instabilities over long times are still intractable for current experimental implementations. A mechanism for the focusing of laser-driven proton beams from two-ion-species arched targets is investigated by multi-dimensional particle-in-cell simulations. When an intense linearly-polarized laser pulse impinges on the thin arched target, all electrons are completely evacuated, leading to a Coulomb-explosive electric field mostly originating from the heavier carbon ions. The lighter protons, in the reference frame moving at the ion sound speed, are accelerated and effectively focused because of this radially isotropic field. At a laser intensity of 2.42×10^21 W/cm^2, a ballistic proton bunch with an energy density as high as 2.15×10^17 J/m^3 is produced, and the highest proton energy and the focusing position agree well with theory.
Keywords: Coulomb explosion, focusing, high-energy-density, ion acceleration
Procedia PDF Downloads 346
566 Temperature Dependent Magneto-Transport Properties of MnAl Binary Alloy Thin Films
Authors: Vineet Barwal, Sajid Husain, Nanhe Kumar Gupta, Soumyarup Hait, Sujeet Chaudhary
Abstract:
High perpendicular magnetic anisotropy (PMA) and a low damping constant (α) in ferromagnets are among the necessary requirements for their potential applications in the field of spintronics. In this regard, the ferromagnetic τ-phase of MnAl possesses the highest PMA (Ku > 10^7 erg/cc) at room temperature, high saturation magnetization (Ms ~ 800 emu/cc) and a Curie temperature of ~395 K. In this work, we have investigated the magnetotransport behaviour of this potentially useful binary system. MnₓAl₁₋ₓ films were synthesized by co-sputtering (pulsed DC magnetron sputtering) on Si/SiO₂ (where SiO₂ is the native oxide layer) substrates using 99.99% pure Mn and Al sputtering targets. Films of constant thickness (~25 nm) were deposited at different growth temperatures (Tₛ), viz. 30, 300, 400, 500, and 600 ºC, with a deposition rate of ~5 nm/min. Prior to deposition, the chamber was pumped down to a base pressure of 2×10⁻⁷ Torr. During sputtering, the chamber was maintained at a pressure of 3.5×10⁻³ Torr with a 55 sccm Ar flow rate. Films were not capped for the purpose of electronic transport measurement, which leaves a possibility of metal oxide formation on the surface of MnAl (both Mn and Al have an affinity towards oxide formation). In-plane and out-of-plane transverse magnetoresistance (MR) measurements on films sputtered under optimized growth conditions revealed non-saturating behavior with MR values of ~6% and 40% at 9 T, respectively, at 275 K. Resistivity shows a parabolic dependence on the field H when H is weak. At higher H, a non-saturating positive MR that increases exponentially with the strength of the magnetic field is observed, a typical character of a hopping-type conduction mechanism. An anomalous decrease in MR is observed on lowering the temperature. From the temperature dependence of resistivity, it is inferred that the two competing states are metallic and semiconducting, respectively, and that the energy scale of the phenomenon produces the most interesting effects, i.e., the metal-insulator transition and hence the maximum sensitivity to external fields, at room temperature. The theory of disordered 3D systems effectively explains the crossover of the temperature coefficient of resistivity from positive to negative with lowering of temperature. These preliminary findings on the MR behavior of MnAl thin films will be presented in detail. The anomalously large MR in the mixed-phase MnAl system is evidently useful for future spintronic applications.
Keywords: magnetoresistance, perpendicular magnetic anisotropy, spintronics, thin films
Procedia PDF Downloads 125
565 Assessment of Designed Outdoor Playspaces as Learning Environments and Its Impact on Child’s Wellbeing: A Case of Bhopal, India
Authors: Richa Raje, Anumol Antony
Abstract:
Playing is the foremost stepping stone for childhood development. Play is an essential aspect of a child’s development and learning because it creates meaningful, enduring environmental connections and increases children’s performance. Children’s proficiencies are ever varying in their course of growth. There is innovation in their activities, as play kindles the senses, spurs the love of exploration, and overcomes linguistic barriers while supporting physiological development, which in turn allows them to find their own calibre, spontaneity, curiosity, cognitive skills, and creativity while learning during play. This paper aims to comprehend the learning in play, which is the most essential underpinning aspect of the outdoor play area. It also assesses the trend of playground design that is merely hammered with equipment. It attempts to derive a relation between the natural environment and children’s activities and the emotions/senses that can be evoked in the process. One of the major concerns with our outdoor play areas is that they are limited to an area with a similar kind of equipment, thus making the play highly regimented and monotonous. This problem is often led by the strict timetables of our education system that hardly accommodate play. Due to these reasons, the play areas remain neglected, both in terms of design that allows learning and in terms of wellbeing. Poorly designed spaces fail to inspire the physical, emotional, social and psychological development of the young ones. Currently, the play space has been condensed to an enclosed playground, driveway or backyard, which confines a child’s capability to leap the boundaries set for him. The paper emphasizes a study of kids ranging from 5 to 11 years, where their behaviors during their interactions in a playground are mapped and analyzed. The theory of affordance is applied to various outdoor play areas in order to study and understand the children’s environment and how variedly they perceive and use it. A higher degree of affordance shall form the basis for designing the activities suitable in play spaces. It was observed during their play that they choose certain spaces of interest, the majority being natural, over other artificial equipment. Activities like rolling on the ground, jumping from a height, molding earth, hiding behind trees, etc. suggest that despite the equipment they have an affinity towards nature. Therefore, we as designers need to take a cue from their behavior and practices to be able to design meaningful spaces for them, so that the child gets the freedom to test their limits.
Keywords: children, landscape design, learning environment, nature and play, outdoor play
Procedia PDF Downloads 126
564 Exploring Mothers' Knowledge and Experiences of Attachment in the First 1000 Days of Their Child's Life
Authors: Athena Pedro, Zandile Batweni, Laura Bradfield, Michael Dare, Ashley Nyman
Abstract:
The rapid growth and development of an infant in the first 1000 days of life means that this time period provides the greatest opportunity for a positive developmental impact on a child’s life socially, emotionally, cognitively and physically. Current research is being focused on children in the first 1000 days, but there is a lack of research and understanding of mothers and their experiences during this crucial time period. Thus, it is imperative that more research is done to help better understand the experiences of mothers during the first 1000 days of their child’s life, as well as gain more insight into mothers’ knowledge regarding this time period. The first 1000 days of life, from conception to two years, is a critical period, and the child’s attachment to his or her mother or primary caregiver during this period is crucial for a multitude of future outcomes. The aim of this study was to explore mothers’ understanding and experience of the first 1000 days of their child’s life, specifically looking at attachment in the context of Bowlby and Ainsworth’s attachment theory. Using a qualitative methodological framework, data were collected through semi-structured individual interviews with 12 first-time mothers from low-income communities in Cape Town. Thematic analysis of the data revealed that mothers articulated the importance of attachment within the first 1000 days of life and shared experiences of how they bond and form attachment with their babies. Furthermore, these mothers expressed their belief in the long-term effects of early attachment and responsive positive parenting, as well as the lasting effects of poor attachment and non-responsive parenting. This study has implications for new mothers and healthcare staff working with mothers of new-born babies, as well as for future contextual research. By gaining insight into the mothers’ experiences, policies and intervention efforts can be formulated in order to assist mothers during this time, which ultimately promotes the healthy development of the nation’s children and future adult generation. If researchers are also able to understand the extent of mothers’ general knowledge regarding the first 1000 days and attachment, then there will be a better understanding of where there may be gaps in knowledge, and thus recommendations for effective and relevant intervention efforts may be provided. These interventions may increase the knowledge and awareness of new mothers and health care workers at clinics and other service providers, creating a high impact on positive outcomes. Thus, improving the developmental trajectory for many young babies allows them the opportunity to pursue optimal development by reaching their full potential.
Keywords: attachment, experience, first 1000 days, knowledge, mothers
Procedia PDF Downloads 179
563 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring
Authors: Zheng Wang, Zhenhong Li, Jon Mills
Abstract:
Ground-based synthetic aperture radar (GBSAR) represents a powerful remote sensing tool for deformation monitoring of various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR with a fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise from processing high temporal-resolution continuous GBSAR data, including the extreme cost of computational random-access memory (RAM), the delay of displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains is developed in this study in order to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline subset concept and processes continuous GBSAR images unit by unit. Images within a window form a basic unit. By adopting this strategy, the RAM requirement is reduced to only one unit of images and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected as the chain keeps temporarily coherent pixels, which are present only in certain units but not in the whole observation period. The chain supports real-time processing of the continuous data, and the delay of creating displacement maps can be shortened without waiting for the entire dataset. The other chain aims to measure deformation between discontinuous campaigns. Temporal averaging is carried out on a stack of images in a single campaign in order to improve the signal-to-noise ratio of discontinuous data and minimise the loss of coherence. The temporally averaged images are then processed by a particular interferometry procedure integrated with advanced interferometric SAR algorithms such as robust coherence estimation, non-local filtering, and selection of partially coherent pixels. Experiments are conducted using both synthetic and real-world GBSAR data. Displacement time series at the level of a few sub-millimetres are achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring of a wide range of scientific and practical applications.
Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring
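A minimal numpy sketch of the basic interferometric step underlying the processing chains described above: forming an interferogram from two co-registered complex GBSAR images and estimating coherence over a local window. The synthetic images and window size are illustrative assumptions, not the package's actual implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def interferogram_and_coherence(s1, s2, win=5):
    """
    s1, s2: co-registered single-look complex (SLC) images of the same scene.
    Returns the wrapped interferometric phase and a boxcar coherence estimate.
    """
    kernel = np.ones((win, win)) / win**2

    def smooth(a):
        return fftconvolve(a, kernel, mode="same")

    ifg = s1 * np.conj(s2)                        # interferogram: phase = phi1 - phi2
    phase = np.angle(ifg)
    num = np.abs(smooth(ifg))
    den = np.sqrt(np.abs(smooth(np.abs(s1) ** 2) * smooth(np.abs(s2) ** 2)))
    coherence = num / np.maximum(den, 1e-12)
    return phase, coherence

# Synthetic pair: common speckle plus a small deformation-induced phase ramp in image 2.
rng = np.random.default_rng(7)
base = rng.normal(size=(128, 128)) + 1j * rng.normal(size=(128, 128))
ramp = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 128))[None, :]
phase, gamma = interferogram_and_coherence(base, base * ramp)
print(f"mean coherence = {gamma.mean():.2f}")     # near 1 here because the speckle is identical
```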
Procedia PDF Downloads 163
562 Integrating a Six Thinking Hats Approach Into the Prewriting Stage of Argumentative Writing In English as a Foreign Language: A Chinese Case Study of Generating Ideas in Action
Abstract:
Argumentative writing is the most prevalent genre in diverse writing tests. Constructing academic arguments is often regarded as a difficult task by most English as a foreign language (EFL) learners. A failure to generate enough ideas and organise them coherently and logically, as well as a lack of competence in supporting arguments with relevant evidence, are frequent problems faced by EFL learners when approaching an English argumentative writing task. Overall, these problems are closely related to planning, and planning an argumentative essay at the pre-writing stage plays a vital role in producing a good academic essay. However, how teachers can effectively guide students to generate ideas when planning English argumentative writing is rarely discussed, apart from brainstorming. Brainstorming has been a common practice used by teachers to help students generate ideas. However, its limitations suggest that while it can help students generate many ideas, those ideas might not necessarily be coherent and logical, and it can sometimes impede production. This calls for the exploration of effective instructional strategies at the pre-writing stage of English argumentative writing. This paper will first examine how a Six Thinking Hats approach can be used to provide a dialogic space for EFL learners to experience and collaboratively generate ideas from multiple perspectives at the pre-writing stage. Part of the findings on the impact of a twelve-week intervention (March to July 2021) on students' learning to generate ideas through group discussions using the Six Thinking Hats will then be reported. The research design is based on sociocultural theory. The findings present evidence from a mixed-methods approach with fifty-nine participants drawn from two first-year undergraduate natural classes in a Chinese university. Analysis of pre- and post-questionnaires suggests that participants had a positive attitude toward the Six Thinking Hats approach. It fosters their understanding of prewriting and argumentative writing and helps them to generate more ideas, not only from multiple perspectives but also in a systematic way. A comparison of participants' writing plans confirms an improvement in generating counterarguments and rebuttals to support their arguments. Above all, visual and transcript data of group discussions collected across different weeks of the intervention enable teachers and researchers to ‘see’ the hidden process of learning to generate ideas in action.
Keywords: argumentative writing, innovative pedagogy, six thinking hats, dialogic space, prewriting, higher education
Procedia PDF Downloads 88
561 Systematic Mapping Study of Digitization and Analysis of Manufacturing Data
Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton
Abstract:
The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. As part of this phase of the industrial revolution, traditional manufacturing processes are being combined with digital technologies to achieve smarter and more efficient production. To successfully digitally transform a manufacturing facility, the processes must first be digitized. This is the conversion of information from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide paradigm, Industry 4.0. The formal methodology of a systematic mapping study was utilized to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data. Research papers were classified according to the type of research and type of contribution to the research area. Upon analyzing 54 papers identified in this area, it was noted that 23 of the papers originated in Germany. This is an unsurprising finding, as Industry 4.0 originated as a German strategy, supported by strong policy instruments to drive its implementation in Germany. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing institution, with three papers published. The literature suggested future research directions and highlighted one specific gap in the area. There exists an unresolved gap between the data science experts and the manufacturing process experts in the industry. Data analytics expertise is not useful unless manufacturing process information is utilized. A legitimate understanding of the data is crucial to perform accurate analytics and gain true, valuable insights into the manufacturing process. A gap lies between the manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test whether this gap exists, the researchers initiated an industrial case study in which they embedded themselves between the subject matter expert of the manufacturing process and the data scientist. Of the papers identified by the systematic mapping study, 12 contributed a framework, another 12 were based on a case study, and 11 focused on theory. However, only three papers contributed a methodology. This provides further evidence for the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research.
Keywords: analytics, digitization, industry 4.0, manufacturing
Procedia PDF Downloads 113
560 The Quantitative Analysis of the Influence of the Superficial Abrasion on the Lifetime of the Frog Rail
Authors: Dong Jiang
Abstract:
The turnout is an essential piece of railway equipment and, on account of increasingly serious frog rail failures, one of the most demanding items of railway infrastructure. In cooperation with the German company DB Systemtechnik AG, our research team focuses on the quantitative analysis of frog rails in order to predict their lifetimes. Moreover, suggestions for timely and effective maintenance are made to improve the economy of frog rails. The lifetime of the frog rail depends strongly on the internal damage of the running surface up to the point where breakages occur. On the basis of the Hertzian theory of contact mechanics, the dynamic loads on the running surface are calculated in the form of the contact pressures on the running surface and the equivalent tensile stress inside it. According to material mechanics, the strength of the frog rail is quantified in the form of a stress-cycle (S-N) curve. From the interaction between the dynamic loads and the strength, the internal damage of the running surface is calculated by means of the linear damage hypothesis of Miner's rule. The emergence of the first breakage on the running surface is defined as the failure criterion, i.e. the point at which the damage degree equals 1.0. From the microscopic perspective, the running surface of the frog rail is divided into numerous segments for detailed analysis. The internal damage of a segment grows slowly in the beginning and disproportionately quickly at the end, until the breakage emerges. From the macroscopic perspective, the internal damage of the running surface develops almost linearly over the lifetime. With this linear growth of internal damage, the lifetime of the frog rail can be predicted simply from the slope of the linear trend. However, superficial abrasion plays an essential role in the internal damage results from both perspectives. The influence of superficial abrasion on the lifetime is described in the form of the abrasion rate, which has two contradictory effects. On the one hand, an insufficient abrasion rate concentrates the damage accumulation at the same position below the running surface and thereby accelerates rail failure. On the other hand, an excessive abrasion rate hastens the disappearance of the head-hardened surface of the frog rail and results in untimely breakage of the surface. Thus, the relationship between the abrasion rate and the lifetime is subdivided into an initial phase in which the lifetime increases and a subsequent phase in which the lifetime decreases more rapidly as the abrasion rate continues to grow. By balancing these two effects, the critical abrasion rate that yields the optimal lifetime is discussed.
Keywords: breakage, critical abrasion rate, frog rail, internal damage, optimal lifetime
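The damage calculation described in this abstract (an S-N strength curve combined with Miner's linear damage hypothesis and a failure criterion of damage degree 1.0) can be illustrated in a few lines of Python. The S-N parameters and the annual load spectrum below are hypothetical placeholders, not values from the study; the snippet only shows the form of the calculation.

```python
import numpy as np

# Assumed Basquin-type S-N curve: N(S) = N_REF * (S_REF / S) ** K_EXP cycles to failure
# at stress amplitude S. All three parameters are hypothetical.
S_REF = 900.0e6   # Pa
N_REF = 1.0e6     # cycles to failure at S_REF
K_EXP = 5.0       # slope exponent of the S-N curve

def cycles_to_failure(stress_amplitude):
    return N_REF * (S_REF / np.asarray(stress_amplitude, dtype=float)) ** K_EXP

def miner_damage(stress_amplitudes, cycle_counts):
    """Miner's rule: D = sum(n_i / N_i); the first breakage is expected when D reaches 1.0."""
    n = np.asarray(cycle_counts, dtype=float)
    return float(np.sum(n / cycles_to_failure(stress_amplitudes)))

# Hypothetical annual load spectrum for one segment of the running surface:
# equivalent tensile stress amplitude (Pa) -> load cycles per year.
annual_spectrum = {500.0e6: 2.0e6, 650.0e6: 4.0e5, 800.0e6: 5.0e4}

damage_per_year = miner_damage(list(annual_spectrum), list(annual_spectrum.values()))
lifetime_years = 1.0 / damage_per_year  # macroscopically linear damage growth allows this extrapolation
print(f"damage per year: {damage_per_year:.3f}, predicted lifetime: {lifetime_years:.1f} years")
```

An abrasion-rate effect could be added by shifting the load spectrum or the S-N curve over time, which is where the two contradictory effects described above would enter the calculation.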
Procedia PDF Downloads 227
559 Skull Extraction for Quantification of Brain Volume in Magnetic Resonance Imaging of Multiple Sclerosis Patients
Authors: Marcela De Oliveira, Marina P. Da Silva, Fernando C. G. Da Rocha, Jorge M. Santos, Jaime S. Cardoso, Paulo N. Lisboa-Filho
Abstract:
Multiple Sclerosis (MS) is an immune-mediated disease of the central nervous system characterized by neurodegeneration, inflammation, demyelination, and axonal loss. Magnetic resonance imaging (MRI), due to the richness of the information it provides, is the gold-standard exam for diagnosis and follow-up of neurodegenerative diseases such as MS. Brain atrophy, the gradual loss of brain volume, is quite extensive in multiple sclerosis, approximately 0.5-1.35% per year, far beyond the limits of normal aging. Thus, brain volume quantification becomes an essential task for subsequent analysis of the occurrence of atrophy. The analysis of MRI has become a tedious and complex task for clinicians, who have to manually extract important information. This manual analysis is prone to error, time-consuming, and subject to intra- and inter-operator variability. Nowadays, computerized methods for MRI segmentation have been extensively used to assist doctors in quantitative analyses for disease diagnosis and monitoring. Thus, the purpose of this work was to evaluate the brain volume in MRI of MS patients. We used MRI scans with 30 slices from five patients diagnosed with multiple sclerosis according to the McDonald criteria. The computational analysis of the images was carried out in two steps: segmentation of the brain and brain volume quantification. The first image processing step was to perform brain extraction by skull stripping from the original image. In the skull stripper for brain MRI, the algorithm registers a grayscale atlas image to the grayscale patient image. The associated brain mask is propagated using the registration transformation. This mask is then eroded and used for a refined brain extraction based on level sets (evolving the brain-skull border with dedicated expansion, curvature, and advection terms). In the second step, the brain volume was quantified by counting the voxels belonging to the segmentation mask and converting the result to cubic centimetres (cc). We observed an average brain volume of 1469.5 cc. We conclude that the automatic method applied in this work can be used for brain extraction and brain volume quantification in MRI. The development and use of computer programs can help health professionals in the diagnosis and monitoring of patients with neurodegenerative diseases. In future work, we expect to implement more automated methods for the assessment of cerebral atrophy and the quantification of brain lesions, including machine-learning approaches. Acknowledgements: This work was supported by a grant from the Brazilian agency Fundação de Amparo à Pesquisa do Estado de São Paulo (number 2019/16362-5).
Keywords: brain volume, magnetic resonance imaging, multiple sclerosis, skull stripper
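The final step of the pipeline, counting the voxels of the brain mask and converting the count to cubic centimetres, is simple enough to sketch. The snippet below is a generic illustration using nibabel rather than the authors' exact code; the file names and the assumption that the mask is stored as a NIfTI volume with voxel spacing in its header are illustrative.

```python
import numpy as np
import nibabel as nib  # common library for reading MRI volumes in NIfTI format

def brain_volume_cc(mask_path):
    """Compute brain volume in cubic centimetres from a binary brain mask.

    The mask is assumed to be 1 inside the brain and 0 elsewhere, e.g. the output
    of a skull-stripping step.
    """
    img = nib.load(mask_path)
    mask = img.get_fdata() > 0.5                 # binarise, tolerating float-valued masks
    dx, dy, dz = img.header.get_zooms()[:3]      # voxel spacing in millimetres
    voxel_volume_mm3 = dx * dy * dz
    n_voxels = int(mask.sum())
    return n_voxels * voxel_volume_mm3 / 1000.0  # 1 cc = 1000 mm^3

# Hypothetical usage for the five patients of the study:
# volumes = [brain_volume_cc(f"patient_{i}_brain_mask.nii.gz") for i in range(1, 6)]
# print(f"mean brain volume: {np.mean(volumes):.1f} cc")
```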
Procedia PDF Downloads 147
558 Efficiency and Equity in Italian Secondary School
Authors: Giorgia Zotti
Abstract:
This research comprehensively investigates the multifaceted interplay among school performance, individual backgrounds, and regional disparities within the landscape of Italian secondary education. Leveraging data gleaned from the INVALSI 2021-2022 database, the analysis scrutinizes two fundamental distributions of educational achievement: the standardized Invalsi test scores and official grades in Italian and Mathematics, focusing specifically on final-year secondary school students in Italy. Applying a comprehensive methodology, the study initially employs Data Envelopment Analysis (DEA) to assess school performances. This methodology involves constructing a production function encompassing inputs (hours spent at school) and outputs (Invalsi scores in Italian and Mathematics, along with official grades in Italian and Math). The DEA approach is applied in both of its versions: traditional and conditional. The latter incorporates environmental variables such as school type, size, demographics, technological resources, and socio-economic indicators. Additionally, the analysis delves into regional disparities by leveraging the Theil index, providing insights into disparities within and between regions. Moreover, within the framework of inequality-of-opportunity theory, the study quantifies the inequality of opportunity in students' educational achievements. The methodology applied is the parametric approach in its ex-ante version, considering diverse circumstances such as parental education and occupation, gender, school region, birthplace, and language spoken at home. Subsequently, a Shapley decomposition is applied to understand how much each circumstance affects the outcomes. The outcomes of this investigation unveil pivotal determinants of school performance, notably highlighting the influence of school type (Liceo) and socioeconomic status. The research also unveils regional disparities, elucidating instances where specific schools outperform others in official grades compared to Invalsi scores, and shedding light on the intricate nature of regional educational inequalities. Furthermore, it highlights a heightened inequality of opportunity within the distribution of Invalsi test scores in contrast to official grades, underscoring pronounced disparities at the student level. This analysis provides insights for policymakers, educators, and stakeholders, fostering a nuanced understanding of the complexities within Italian secondary education.
Keywords: inequality, education, efficiency, DEA approach
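Of the methods listed in this abstract, the Theil index is the most compact to illustrate. The sketch below is a generic implementation of the Theil T index with its standard within-/between-group decomposition, not the authors' code; the toy scores and region labels are invented purely to show the mechanics.

```python
import numpy as np

def theil_t(x):
    """Theil T index of inequality for a positive-valued array."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    return np.mean((x / mu) * np.log(x / mu))

def theil_decomposition(x, groups):
    """Decompose the Theil T index into within-group and between-group components."""
    x = np.asarray(x, dtype=float)
    groups = np.asarray(groups)
    mu = x.mean()
    within = between = 0.0
    for g in np.unique(groups):
        xg = x[groups == g]
        share = xg.sum() / x.sum()        # group's share of the total outcome
        within += share * theil_t(xg)
        between += share * np.log(xg.mean() / mu)
    return within, between

# Toy example: test scores for students in three regions (hypothetical values).
scores = np.array([180, 200, 220, 150, 160, 170, 210, 230, 250], dtype=float)
regions = np.array(["North", "North", "North", "South", "South", "South",
                    "Centre", "Centre", "Centre"])
w, b = theil_decomposition(scores, regions)
print(f"Theil T = {theil_t(scores):.4f} (within = {w:.4f}, between = {b:.4f})")
```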
Procedia PDF Downloads 76
557 USBware: A Trusted and Multidisciplinary Framework for Enhanced Detection of USB-Based Attacks
Authors: Nir Nissim, Ran Yahalom, Tomer Lancewiki, Yuval Elovici, Boaz Lerner
Abstract:
Background: Attackers increasingly take advantage of innocent users who tend to use USB devices casually, assuming these devices are benign when, in fact, they may carry embedded malicious behavior or hidden malware. USB devices have many properties and capabilities that have become the subject of malicious operations. Many of the recent attacks targeting individuals, and especially organizations, utilize popular and widely used USB devices, such as mice, keyboards, flash drives, printers, and smartphones. However, current detection tools, techniques, and solutions generally fail to detect both the known and unknown attacks launched via USB devices. Significance: We propose USBWARE, a project that focuses on the vulnerabilities of USB devices and centers on the development of a comprehensive detection framework that relies upon a crucial attack repository. USBWARE will allow researchers and companies to better understand the vulnerabilities and attacks associated with USB devices, as well as provide a comprehensive platform for developing detection solutions. Methodology: The framework of USBWARE is aimed at accurate detection of both known and unknown USB-based attacks by a process that efficiently enhances the framework's detection capabilities over time. The framework will integrate two main security approaches in order to enhance the detection of USB-based attacks associated with a variety of USB devices. The first approach is aimed at the detection of known attacks and their variants, whereas the second approach focuses on the detection of unknown attacks. USBWARE will consist of six independent but complementary detection modules, each detecting attacks based on a different approach or discipline. These modules include novel ideas and algorithms inspired by, or already developed within, our team's domains of expertise, including cyber security, electrical and signal processing, machine learning, and computational biology. The establishment and maintenance of USBWARE's dynamic and up-to-date attack repository will strengthen the capabilities of the USBWARE detection framework. The attack repository's infrastructure will enable researchers to record, document, create, and simulate existing and new USB-based attacks. This data will be used to maintain the detection framework's updatability by incorporating knowledge regarding new attacks. Based on our experience in the cyber security domain, we aim to design the USBWARE framework so that it will have several characteristics that are crucial for this type of cyber security detection solution. Specifically, the USBWARE framework should be: novel, multidisciplinary, trusted, lightweight, extendable, modular, updatable, and adaptable. Major Findings: Based on our initial survey, we have already found more than 23 types of USB-based attacks, divided into six major categories. Our preliminary evaluation and proof-of-concept experiments showed that our detection modules can be used for efficient detection of several basic known USB attacks. Further research, development, and enhancement are required so that USBWARE will be capable of covering all of the major known USB attacks and detecting unknown attacks. Conclusion: USBWARE is a crucial detection framework that must be further enhanced and developed.
Keywords: USB, device, cyber security, attack, detection
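As a flavour of the second approach (detecting unknown attacks as anomalies in observed device behaviour), the sketch below trains an unsupervised detector on benign traffic features only. It is purely illustrative and is not one of the six USBWARE modules: the session-level features (packet size, packet rate, interface and endpoint counts) and all numbers are hypothetical.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per USB session:
# [mean packet size, packets per second, number of interfaces, number of endpoints]
benign = np.column_stack([
    rng.normal(64, 8, 500),     # typical HID-like packet sizes
    rng.normal(120, 20, 500),   # packets per second
    rng.integers(1, 3, 500),    # interfaces
    rng.integers(1, 4, 500),    # endpoints
])

# Train only on benign behaviour; unknown attacks should be flagged as outliers at test time.
detector = IsolationForest(contamination=0.01, random_state=0).fit(benign)

suspicious = np.array([[512.0, 900.0, 6, 12]])  # e.g. a device suddenly exposing many interfaces
print(detector.predict(suspicious))             # -1 means "anomalous", +1 means "benign-like"
```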
Procedia PDF Downloads 398
556 Equity, Bonds, Institutional Debt and Economic Growth: Evidence from South Africa
Authors: Ashenafi Beyene Fanta, Daniel Makina
Abstract:
Economic theory predicts that finance promotes economic growth. Although the finance-growth link is among the most researched areas in financial economics, our understanding of the link between the two is still incomplete. This is caused by, among other things, incorrect econometric specifications, the use of weak proxies for financial development, and an inability to address the endogeneity problem. Studies on the finance-growth link in South Africa consistently report economic growth driving financial development. Early studies found that economic growth drives financial development in South Africa, and recent studies have confirmed this using different econometric models. However, the monetary aggregate (i.e., M2) used in these studies is considered a weak proxy for financial development. Furthermore, the fact that the models employed do not address the endogeneity problem in the finance-growth link casts doubt on the validity of the conclusions. For this reason, the current study examines the finance-growth link in South Africa using data for the period 1990 to 2011 by employing a generalized method of moments (GMM) technique that is capable of addressing endogeneity, simultaneity, and omitted-variable bias problems. Unlike previous cross-country and country case studies that have also used the same technique, our contribution is that we account for the development of bond markets and non-bank financial institutions rather than being limited to stock market and banking sector development. We find that bond market development affects economic growth in South Africa, and no similar effect is observed for the bank and non-bank financial intermediaries or the stock market. Our findings show that examination of individual elements of the financial system is important in understanding the unique effect of each on growth. The observation that bond markets, rather than private credit and stock market development, promote economic growth in South Africa raises an intriguing question as to what unique roles bond markets play that the intermediaries and equity markets are unable to play. Crucially, our results support observations in the literature that using appropriate measures of financial development is critical for policy advice. They also support the suggestion that individual elements of the financial system need to be studied separately to consider their unique roles in advancing economic growth. We believe that the channels through which bond markets contribute to growth would be fertile ground for future research.
Keywords: bond market, finance, financial sector, growth
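To make the estimation idea concrete, the snippet below implements a one-step linear IV-GMM estimator (numerically identical to two-stage least squares) on simulated data. It is a didactic sketch rather than the dynamic panel GMM specification estimated in the study; the variable names, instruments, and coefficients are hypothetical.

```python
import numpy as np

def iv_gmm(y, X, Z):
    """One-step linear GMM with weight matrix W = (Z'Z)^-1 (equivalent to 2SLS).

    y: (n,) dependent variable, X: (n, k) regressors (possibly endogenous),
    Z: (n, m) instruments with m >= k.
    """
    ZtX, Zty, ZtZ = Z.T @ X, Z.T @ y, Z.T @ Z
    W = np.linalg.inv(ZtZ)
    A = ZtX.T @ W @ ZtX
    b = ZtX.T @ W @ Zty
    return np.linalg.solve(A, b)

# Toy data: growth regressed on an endogenous financial development measure,
# instrumented by two exogenous variables (e.g. lagged values).
rng = np.random.default_rng(1)
n = 200
z1, z2 = rng.normal(size=n), rng.normal(size=n)
u = rng.normal(size=n)
fin_dev = 0.8 * z1 + 0.4 * z2 + 0.5 * u + rng.normal(size=n)  # correlated with the error u
growth = 1.0 + 0.3 * fin_dev + u                              # true coefficient 0.3

X = np.column_stack([np.ones(n), fin_dev])
Z = np.column_stack([np.ones(n), z1, z2])
print(iv_gmm(growth, X, Z))  # roughly [1.0, 0.3]; plain OLS would be biased upwards here
```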
Procedia PDF Downloads 425
555 An Ethnographic Study of Workforce Integration of Health Care Workers with Refugee Backgrounds in Ageing Citizens in Germany
Authors: A. Ham, A. Kuckert-Wostheinrich
Abstract:
Demographic changes, such as the ageing population in European countries, the shortage of nursing staff, the increasing number of people with severe cognitive impairment, and the growing number of socially isolated elderly people, raise important questions about who will provide long-term care for ageing citizens. Following the so-called refugee crisis in 2015, some health care institutions for ageing citizens in Europe invited first-generation immigrants to start a nursing career, providing them with language skills, nursing training, and internships. The aim of this ethnographic research was to explore the social processes affecting workforce integration and how newcomers enact good care for ageing citizens in a German nursing home. Data were collected through ethnographic fieldwork comprising 200 hours of participant observation, 25 in-depth interviews with immigrants and established staff, and 2 focus groups with 6 immigrants and 6 established staff members, and were subsequently analysed. The health care institution provided the newcomers with a nursing program covering psychogeriatric theory, nursing skills in the psychogeriatric field, and professionally oriented language skills. Courses on health prevention and theatre plays accompanied the training. The knowledge gained in this education could be applied in internships on the wards. Additionally, diversity and inclusivity courses were given to established personnel to promote cultural awareness and sensitivity. They learned to develop a collegial attitude of respect and appreciation, regardless of gender, nationality, ethnicity, religion or belief, age, sexual orientation, disability, or identity. The qualitative data showed that social processes such as organizational constraints, staff shortages, and a demanding workload affected workforce integration. However, zooming in on the interactions between newcomers and residents, we noticed how they tinkered to enact good care through embodied caring, playing games, singing, and dancing. Through situational acting and practical wisdom in nursing care, the newcomers could meet the needs of ageing residents. Thus, when health care institutions open up nursing programs for newcomers with refugee backgrounds and focus on talent instead of shortcomings, we may well stimulate the as yet unknown competencies, attitudes, skills, and expertise of newcomers and create excellent nurses for excellent care.
Keywords: established staff, Germany, nursing, refugees
Procedia PDF Downloads 106
554 Predicting Recessions with Bivariate Dynamic Probit Model: The Czech and German Case
Authors: Lukas Reznak, Maria Reznakova
Abstract:
An economic recession has a profound negative effect on all involved stakeholders. It follows that timely prediction of recessions has been of utmost interest both in theoretical research and in practical macroeconomic modelling. The current mainstream of recession prediction is based on standard OLS models of continuous GDP using macroeconomic data. This approach is not suitable for two reasons: the standard continuous models are proving to be obsolete, and the macroeconomic data are unreliable, often being revised many years retroactively. The aim of the paper is to explore a different branch of recession forecasting theory and verify the findings on real data for the Czech Republic and Germany. In the paper, the authors present a family of discrete-choice probit models with parameters estimated by the method of maximum likelihood. In their basic form, the probits model a univariate series of recessions and expansions in the economic cycle for a given country. The majority of the paper deals with more complex model structures, namely dynamic and bivariate extensions. The dynamic structure models the autoregressive nature of recessions, taking previous economic activity into consideration to predict the development in subsequent periods. Bivariate extensions utilize information from a foreign economy by incorporating correlation of the error terms and thus modelling the dependencies between the two countries. Bivariate models predict a bivariate time series of economic states in both economies and thus enhance predictive performance. A vital enabler of timely and successful recession forecasting is reliable and readily available data. Leading indicators, namely the yield curve and stock market indices, represent an ideal data base, as the information is available in advance and does not undergo retroactive revisions. As importantly, the combination of the yield curve and stock market indices reflects a range of macroeconomic and financial-market investor trends which influence the economic cycle. These theoretical approaches are applied to real data for the Czech Republic and Germany. Two models were identified for each country, one for in-sample and one for out-of-sample predictive purposes. All four followed a bivariate structure, while three contained a dynamic component.
Keywords: bivariate probit, leading indicators, recession forecasting, Czech Republic, Germany
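The univariate dynamic probit that underlies the extensions described above can be sketched compactly: the latent index contains the lagged recession indicator together with lagged leading indicators, and the parameters are obtained by maximum likelihood. The code below is a simplified single-country illustration on simulated data, not the bivariate specification of the paper; all variable names and coefficients are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, y, X):
    """Negative log-likelihood of a probit: P(y_t = 1) = Phi(X_t beta)."""
    p = norm.cdf(X @ params)
    p = np.clip(p, 1e-12, 1 - 1e-12)          # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def fit_dynamic_probit(recession, spread, stock_ret):
    """Dynamic probit: recession_t on a constant, recession_{t-1}, and lagged indicators."""
    y = recession[1:]
    X = np.column_stack([
        np.ones(len(y)),
        recession[:-1],      # autoregressive (dynamic) term
        spread[:-1],         # yield-curve slope, lagged
        stock_ret[:-1],      # stock market return, lagged
    ])
    res = minimize(neg_log_likelihood, x0=np.zeros(X.shape[1]), args=(y, X), method="BFGS")
    return res.x

# Simulated monthly data standing in for the Czech or German series (hypothetical).
rng = np.random.default_rng(2)
T = 300
spread = rng.normal(1.0, 1.0, T)
stock_ret = rng.normal(0.5, 2.0, T)
recession = np.zeros(T)
for t in range(1, T):
    prob = norm.cdf(-1.0 + 1.5 * recession[t - 1] - 0.8 * spread[t - 1] - 0.1 * stock_ret[t - 1])
    recession[t] = rng.random() < prob

print(fit_dynamic_probit(recession, spread, stock_ret))  # should roughly recover the simulation coefficients
```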
Procedia PDF Downloads 249
553 A Comparative Study of the Impact of Membership in International Climate Change Treaties and the Environmental Kuznets Curve (EKC) in Line with Sustainable Development Theories
Authors: Mojtaba Taheri, Saied Reza Ameli
Abstract:
In this research, we have calculated the effect of membership in international climate change treaties for 20 developed countries selected on the basis of the human development index (HDI) and compared this effect with the process of pollutant reduction described by the Environmental Kuznets Curve (EKC) theory. For this purpose, data on real GDP per capita at constant 2010 prices are taken from the World Development Indicators (WDI) database. The Ecological Footprint (ECOFP) is the amount of biologically productive land needed to meet human needs and absorb carbon dioxide emissions; it is measured in global hectares (gha), and the data are retrieved from the Global Ecological Footprint (2021) database. We proceed step by step, performing several series of targeted statistical regressions and examining the effects of different control variables. Energy Consumption Structure (ECS) is measured as the share of fossil fuel consumption in total energy consumption and is extracted from the United States Energy Information Administration (EIA) (2021) database. Energy Production (EP) refers to the total production of primary energy by all energy-producing enterprises in a country at a specific time; it is a comprehensive indicator of a country's energy production capacity, and the data, like those for the Energy Consumption Structure, are obtained from the EIA (2021 version). Financial development (FND) is defined as the ratio of private credit to GDP, and to some extent is also based on stock market value as a ratio to GDP; it is taken from the WDI (2021 version). Trade Openness (TRD) is the sum of exports and imports of goods and services measured as a share of GDP, using the WDI data (2021 version). Urbanization (URB) is defined as the share of the urban population in the total population, and for these data we also used the WDI source (2021 version). The descriptive statistics of all the investigated variables are presented in the results section. In relation to the theories of sustainable development, the Environmental Kuznets Curve (EKC) proves more significant over the study period. In this research, we use more than fourteen targeted statistical regressions to isolate the net effects of each of the approaches and examine the results.
Keywords: climate change, globalization, environmental economics, sustainable development, international climate treaty
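The EKC hypothesis referenced above is commonly tested with a specification that is quadratic in income. The sketch below runs such a regression with statsmodels on simulated data; the variable names mirror the abstract (ecological footprint, GDP per capita, ECS, URB, TRD, FND), but the data, coefficients, and the resulting turning point are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400                                   # country-year observations (hypothetical)

ln_gdp = rng.normal(10.0, 0.8, n)         # log real GDP per capita (constant 2010 prices)
ecs = rng.uniform(0.3, 0.9, n)            # fossil-fuel share of energy consumption
urb = rng.uniform(0.5, 0.95, n)           # urbanization rate
trd = rng.uniform(0.4, 1.5, n)            # trade openness
fnd = rng.uniform(0.5, 2.0, n)            # financial development

# Simulate an inverted-U relationship between the ecological footprint and income.
ln_ecofp = (-20.0 + 4.0 * ln_gdp - 0.19 * ln_gdp**2
            + 0.8 * ecs + 0.3 * urb + rng.normal(0, 0.2, n))

X = sm.add_constant(np.column_stack([ln_gdp, ln_gdp**2, ecs, urb, trd, fnd]))
model = sm.OLS(ln_ecofp, X).fit()

b1, b2 = model.params[1], model.params[2]
print(model.params)
print(f"EKC turning point at log GDP per capita = {-b1 / (2 * b2):.2f}")  # expected near 10.5
```

A negative coefficient on the squared income term, together with a turning point inside the observed income range, is what supports the inverted-U shape the EKC predicts.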
Procedia PDF Downloads 72