Search results for: Theory of Reasoned Action
493 Exploring the Impact of Domestic Credit Extension, Government Claims, Inflation, Exchange Rates, and Interest Rates on Manufacturing Output: A Financial Analysis.
Authors: Ojo Johnson Adelakun
Abstract:
This study explores the long-term relationships between manufacturing output (MO) and several economic determinants: the interest rate (IR), inflation rate (INF), exchange rate (EX), credit to the private sector (CPSM), and gross claims on the government sector (GCGS), using monthly data from March 1966 to December 2023. Employing advanced econometric techniques, including Fully Modified Ordinary Least Squares (FMOLS), Dynamic Ordinary Least Squares (DOLS), and Canonical Cointegrating Regression (CCR), the analysis provides several key insights. The findings reveal a positive association between interest rates and manufacturing output, which diverges from traditional economic theory predicting a negative correlation due to increased borrowing costs. This outcome is attributed to the financial resilience of large enterprises, which allows them to sustain investment in production despite higher interest rates. In addition, inflation demonstrates a positive relationship with manufacturing output, suggesting that stable inflation within target ranges creates a favourable environment for investment in productivity-enhancing technologies. Conversely, the exchange rate shows a negative relationship with manufacturing output, reflecting the adverse effects of currency depreciation on the cost of imported raw materials. The negative impact of CPSM underscores the importance of directing credit efficiently towards productive sectors rather than speculative ventures. Moreover, increased government borrowing appears to crowd out private sector credit, negatively affecting manufacturing output. Overall, the study highlights the need for a coordinated policy approach integrating monetary, fiscal, and financial sector strategies. Policymakers should account for the differential impacts of interest rates, inflation, exchange rates, and credit allocation on various sectors.
Ensuring stable inflation, distributing credit efficiently, and mitigating exchange rate volatility are critical for supporting manufacturing output and promoting sustainable economic growth. This research provides valuable insights into the economic dynamics influencing manufacturing output and offers policy recommendations tailored to South Africa’s economic context.
Keywords: domestic credit, government claims, financial variables, manufacturing output, financial analysis
Procedia PDF Downloads 20
492 Internal Financing Constraints and Corporate Investment: Evidence from Indian Manufacturing Firms
Authors: Gaurav Gupta, Jitendra Mahakud
Abstract:
This study focuses on the significance of internal financing constraints in the determination of corporate fixed investment in the case of Indian manufacturing companies. Financially constrained companies, which have fewer internal funds or retained earnings, face higher transaction and borrowing costs due to imperfections in the capital market. The period of study is 1999-2000 to 2013-2014, and we consider 618 manufacturing companies for which continuous data are available throughout the study period. The data are collected from the PROWESS database maintained by the Centre for Monitoring Indian Economy Pvt. Ltd. Panel data methods such as the fixed effect and random effect methods are used for the analysis. The Likelihood Ratio test, Lagrange Multiplier test, and Hausman test results support the suitability of the fixed effect model for the estimation. The cash flow and liquidity of the company have been used as proxies for internal financial constraints. In accordance with various theories of corporate investment, we consider other firm-specific variables such as firm age, firm size, profitability, sales, and leverage as control variables in the model. From the econometric analysis, we find that internal cash flow and liquidity have a significant and positive impact on corporate investment. Variables such as the cost of capital, sales growth, and growth opportunities are found to significantly determine corporate investment in India, which is consistent with the neoclassical, accelerator, and Tobin’s q theories of corporate investment. To check the robustness of the results, we divided the sample on the basis of cash flow and liquidity. Firms having cash flow greater than zero are put under one group, and firms with cash flow less than zero are put under another group. The firms are also divided on the basis of liquidity following the same approach. We find that the results are robust across both types of companies, those having positive and those having negative cash flow and liquidity.
The results for the other variables are also in line with those for the whole sample. These findings confirm that internal financing constraints play a significant role in the determination of corporate investment in India. The findings of this study imply that corporate managers should focus on projects with higher expected cash inflows to avoid financing constraints. Apart from that, they should also maintain adequate liquidity to minimize external financing costs.
Keywords: cash flow, corporate investment, financing constraints, panel data method
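The fixed effect (within) estimator used in the study can be sketched with least-squares dummy variables on a synthetic firm panel. The firm effect, true coefficient of 0.3, and panel dimensions below are illustrative assumptions, not the PROWESS data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
firms, years = 50, 10
firm_id = np.repeat(np.arange(firms), years)
alpha = rng.normal(size=firms)[firm_id]                      # unobserved firm effect
cf = rng.normal(loc=1.0, size=firms * years) + 0.5 * alpha   # cash flow, correlated with the effect
inv = 0.3 * cf + alpha + rng.normal(scale=0.2, size=firms * years)  # true slope 0.3

df = pd.DataFrame({"firm": firm_id, "cf": cf, "inv": inv})

# Within (fixed effect) estimator via least-squares dummy variables:
# the C(firm) dummies absorb the firm effect, so the cf coefficient is consistent
fe = smf.ols("inv ~ cf + C(firm)", data=df).fit()
print(round(fe.params["cf"], 2))
```

Because cash flow is built to correlate with the firm effect, pooled OLS without the dummies would overstate the slope; the within estimator recovers it.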
Procedia PDF Downloads 243
491 Financial Innovations for Companies Offered by Banks: Polish Experience
Authors: Joanna Błach, Anna Doś, Maria Gorczyńska, Monika Wieczorek-Kosmala
Abstract:
Financial innovations can be regarded as both the cause and the effect of the evolution of the financial system. Most financial innovations are created by various financial institutions for their own purposes and needs. However, due to their diversity, financial innovations can also be applied by various business entities (other than financial institutions). This paper focuses on the potential application of financial innovations by non-financial companies. It is assumed that financial innovations may be effectively applied in all fields of corporate financial decisions, integrating financial management with the risk management process. Appropriate application of financial innovations may enhance the development of the company and increase its value by improving its financial situation and reducing the level of risk. On the other hand, misused financial innovations may become a source of extra risk for the company, threatening its further operation. The main objective of the paper is to identify the major types of financial innovations offered to non-financial companies by the banking system in Poland. It also aims at identifying the main factors determining the creation of financial innovations in the banking system in Poland and indicating future directions of their development. This paper consists of a conceptual and an empirical part. The conceptual part, based on a theoretical study, is focused on the determinants of the process of financial innovation and its application by non-financial companies. The theoretical study is followed by empirical research based on an analysis of the actual offer of the 20 biggest banks operating in Poland with regard to financial innovations offered to SMEs and large corporations. These innovations are classified according to the main functions of integrated financial management: financing, investment, working capital management, and risk management.
The empirical study has proved that the biggest banks operating in the Polish market offer their business customers many types and classes of financial innovations. This offer appears vast and adequate to the needs and purposes of Polish non-financial companies. It was observed that financial innovations pertaining to financing decisions dominate in the banks’ offer. However, due to the high diversification of the offered financial innovations, business customers may effectively apply them in all fields and areas of integrated financial management. It should be underlined that the banks’ offer is highly dispersed, which may limit the implementation of financial innovations in corporate finance. It would also be recommended for the banks operating in the Polish market to intensify their education campaigns aimed at increasing knowledge about financial innovations among business customers.
Keywords: banking products and services, banking sector in Poland, corporate financial management, financial innovations, theory of innovation
Procedia PDF Downloads 303
490 Policies for Circular Bioeconomy in Portugal: Barriers and Constraints
Authors: Ana Fonseca, Ana Gouveia, Edgar Ramalho, Rita Henriques, Filipa Figueiredo, João Nunes
Abstract:
Due to persistent climate pressures, there is a need to find a resilient economic system that is regenerative in nature. The bioeconomy offers the possibility of replacing non-renewable and non-biodegradable materials derived from fossil fuels with ones that are renewable and biodegradable, while a circular economy aims at sustainable and resource-efficient operations. The term "Circular Bioeconomy", which can be summarized as all activities that transform biomass for its use in various product streams, expresses the interaction between these two ideas. Portugal has a very favourable context for promoting a circular bioeconomy due to its variety of climates and ecosystems, availability of biologically based resources, location, and geomorphology. Recently, there have been political and legislative efforts to develop the Portuguese circular bioeconomy. The Action Plan for a Sustainable Bioeconomy, approved in 2021, is composed of five axes of intervention, ranging from sustainable production and the use of regionally based biological resources to the development of a circular and sustainable bioindustry through research and innovation. However, as some statistics show, Portugal is still far from achieving circularity. According to Eurostat, Portugal has a circularity rate of 2.8%, the second lowest among the member states of the European Union. Several challenges contribute to this scenario, including sectoral heterogeneity and fragmentation, the prevalence of small producers, a lack of attractiveness for younger generations, and the absence of collaborative solutions amongst producers and along value chains. Regarding the Portuguese industrial sector, there is a tendency towards complex bureaucratic processes, which leads to economic and financial obstacles and an unclear national strategy.
Together with the limited number of incentives the country has to offer to those that intend to abandon the linear economic model, many entrepreneurs are hesitant to invest the capital needed to make their companies more circular. The absence of disaggregated, georeferenced, and reliable information regarding the actual availability of biological resources is also a major issue. Low literacy on the bioeconomy among many of the sectoral agents and in society in general directly impacts production decisions and final consumption. The WinBio project seeks to outline a strategic approach for the management of weaknesses and opportunities in the technology transfer process, given the reality of the territory, through road mapping and national and international benchmarking. The work included the identification and analysis of agents in the interior region of Portugal, natural endogenous resources, and the products and processes associated with potential development. Specific flows of biological wastes, possible value chains, and the potential for replacing critical raw materials with bio-based products were assessed, taking into consideration other countries with a mature bioeconomy. The study found that the food industry, agriculture, forestry, and fisheries generate huge amounts of waste streams, which in turn provide an opportunity for the establishment of local bio-industries powered by this biomass. The project identified biological resources with potential for replication and applicability in the Portuguese context. The richness of natural resources and the known potential of the interior region of Portugal are major keys to developing the circular economy and the sustainability of the country.
Keywords: circular bioeconomy, interior region of Portugal, regional development, public policy
Procedia PDF Downloads 94
489 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment
Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane
Abstract:
Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in digital investigation are not keeping up with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence is invaluable in identifying crime. It has been observed that algorithms based on artificial intelligence (AI) are highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which assists in developing a plausible theory that can be presented as evidence in court. Researchers and other authorities have used the available data as evidence in court to convict a person. This research paper aims at developing a multiagent framework for digital investigations (MADIK) using specific intelligent software agents (ISA). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. MADIK is implemented using the Java Agent Development Framework, developed in Eclipse, with a Postgres repository and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISA and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute.
As a result of loading the agents, 5 percent of the time was lost, as the File Path Agent prescribed 1,510 deletions while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 seconds), whereas the MADIK framework accomplished this in 16 minutes (960 seconds). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
Keywords: artificial intelligence, computer science, criminal investigation, digital forensics
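A Hash Set Agent of the kind described typically compares file digests against a set of known hashes. A minimal, hypothetical sketch (not the MADIK implementation; the file names, directory, and hash set are invented for illustration):

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 in fixed-size chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def hash_set_agent(root: Path, known_hashes: set) -> list:
    """Flag every file under root whose digest appears in a known hash set."""
    return [p for p in root.rglob("*") if p.is_file() and sha256_of(p) in known_hashes]

# Demo on a throwaway directory with one benign and one 'known-bad' file
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "benign.txt").write_bytes(b"hello")
    (root / "suspect.bin").write_bytes(b"malware-sample")
    known = {hashlib.sha256(b"malware-sample").hexdigest()}
    hits = hash_set_agent(root, known)
    print([p.name for p in hits])
```

In practice the known-hash set would come from a reference database rather than being computed inline.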
Procedia PDF Downloads 213
488 Current Zonal Isolation Regulation and Standards: A Compare and Contrast Review in Plug and Abandonment
Authors: Z. A. Al Marhoon, H. S. Al Ramis, C. Teodoriu
Abstract:
Well integrity is one of the major elements considered when drilling geothermal, oil, and gas wells. Well integrity means minimizing the risk of unplanned fluid flow in the wellbore throughout the well's lifetime. It is maximized by applying technical concepts along with sound practices and strategic planning. These practices are usually governed by standardization and regulation entities. Practices during well construction can affect the integrity of the seal at the time of abandonment. On the other hand, achieving a perfect barrier system is impracticable due to the cost involved. This results in a needed balance between regulatory requirements and practical application. Guidelines are only effective when they are attainable in practice. Various governmental regulations and international standards give different guidelines on what constitutes high-quality isolation from unwanted flow. Each regulatory or standardization body differs in its requirements based on the abandonment objective. Some regulations account more for environmental impact, water table contamination, and possible leaks. Other regulations might lean towards driving more economic benefit while achieving acceptable isolation criteria. The research methodology used here combines a literature review with a compare-and-contrast analysis. The literature review of various zonal isolation regulations and standards covers guidelines from NORSOK (the Norwegian governing entity), BSEE (the USA offshore governing entity), and API (American Petroleum Institute) combined with ISO (International Organization for Standardization). The compare-and-contrast analysis is conducted by assessing the objective of each abandonment regulation and standard. The current state of well barrier regulation is a balancing act. On one side of this balance, the environmental impact and complete zonal isolation are considered.
The other side of the scale is practical application and the associated cost. Some standards provide a fair amount of detail concerning technical requirements and are often flexible with the associated cost. These guidelines cover environmental impact with rules that prevent major or disastrous environmental effects of improper sealing of wells. Usually, these regulations are concerned with the near-term rather than the long-term behaviour of the seal. Consequently, applying these guidelines becomes more feasible from a cost point of view for the plugging entities. On the other hand, other regulations have well integrity procedures that lean towards stricter environmental restrictions with increased associated cost requirements. The environmental impact is covered in its entirety, including medium to small environmental impacts of barrier-installation operations, and clear and precise attention to long-term leakage prevention is present in these regulations. The compare-and-contrast analysis of the literature showed that various objectives might tip the scale from one side of the balance (cost) to the other (sealing quality), especially in reference to zonal isolation. Furthermore, investing in initial well construction is a crucial part of ensuring safe final well abandonment: the safety and the cost savings at the end of the well life cycle depend upon a well-constructed isolation system at the beginning of the life cycle. Long-term studies on zonal isolation using various hydraulic or mechanical materials need to take place to further assess permanently abandoned wells and achieve the desired balance. Well drilling and isolation techniques will be more effective when they are operationally feasible and have a reasonable associated cost to aid the local economy.
Keywords: plug and abandon, P&A regulation, P&A standards, international guidelines, gap analysis
Procedia PDF Downloads 134
487 Optimization for Autonomous Robotic Construction by Visual Guidance through Machine Learning
Authors: Yangzhi Li
Abstract:
Network transfer of information and performance customization are now viable methods of digital industrial production in the era of Industry 4.0. Robot platforms and network platforms have grown more important in digital design and construction. The pressing need for novel building techniques is driven by the growing labour scarcity problem and increased awareness of construction safety. Robotic approaches in construction research are regarded as an extension of operational and production tools. Several technological theories related to robot autonomous recognition, including high-performance computing, physical system modeling, extensive sensor coordination, and deep learning on large datasets, have not been explored in intelligent construction, and relevant transdisciplinary theory and practice research still has specific gaps. Optimizing high-performance computing and autonomous recognition visual guidance technologies improves the robot's grasp of the scene and its capacity for autonomous operation. Intelligent vision guidance for industrial robots has a serious issue with camera calibration, and the use of intelligent visual guidance and identification technologies in industrial production carries strict accuracy requirements; visual recognition systems therefore face precision challenges that directly impact the effectiveness and standard of industrial production, necessitating stronger study of positioning precision in visual guidance and recognition technology. To best facilitate the handling of complicated components, an approach for the visual recognition of parts utilizing machine learning algorithms is proposed. This study identifies the position of target components by detecting the information at the boundary and corners of a dense point cloud and determining the aspect ratio in accordance with the guidelines for the modularization of building components.
To collect and use components, operational processing systems assign them to the same coordinate system based on their locations and postures. Inclination detection on the RGB image and verification against the depth image are used to determine the component's present posture. Finally, a virtual environment model for the robot's obstacle-avoidance route is constructed using the point cloud information.
Keywords: robotic construction, robotic assembly, visual guidance, machine learning
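The boundary and aspect-ratio step described above can be sketched as an axis-aligned bounding-box computation on a point cloud. This is an illustrative sketch on a synthetic cloud, not the paper's pipeline; the beam-like component dimensions are invented for the example:

```python
import numpy as np

def component_pose(points: np.ndarray):
    """Axis-aligned bounding box, centroid, and aspect ratio of a point cloud.

    points: (N, 3) array of x, y, z coordinates.
    """
    mins, maxs = points.min(axis=0), points.max(axis=0)
    extent = maxs - mins                   # bounding-box dimensions per axis
    centroid = points.mean(axis=0)         # rough component position
    aspect_ratio = extent.max() / extent.min()
    return centroid, extent, aspect_ratio

# Synthetic dense cloud for a 2.0 x 0.5 x 0.5 beam-like component
rng = np.random.default_rng(0)
cloud = rng.uniform([0.0, 0.0, 0.0], [2.0, 0.5, 0.5], size=(10_000, 3))
centroid, extent, ratio = component_pose(cloud)
print(round(ratio, 1))   # aspect ratio near 2.0 / 0.5 = 4
```

A real system would first segment the component's points from the scene and typically use an oriented (not axis-aligned) bounding box.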
Procedia PDF Downloads 87
486 Sustainable Urban Regeneration: The New Vocabulary and the Timeless Grammar of the Urban Tissue
Authors: Ruth Shapira
Abstract:
Introduction: The rapid urbanization of the last century confronts planners, regulatory bodies, developers and, most of all, the public with seemingly unresolved conflicts regarding the values, capital, and wellbeing of the built and un-built urban space. There is an uncontrolled change in the scale of the urban form and the rhythm of urban life, which has seen no significant progress in the last two to three decades despite the growing urban population. The objective of this paper is to analyze some of these fundamental issues through the case study of a relatively small town in the center of Israel (Kiryat Ono, 36,000 inhabitants), unfold the deep structure of qualities versus disruptors, present some cures that we have developed to bridge over them, and humbly suggest a practice that may bring about a sustainable new urban environment based on timeless values of the past, an approach that can be generic for similar cases. Basic Methodologies: The object, the town of Kiryat Ono, shall be experimented upon in a series of four action processes: de-composition, re-composition, the centering process and, finally, controlled structural disintegration. Each stage will be based on facts, analysis of previous multidisciplinary interventions on various layers, and the inevitable reaction of the object, leading to conclusions based on innovative theoretical and practical methods that we have developed and that we believe are proper for the open-ended network, setting the rules for contemporary urban society to cluster by, and thus a new urban vocabulary based on the old structure of times past. The Study: Kiryat Ono was founded 70 years ago as an agricultural settlement and rapidly turned into an urban entity. In spite of the massive intensification, the original DNA of the old small town was still deeply embedded, mostly in the quality of the public space and in the sense of clustered communities.
In the past 20 years, the growing demand for housing has been addressed at the national level with recent master plans and urban regeneration policies, mostly encouraging individual economic initiatives. Unfortunately, due to the obsolete existing planning platform, the present urban renewal is characterized by developer pressure, a dramatic change in building scale, and widespread disintegration of the existing urban and social tissue. Our office was commissioned to conceptualize two master plans for the two contradictory processes of Kiryat Ono’s future: intensification and conservation. Following a comprehensive investigation into the deep structures and qualities of the existing town, we developed a new vocabulary of conservation terms, thus redefining the sense of place. The main challenge was to create master plans that offer a regulatory basis for the accelerated and sporadic development, provide for the public good, and preserve the characteristics of the place, consisting of a toolbox of design guidelines with the ability to reorganize space along the time axis in a sustainable way. In conclusion: the system of rules that we have developed can generate endless possible patterns, making sure that with each implemented fragment an event is created and a better place is revealed. It takes time and perseverance, but it seems to be the way to provide a healthy and sustainable framework for the accelerated urbanization of our chaotic present.
Keywords: sustainable urban design, intensification, emergent urban patterns, sustainable housing, compact urban neighborhoods, sustainable regeneration, restoration, complexity, uncertainty, need for change, implications of legislation on local planning
Procedia PDF Downloads 389
485 Measuring the Economic Impact of Cultural Heritage: Comparative Analysis of the Multiplier Approach and the Value Chain Approach
Authors: Nina Ponikvar, Katja Zajc Kejžar
Abstract:
While the positive impacts of heritage on a broad societal spectrum have long been recognized and measured, the economic effects of the heritage sector are often less visible and frequently underestimated. At the macro level, economic effects are usually studied using one of two mainstream approaches: the multiplier approach or the value chain approach. Consequently, there is limited comparability of the empirical results due to the use of different methodological approaches in the literature. Furthermore, it is often not clear on which criteria the chosen approach was selected. Our aim is to draw attention to the difference in the scope of effects encompassed by the two most frequent methodological approaches to the valuation of the economic effects of cultural heritage at the macroeconomic level, i.e., the multiplier approach and the value chain approach. We show that while the multiplier approach provides a systematic, theory-based view of economic impacts, it requires more data and analysis, whereas the value chain approach has less solid theoretical foundations and depends on the availability of appropriate data to identify the contribution of cultural heritage to other sectors. We conclude that the multiplier approach underestimates the economic impact of cultural heritage, mainly due to the narrow definition of cultural heritage in the statistical classification and the inability to identify the part of the contribution of cultural heritage that is hidden in other sectors. Yet it is not possible to clearly determine whether the value chain method overestimates or underestimates the actual economic impact of cultural heritage, since there is a risk that the direct effects are overestimated and double counted, while not all indirect and induced effects are considered. Accordingly, these two approaches are not substitutes but rather complements.
Consequently, a direct comparison of the estimated impacts is not possible and should not be made due to the different scope. To illustrate the difference in the impact assessment of cultural heritage, we apply both approaches to the case of Slovenia in the 2015-2022 period and measure the economic impact of the cultural heritage sector in terms of turnover, gross value added, and employment. The empirical results clearly show that the estimation of the economic impact of a sector using the multiplier approach is more conservative, while the estimates based on the value chain approach capture a much broader range of impacts. According to the multiplier approach, each euro in the cultural heritage sector generates an additional 0.14 euros in indirect effects and an additional 0.44 euros in induced effects. Based on the value chain approach, the indirect economic effect of the “narrow” heritage sectors is amplified by the impact of cultural heritage activities on other sectors. Accordingly, every euro of sales and every euro of gross value added in the cultural heritage sector generates approximately 6 euros of sales and 4 to 5 euros of value added in other sectors. In addition, each employee in the cultural heritage sector is linked to 4 to 5 jobs in other sectors.
Keywords: economic value of cultural heritage, multiplier approach, value chain approach, indirect effects, Slovenia
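The multiplier figures quoted above imply a simple total-effect calculation: direct spending plus indirect and induced increments. A sketch of that standard multiplier identity, using the rates from the abstract (per euro of direct turnover):

```python
def total_effect(direct: float, indirect_rate: float, induced_rate: float) -> float:
    """Total impact implied by simple output multipliers: direct + indirect + induced."""
    return direct * (1.0 + indirect_rate + induced_rate)

# Rates quoted for the Slovenian cultural heritage sector:
# 0.14 euros indirect and 0.44 euros induced per euro of direct turnover
print(total_effect(1.0, 0.14, 0.44))   # total effect per euro spent
```

So under the multiplier approach, one euro of direct turnover is associated with about 1.58 euros of total output, far below the roughly 6-to-1 sales linkage the value chain approach attributes to the sector.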
Procedia PDF Downloads 78
484 Department of Social Development/Japan International Cooperation Agency's Journey from South African Community to Southern African Region
Authors: Daisuke Sagiya, Ren Kamioka
Abstract:
South Africa ratified the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD) on 30th November 2007. In line with this, the Department of Social Development (DSD) revised the White Paper on the Rights of Persons with Disabilities (WPRPD), and the Cabinet approved it on 9th December 2015. The South African government is striving towards the elimination of poverty and inequality in line with the UNCRPD and WPRPD. However, few programmes and services have been provided to persons with disabilities in rural communities. In order to address current discriminatory practices, disunity, and limited self-representation in rural communities, the DSD, in cooperation with the Japan International Cooperation Agency (JICA), is implementing the 'Project for the Promotion of Empowerment of Persons with Disabilities and Disability Mainstreaming' from May 2016 to May 2020. The project targets rural communities as the project sites, namely 1) Collins Chabane municipality, Vhembe district, Limpopo and 2) Maluti-a-Phofung municipality, Thabo Mofutsanyana district, Free State. The project aims at developing good practices on Community-Based Inclusive Development (CBID) at the project sites, which will be documented as a guideline and applied in other provinces in South Africa and neighbouring countries (Lesotho, Swaziland, Botswana, Namibia, Zimbabwe, and Mozambique). In cooperation with provincial and district DSD and local government, the project is currently implementing various community activities, for example, the establishment of Self-Help Groups (SHG) of persons with disabilities and Peer Counselling in the villages, and will conduct Disability Equality Training (DET) and accessibility workshops in order to enhance CBID in the project sites.
In order to universalise good practices on CBID, the authors explain lessons learned from the project by utilising theories from disability and development studies and community psychology, such as the social model of disability, the twin-track approach, empowerment theory, sense of community, and the helper therapy principle. The authors conclude that CBID is a strong tool for realising the social participation of persons with disabilities in rural communities, and that persons with disabilities must play central roles in all spheres of CBID activities.
Keywords: community-based inclusive development, disability mainstreaming, empowerment of persons with disabilities, self-help group
Procedia PDF Downloads 242
483 Phytochemical Investigation, Leaf Structure and Antimicrobial Screening of Pistacia lentiscus against Multi-Drug Resistant Bacteria
Authors: S. Mamoucha, N. Tsafantakis, T. Ioannidis, S. Chatzipanagiotou, C. Nikolaou, L. Skaltsounis, N. Fokialakis, N. Christodoulakis
Abstract:
Introduction: Pistacia lentiscus L. (well known as the mastic tree) is an evergreen sclerophyllous shrub that thrives extensively in the eastern Mediterranean area, yet only the trees cultivated in the southern region of the Greek island of Chios produce mastic resin. Different parts of P. lentiscus L. var. chia have been used in folk medicine for various purposes, such as tonic, aphrodisiac, antiseptic, antihypertensive, and for the management of dental, gastrointestinal, liver, urinary, and respiratory tract disorders. Several studies have focused on the antibacterial activity of its resin (gum) and its essential oil. However, no study has combined the anatomy of the plant organs, the phytochemical profile, and antibacterial screening of the plant. In our attempt to discover novel bioactive metabolites from the mastic tree, we screened its antibacterial activity not only against ATCC strains but also against clinical, resistant strains. Materials-methods: Leaves were investigated using Transmission (TEM) and Scanning Electron Microscopy (SEM). Histochemical tests were performed on fresh and fixed tissue. Extracts prepared from dried, powdered leaves using three different solvents (DCM, MeOH, and H2O), as well as the wastewater obtained after a hydrodistillation process for essential oil production, were screened for their phytochemical content and antibacterial activity. Metabolite profiling of polar and non-polar extracts was recorded by GC-MS and LC-HRMS techniques and analyzed using in-house and commercial libraries. The antibacterial screening was performed against Staphylococcus aureus ATCC25923, Escherichia coli ATCC25922, Pseudomonas aeruginosa ATCC27853, and against the clinical, resistant strains methicillin-resistant S. aureus (MRSA), carbapenem-resistant metallo-β-lactamase-producing P. aeruginosa (VIM), Klebsiella pneumoniae carbapenemase producers (KPCs), and resistant Acinetobacter baumannii strains. The antibacterial activity was tested by the Kirby-Bauer and the agar well diffusion methods. 
The zone of inhibition (ZI) of each extract was measured and compared with those of common antibiotics. Results: The leaf is compact, with inosclereids and numerous idioblasts containing a globular, spiny crystal. The major nerves of the leaf contain a resin duct. Mesophyll cells showed accumulation of osmiophilic metabolites. Histochemical treatments identified secondary metabolites at subcellular localization. The phytochemical investigation revealed the presence of a large number of secondary metabolites belonging to different chemical groups, such as terpenoids, phenolic compounds (mainly myricetin, kaempferol, and quercetin glycosides), and phenolic and fatty acids. Among the extracts, the hydrodistillation wastewater achieved the best results against most of the bacteria tested; MRSA, VIM, and A. baumannii were inhibited. Conclusion: Extracts from plants have recently been of great interest with respect to their antimicrobial activity. Their use emerged from a growing tendency to replace synthetic antimicrobial agents with natural ones. Leaves of P. lentiscus L. var. chia showed high antimicrobial activity, even against drug-resistant bacteria. Future prospects concern a better understanding of the mode of action of the antibacterial activity, the isolation of the most bioactive constituents, and clarification of whether the activity is related to a single compound or to the synergistic effect of several.
Keywords: antibacterial screening, leaf anatomy, phytochemical profile, Pistacia lentiscus var. chia
Procedia PDF Downloads 275
482 From Vegetarian to Cannibal: A Literary Analysis of a Journey of Innocence in ‘Life of Pi’
Authors: Visvaganthie Moodley
Abstract:
Language use and aesthetic appreciation are integral to meaning-making in prose, as they are in poetry. However, in comparison to poetic analysis, literary analysis of prose that focuses on linguistics and stylistics is somewhat scarce, as it generally requires the study of lengthy texts. Nevertheless, the effect of linguistic and stylistic features in prose, consciously designed by authors to create specific effects and convey preconceived messages, is drawing increasing attention from linguists and literary experts. A close examination of language use in prose can, among a host of literary purposes, convey emotive and cognitive values and contribute to interpretations of how fictional characters are represented to the imaginative reader. This paper provides a literary analysis of Yann Martel’s narrative of a 14-year-old Indian boy, Pi, who survived the wreck of a Japanese cargo ship, by focusing on his 227-day journey of tribulations, along with a Bengal tiger, on a lifeboat. The study favours a pluralistic approach blending literary criticism, linguistic analysis, and stylistic description. It adopts Leech and Short’s (2007) broad framework of linguistic and stylistic categories (lexical categories, grammatical categories, figures of speech, and context and cohesion) as well as a range of other relevant linguistic phenomena to show how the narrator, Pi, and the author influence the reader’s interpretations of Pi’s character. Such interpretations are made through the lens of Freud’s psychoanalytical theory (which focuses on the interplay of the instinctual id, the ego, and the moralistic superego) and Blake’s philosophy of innocence and experience (the two contrary states of the human soul). The paper traces Pi’s transformation from animal-loving, God-fearing vegetarian to brutal animal slayer and cannibal in his journey of survival. 
By a close examination of the linguistic and stylistic features of the narrative, it argues that, despite evidence of butchery and cannibalism, Pi’s gruesome behaviour is motivated by extreme physiological and psychological duress and not by intentional malice. Finally, the paper concludes that the voices of the narrator, Pi, and of the author, Martel, act as powerful persuasive agents in influencing the reader to respond with a sincere flow of sympathy for Pi and to judge him as having retained his innocence in his instinctual need for survival.
Keywords: foregrounding, innocence and experience, lexis, literary analysis, psychoanalytical lens, style
Procedia PDF Downloads 170
481 Sequential Mixed Methods Study to Examine the Potentiality of Blackboard-Based Collaborative Writing as a Solution Tool for Saudi Undergraduate EFL Students’ Writing Difficulties
Authors: Norah Alosayl
Abstract:
English is considered the most important foreign language in the Kingdom of Saudi Arabia (KSA) because of its usefulness as a global language compared to Arabic. As students’ desire to improve their English language skills has grown, English writing has been identified as the most difficult problem for Saudi students in their language learning. Although English in Saudi Arabia is taught beginning in the seventh grade, many students have problems at the university level, especially in writing, due to a gap between what is taught in secondary and high schools and university expectations: pupils generally study English at school from a single book with a few vocabulary and grammar exercises, and there are no specific writing lessons. Moreover, personal teaching experience at King Saud bin Abdulaziz University shows that students face real problems with their writing. This paper revolves around Blackboard-based collaborative writing to help first-year undergraduate Saudi EFL students, enrolled in two sections of ENGL 101 in the first semester of 2021 at King Saud bin Abdulaziz University, practise the skill they find most difficult in their writing through small-group work. Therefore, a sequential mixed methods design is suited to this study. The first phase aims to highlight the most difficult skill experienced by students in an official writing exam evaluated by their teachers using the official rubric of King Saud bin Abdulaziz University. In the second phase, the study will investigate the benefits of social interaction on the process of learning writing. Students will be provided with five collaborative writing tasks via the discussion feature on Blackboard to practise the skill they found difficult in writing. The tasks will be formed based on social constructivist theory and pedagogic frameworks. The interaction will take place between peers and their teachers. 
The frequencies of students’ participation and the quality of their interaction will be observed through manual counting and screenshotting. This will help the researcher understand how actively students work on the tasks through the amount of their participation and will also distinguish the type of interaction (on task, about task, or off task). Semi-structured interviews will be conducted with students to understand their perceptions of the Blackboard-based collaborative writing tasks, and questionnaires will be distributed to identify students’ attitudes towards the tasks.
Keywords: writing difficulties, blackboard-based collaborative writing, process of learning writing, interaction, participation
Procedia PDF Downloads 193
480 Design and Implementation of Generative Models for Odor Classification Using Electronic Nose
Authors: Kumar Shashvat, Amol P. Bhondekar
Abstract:
Among the five senses, smell is the most evocative and the least understood. Odor testing has long seemed mysterious, and odor data has remained obscure to most practitioners. The problem of odor recognition and classification is therefore important to address: the ability to smell an artifact and predict whether it is still fit for use or has become undesirable for consumption is worth capturing in a model. The prevailing industrial standard for this classification is colour-based; however, odor can be a better discriminator than colour and, if incorporated into a machine, would be highly useful. For cataloguing the odors of peas, trees, and cashews, various discriminative approaches have been used. Discriminative approaches offer good predictive performance and have been widely used in many applications, but they are unable to make effective use of unlabeled information. In such scenarios, generative approaches have better applicability, as they can handle problems such as settings where the variability in the range of possible input vectors is enormous. Generative models are used in machine learning either to model data directly or as an intermediate step to form a probability density function. In this work, Linear Discriminant Analysis and the Naive Bayes classifier were used for classification of the odor of cashews. Linear Discriminant Analysis is a method used in data classification, pattern recognition, and machine learning to discover a linear combination of features that characterizes or separates two or more classes of objects or events. The Naive Bayes algorithm is a classification approach based on Bayes' rule together with a set of conditional independence assumptions. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. 
The main advantage of generative models is that they make stronger assumptions about the data, specifically about the distribution of the predictors given the response variable. The instrument used for artificial odor sensing and classification is an electronic nose, a device designed to imitate the human sense of smell by providing an analysis of individual chemicals or chemical mixtures. The experimental results were evaluated in terms of the performance measures accuracy, precision, and recall. They show that the overall performance of Linear Discriminant Analysis was better than that of the Naive Bayes classifier on the cashew dataset.
Keywords: odor classification, generative models, naive bayes, linear discriminant analysis
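The comparison described above can be sketched with scikit-learn. The e-nose data is not public, so the snippet below uses a synthetic stand-in: eight sensor channels sharing a strong common drift (correlated noise), with the discriminative signal in one channel. The channel count, class means, and covariance are all illustrative assumptions, not the authors' data; the point is only that correlated channels violate Naive Bayes' independence assumption, while LDA's pooled covariance can account for them.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Two odor classes, 8 sensor channels with a shared drift component
# (off-diagonal covariance 0.7); class signal lives in channel 0 only.
n, d = 600, 8
cov = 0.3 * np.eye(d) + 0.7 * np.ones((d, d))
X = np.vstack([rng.multivariate_normal(np.zeros(d), cov, n),
               rng.multivariate_normal(np.r_[1.2, np.zeros(d - 1)], cov, n)])
y = np.repeat([0, 1], n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

scores = {}
for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("NB", GaussianNB())]:
    scores[name] = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
    print(name, round(scores[name], 3))
```

On data of this shape, LDA typically outperforms Gaussian Naive Bayes, mirroring the abstract's finding, because it estimates the full covariance and effectively subtracts the common drift before classifying.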
Procedia PDF Downloads 390
479 Using Arellano-Bover/Blundell-Bond Estimator in Dynamic Panel Data Analysis – Case of Finnish Housing Price Dynamics
Authors: Janne Engblom, Elias Oikarinen
Abstract:
A panel dataset is one that follows a given sample of individuals over time and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models, which form a wide range of linear models. A special class of panel data models is dynamic in nature. A complication regarding a dynamic panel data model that includes the lagged dependent variable is the endogeneity bias of the estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Arellano-Bover/Blundell-Bond generalized method of moments (GMM) estimator, an extension of the Arellano-Bond model in which past values and different transformations of past values of the potentially problematic independent variable are used as instruments together with other instrumental variables. The Arellano-Bover/Blundell-Bond estimator augments Arellano-Bond by making the additional assumption that first differences of the instrument variables are uncorrelated with the fixed effects. This allows the introduction of more instruments and can dramatically improve efficiency. It builds a system of two equations, the original equation and the transformed one, and is also known as system GMM. In this study, Finnish housing price dynamics were examined empirically using the Arellano-Bover/Blundell-Bond estimation technique together with ordinary least squares (OLS). The aim of the analysis was to provide a comparison between conventional fixed-effects panel data models and dynamic panel data models. 
The Arellano-Bover/Blundell-Bond estimator is suitable for this analysis for a number of reasons: it is a general estimator designed for situations with 1) a linear functional relationship; 2) one left-hand-side variable that is dynamic, depending on its own past realizations; 3) independent variables that are not strictly exogenous, meaning they are correlated with past and possibly current realizations of the error; 4) fixed individual effects; and 5) heteroskedasticity and autocorrelation within individuals but not across them. Based on data for 14 Finnish cities over 1988-2012, estimates of short-run housing price dynamics differed considerably depending on the model and instruments used. In particular, the choice of instrumental variables caused variation in the model estimates and in their statistical significance. This was particularly clear when comparing OLS estimates with those of the different dynamic panel data models. Estimates provided by the dynamic panel data models were more in line with the theory of housing price dynamics.
Keywords: dynamic model, fixed effects, panel data, price dynamics
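The endogeneity problem motivating these estimators can be made concrete with a small simulation. The sketch below is not the authors' model or data, and it implements only the simplest ancestor of Arellano-Bond (the Anderson-Hsiao instrumental-variables idea: first-difference away the fixed effect, then instrument the lagged difference with an earlier level); full system GMM needs a dedicated package. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
N, T, rho = 5000, 10, 0.5          # panels, periods, true AR(1) coefficient

# Dynamic panel with fixed effects: y_it = rho*y_{i,t-1} + alpha_i + eps_it
alpha = rng.normal(size=N)
y = np.empty((N, T))
y[:, 0] = alpha / (1 - rho) + rng.normal(size=N)   # near-stationary start
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + alpha + rng.normal(size=N)

# Naive within (fixed-effects) estimator: demean per individual, then OLS.
# Biased because the demeaned lag is correlated with the demeaned error
# (Nickell bias, of order -(1+rho)/(T-1)).
yd = y - y.mean(axis=1, keepdims=True)
x, z = yd[:, :-1].ravel(), yd[:, 1:].ravel()
rho_within = (x @ z) / (x @ x)

# Anderson-Hsiao IV: first-difference out alpha_i, then instrument the
# endogenous dy_{t-1} with the level y_{t-2}, uncorrelated with d(eps_t).
dy = np.diff(y, axis=1)            # dy[:, k] = y_{k+1} - y_k
dep = dy[:, 1:].ravel()            # dy_t
lag = dy[:, :-1].ravel()           # dy_{t-1} (endogenous regressor)
ins = y[:, :T - 2].ravel()         # y_{t-2} (instrument)
rho_iv = (ins @ dep) / (ins @ lag)

print(round(rho_within, 3), round(rho_iv, 3))  # within well below 0.5; IV near 0.5
```

The within estimate is pulled noticeably below the true coefficient, while the IV estimate recovers it; Arellano-Bond and Blundell-Bond extend this idea by using the full set of available lags (and, for system GMM, lagged differences as instruments for the level equation) to gain efficiency.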
Procedia PDF Downloads 1510
478 Coulomb-Explosion Driven Proton Focusing in an Arched CH Target
Authors: W. Q. Wang, Y. Yin, D. B. Zou, T. P. Yu, J. M. Ouyang, F. Q. Shao
Abstract:
High-energy-density states, i.e., matter and radiation at energy densities in excess of 10^11 J/m^3, are relevant to materials science, nuclear physics, astrophysics, and geophysics. Laser-driven particle beams are well suited to heating matter as a trigger owing to their unique properties of ultrashort duration and low emittance. Compared to X-ray and electron sources, it is easier to generate uniformly heated large-volume material with proton and ion beams because of their highly localized energy deposition. With the construction of state-of-the-art high-power laser facilities, it is becoming possible to create extreme conditions of high temperature and high density in the laboratory. It has been demonstrated that, on a picosecond time scale, solid-density material can be isochorically heated to over 20 eV by the ultrafast proton beam generated from spherically shaped targets. For this technique, the proton energy density plays a crucial role in the formation of warm dense matter states. Recently, several methods have been devoted to realizing the focusing of the accelerated protons, involving externally applied static fields or specially designed targets interacting with a single laser pulse or multiple laser pulses. In previous works, two co-propagating or counter-propagating laser pulses were employed to strike a submicron plasma shell. However, ultra-high pulse intensities, accurate temporal synchronization, and undesirable transverse instabilities over long times remain intractable in current experimental implementations. Here, a mechanism for the focusing of laser-driven proton beams from two-ion-species arched targets is investigated by multi-dimensional particle-in-cell simulations. When an intense linearly polarized laser pulse impinges on the thin arched target, all electrons are completely evacuated, leading to a Coulomb-explosive electric field originating mostly from the heavier carbon ions. 
Owing to this radially isotropic field, the lighter protons, in a reference frame moving at the ion sound speed, are accelerated and effectively focused. At a laser intensity of 2.42×10^21 W/cm^2, a ballistic proton bunch with an energy density as high as 2.15×10^17 J/m^3 is produced, and the highest proton energy and the focusing position agree well with theoretical predictions.
Keywords: Coulomb explosion, focusing, high-energy-density, ion acceleration
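To give a sense of the regime, the quoted intensity can be converted to the dimensionless laser amplitude a0 with the standard formula for linear polarization, a0 ≈ 0.85·sqrt(I_18·λ_μm²), where I_18 is the intensity in units of 10^18 W/cm². The abstract does not state the wavelength, so 0.8 μm (a typical Ti:sapphire value) is assumed here:

```python
import math

# Normalized laser amplitude for the intensity quoted in the abstract.
# Wavelength is NOT given there; 0.8 um (Ti:sapphire) is an assumption.
I = 2.42e21          # peak intensity, W/cm^2 (from the abstract)
lam_um = 0.8         # assumed wavelength, micrometres
a0 = 0.85 * math.sqrt((I / 1e18) * lam_um**2)
print(round(a0, 1))  # a0 >> 1: strongly relativistic electron motion
```

An a0 of this size (a few tens) is well into the regime where complete electron evacuation of a thin target, and hence the Coulomb explosion the authors exploit, is plausible.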
Procedia PDF Downloads 346
477 Temperature Dependent Magneto-Transport Properties of MnAl Binary Alloy Thin Films
Authors: Vineet Barwal, Sajid Husain, Nanhe Kumar Gupta, Soumyarup Hait, Sujeet Chaudhary
Abstract:
High perpendicular magnetic anisotropy (PMA) and a low damping constant (α) in ferromagnets are among the key requirements for potential applications in the field of spintronics. In this regard, the ferromagnetic τ-phase of MnAl possesses the highest PMA (Ku > 10^7 erg/cc) at room temperature, high saturation magnetization (Ms ~ 800 emu/cc), and a Curie temperature of ~395 K. In this work, we have investigated the magnetotransport behaviour of this potentially useful binary system. MnₓAl₁₋ₓ films were synthesized by co-sputtering (pulsed DC magnetron sputtering) on Si/SiO₂ (where SiO₂ is the native oxide layer) substrates using 99.99% pure Mn and Al sputtering targets. Films of constant thickness (~25 nm) were deposited at different growth temperatures (Tₛ), viz. 30, 300, 400, 500, and 600 °C, with a deposition rate of ~5 nm/min. Prior to deposition, the chamber was pumped down to a base pressure of 2×10⁻⁷ Torr. During sputtering, the chamber was maintained at a pressure of 3.5×10⁻³ Torr with a 55 sccm Ar flow rate. Films were not capped for the purpose of electronic transport measurement, which leaves a possibility of metal-oxide formation on the surface of MnAl (both Mn and Al have an affinity for oxide formation). In-plane and out-of-plane transverse magnetoresistance (MR) measurements on films sputtered under optimized growth conditions revealed non-saturating behavior, with MR values of ~6% and 40% at 9 T, respectively, at 275 K. The resistivity shows a parabolic dependence on the field H when H is weak. At higher H, a non-saturating positive MR that increases exponentially with the strength of the magnetic field is observed, a typical signature of a hopping-type conduction mechanism. An anomalous decrease in MR is observed on lowering the temperature. 
From the temperature dependence of resistivity, it is inferred that the two competing states are metallic and semiconducting, and that the energy scale of the phenomenon produces its most interesting effects, i.e., the metal-insulator transition and hence the maximum sensitivity to external fields, at room temperature. The theory of disordered 3D systems effectively explains the crossover of the temperature coefficient of resistivity from positive to negative as the temperature is lowered. These preliminary findings on the MR behavior of MnAl thin films will be presented in detail. The anomalously large MR in the mixed-phase MnAl system is evidently useful for future spintronic applications.
Keywords: magnetoresistance, perpendicular magnetic anisotropy, spintronics, thin films
Procedia PDF Downloads 125
476 Assessment of Designed Outdoor Playspaces as Learning Environments and Its Impact on Child’s Wellbeing: A Case of Bhopal, India
Authors: Richa Raje, Anumol Antony
Abstract:
Play is the foremost stepping stone of childhood development. It is an essential aspect of a child’s development and learning because it creates meaningful, enduring environmental connections and increases children’s performance. Children’s proficiencies change continually over the course of their growth. Play kindles the senses, fuels the love of exploration, transcends linguistic barriers, and supports physiological development, allowing children to discover their own calibre, spontaneity, curiosity, cognitive skills, and creativity while learning during play. This paper aims to comprehend the learning in play, which is the most essential underpinning aspect of the outdoor play area. It also assesses the trend of playground design that merely crowds sites with equipment. It attempts to derive a relation between the natural environment and children’s activities and the emotions and senses that can be evoked in the process. One of the major concerns with our outdoor play areas is that they are limited to similar kinds of equipment, making play highly regimented and monotonous. This problem is often compounded by the strict timetables of our education system, which hardly accommodate play. For these reasons, play areas remain neglected both in terms of design that enables learning and in terms of wellbeing. Poorly designed spaces fail to inspire the physical, emotional, social, and psychological development of the young. Currently, the play space has been condensed to an enclosed playground, driveway, or backyard, which confines children’s capacity to push beyond the boundaries set for them. The paper presents a study of children aged 5 to 11 years, whose behaviours during their interactions in a playground were mapped and analyzed. The theory of affordance is applied to various outdoor play areas in order to study and understand the children’s environment and how variedly they perceive and use it. 
A higher degree of affordance forms the basis for designing the activities suitable for play spaces. It was observed during play that children chose certain spaces of interest, the majority of them natural, over artificial equipment. Activities like rolling on the ground, jumping from a height, moulding earth, and hiding behind trees suggest that, despite the equipment provided, children have an affinity for nature. Therefore, we as designers need to take a cue from their behaviour and practices to be able to design meaningful spaces for them, so that children get the freedom to test their limits.
Keywords: children, landscape design, learning environment, nature and play, outdoor play
Procedia PDF Downloads 126
475 A Mixed Finite Element Formulation for Functionally Graded Micro-Beam Resting on Two-Parameter Elastic Foundation
Authors: Cagri Mollamahmutoglu, Aykut Levent, Ali Mercan
Abstract:
Micro-beams are among the most common components of Nano-Electromechanical Systems (NEMS) and Micro-Electromechanical Systems (MEMS). For this reason, static bending, buckling, and free vibration analysis of micro-beams have been the subject of many studies. In addition, micro-beams restrained by elastic foundations have been of particular interest. In the analysis of microstructures, closed-form solutions are proposed when available, but most of the time solutions are based on numerical methods due to the complex nature of the resulting differential equations. Thus, a robust and efficient solution method is of great importance. In this study, a mixed finite element formulation is obtained for a functionally graded Timoshenko micro-beam resting on a two-parameter elastic foundation. In the formulation, modified couple stress theory is utilized for the micro-scale effects. The equation of motion and boundary conditions are derived according to Hamilton’s principle. A functional, derived through a systematic procedure based on the Gateaux differential, is proposed for the bending and buckling analysis; it is equivalent to the governing equations and boundary conditions. The most important advantage of the formulation is that it allows the use of C₀-continuous shape functions, so shear locking is avoided in a built-in manner. Also, the element matrices are sparsely populated and can be easily calculated with closed-form integration. In this framework, results concerning the effects of the micro-scale length parameter, power-law parameter, aspect ratio, and the coefficients of a partially or fully continuous elastic foundation on the static bending, buckling, and free vibration response of the FG micro-beam under various boundary conditions are presented and compared with the existing literature. 
The performance characteristics of the presented formulation were evaluated against other numerical methods, such as the generalized differential quadrature method (GDQM); similar convergence characteristics were obtained with a lower computational burden. Moreover, the formulation also yields a direct calculation of the micro-scale contributions to the structural response.
Keywords: micro-beam, functionally graded materials, two-parameter elastic foundation, mixed finite element method
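The role of the two foundation parameters can be illustrated with a far simpler model than the authors' mixed FEM: a classical (macro-scale) Euler-Bernoulli beam on a Winkler-Pasternak foundation, EI·w'''' − Gp·w'' + k·w = q(x). For a simply supported beam under a sinusoidal load q = q0·sin(πx/L), the exact deflection is a sine wave whose amplitude is set jointly by the bending, shear-layer (Gp), and Winkler (k) stiffness terms. The finite-difference sketch below (all parameter values illustrative; this is not the Timoshenko/couple-stress formulation of the paper) reproduces that solution:

```python
import numpy as np

# Euler-Bernoulli beam on a two-parameter (Winkler-Pasternak) foundation:
#   EI*w'''' - Gp*w'' + k*w = q(x),  w(0)=w(L)=0, w''(0)=w''(L)=0.
# Central finite differences on interior nodes; illustrative parameters.
L, EI, Gp, k, q0 = 1.0, 1.0, 2.0, 50.0, 1.0
n = 200                              # interior nodes
h = L / (n + 1)
x = np.linspace(h, L - h, n)

# Pentadiagonal operator for EI*D4 - Gp*D2 + k*I.
main = EI * 6 / h**4 + Gp * 2 / h**2 + k
off1 = -EI * 4 / h**4 - Gp / h**2
off2 = EI / h**4
A = (np.diag(np.full(n, main))
     + np.diag(np.full(n - 1, off1), 1) + np.diag(np.full(n - 1, off1), -1)
     + np.diag(np.full(n - 2, off2), 2) + np.diag(np.full(n - 2, off2), -2))
# Simply supported ends (w = 0, w'' = 0) give ghost values w(-h) = -w(h),
# so the D4 stencil at the first/last interior node loses one EI/h^4 term.
A[0, 0] -= EI / h**4
A[-1, -1] -= EI / h**4

q = q0 * np.sin(np.pi * x / L)
w = np.linalg.solve(A, q)

# Exact solution for this load: same sine shape, amplitude governed by
# bending (EI), shear-layer (Gp) and Winkler (k) stiffness together.
amp = q0 / (EI * (np.pi / L)**4 + Gp * (np.pi / L)**2 + k)
w_exact = amp * np.sin(np.pi * x / L)
print(np.max(np.abs(w - w_exact)) / np.max(np.abs(w_exact)))
```

Increasing either Gp or k shrinks the amplitude, which is exactly the stiffening role the "partially or fully continuous" foundation coefficients play in the paper's parametric study.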
Procedia PDF Downloads 163
474 Exploring Mothers' Knowledge and Experiences of Attachment in the First 1000 Days of Their Child's Life
Authors: Athena Pedro, Zandile Batweni, Laura Bradfield, Michael Dare, Ashley Nyman
Abstract:
The rapid growth and development of an infant in the first 1000 days of life means that this time period provides the greatest opportunity for a positive developmental impact on a child’s life socially, emotionally, cognitively and physically. Current research is being focused on children in the first 1000 days, but there is a lack of research and understanding of mothers and their experiences during this crucial time period. Thus, it is imperative that more research is done to help better understand the experiences of mothers during the first 1000 days of their child’s life, as well as gain more insight into mothers’ knowledge regarding this time period. The first 1000 days of life, from conception to two years, is a critical period, and the child’s attachment to his or her mother or primary caregiver during this period is crucial for a multitude of future outcomes. The aim of this study was to explore mothers’ understanding and experience of the first 1000 days of their child’s life, specifically looking at attachment in the context of Bowlby and Ainsworths’ attachment theory. Using a qualitative methodological framework, data were collected through semi-structured individual interviews with 12 first-time mothers from low-income communities in Cape Town. Thematic analysis of the data revealed that mothers articulated the importance of attachment within the first 1000 days of life and shared experiences of how they bond and form attachment with their babies. Furthermore, these mothers expressed their belief in the long-term effects of early attachment of responsive positive parenting as well as the lasting effects of poor attachment and non-responsive parenting. This study has implications for new mothers and healthcare staff working with mothers of new-born babies, as well as for future contextual research. 
By gaining insight into mothers’ experiences, policies and intervention efforts can be formulated to assist mothers during this time, which ultimately promotes the healthy development of the nation’s children and future adult generation. If researchers are also able to understand the extent of mothers’ general knowledge regarding the first 1000 days and attachment, then there will be a better understanding of where gaps in knowledge may lie, and recommendations for effective and relevant intervention efforts may be provided. These interventions may increase the knowledge and awareness of new mothers and of healthcare workers at clinics and other service providers, creating a high impact on positive outcomes. Thus, improving the developmental trajectory of many young babies gives them the opportunity to pursue optimal development and reach their full potential.
Keywords: attachment, experience, first 1000 days, knowledge, mothers
Procedia PDF Downloads 179
473 Systematic Mapping Study of Digitization and Analysis of Manufacturing Data
Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton
Abstract:
The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. As part of this phase of the industrial revolution, traditional manufacturing processes are being combined with digital technologies to achieve smarter and more efficient production. To successfully digitally transform a manufacturing facility, the processes must first be digitized: the conversion of information from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide paradigm, Industry 4.0. The formal methodology of a systematic mapping study was utilized to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data. Research papers were classified according to the type of research and the type of contribution to the research area. Upon analyzing the 54 papers identified in this area, it was noted that 23 of them originated in Germany. This is an unsurprising finding, as Industry 4.0 is originally a German strategy, supported by strong policy instruments for its implementation in Germany. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing institution, with three papers published. The literature suggested future research directions and highlighted one specific gap in the area: an unresolved divide between data science experts and manufacturing process experts in industry. Data analytics expertise is of little use unless manufacturing process information is utilized. 
A legitimate understanding of the data is crucial to performing accurate analytics and gaining true, valuable insights into the manufacturing process. There lies a gap between the manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test whether this gap exists, the researchers initiated an industrial case study in which they embedded themselves between the subject matter expert for the manufacturing process and the data scientist. Of the papers resulting from the systematic mapping study, 12 contributed a framework, another 12 were based on a case study, and 11 focused on theory; however, only three contributed a methodology. This provides further evidence of the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research.
Keywords: analytics, digitization, industry 4.0, manufacturing
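The contribution-type tallies reported in this abstract can be laid out as a simple frequency table. The sketch below uses only the counts stated above; the residual category is inferred, since 12 + 12 + 11 + 3 = 38 of the 54 mapped papers are enumerated:

```python
from collections import Counter

# Contribution-type counts as reported in the abstract; the remaining
# 16 of 54 papers fall into contribution types not enumerated there.
contributions = Counter({"framework": 12, "case study": 12,
                         "theory": 11, "methodology": 3})
total_mapped = 54
for kind, count in contributions.most_common():
    print(f"{kind:12s} {count:3d} ({100 * count / total_mapped:.0f}%)")
print("other/unlisted", total_mapped - sum(contributions.values()))
```

The table makes the gap the authors identify immediately visible: methodology contributions are an order of magnitude rarer than frameworks or case studies.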
Procedia PDF Downloads 113
472 Dividend Policy in Family Controlling Firms from a Governance Perspective: Empirical Evidence in Thailand
Authors: Tanapond S.
Abstract:
Typically, most controlling firms are family firms, which are widespread and important for economic growth, particularly in the Asia-Pacific region. The unique characteristics of controlling families tend to play an important role in determining corporate policies such as dividend policy. Given the complexity of the family business phenomenon, the empirical evidence has been unclear on how the families behind business groups influence dividend policy in Asian markets, where cross-shareholdings and pyramidal structures are prevalent. Dividend policy, as an important determinant of firm value, can also be used to examine the effect of controlling families behind business groups on strategic decision-making from a governance and agency-problem perspective. The purpose of this paper is to investigate the impact of ownership structure and concentration, which are influential internal corporate governance mechanisms in family firms, on dividend decision-making. Using panel data and constructing a unique dataset of family ownership and control through hand-collected information on the nonfinancial companies listed on the Stock Exchange of Thailand (SET) between 2000 and 2015, the study finds that family firms with large stakes distribute higher dividends than family firms with small stakes. Family ownership can mitigate the agency problems and the expropriation of minority investors in family firms. To provide insight into the distinction between ownership rights and control rights, this study examines specific firm characteristics, including the degree of concentration of controlling shareholders, by classifying family ownership into different categories. The results show that controlling families with a large deviation between voting rights and cash flow rights have more power and are associated with lower dividend payments. These situations become worse when the second blockholders are also families.
To the best of the researcher's knowledge, this study is the first to examine the association between family firms' characteristics and dividend policy from a corporate governance perspective in Thailand, an environment with weak investor protection and high ownership concentration. This research also underscores the importance of family control, especially in a context in which family business groups and pyramidal structures are prevalent. As a result, academics and policy makers can develop markets and corporate policies that mitigate agency problems.
Keywords: agency theory, dividend policy, family control, Thailand
Procedia PDF Downloads 292
471 Commercial Winding for Superconducting Cables and Magnets
Authors: Glenn Auld Knierim
Abstract:
Automated robotic winding of high-temperature superconductors (HTS) addresses the precision, efficiency, and reliability critical to the commercialization of products. Today's HTS materials are mature and commercially promising but require manufacturing attention. Given the exaggerated rectangular cross-section (very thin by very wide), winding precision is critical to managing the stress that can crack the fragile ceramic superconductor (SC) layer and destroy the SC properties. Damage potential is highest during peak operations, where winding stress magnifies operational stress. Another challenge is that operational parameters such as magnetic field alignment affect design performance. Winding process performance, including precision, capability for geometric complexity, and efficient repeatability, is required for commercial production of current HTS. Due to winding limitations, current HTS magnets are limited to simple pancake configurations. HTS motors, generators, MRI/NMR, fusion, and other projects await robotically wound solenoid, planar, and spherical magnet configurations. As with conventional power cables, full transposition winding is required for long-length alternating current (AC) and pulsed power cables. Robotic production is required for transposition: periodically swapping cable conductors and placing them into precise positions, which achieves the minimized reactance that power utilities require. A fully transposed SC cable, in theory, has no transmission length limits for AC and variable transient operation, owing to zero resistance (a problem with conventional cables), negligible reactance (a problem for helically wound HTS cables), and no long-length manufacturing issues (a problem with both stamped and twisted stacked HTS cables). The Infinity Physics team is solving these manufacturing problems by developing automated manufacturing to produce the first reliable, utility-grade commercial SC cables and magnets.
Robotic winding machines combine mechanical and process design, specialized sensing and observer systems, and state-of-the-art optimization and control sequencing to carefully manipulate individual fragile SCs, especially HTS, into previously unattainable, complex geometries with electrical geometry equivalent to commercially available conventional conductor devices.
Keywords: automated winding manufacturing, high temperature superconductor, magnet, power cable
Procedia PDF Downloads 141
470 The Quantitative Analysis of the Influence of the Superficial Abrasion on the Lifetime of the Frog Rail
Authors: Dong Jiang
Abstract:
The turnout is essential railway equipment and also among the most heavily demanded railway infrastructure, on account of increasingly serious frog rail failures. In cooperation with the German company DB Systemtechnik AG, our research team focuses on the quantitative analysis of frog rails to predict their lifetimes. Moreover, suggestions for timely and effective maintenance are made to improve the economy of the frog rails. The lifetime of the frog rail depends strongly on the internal damage of the running surface up to the point where breakages occur. On the basis of the Hertzian theory of contact mechanics, the dynamic loads on the running surface are calculated in the form of contact pressures on the running surface and the equivalent tensile stress inside it. According to material mechanics, the strength of the frog rail is determined quantitatively in the form of a stress-cycle (S-N) curve. From the interaction between the dynamic loads and the strength, the internal damage of the running surface is calculated by means of the linear damage hypothesis of Miner's rule. The emergence of the first breakage on the running surface is defined as the failure criterion, at which the damage degree equals 1.0. From the microscopic perspective, the running surface of the frog rail is divided into numerous segments for detailed analysis. The internal damage of a segment grows slowly in the beginning and disproportionately quickly towards the end, until the breakage emerges. From the macroscopic perspective, the internal damage of the running surface develops essentially linearly over the lifetime. Given this linear growth of the internal damage, the lifetime of the frog rail can be predicted simply from the slope of the linearity. However, superficial abrasion plays an essential role in the internal damage results from both perspectives.
The influence of superficial abrasion on the lifetime is described in the form of the abrasion rate, which has two contradictory effects. On the one hand, an insufficient abrasion rate concentrates the damage accumulation at the same position below the running surface, accelerating rail failure. On the other hand, an excessive abrasion rate hastens the disappearance of the head-hardened surface of the frog rail, resulting in untimely breakage at the surface. Thus, the relationship between the abrasion rate and the lifetime divides into an initial phase of increasing lifetime and a subsequent phase of more rapidly decreasing lifetime as the abrasion rate continues to grow. By balancing these two effects, the critical abrasion rate that yields the optimal lifetime is discussed.
Keywords: breakage, critical abrasion rate, frog rail, internal damage, optimal lifetime
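The damage calculation described in the abstract can be illustrated with a minimal sketch of Miner's linear damage rule, assuming a Basquin-form S-N curve N = C / S^m with made-up constants; the study's actual S-N curve and load spectrum are not reproduced here:

```python
def miner_damage(stress_cycles, sn_C=1.0e15, sn_m=3.0):
    """Accumulate fatigue damage using Miner's linear damage rule.

    stress_cycles: list of (stress_amplitude_MPa, applied_cycles) pairs.
    The S-N curve is assumed in Basquin form, N = C / S**m, with
    illustrative constants sn_C and sn_m (not the study's values).
    Failure (first breakage) is predicted when the damage sum reaches 1.0.
    """
    damage = 0.0
    for stress, n_applied in stress_cycles:
        n_to_failure = sn_C / stress ** sn_m  # cycles to failure at this stress level
        damage += n_applied / n_to_failure    # linear accumulation per Miner's rule
    return damage

# Hypothetical load spectrum for one inspection interval:
# (equivalent tensile stress amplitude in MPa, number of wheel passages)
spectrum = [(400.0, 1_000_000), (500.0, 500_000), (600.0, 200_000)]
D = miner_damage(spectrum)
intervals_to_failure = 1.0 / D  # repetitions of the spectrum until D = 1.0
```

Because the macroscopic damage grows linearly, the reciprocal of the per-interval damage directly gives the predicted number of intervals to failure.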
Procedia PDF Downloads 227
469 Efficiency and Equity in Italian Secondary School
Authors: Giorgia Zotti
Abstract:
This research comprehensively investigates the multifaceted interplay of school performance, individual backgrounds, and regional disparities within Italian secondary education. Leveraging data from the INVALSI 2021-2022 database, the analysis scrutinizes two fundamental distributions of educational achievement: standardized Invalsi test scores and official grades in Italian and Mathematics, focusing specifically on final-year secondary school students in Italy. The study initially employs Data Envelopment Analysis (DEA) to assess school performance. This involves constructing a production function encompassing inputs (hours spent at school) and outputs (Invalsi scores in Italian and Mathematics, along with official grades in Italian and Math). The DEA approach is applied in both of its versions: traditional and conditional. The latter incorporates environmental variables such as school type, size, demographics, technological resources, and socio-economic indicators. Additionally, the analysis examines regional disparities using the Theil index, providing insights into disparities within and between regions. Moreover, within the framework of inequality of opportunity theory, the study quantifies the inequality of opportunity in students' educational achievements. The methodology applied is the parametric approach in its ex-ante version, considering diverse circumstances such as parental education and occupation, gender, school region, birthplace, and language spoken at home. A Shapley decomposition is then applied to understand how much each circumstance affects the outcomes. The outcomes of this investigation unveil pivotal determinants of school performance, notably the influence of school type (Liceo) and socioeconomic status.
The research unveils regional disparities, elucidating instances where specific schools outperform others in official grades compared to Invalsi scores and shedding light on the intricate nature of regional educational inequalities. Furthermore, it finds greater inequality of opportunity within the distribution of Invalsi test scores than within official grades, underscoring pronounced disparities at the student level. This analysis provides insights for policymakers, educators, and stakeholders, fostering a nuanced understanding of the complexities within Italian secondary education.
Keywords: inequality, education, efficiency, DEA approach
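The Theil index decomposition used to separate within-region from between-region inequality can be sketched as follows; the region names and score values are purely illustrative, not INVALSI data:

```python
import math

def theil(scores):
    """Theil T inequality index for a list of positive scores."""
    n = len(scores)
    mu = sum(scores) / n
    return sum((x / mu) * math.log(x / mu) for x in scores) / n

def theil_decomposition(groups):
    """Decompose the overall Theil T index into within-group and
    between-group components; their sum equals the overall index.

    groups: dict mapping region name -> list of positive scores.
    """
    all_scores = [x for g in groups.values() for x in g]
    n = len(all_scores)
    mu = sum(all_scores) / n
    within = between = 0.0
    for g in groups.values():
        ng, mug = len(g), sum(g) / len(g)
        share = (ng / n) * (mug / mu)     # score share of the region
        within += share * theil(g)        # inequality inside the region
        between += share * math.log(mug / mu)  # inequality across region means
    return within, between

# Hypothetical test scores for two regions
regions = {"North": [210, 220, 230, 240], "South": [160, 170, 180, 190]}
w, b = theil_decomposition(regions)
```

The exact additivity of the two components (within + between = overall) is what makes the Theil index well suited to regional disparity analysis.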
Procedia PDF Downloads 76
468 Equity, Bonds, Institutional Debt and Economic Growth: Evidence from South Africa
Authors: Ashenafi Beyene Fanta, Daniel Makina
Abstract:
Economic theory predicts that finance promotes economic growth. Although the finance-growth link is among the most researched areas in financial economics, our understanding of it is still incomplete. This is caused by, among other things, wrong econometric specifications, weak proxies of financial development, and failure to address the endogeneity problem. Studies on the finance-growth link in South Africa consistently report economic growth driving financial development: early studies found this result, and recent studies have confirmed it using different econometric models. However, the monetary aggregate (i.e., M2) utilized in these studies is considered a weak proxy for financial development. Furthermore, the fact that the models employed do not address the endogeneity problem in the finance-growth link casts doubt on the validity of the conclusions. For this reason, the current study examines the finance-growth link in South Africa using data for the period 1990 to 2011, employing a generalized method of moments (GMM) technique capable of addressing endogeneity, simultaneity, and omitted variable bias problems. Unlike previous cross-country and country case studies that have used the same technique, our contribution is that we account for the development of bond markets and non-bank financial institutions rather than being limited to stock market and banking sector development. We find that bond market development affects economic growth in South Africa, while no similar effect is observed for bank and non-bank financial intermediaries or the stock market. Our findings show that examining individual elements of the financial system is important in understanding the unique effect of each on growth.
The observation that bond markets, rather than private credit and stock market development, promote economic growth in South Africa raises an intriguing question: what unique roles do bond markets play that the intermediaries and equity markets cannot? Crucially, our results support observations in the literature that using appropriate measures of financial development is critical for policy advice. They also support the suggestion that individual elements of the financial system need to be studied separately to consider their unique roles in advancing economic growth. We believe that the channels through which bond markets contribute to growth would be fertile ground for future research.
Keywords: bond market, finance, financial sector, growth
Procedia PDF Downloads 425
467 An Ethnographic Study of Workforce Integration of Health Care Workers with Refugee Backgrounds in Ageing Citizens in Germany
Authors: A. Ham, A. Kuckert-Wostheinrich
Abstract:
Demographic changes, such as the ageing population in European countries, the shortage of nursing staff, the increasing number of people with severe cognitive impairment, and elderly socially isolated people, raise important questions about who will provide long-term care for ageing citizens. In the wake of the so-called refugee crisis in 2015, some health care institutions for ageing citizens in Europe invited first-generation immigrants to start a nursing career, providing them with language skills, nursing training, and internships. The aim of this ethnographic research was to explore the social processes affecting workforce integration and how newcomers enact good care for ageing citizens in a German nursing home. Data were gathered and analysed through ethnographic fieldwork: 200 hours of participant observation, 25 in-depth interviews with immigrants and established staff, and two focus groups, one with 6 immigrants and one with 6 established staff members. The health care institution provided the newcomers a nursing program covering psychogeriatric theory, nursing skills in the psychogeriatric field, and professionally oriented language skills. Courses on health prevention and theatre plays accompanied the training. The knowledge learned could be applied in internships on the wards. Additionally, diversity and inclusivity courses were given to established personnel to foster cultural awareness and sensitivity. They learned to develop a collegial attitude of respect and appreciation, regardless of gender, nationality, ethnicity, religion or belief, age, sexual orientation, disability, or identity. The qualitative data showed that social processes such as organizational constraints, staff shortages, and a demanding workload affected workforce integration. However, zooming in on the interactions between newcomers and residents, we noticed how they tinkered to enact good care: through embodied caring, playing games, singing, and dancing.
Through situational acting and practical wisdom in nursing care, the newcomers could meet the needs of ageing residents. Thus, when health care institutions open up nursing programs to newcomers with refugee backgrounds and focus on talent instead of shortcomings, we may well activate the unknown competencies, attitudes, skills, and expertise of newcomers and create excellent nurses for excellent care.
Keywords: established staff, Germany, nursing, refugees
Procedia PDF Downloads 106
466 Predicting Recessions with Bivariate Dynamic Probit Model: The Czech and German Case
Authors: Lukas Reznak, Maria Reznakova
Abstract:
A recession has a profound negative effect on all stakeholders involved. It follows that timely prediction of recessions has been of utmost interest both in theoretical research and in practical macroeconomic modelling. The current mainstream of recession prediction is based on standard OLS models of continuous GDP using macroeconomic data. This approach is unsuitable for two reasons: standard continuous models are proving obsolete, and macroeconomic data are unreliable, often revised many years retroactively. The aim of the paper is to explore a different branch of recession forecasting theory and verify its findings on real data for the Czech Republic and Germany. The authors present a family of discrete choice probit models with parameters estimated by the method of maximum likelihood. In their basic form, the probits model a univariate series of recessions and expansions in the economic cycle for a given country. The majority of the paper deals with more complex model structures, namely dynamic and bivariate extensions. The dynamic structure models the autoregressive nature of recessions, taking previous economic activity into consideration to predict development in subsequent periods. Bivariate extensions utilize information from a foreign economy by incorporating the correlation of error terms, thus modelling the dependencies between the two countries. Bivariate models predict a bivariate time series of economic states in both economies and thereby enhance predictive performance. A vital enabler of timely and successful recession forecasting is reliable and readily available data. Leading indicators, namely the yield curve and stock market indices, represent an ideal data base, as this information is available in advance and does not undergo retroactive revisions.
Just as importantly, the combination of the yield curve and stock market indices reflects a range of macroeconomic trends and financial market investors' expectations that influence the economic cycle. These theoretical approaches are applied to real data for the Czech Republic and Germany. Two models were identified for each country, one for in-sample and one for out-of-sample predictive purposes. All four followed a bivariate structure, and three contained a dynamic component.
Keywords: bivariate probit, leading indicators, recession forecasting, Czech Republic, Germany
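A minimal univariate sketch of the dynamic probit mechanism described above (the paper's full model is bivariate with correlated error terms); the coefficient values are assumptions for illustration, not the paper's maximum likelihood estimates:

```python
import math

def phi(z):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def dynamic_probit_prob(spread, stock_return, prev_recession,
                        b0=-1.2, b_spread=-0.8, b_stock=-0.5, b_lag=1.5):
    """One-step recession probability from a univariate dynamic probit:

        P(y_t = 1) = Phi(b0 + b_spread*spread_t + b_stock*return_t + b_lag*y_{t-1})

    The lag term y_{t-1} captures the autoregressive nature of recessions;
    the coefficients here are illustrative placeholders.
    """
    index = b0 + b_spread * spread + b_stock * stock_return + b_lag * prev_recession
    return phi(index)

# A steep yield curve and calm markets imply a low recession probability;
# an inverted curve, falling stocks, and a recession last period imply a high one.
p_calm = dynamic_probit_prob(spread=2.0, stock_return=0.05, prev_recession=0)
p_stress = dynamic_probit_prob(spread=-0.5, stock_return=-0.10, prev_recession=1)
```

The bivariate extension would replace the univariate normal CDF with a bivariate one over correlated errors for the two countries.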
Procedia PDF Downloads 249
465 A Comparative Study of the Impact of Membership in International Climate Change Treaties and the Environmental Kuznets Curve (EKC) in Line with Sustainable Development Theories
Authors: Mojtaba Taheri, Saied Reza Ameli
Abstract:
In this research, we calculate the effect of membership in international climate change treaties for 20 developed countries, selected on the basis of the Human Development Index (HDI), and compare this effect with the pollutant-reduction process described by Environmental Kuznets Curve (EKC) theory. For this purpose, real GDP per capita at constant 2010 prices is taken from the World Development Indicators (WDI) database. The Ecological Footprint (ECOFP) is the amount of biologically productive land needed to meet human needs and absorb carbon dioxide emissions; it is measured in global hectares (gha) and is retrieved from the Global Ecological Footprint (2021) database. We proceed step by step, performing several series of targeted statistical regressions and examining the effects of different control variables. Energy Consumption Structure (ECS) is measured as the share of fossil fuel consumption in total energy consumption and is extracted from the United States Energy Information Administration (EIA) (2021) database. Energy Production (EP) refers to the total production of primary energy by all energy-producing enterprises in a country at a specific time; it is a comprehensive indicator of a country's energy production capacity, and its 2021 data, like the Energy Consumption Structure, are obtained from the EIA. Financial development (FND) is defined as the ratio of private credit to GDP, and to some extent based on stock market value, also as a ratio to GDP, and is taken from the WDI (2021). Trade Openness (TRD) is the sum of exports and imports of goods and services measured as a share of GDP, from the WDI (2021). Urbanization (URB) is defined as the share of the urban population in the total population, also from the WDI (2021).
Descriptive statistics for all investigated variables are presented in the results section. Among the theories of sustainable development, the Environmental Kuznets Curve (EKC) proves the most significant over the period of study. In this research, we use more than fourteen targeted statistical regressions to isolate the net effects of each approach and examine the results.
Keywords: climate change, globalization, environmental economics, sustainable development, international climate treaty
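The EKC relationship against which the treaty-membership effect is compared is conventionally specified as a quadratic in log income; the sketch below uses that canonical form with illustrative coefficients, which are assumptions for demonstration, not the study's estimates:

```python
import math

# Illustrative EKC coefficients: b1 > 0 and b2 < 0 produce the
# inverted-U relationship the theory predicts between income and
# environmental pressure. Not estimates from the study's regressions.
b0, b1, b2 = -40.0, 10.0, -0.55

def footprint(gdp_per_capita):
    """Ecological footprint implied by the quadratic EKC specification
    ECOFP = b0 + b1*ln(GDPpc) + b2*ln(GDPpc)^2."""
    x = math.log(gdp_per_capita)
    return b0 + b1 * x + b2 * x * x

# Environmental pressure peaks where the derivative b1 + 2*b2*ln(GDPpc)
# vanishes, i.e. at income level exp(-b1 / (2*b2)).
turning_point_income = math.exp(-b1 / (2.0 * b2))
```

With these placeholder values, footprint rises with income below the turning point and falls above it, which is the pattern the EKC comparison in the study tests for.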
Procedia PDF Downloads 72
464 Transformation of Periodic Fuzzy Membership Function to Discrete Polygon on Circular Polar Coordinates
Authors: Takashi Mitsuishi
Abstract:
Fuzzy logic has gained acceptance in recent years in social sciences and humanities fields such as psychology and linguistics because it can manage the fuzziness of words and human subjectivity in a logical manner. However, the major field of application of fuzzy logic is control engineering, as it is part of set theory and mathematical logic. The Mamdani method, the most popular technique for approximate reasoning in fuzzy control, is one way to numerically represent the control afforded by human language and sensitivity, and it has been applied in various practical control plants. Fuzzy logic has been gradually developing as an artificial intelligence in different applications such as neural networks, expert systems, and operations research. The objects of inference vary across application fields; some, such as time, angle, color, symptom, and medical condition, have fuzzy membership functions that are periodic. In the defuzzification stage, the domain of the membership function should be unique in order to obtain a unique defuzzified value. However, if the domain of a periodic membership function is forced to be unique, an unintuitive defuzzified value may be obtained as the inference result using the center of gravity method. Therefore, the authors propose a method of circular-polar-coordinate transformation and defuzzification of periodic membership functions in this study. The transformation to circular polar coordinates simplifies the domain of the periodic membership function. The defuzzified value in circular polar coordinates is an argument (angle). Furthermore, the argument must be calculated from a closed plane figure, namely the periodic membership function plotted on the circular polar coordinates. If the closed plane figure is treated as continuous, matching the continuity of the membership function, a significant amount of computation is required.
Therefore, to simplify the practical example and significantly reduce the computational complexity, we have discretized the continuous interval and the membership function in this study. The following three methods are proposed to determine the argument from the discrete polygon into which the continuous plane figure is transformed. The first method takes the argument of a straight line passing through the origin and the arithmetic mean of the polygon's vertex coordinates (the physical center of gravity). The second takes the argument of a straight line passing through the origin and the geometric center of gravity of the polygon. The third takes the argument of a straight line passing through the origin that bisects the perimeter of the polygon (or of the closed continuous plane figure).
Keywords: defuzzification, fuzzy membership function, periodic function, polar coordinates transformation
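The first two proposed methods can be sketched as follows on a hypothetical discretized polygon (the third, perimeter-bisecting method requires a search over candidate lines through the origin and is omitted here); the vertex values are purely illustrative:

```python
import math

def arg_vertex_mean(poly):
    """Method 1: argument of the line through the origin and the
    arithmetic mean of the vertex coordinates (physical center of gravity)."""
    n = len(poly)
    mx = sum(x for x, _ in poly) / n
    my = sum(y for _, y in poly) / n
    return math.atan2(my, mx)

def arg_area_centroid(poly):
    """Method 2: argument of the line through the origin and the
    geometric center of gravity of the polygon (shoelace centroid)."""
    a = cx = cy = 0.0
    for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
        cross = x0 * y1 - x1 * y0  # signed area contribution of this edge
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return math.atan2(cy / (6.0 * a), cx / (6.0 * a))

# Hypothetical discretized membership function on polar coordinates,
# given as (x, y) vertices in counter-clockwise order.
membership_polygon = [(1.0, 0.0), (3.0, 0.0), (3.0, 1.0), (1.0, 2.0)]
theta1 = arg_vertex_mean(membership_polygon)    # defuzzified angle, method 1
theta2 = arg_area_centroid(membership_polygon)  # defuzzified angle, method 2
```

For symmetric polygons the two methods coincide; for asymmetric ones, as here, they return slightly different defuzzified angles.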
Procedia PDF Downloads 365