Search results for: interest flooding attack
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4768

1618 Vertebral Artery Dissection Complicating Pregnancy and Puerperium: Case Report and Review of the Literature

Authors: N. Reza Pour, S. Chuah, T. Vo

Abstract:

Background: Vertebral artery dissection (VAD) is a rare complication of pregnancy. It can occur spontaneously or following a traumatic event. The pathogenesis is unclear. Predisposing factors include chronic hypertension, Marfan’s syndrome, fibromuscular dysplasia, vasculitis and cystic medial necrosis. Physiological changes of pregnancy have also been proposed as potential mechanisms of injury to the vessel wall. The clinical presentation varies, and it can present as a headache, neck pain, diplopia, a transient ischaemic attack, or an ischaemic stroke. Isolated cases of VAD in pregnancy and the puerperium have been reported in the literature. One case was found to have a posterior circulation stroke as a result of bilateral VAD, and labour was induced at 37 weeks gestation for preeclampsia. Another patient at 38 weeks had severe neck pain that persisted after induction for elevated blood pressure, and arteriography showed right VAD postpartum. A single case of lethal VAD in pregnancy with subsequent massive subarachnoid haemorrhage has been reported, which was confirmed by autopsy. Case Presentation: We report two cases of vertebral artery dissection in pregnancy. The first patient was a 32-year-old primigravida who presented at the 38th week of pregnancy with the onset of early labour and a blood pressure (BP) of 130/70 on arrival. After 2 hours, the patient developed a severe headache with blurry vision, and her BP was 238/120. Despite treatment with an intravenous antihypertensive, she had an eclamptic fit. Magnesium sulfate was started, and an emergency Caesarean section was performed under general anaesthesia. On the second day after the operation, she developed left-sided neck pain. Magnetic Resonance Imaging (MRI) angiography confirmed a short-segment left vertebral artery dissection at the level of C3. The patient was treated with aspirin and remained stable without any neurological deficit. 
The second patient was a 33-year-old primigravida who was admitted to the hospital at 36 weeks gestation with a BP of 155/105, a constant headache and visual disturbances. She was medicated with an oral antihypertensive agent. On day 4, she complained of right-sided neck pain. An MRI angiogram revealed a short-segment dissection of the right vertebral artery at the C2-3 level. The pregnancy was terminated on the same day by emergency Caesarean section, and anticoagulation was started subsequently. Post-operative recovery was complicated by a rectus sheath haematoma requiring evacuation. She was discharged home on aspirin without any neurological sequelae. Conclusion: Because of the collateral circulation, unilateral vertebral artery dissections may go unrecognized and may be more common than suspected. The outcome for most patients is benign, reflecting the adequacy of the collateral circulation in young patients. Spontaneous VAD is usually treated with anticoagulation or antiplatelet therapy for a minimum of 3-6 months to prevent future ischaemic events, allowing the dissection to heal on its own. We had two cases of VAD in the context of hypertensive disorders of pregnancy with an acceptable outcome. A high level of vigilance is required, particularly with preeclamptic patients presenting with head or neck pain, to allow an early diagnosis. As we hypothesize, early and aggressive management of vertebral artery dissection may potentially prevent further complications.

Keywords: eclampsia, preeclampsia, pregnancy, vertebral artery dissection

Procedia PDF Downloads 279
1617 History and Its Significance in Modern Visual Graphic: Its Niche with Respect to India

Authors: Hemang Madhusudan Anglay, Akash Gaur

Abstract:

The value of visual perception in today’s context is vulnerable. Visual graphic broadly and conveniently expresses the culture, language and science of art, and serves as a mould to cast various expressions. It is one of the essential parts of communication design, which can be used to approach the above areas of expression. Between the receptors and interpreters, there is an expanse of comprehension and cliché in relation to the use of visual graphics. There are pedagogies, commodification and honest reflections where visual graphic is a common area of interest. The traditional receptors, amidst the dilemma of this very situation, find themselves in the pool of media, medium and interactions. Followed by a very vague interpretation, the entire circle of communication becomes a question of comprehension versus cliché. Residing in the same ‘eco-system’, the communities who make pedagogies and multiply their reflections, sometimes with honesty and sometimes on commercial values, tend to function differently. The advent of technology, a virtual space, allows the user to access various forms of content. This diminishes the core characteristics and creates a vacuum even though it satisfies the user. The symbolic interpretation of visual form and structure is transmitted in a culture by means of contemporary media. Starting from a very individualistic approach, today it is beyond print and electronic media. The expected outcome will be a study of Ahmedabad City, situated in the Gujarat State of India, and its identity with respect to socio-cultural as well as economic changes. The methodology will include a process to understand the evolution and the narratives behind it, encompassing diverse communities and their reflections, and will sum up the salient features of communication through the combination of visual and graphic that is relevant in the Indian context, trading its values to the global scenario.

Keywords: communication, culture, graphic, visual

Procedia PDF Downloads 275
1616 Green Supply Chain Network Optimization with Internet of Things

Authors: Sema Kayapinar, Ismail Karaoglan, Turan Paksoy, Hadi Gokcen

Abstract:

Green Supply Chain Management is gaining growing interest among researchers and supply chain managers. The concept of Green Supply Chain Management is to integrate environmental thinking into Supply Chain Management. It is a systematic concept that emphasises environmental problems such as the reduction of greenhouse gas emissions, energy efficiency, recycling of end-of-life products, and the generation of solid and hazardous waste. This study presents a green supply chain network model integrating Internet of Things applications. The Internet of Things makes it possible to obtain precise and accurate information on end-of-life products through sensors and system devices. The forward direction consists of suppliers, plants, distribution centres, and sales and collection centres, while the reverse flow includes the sales and collection centres, disassembly centre, recycling centre and disposal centre. The sales and collection centres sell new products transhipped from the factory via the distribution centres and also receive end-of-life products according to their value level. We describe green logistics activities by presenting specific examples, including “recycling of the returned products” and “reduction of CO2 gas emissions”. The different transportation choices between echelons are illustrated according to their CO2 gas emissions. This problem is formulated as a mixed integer linear programming model to solve the green supply chain problems which emerge from environmental awareness and responsibilities. The model is solved using the GAMS package. Numerical examples are presented to illustrate the efficiency of the proposed model.
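To illustrate the flavour of such a formulation, the sketch below sets up a tiny two-echelon shipment problem in Python with SciPy instead of GAMS: two plants ship to three sales/collection centres, choosing between a cheap high-CO2 transport mode and a costlier low-CO2 mode, with emissions priced into the objective. All numbers are hypothetical placeholders, not the authors' model or data.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variable index: m * 6 + p * 3 + s  (mode m, plant p, centre s).
cost = np.array([4.0, 6.0, 5.0, 7.0, 3.0, 5.0,   # mode A: cost per unit on each plant-centre arc
                 6.0, 8.0, 7.0, 9.0, 5.0, 7.0])  # mode B (more expensive, cleaner)
co2 = np.array([9.0] * 6 + [3.0] * 6)            # kg CO2 per unit shipped
carbon_price = 0.5                               # cost per kg CO2
c = cost + carbon_price * co2                    # combined cost + emissions objective

supply = [50, 60]      # plant capacities
demand = [30, 40, 30]  # centre demands

# Capacity: each plant ships at most its supply, summed over both modes.
A_ub, b_ub = [], []
for p in range(2):
    row = [0.0] * 12
    for m in range(2):
        for s in range(3):
            row[m * 6 + p * 3 + s] = 1.0
    A_ub.append(row)
    b_ub.append(supply[p])

# Demand: each centre receives exactly its demand.
A_eq, b_eq = [], []
for s in range(3):
    row = [0.0] * 12
    for m in range(2):
        for p in range(2):
            row[m * 6 + p * 3 + s] = 1.0
    A_eq.append(row)
    b_eq.append(demand[s])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
total_co2 = float(co2 @ res.x)
print(res.status, round(res.fun, 2), round(total_co2, 1))
```

With the carbon price included, the cleaner mode dominates on every arc here; a full model would add binary facility-location variables, which is where the mixed integer part comes in.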

Keywords: green supply chain optimization, internet of things, greenhouse gas emission, recycling

Procedia PDF Downloads 328
1615 Current Methods for Drug Property Prediction in the Real World

Authors: Jacob Green, Cecilia Cabrera, Maximilian Jakobs, Andrea Dimitracopoulos, Mark van der Wilk, Ryan Greenhalgh

Abstract:

Predicting drug properties is key in drug discovery to enable de-risking of assets before expensive clinical trials and to find highly active compounds faster. Interest from the machine learning community has led to the release of a variety of benchmark datasets and proposed methods. However, it remains unclear for practitioners which method or approach is most suitable, as different papers benchmark on different datasets and methods, leading to varying conclusions that are not easily compared. Our large-scale empirical study links together numerous earlier works on different datasets and methods, thus offering a comprehensive overview of the existing property classes, datasets, and their interactions with different methods. We emphasise the importance of uncertainty quantification and the time and, therefore, cost of applying these methods in the drug development decision-making cycle. We observe that the optimal approach varies depending on the dataset and that engineered features with classical machine learning methods often outperform deep learning. Specifically, QSAR datasets are typically best analysed with classical methods such as Gaussian Processes, while ADMET datasets are sometimes better described by trees or deep learning methods such as Graph Neural Networks or language models. Our work highlights that practitioners do not yet have a straightforward, black-box procedure to rely on and sets a precedent for creating practitioner-relevant benchmarks. Deep learning approaches must be proven on these benchmarks to become the practical method of choice in drug property prediction.
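As a minimal sketch of the kind of classical method the study finds effective on QSAR data, and of the uncertainty quantification it emphasises, the following implements Gaussian Process regression with an RBF kernel in plain numpy. The 1-D synthetic inputs stand in for engineered molecular descriptors; none of this is the study's own pipeline.

```python
import numpy as np

def rbf(a, b, length=1.0):
    """RBF (squared-exponential) kernel between two 1-D input arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length ** 2)

rng = np.random.default_rng(0)
x_train = np.linspace(0, 5, 20)
y_train = np.sin(x_train) + 0.05 * rng.standard_normal(20)  # noisy "activity"
x_test = np.array([1.0, 2.5, 4.0])

noise = 0.05 ** 2
K = rbf(x_train, x_train) + noise * np.eye(20)   # training covariance
K_s = rbf(x_test, x_train)                        # test-train covariance

alpha = np.linalg.solve(K, y_train)
mean = K_s @ alpha                                # posterior predictive mean
# Posterior predictive variance: k(x*,x*) - k* K^-1 k*^T (diagonal only).
var = 1.0 - np.sum(K_s * np.linalg.solve(K, K_s.T).T, axis=1)

print(np.round(mean, 2), np.round(np.sqrt(var), 3))
```

The posterior variance is what gives a practitioner a calibrated "don't trust this prediction" signal, which trees and deep networks do not provide out of the box.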

Keywords: activity (QSAR), ADMET, classical methods, drug property prediction, empirical study, machine learning

Procedia PDF Downloads 81
1614 Investigating Malaysian Prereaders’ Cognitive Processes when Reading English Picture Storybooks: A Comparative Eye-Tracking Experiment

Authors: Siew Ming Thang, Wong Hoo Keat, Chee Hao Sue, Fung Lan Loo, Ahju Rosalind

Abstract:

There are numerous studies that have explored young learners’ literacy skills in Malaysia, but none that uses an eye-tracking device to track their cognitive processes when reading picture storybooks. This study used this method to investigate two groups of prereaders’ cognitive processes under four conditions: (1) a congruent picture was presented, and a matching narration was read aloud by a recorder; (2) children heard a narration about the same characters in the picture but involving a different scene; (3) only a picture with matching text was presented; (4) students only heard the reading aloud of the text on the screen. The two main objectives of this project are to test which content of pictures helps the prereaders (i.e., young children who have not received any formal reading instruction) understand the narration, and whether children try to create a coherent mental representation from the oral narration and the pictures. The study compares two groups of children from two different kindergartens: Group 1, 15 Chinese children; Group 2, 17 Malay children. The medium of instruction was English. An eye-tracker was used to identify Areas of Interest (AOIs) in each picture and the five target elements, and to calculate the number of fixations and total fixation time on pictures and written texts. Two mixed factorial ANOVAs, with storytelling performance (good, average, or weak) and vocabulary level (low, medium, high) as between-subject variables, and the Areas of Interest (AOIs) and display conditions as within-subject variables, were performed on the variables.
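The two fixation measures described above, fixation count and total fixation time per AOI, reduce to a simple aggregation over fixation records. The sketch below uses invented records for one child (AOI label and duration in milliseconds), not the authors' eye-tracker export format.

```python
# Hypothetical fixation log for one child: (AOI label, duration in ms).
fixations = [
    ("picture", 220), ("text", 180), ("picture", 310),
    ("picture", 150), ("text", 240), ("off_screen", 90),
]

def aoi_metrics(records):
    """Return {AOI: (number of fixations, total fixation time in ms)}."""
    metrics = {}
    for aoi, dur in records:
        count, total = metrics.get(aoi, (0, 0))
        metrics[aoi] = (count + 1, total + dur)
    return metrics

m = aoi_metrics(fixations)
print(m)
```

Per-participant tables like this, one row per AOI and condition, are what would then feed the mixed factorial ANOVAs.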

Keywords: eye-tracking, cognitive processes, literacy skills, prereaders, visual attention

Procedia PDF Downloads 95
1613 Allergenic Potential of Airborne Algae Isolated from Malaysia

Authors: Chu Wan-Loy, Kok Yih-Yih, Choong Siew-Ling

Abstract:

The human health risks due to poor air quality caused by a wide array of microorganisms have attracted much interest. Airborne algae have been reported as early as the 19th century, and they can be found in the air of tropical and warm atmospheres. Airborne algae normally originate from water surfaces, soil, trees, buildings and rock surfaces. It is estimated that at least 2880 algal cells are inhaled per day by humans. However, there are relatively few published data on airborne algae and their related adverse health effects, except sporadic reports of algae-associated clinical allergenicity. A collection of airborne algae cultures has been established following a recent survey on the occurrence of airborne algae in indoor and outdoor environments in Kuala Lumpur. The aim of this study was to investigate the allergenic potential of the isolated airborne green and blue-green algae, namely Scenedesmus sp., Cylindrospermum sp. and Hapalosiphon sp. Suspensions of freeze-dried airborne algae were administered to a BALB/c mouse model through the intra-nasal route to determine their allergenic potential. Results showed that Scenedesmus sp. (1 mg/mL) increased systemic IgE levels in mice by 3-8 fold compared to pre-treatment. On the other hand, Cylindrospermum sp. and Hapalosiphon sp. at a similar concentration caused IgE to increase by 2-4 fold. The potential of airborne algae to cause IgE-mediated type 1 hypersensitivity was elucidated using other immunological markers such as the cytokine interleukins (IL)-4, 5 and 6 and interferon-γ. When we compared the amount of interleukins in mouse serum between day 0 and day 53 (day of sacrifice), Hapalosiphon sp. (1 mg/mL) increased the expression of IL-4 and IL-6 by 8 fold, while Cylindrospermum sp. (1 mg/mL) increased the expression of IL-4 and IFN-γ by 8 and 2 fold respectively. In conclusion, repeated exposure to the three selected airborne algae may stimulate the immune response and generate IgE in a mouse model.

Keywords: airborne algae, respiratory, allergenic, immune response, Malaysia

Procedia PDF Downloads 239
1612 Role of Chloride Ions on the Properties of Electrodeposited ZnO Nanostructures

Authors: L. Mentar, O. Baka, M. R. Khelladi, A. Azizi

Abstract:

Zinc oxide (ZnO), a transparent semiconductor with a wide band gap of 3.4 eV and a large exciton binding energy of 60 meV at room temperature, is one of the most promising materials for a wide range of modern applications. With the development of film growth technologies and intense recent interest in nanotechnology, several varieties of ZnO nanostructured materials have been synthesized almost exclusively by thermal evaporation methods, particularly chemical vapor deposition (CVD), which generally requires a high growth temperature above 550 °C. In contrast, wet chemistry techniques such as hydrothermal synthesis and electro-deposition are promising alternatives for synthesizing ZnO nanostructures, especially at significantly lower temperatures (below 200 °C). In this study, the electro-deposition method was used to produce zinc oxide (ZnO) nanostructures on a fluorine-doped tin oxide (FTO)-coated conducting glass substrate from a chloride bath. We present the influence of KCl concentration on the electro-deposition process and on the morphological, structural and optical properties of the ZnO nanostructures. The deposition potentials of ZnO were determined using cyclic voltammetry. From the Mott-Schottky measurements, the flat-band potential and the donor density of the ZnO nanostructures are determined. Field emission scanning electron microscopy (FESEM) images showed different sizes and morphologies of the nanostructures, which depend on the Cl- concentration. Very neat hexagonal grains are observed for the nanostructures deposited at 0.1 M KCl. An X-ray diffraction (XRD) study confirms the wurtzite phase of the ZnO nanostructures, with a preferred orientation along the (002) plane normal to the substrate surface. UV-Visible spectra showed a significant optical transmission (~80%), which decreased at low Cl- concentrations. The energy band gap values have been estimated to be between 3.52 and 3.80 eV.
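The Mott-Schottky analysis mentioned above extracts both quantities from a linear fit of 1/C² against applied potential: for an n-type semiconductor, the donor density follows N_d = 2 / (e·ε_r·ε₀·slope) with C the capacitance per unit area, and the flat-band potential is read from the intercept. The sketch below uses synthetic linear data and an approximate ε_r for ZnO, not the paper's measured values.

```python
import numpy as np

e = 1.602e-19        # elementary charge, C
eps0 = 8.854e-12     # vacuum permittivity, F/m
eps_r = 8.5          # relative permittivity of ZnO (approximate, assumed)

V = np.array([0.0, 0.1, 0.2, 0.3, 0.4])   # applied potential, V
inv_C2 = 1.0e9 + 5.0e9 * V                # synthetic 1/C^2 data, m^4/F^2

slope, intercept = np.polyfit(V, inv_C2, 1)
N_d = 2.0 / (e * eps_r * eps0 * slope)    # donor density, m^-3
V_fb = -intercept / slope                 # flat-band potential (kT/e term neglected)
print(f"N_d = {N_d:.2e} m^-3, V_fb = {V_fb:.2f} V")
```

A positive slope confirms n-type behaviour, which is the expected case for electrodeposited ZnO.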

Keywords: Cl-, electro-deposition, FESEM, Mott-Schottky, XRD, ZnO

Procedia PDF Downloads 289
1611 Analysis of Risks in Financing Agriculture a Case of Agricultural Cooperatives in Benue State, Nigeria

Authors: Odey Moses Ogah, Felix Terhemba Ikyereve

Abstract:

The study was carried out to analyze the risks in financing agriculture faced by agricultural cooperatives in Benue State, Nigeria. The study made use of research questionnaires for data collection. A multistage sampling technique was used to select a sample of 210 respondents from 21 agricultural cooperatives. Both descriptive and inferential statistics were employed in the data analysis. Loan default (66.7%) and reduction in savings by members (51.4%) were the major causes of risks faced by agricultural cooperatives in financing agriculture in the study area. Other causes include adverse changes in commodity prices (48.6%) and disaster (45.7%), among others. It was found that risks adversely influence the profitability and competitiveness of agricultural cooperatives (82.9%). Multiple regression analysis results showed that the coefficient of multiple determination was 0.67, implying that the explanatory variables included in the model accounted for 67% of the variation in the level of profitability of agricultural cooperatives. The number of loans, the average loan amount and the interest rate were significant and important determinants of the profitability of the cooperatives. The majority of the respondents (88.6%) made use of loan guarantors as a strategy for managing loan default or non-repayment. It was found that the majority (70%) of the respondents faced the challenge of a lack of insurance cover. The study recommends that agricultural cooperative officials be encouraged to undergo formal training and education to acquire administrative skills in the management of agricultural loans, and that farmers' loan sizes be increased and released on time to enable them to use the loans effectively. Policies that facilitate the insuring of farm activities should be put in place to discourage farmers from risk aversion.
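The regression reported above, profitability on number of loans, average loan amount and interest rate, with R² as the coefficient of multiple determination, can be sketched with ordinary least squares in numpy. The data below are synthetic stand-ins for the 210 survey responses, with made-up coefficients.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 210
loans = rng.uniform(5, 50, n)          # number of loans
amount = rng.uniform(1e4, 1e5, n)      # average loan amount
rate = rng.uniform(0.05, 0.25, n)      # interest rate
# Synthetic "profitability" with assumed coefficients plus noise.
profit = 2.0 * loans + 3e-4 * amount - 150.0 * rate + rng.normal(0, 10, n)

X = np.column_stack([np.ones(n), loans, amount, rate])  # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, profit, rcond=None)       # OLS coefficients
resid = profit - X @ beta
r2 = 1.0 - resid.var() / profit.var()                   # coefficient of multiple determination
print(np.round(beta, 3), round(r2, 3))
```

An R² of 0.67, as in the study, would mean a third of the variation in profitability is left to factors outside the three predictors.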

Keywords: agriculture, analysis, cooperative, finance, risks

Procedia PDF Downloads 113
1610 Improvement of the Mechanical Behavior of an Environmental Concrete Based on Demolished

Authors: Larbi Belagraa

Abstract:

The universal need to conserve resources, protect the environment and use energy efficiently must necessarily be felt in the field of concrete technology. The recycling of construction and demolition waste as a source of aggregates for the production of concrete has attracted growing interest from the construction industry. In Algeria, the depletion of natural deposits of aggregates and the difficulties in setting up new quarries make it necessary to seek new sources of supply to meet the need for aggregates for the major projects launched by the Algerian government in recent decades. In this context, this work is part of an approach to provide answers to concerns about the lack of aggregates for concrete. It also aims to valorise the inert fraction of demolition materials, mainly concrete construction and demolition waste (C&D), as a source of aggregates for the manufacture of new hydraulic concretes based on recycled aggregates. This experimental study presents the results of physical and mechanical characterizations of natural and recycled aggregates, as well as their influence on the properties of fresh and hardened concrete. The characterization of the materials used has shown that the recycled aggregates exhibit heterogeneity, a high water absorption capacity, and a medium-quality hardness. However, the limits prescribed by the standards in force do not disqualify these materials from use in recycled aggregate concrete (RAC). The results obtained from the present study show that acceptable compressive and flexural strengths of RACs, compared to those of natural concretes, are obtained using the superplasticizer SP 45 and a 5% replacement of cement with silica fume. These mechanical performances demonstrate a characteristic 28-day compressive strength within the limits of 30 to 40 MPa without any particular technology needing to be adopted in this case.

Keywords: recycled aggregates, recycled aggregate concrete (RAC), superplasticizer, silica fume, compressive strength

Procedia PDF Downloads 173
1609 Socio-Economic Influences on Soilless Agriculture

Authors: George Vernon Byrd, Bhim Bahadur Ghaley, Eri Hayashi

Abstract:

In urban farming, research and innovation are taking place at an unprecedented pace, and soilless growing technologies are emerging at different rates, motivated by different objectives in various parts of the world. Local food production is ultimately a main objective everywhere, but adoption rates and expressions vary with socio-economic drivers. Herein, the status of hydroponics and aquaponics is summarized for four countries with diverse socio-economic settings: Europe (Denmark), Asia (Japan and Nepal) and North America (US). In Denmark, with a strong environmental ethic, soilless growing is increasing in urban agriculture because it is considered environmentally friendly. In Japan, soil-based farming is being replaced with commercial plant factories using advanced technology such as complete environmental control and computer monitoring. In Nepal, where rapid loss of agricultural land is occurring near cities, dozens of hydroponics and aquaponics systems have been built in the past decade, particularly in “non-traditional” sites such as rooftops, to supplement family food. In the US, where there is also strong interest in locally grown fresh food, backyard and commercial systems have proliferated. However, soilless growing is still in the research and development and early-adopter stages, and the broad contribution of hydroponics and aquaponics to food security is yet to be fully determined. Nevertheless, current adoption of these technologies in diverse environments and socio-economic settings highlights their potential contribution to food security, with social and environmental benefits which contribute to several Sustainable Development Goals.

Keywords: aquaponics, hydroponics, soilless agriculture, urban agriculture

Procedia PDF Downloads 97
1608 The Shannon Entropy and Multifractional Markets

Authors: Massimiliano Frezza, Sergio Bianchi, Augusto Pianese

Abstract:

Introduced by Shannon in 1948 in the field of information theory as the average rate at which information is produced by a stochastic set of data, the concept of entropy has gained much attention as a measure of the uncertainty and unpredictability associated with a dynamical system, eventually depicted by a stochastic process. In particular, the Shannon entropy measures the degree of order/disorder of a given signal and provides useful information about the underlying dynamical process. It has found widespread application in a variety of fields, such as, for example, cryptography, statistical physics and finance. In this regard, many contributions have employed different measures of entropy in an attempt to characterize financial time series in terms of market efficiency, market crashes and/or financial crises. The Shannon entropy has also been considered as a measure of the risk of a portfolio or as a tool in asset pricing. This work investigates the theoretical link between the Shannon entropy and the multifractional Brownian motion (mBm), a stochastic process which has recently been the focus of renewed interest in finance as a driving model of stochastic volatility. In particular, after exploring the current state of research in this area and highlighting some of the key results and open questions that remain, we show a well-defined relationship between the Shannon (log)entropy and the memory function H(t) of the mBm. In detail, we allow both the length of the time series and the time scale to change over the analysis to study how the relation modifies itself. On the one hand, applications are developed after generating surrogates of mBm trajectories based on different memory functions; on the other hand, an empirical analysis of several international stock indexes, which confirms the previous results, concludes the work.
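As a minimal sketch of the basic quantity involved, the function below estimates the Shannon entropy of a signal from a histogram of its values; a disordered (Gaussian) signal scores high, a perfectly ordered (constant) one scores zero. Simulating the mBm itself and its memory function H(t) is beyond this illustration.

```python
import numpy as np

def shannon_entropy(x, bins=32):
    """Histogram estimate of the Shannon entropy of a signal, in bits."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
gauss = rng.standard_normal(100_000)   # disordered signal
const = np.zeros(100_000)              # perfectly ordered signal

h_gauss = shannon_entropy(gauss)
h_const = shannon_entropy(const)
print(round(h_gauss, 2), h_const)
```

In the study's setting, this estimator would be applied over sliding windows of the (log-)increments, so that the entropy can be tracked against the time-varying memory function.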

Keywords: Shannon entropy, multifractional Brownian motion, Hurst–Holder exponent, stock indexes

Procedia PDF Downloads 110
1607 The Potential Effect of Sexual Selection on the Distal Genitalia Variability of the Simultaneously Hermaphroditic Land Snail Helix aperta in Bejaia/Kabylia/Algeria

Authors: Benbellil-Tafoughalt Saida, Tababouchet Meriem

Abstract:

Sexual selection is the most supported explanation for the genital extravagance occurring in animals. In promiscuous species, population density, as well as climate conditions, may act on the intensity of sperm competition, one of the most important mechanisms of post-copulatory sexual selection. The present study is an empirical test of the potential role of sexual selection in genitalia variation in the simultaneously hermaphroditic land snail Helix aperta (Pulmonata, Stylommatophora). The purpose was to detect the patterns as well as the origin of the distal genitalia variability and, especially, to test the potential effect of sexual selection. The study was performed on four populations of H. aperta differing in habitat humidity regimes and presenting variable densities, which were mostly low. The organs of interest were those involved in spermatophore production, reception, and manipulation. We examined whether the evolution of those organs is connected to sperm competition intensity, which is reflected by both population density and microclimate humidity. We also tested the hypothesis that those organs evolve in response to shell size. The results revealed remarkable differences in both the snails’ size and the organ lengths between populations. In most cases, the length of the genitalia correlated positively with the snails’ body size. Interestingly, snails from the more humid microclimate presented the highest mean weight and shell dimensions compared to those from the less humid microclimate. However, we failed to establish any relation between snail densities and any of the measured genitalia traits.

Keywords: fertilization pouch, helix aperta, land snails, reproduction, sperm storage, spermatheca

Procedia PDF Downloads 189
1606 Replicating Brain’s Resting State Functional Connectivity Network Using a Multi-Factor Hub-Based Model

Authors: B. L. Ho, L. Shi, D. F. Wang, V. C. T. Mok

Abstract:

The brain’s functional connectivity, while temporally non-stationary, does express consistency at a macro spatial level. The study of stable resting-state connectivity patterns hence provides opportunities for the identification of diseases if such stability is severely perturbed. A mathematical model replicating the brain’s spatial connections will be useful for understanding the brain’s representative geometry and complements the empirical model where it falls short. Empirical computations tend to involve large matrices and become infeasible with fine parcellation. However, the proposed analytical model has no such computational problems. To improve replicability, data from 92 subjects are obtained from two open sources. The proposed methodology, inspired by financial theory, uses multivariate regression to find the relationships of every cortical region of interest (ROI) with some pre-identified hubs. These hubs act as representatives for the entire cortical surface. A variance-covariance framework of all ROIs is then built based on these relationships to link up all the ROIs. The result is a high level of match between model and empirical correlations, in the range of 0.59 to 0.66 after adjusting for sample size; an increase of almost forty percent. More significantly, the model framework provides an intuitive way to delineate between systemic drivers and idiosyncratic noise while reducing dimensions more than 30-fold, hence providing a way to conduct attribution analysis. Due to its analytical nature and simple structure, the model is useful as a standalone toolkit for network dependency analysis or as a module for other mathematical models.
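The hub-based construction can be sketched as a factor model: regress every ROI time series on a few hub time series, then rebuild the full ROI covariance from the fitted loadings (the systemic part) plus the residual variances (the idiosyncratic part). The data below are synthetic and the construction is an illustration of the general idea, not the authors' exact estimator.

```python
import numpy as np

rng = np.random.default_rng(7)
T, n_hubs, n_roi = 500, 3, 40
hubs = rng.standard_normal((T, n_hubs))              # hub time series
load = rng.standard_normal((n_roi, n_hubs))          # true loadings (unknown to the model)
roi = hubs @ load.T + 0.5 * rng.standard_normal((T, n_roi))

# Multivariate regression of every ROI on the hubs.
beta, *_ = np.linalg.lstsq(hubs, roi, rcond=None)    # (n_hubs, n_roi) fitted loadings
resid = roi - hubs @ beta

# Model covariance = systemic part + idiosyncratic (diagonal) part.
cov_model = beta.T @ np.cov(hubs.T) @ beta + np.diag(resid.var(axis=0))

def corr_from_cov(C):
    d = np.sqrt(np.diag(C))
    return C / np.outer(d, d)

# Match between model and empirical off-diagonal correlations.
corr_model = corr_from_cov(cov_model)
corr_emp = np.corrcoef(roi.T)
iu = np.triu_indices(n_roi, 1)
match = np.corrcoef(corr_model[iu], corr_emp[iu])[0, 1]
print(round(match, 3))
```

Note the dimension reduction: the model needs only n_roi × n_hubs loadings plus n_roi residual variances instead of the full n_roi × n_roi empirical matrix.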

Keywords: functional magnetic resonance imaging, multivariate regression, network hubs, resting state functional connectivity

Procedia PDF Downloads 151
1605 A Review of Deep Learning Methods in Computer-Aided Detection and Diagnosis Systems Based on Whole Mammogram and Ultrasound Scan Classification

Authors: Ian Omung'a

Abstract:

Breast cancer remains one of the deadliest cancers for women worldwide, with the risk of developing tumors being as high as 50 percent in Sub-Saharan African countries like Kenya. With as many as 42 percent of these cases diagnosed late, when the cancer has metastasized and/or the prognosis has become terminal, Full Field Digital [FFD] Mammography remains an effective screening technique that leads to early detection where, in most cases, successful interventions can be made to control or eliminate the tumors altogether. FFD Mammograms have been proven to be considerably more effective when used together with Computer-Aided Detection and Diagnosis [CADe] systems, relying on algorithmic implementations of Deep Learning techniques in Computer Vision to carry out deep pattern recognition that is comparable to the level of a human radiologist and decipher whether specific areas of interest in the mammogram scan image portray any abnormalities and whether these abnormalities are indicative of a benign or malignant tumor. Within this paper, we review emergent Deep Learning techniques that will prove relevant to the development of state-of-the-art FFD Mammogram CADe systems. These techniques span self-supervised learning for context-encoded occlusion, self-supervised learning for pre-processing and labeling automation, as well as the creation of a standardized large-scale mammography dataset as a benchmark for the evaluation of CADe systems. Finally, comparisons are drawn between existing practices that pre-date these techniques and how the development of CADe systems that incorporate them will differ.

Keywords: breast cancer diagnosis, computer aided detection and diagnosis, deep learning, whole mammogram classification, ultrasound classification, computer vision

Procedia PDF Downloads 93
1604 Experiences of Trainee Teachers: A Survey on Expectations and Realities in Special Secondary Schools in Kenya

Authors: Mary Cheptanui Sambu

Abstract:

Teaching practice is an integral component of training to be a teacher, as it provides trainees with an opportunity to gain experience in an actual teaching and learning environment. This study explored the experiences of trainee teachers from a local university in Kenya undergoing a three-month teaching practice in special secondary schools in the country. The main aim of the study was to understand the trainees’ experiences, their expectations, and the realities encountered during the teaching practice period. The study focused on special secondary schools for learners with hearing impairment. A descriptive survey design was employed, and a sample of forty-four respondents from special secondary schools for learners with hearing impairment was purposively selected. A questionnaire was administered to the respondents, and the data obtained were analysed using the Statistical Package for the Social Sciences (SPSS). Preliminary analysis shows that the challenges facing special secondary schools include inadequate teaching and learning facilities and resources, low academic performance among learners with hearing impairment, an overloaded curriculum and an inadequate number of teachers for the learners. The study findings suggest that the Kenyan government should invest more in the education of special needs children, particularly by increasing the number of trained teachers. In addition, the education curriculum offered in special secondary schools should be tailored towards the needs and interests of learners. These research findings will be useful to policymakers and curriculum developers, and will provide information that can be used to enhance the education of learners with hearing impairment; this will lead to improved academic performance, consequently resulting in better transitions and the realization of Vision 2030.

Keywords: hearing impairment, special secondary schools, trainee, teaching practice

Procedia PDF Downloads 163
1603 Effects of Watershed Erosion on Stream Channel Formation

Authors: Tiao Chang, Ivan Caballero, Hong Zhou

Abstract:

Streams naturally carry water and sediment while maintaining channel dimensions, pattern, and profile over time. Watershed erosion is a natural process that contributes sediment to streams, and the formation of channel dimensions is complex. This study relates quantifiable, consistent channel dimensions at the bankfull stage to the corresponding watershed erosion estimated by the Revised Universal Soil Loss Equation (RUSLE). Twelve sites with drainage areas ranging from 7 to 100 square miles in the Hocking River Basin of Ohio were selected for bankfull geometry determinations, including width, depth, cross-sectional area, bed slope, and drainage area. The twelve sub-watersheds were chosen to obtain a good overall representation of the Hocking River Basin. It is of interest to determine how these bankfull channel dimensions relate to the soil erosion of the corresponding sub-watersheds. The RUSLE was applied to estimate erosion in the twelve sub-watersheds where the bankfull geometry measurements were conducted, and the quantified erosion was used to investigate correlations with bankfull channel dimensions, including discharge, channel width, channel depth, cross-sectional area, and pebble distribution. It is found that drainage area, bankfull discharge, and cross-sectional area correlate strongly with watershed erosion. Bankfull width and depth are moderately correlated with watershed erosion, while the median particle size, D50, of channel bed sediment is not well correlated with watershed erosion.
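The RUSLE estimate used for each sub-watershed is a simple product of factors, A = R · K · LS · C · P. A minimal sketch follows; all factor values are illustrative, not the study's calibrated inputs.

```python
# A minimal sketch of the Revised Universal Soil Loss Equation (RUSLE),
# A = R * K * LS * C * P. The factor values below are hypothetical
# illustrations, not this study's data.

def rusle_soil_loss(R, K, LS, C, P):
    """Average annual soil loss A (tons/acre/year).

    R  - rainfall-runoff erosivity factor
    K  - soil erodibility factor
    LS - slope length and steepness factor
    C  - cover-management factor
    P  - support practice factor
    """
    return R * K * LS * C * P

# Hypothetical values for one sub-watershed:
A = rusle_soil_loss(R=125.0, K=0.28, LS=1.4, C=0.15, P=1.0)
print(round(A, 3))  # -> 7.35 tons/acre/year
```

Summing such estimates over the cells of a sub-watershed yields the erosion figure that is then correlated with the bankfull channel dimensions.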

Keywords: watershed, stream, sediment, channel

Procedia PDF Downloads 287
1602 Kou Jump Diffusion Model: An Application to the S&P 500, Nasdaq 100 and Russell 2000 Index Options

Authors: Wajih Abbassi, Zouhaier Ben Khelifa

Abstract:

The present research concerns the empirical validation of three option valuation models: the ad hoc Black-Scholes model as proposed by Berkowitz (2001), the constant elasticity of variance model of Cox and Ross (1976), and the Kou jump-diffusion model (2002). Our empirical analysis was conducted on a sample of 26,974 options written on three indexes, the S&P 500, the Nasdaq 100, and the Russell 2000, negotiated during 2007, just before the sub-prime crisis. We begin by presenting the theoretical foundations of the models of interest. We then use a trust-region-reflective algorithm to estimate the structural parameters of these models from a cross-section of option prices. The empirical analysis shows the superiority of the Kou jump-diffusion model, which arises from its ability to portray the behavior of market participants and to come closest to the true distribution characterizing the evolution of these indexes. Indeed, the double-exponential distribution captures three interesting properties: the leptokurtic feature, the memoryless property, and the psychological aspect of market participants. Numerous empirical studies have shown that markets tend to overreact to good news and underreact to bad news. Despite these advantages, there are few empirical studies based on this model, partly because its probability distribution and option valuation formula are rather complicated. This paper is the first to use nonlinear curve fitting through the trust-region-reflective algorithm on a cross-section of options to estimate the structural parameters of the Kou jump-diffusion model.
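The Kou dynamics combine a geometric Brownian diffusion with Poisson-arriving jumps whose log-sizes follow an asymmetric double-exponential law. A minimal path-simulation sketch follows; all parameter values are illustrative, not the calibrated estimates from this study.

```python
import math
import random

def simulate_kou_path(s0=100.0, mu=0.05, sigma=0.2, lam=1.0,
                      p_up=0.4, eta1=10.0, eta2=5.0,
                      T=1.0, n_steps=252, seed=42):
    """One sample path of the Kou (2002) jump-diffusion.
    Diffusion: geometric Brownian motion with drift mu, volatility sigma.
    Jumps: Poisson arrivals with intensity lam; each log-jump is upward
    (Exp rate eta1) with probability p_up, else downward (Exp rate eta2).
    Parameter values are illustrative only."""
    rng = random.Random(seed)
    dt = T / n_steps
    path = [s0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        log_ret = (mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z
        # Sample the Poisson jump count for this step by inverting the CDF.
        u, k = rng.random(), 0
        pmf = math.exp(-lam * dt)   # P(N = 0)
        cdf = pmf
        while u > cdf:
            k += 1
            pmf *= lam * dt / k
            cdf += pmf
        for _ in range(k):
            if rng.random() < p_up:
                log_ret += rng.expovariate(eta1)   # upward jump
            else:
                log_ret -= rng.expovariate(eta2)   # downward jump
        path.append(path[-1] * math.exp(log_ret))
    return path

path = simulate_kou_path()
print(len(path))  # -> 253 (s0 plus 252 daily steps)
```

The asymmetry eta1 ≠ eta2 is what lets the model produce the fat-tailed, skewed return distribution that the abstract credits for its superior fit.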

Keywords: jump-diffusion process, Kou model, Leptokurtic feature, trust-region-reflective algorithm, US index options

Procedia PDF Downloads 429
1601 Awareness among Medical Students and Faculty about Integration of Artificial Intelligence Literacy in Medical Curriculum

Authors: Fatima Faraz

Abstract:

BACKGROUND: While artificial intelligence (AI) provides new opportunities across a wide variety of industries, healthcare is no exception. AI can advance how the healthcare system functions and improve the quality of patient care. Developing countries like Pakistan lag in the implementation of AI-based solutions in healthcare, which demands increased knowledge and AI literacy among healthcare professionals. OBJECTIVES: To assess the level of awareness of AI among medical students and faculty in preparation for teaching AI basics and data science applications in clinical practice in an integrated medical curriculum. METHODS: An online 15-question semi-structured questionnaire, previously tested and validated, was delivered to participants through convenience sampling. The questionnaire comprised three parts: participants’ background knowledge, AI awareness, and attitudes toward AI applications in medicine. RESULTS: A total of 182 students and 39 faculty members from Rawalpindi Medical University, Pakistan, participated in the study. Only 26% of students and 46.2% of faculty members responded that they were aware of AI topics in clinical medicine. The major source of AI knowledge was social media (35.7%) for students and professional talks and colleagues (43.6%) for faculty members. 23.5% of participants answered that they personally had a basic understanding of AI. A majority (60.1%) of students and faculty were interested in AI in the patient care and teaching domains. These findings parallel similar published AI survey results. CONCLUSION: This survey indicates interest among students and faculty in AI developments and technology applications in healthcare. Further studies are required to appropriately integrate AI into the integrated modular curriculum of medical education.

Keywords: medical education, data science, artificial intelligence, curriculum

Procedia PDF Downloads 101
1600 Electrochemical/Electro-Catalytic Applications of Novel Alcohol Substituted Metallophthalocyanines

Authors: Ipek Gunay, Efe B. Orman, Metin Ozer, Bekir Salih, Ali R. Ozkaya

Abstract:

Phthalocyanines are macrocyclic compounds whose ring of nine or more members contains at least three heteroatoms. Metal-free phthalocyanines react with metal salts to form chelate complexes; this ability to act as a ligand is one of the most important features of the metal-free phthalocyanine. Although phthalocyanines have properties very similar to those of porphyrins, they have advantages such as lower cost, ease of preparation, and chemical and thermal stability. It is known that Pc compounds show one-electron metal- and/or ligand-based reversible or quasi-reversible reduction and oxidation processes, and the redox properties of phthalocyanines are critically related to the desirable properties of these compounds in their technological applications. Pc complexes have therefore been receiving increasing interest in the area of fuel cells due to their high electrocatalytic activity in dioxygen reduction. In this study, novel phthalocyanine complexes coordinated with Fe(II) and Co(II) were synthesized for use as catalysts. To this end, a new nitrile ligand was synthesized from 4-hydroxy-3,5-dimethoxybenzyl alcohol and 4-nitrophthalonitrile in the presence of K2CO3 as catalyst. After isolation of the new nitrile and its metal complexes, the compounds were characterized by IR, 1H-NMR, and UV-Vis methods. In addition, the electrochemical behaviour of the Pc complexes was identified by cyclic voltammetry, square wave voltammetry, and in situ spectroelectrochemical measurements. Furthermore, the catalytic performance of the Pc complexes for oxygen reduction was tested by dynamic voltammetry measurements, carried out with a combined rotating ring-disk electrode and potentiostat system, in a medium similar to fuel-cell working conditions.

Keywords: phthalocyanine, electrocatalysis, electrochemistry, in-situ spectroelectrochemistry

Procedia PDF Downloads 316
1599 Exploring the Role of Building Information Modeling for Delivering Successful Construction Projects

Authors: Muhammad Abu Bakar Tariq

Abstract:

The construction industry plays a crucial role in the progress of societies and economies. Construction projects have social as well as economic implications; thus, their success or failure has wider impacts. However, the industry lags behind in terms of efficiency and productivity. Building Information Modeling (BIM) is recognized as a revolutionary development in the Architecture, Engineering and Construction (AEC) industry. Numerous interest groups around the world provide definitions of BIM, with proponents describing its advantages and opponents identifying challenges and barriers to its adoption. This research aims to determine what BIM actually is, along with its potential role in delivering successful construction projects. The methodology is a critical analysis of secondary data sources, i.e., information in the public domain, including peer-reviewed journal articles, industry and government reports, conference papers, books, and case studies. It is found that clash detection and visualization are two major advantages of BIM. Clash detection identifies clashes among structural, architectural, and MEP designs before construction commences, which saves time as well as cost and ensures quality during the execution phase of a project. Visualization is a powerful tool that facilitates rapid decision-making in addition to communication and coordination among stakeholders throughout a project’s life cycle. By eliminating inconsistencies that consume time and cost during actual construction, and by improving collaboration among stakeholders throughout the project’s life cycle, BIM can play a positive role in achieving the efficiency and productivity that deliver successful construction projects.

Keywords: building information modeling, clash detection, construction project success, visualization

Procedia PDF Downloads 260
1598 Novel Bioinspired Design to Capture Smoky CO2 by Reactive Absorption with Aqueous Scrubber

Authors: J. E. O. Hernandez

Abstract:

In the next 20 years, energy production by burning fuels will increase, and so will the atmospheric concentration of CO2 and its well-known threats to life on Earth. The technologies available for capturing CO2 are still dubious, which keeps fostering interest in bio-inspired approaches. The leading one is the application of carbonic anhydrase (CA), a superfast biocatalyst able to convert up to one million molecules of CO2 into carbonates in water. However, natural CA underperforms when applied to real smoky CO2 in chimneys, and, so far, efforts to create superior CAs in the lab rely on screening methods run under pristine conditions at the micro level, far from resembling those in chimneys. For the evolution of man-made enzymes, selection rather than screening would be ideal, but this is challenging because of the need for a suitable artificial environment that is also sustainable for society. Herein we present the stepwise design and construction of a bioprocess (from bench scale to semi-pilot) for evolutionary selection experiments. In this bioprocess, reaction and absorption took place simultaneously at atmospheric pressure in a spray tower. The scrubbing solution, prepared mainly with water, carbonic anhydrase, and calcium chloride, was fed countercurrently by reusing municipal water pressure. This bioprocess allowed for the enzymatic carbonation of smoky CO2, the reuse of process water, and the recovery of solid carbonates without cooling of smoke, pretreatments, solvent amines, or compression of CO2. The average yield of solid carbonates was 0.54 g min-1, or 12-fold the amount produced in serum bottles at lab bench scale. This bioprocess could be used as a tailor-made environment for driving the selection of superior CAs, and, together with its matched CA, could be sustainably used to reduce global warming caused by CO2 emissions from exhausts.

Keywords: biological carbon capture and sequestration, carbonic anhydrase, directed evolution, global warming

Procedia PDF Downloads 193
1597 Quality of Life for Families with Children/Youth with Autism Spectrum Disorder

Authors: José Nogueira

Abstract:

This research analyzes the impact of autism spectrum disorder (ASD) on families with children and youth (0-25 years) with ASD in Portugal. The impact is evaluated from a multidimensional perspective, following the work on the concept of quality of life by the WHOQOL Group (WHO). The study combines quantitative and qualitative methodology, correlating statistical sources and other information with data obtained through a survey of a sample of about 100 families with children/youth with ASD (October and November 2013). The results indicate a strong impact of autism on families' quality of life in all dimensions studied. The research shows a negative impact on quality of life in material and financial conditions, physical and emotional well-being, career progression, feelings of injustice, social participation, and self-perceived happiness. Quality of life was maintained in the relationship with the family and the spouse, interpersonal relationships, and beliefs about oneself. ASD improved aspects of quality of life such as interest in, knowledge of, and exercise of rights regarding disability, autonomy to make decisions, and the ability to deal with stress. Other dimensions are also examined: a detailed characterization of the child/youth with ASD and all family members (household composition, relationship status, academic qualifications, occupation, income, and leisure), the impact of diagnosis on family well-being, medical and therapeutic processes, school inclusion, public support, social participation, and the adequacy and implementation of legislation. The study also evaluates the strengths and weaknesses of the Portuguese public rehabilitation system and demonstrates how a law that is good in theory may not solve families' problems in practice due to the allocation of insufficient public resources, both financial and human.

Keywords: autism, families, quality of life, autism spectrum disorder

Procedia PDF Downloads 357
1596 Milk Protein Genetic Variation and Haplotype Structure in Sudanese Indigenous Dairy Zebu Cattle

Authors: Ammar Said Ahmed, M. Reissmann, R. Bortfeldt, G. A. Brockmann

Abstract:

Milk protein genetic variants are of interest for characterizing domesticated mammalian species and breeds and for studying associations with economic traits. The aim of this work was to analyze milk protein genetic variation in native Sudanese cattle breeds, whose numbers have been gradually declining over recent years due to breed substitution and indiscriminate crossbreeding. Genetic variation at three milk protein genes, αS1-casein (CSN1S1), αS2-casein (CSN1S2), and ƙ-casein (CSN3), was investigated in 250 animals belonging to five Bos indicus cattle breeds of Sudan (Butana, Kenana, White-nile, Erashy, and Elgash). Allele-specific primers were designed for five SNPs to determine the CSN1S1 variants B and C, the CSN1S2 variants A and B, and the CSN3 variants A, B, and H. Allele and haplotype frequencies and genetic distances (D) were calculated, and a phylogenetic tree was constructed. All breeds were found to be polymorphic for the studied genes. The CSN1S1*C variant was very frequent (>0.63) in all analyzed breeds, with the highest frequency (0.82) in White-nile cattle. The CSN1S2*A variant (0.77) and the CSN3*A variant (0.79) had their highest frequencies in Kenana cattle. Eleven haplotypes in the casein gene cluster were inferred, six of which occurred in all breeds with remarkably different frequencies. The estimated D ranged from 0.004 to 0.049, with White-nile and Kenana the most distant breeds (D = 0.0479). The results presented contribute to the genetic knowledge of indigenous cattle and can be used for the proper definition and classification of Sudanese cattle breeds, as well as for breeding, utilization, and the potential development of conservation strategies for local breeds.
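The abstract does not state which genetic distance measure was used; a common choice for allele-frequency data is Nei's (1972) standard distance. The sketch below implements that measure on hypothetical allele frequencies (the numbers are illustrative, not the study's estimates).

```python
import math

def nei_distance(freqs_x, freqs_y):
    """Nei's (1972) standard genetic distance between two populations.
    freqs_x / freqs_y: per-locus allele-frequency dicts for each
    population, e.g. [{'B': 0.18, 'C': 0.82}, ...].
    D = -ln(Jxy / sqrt(Jx * Jy)), with J terms averaged over loci."""
    jx = jy = jxy = 0.0
    for fx, fy in zip(freqs_x, freqs_y):
        alleles = set(fx) | set(fy)
        jx += sum(fx.get(a, 0.0) ** 2 for a in alleles)
        jy += sum(fy.get(a, 0.0) ** 2 for a in alleles)
        jxy += sum(fx.get(a, 0.0) * fy.get(a, 0.0) for a in alleles)
    n = len(freqs_x)
    return -math.log((jxy / n) / math.sqrt((jx / n) * (jy / n)))

# Hypothetical CSN1S1 / CSN1S2 / CSN3 frequencies for two breeds:
white_nile = [{'B': 0.18, 'C': 0.82}, {'A': 0.60, 'B': 0.40},
              {'A': 0.70, 'B': 0.25, 'H': 0.05}]
kenana     = [{'B': 0.35, 'C': 0.65}, {'A': 0.77, 'B': 0.23},
              {'A': 0.79, 'B': 0.18, 'H': 0.03}]
print(round(nei_distance(white_nile, kenana), 4))
```

Identical allele frequencies give D = 0, and more divergent frequency profiles give larger D, matching the small distances (0.004-0.049) reported among these closely related breeds.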

Keywords: milk protein, genetic variation, casein haplotype, Bos indicus

Procedia PDF Downloads 437
1595 The Fefe Indices: The Direction of Donald Trump’s Tweets Effect on the Stock Market

Authors: Sergio Andres Rojas, Julian Benavides Franco, Juan Tomas Sayago

Abstract:

An increasing amount of research demonstrates how market mood affects financial markets, and a primary strand demonstrates how Trump's tweets impacted US interest rate volatility. Following that lead, this work evaluates the effect Trump's tweets had during his presidency on local and international stock markets, considering not just volatility but also the direction of movement. Three indexes of Trump's tweets were created relating his activity to movements in the S&P 500 using natural language analysis and machine learning algorithms. The indexes consider Trump's tweet activity and the positive or negative market sentiment it might inspire. The first explores the relationship between tweets and negative movements in the S&P 500; the second explores positive movements; the third explores the difference between up and down movements. A pseudo-investment strategy using the indexes produced statistically significant above-average abnormal returns. The findings also showed that the pseudo-strategy generated a higher return in the local market when applied to intraday data; however, only negative market sentiment caused this effect in daily data. These results suggest that the market reacted primarily to negative ideas reflected in the negative index. In the international market, no pervasive effect could be identified. A rolling-window regression model was also estimated; the results show that the impact on local and international markets is heterogeneous, time-varying, and differentiated by market sentiment, with negative sentiment more prone to a significant correlation most of the time.

Keywords: market sentiment, Twitter market sentiment, machine learning, natural language analysis

Procedia PDF Downloads 63
1594 Analysis on South Korean Early Childhood Education Teachers’ Stage of Concerns about Software Education According to the Concern-Based Adoption Model

Authors: Sun-Mi Park, Ji-Hyun Jung, Min-Jung Kang

Abstract:

Software (SW) education is scheduled to be included in the national curriculum in South Korea by 2018. However, the Korean national kindergarten curriculum has been excluded from the revision of the entire national school curriculum that includes software education. Even though SW education is not part of the current national kindergarten curriculum, there is growing interest in adopting it into early childhood education (ECE) practice. Teachers are a key element in introducing and implementing educational change such as SW education, so in preparation for its adoption in ECE it is necessary to understand ECE teachers' perceptions of and attitudes toward early childhood software education. For this study, the concern level of 219 ECE teachers regarding SW education was surveyed using the Stages of Concern Questionnaire (SoCQ). The teachers' concern was highest at Stage 0 (Unconcerned) and high at Stage 1 (Informational), Stage 2 (Personal), and Stage 3 (Management), mostly indicating a non-user pattern. However, compared with a typical non-user pattern, the personal and informational concern levels were slightly elevated, and a 'tailing up' toward Stage 6 (Refocusing) was observed; thus, a pattern close to that of a critical non-user also appeared to some extent. In addition, a significant difference in concern level was found at all stages depending on awareness of the necessity of SW education. Teachers with SW training experience showed higher intensity only at Stage 0, and there were statistically significant differences at Stages 0 and 6 depending on future implementation decisions. These results can be utilized in building a support system for ECE teachers according to their level of concern about SW education.

Keywords: concerns-based adoption model (CBAM), early childhood education teachers, software education, Stages of Concern (SoC)

Procedia PDF Downloads 207
1593 Synthesis and Characterization of Chiral Dopant Based on Schiff's Base Structure

Authors: Hong-Min Kim, Da-Som Han, Myong-Hoon Lee

Abstract:

Cholesteric liquid crystals (CLCs) draw tremendous interest due to their potential in various applications, such as cholesteric color filters in LCD devices. A CLC possesses a helical molecular orientation induced by chiral dopant molecules mixed with nematic liquid crystals, and the efficiency of a chiral dopant is quantified by its helical twisting power (HTP). In this work, we designed and synthesized a series of new chiral dopants having a Schiff’s base imine structure with different alkyl chain lengths (butyl, hexyl, and octyl), prepared from a chiral naphthylamine in a two-step reaction. The structures of the new chiral dopants were confirmed by 1H-NMR and IR spectroscopy, and their properties were investigated by differential scanning calorimetry (DSC), polarized optical microscopy (POM), and UV-Vis spectrophotometry. These solid-state chiral dopants showed excellent solubility, higher than 17 wt%, in the nematic LC (MLC-6845-000). We prepared CLC cells by mixing the nematic LC (MLC-6845-000) with different concentrations of the chiral dopants and injecting the mixtures into sandwich cells with a 5 μm cell gap and antiparallel alignment. The cholesteric phase was confirmed by POM, in which all samples showed the planar texture typical of cholesteric liquid crystals. The HTP is one of the most important properties of a CLC, and we measured HTP values from the UV-Vis transmittance spectra of CLC cells with various chiral dopant concentrations. The HTP values for the different alkyl chains are as follows: butyl chiral dopant, 29.8 μm-1; hexyl chiral dopant, 31.8 μm-1; octyl chiral dopant, 27.7 μm-1. We obtained red, green, and blue reflection colors from the CLC cells, which can be used as color filters in LCD applications.
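The abstract does not give the formula behind the HTP measurement; a common estimate derives the helical pitch from the selective reflection wavelength of a planar CLC cell (λ = n̄·p) and computes HTP = 1/(p·c). The sketch below uses illustrative numbers, not this work's measured data.

```python
def htp_from_reflection(lambda_nm, n_avg, c_wt):
    """Estimate helical twisting power (um^-1) from the selective
    reflection wavelength of a planar CLC cell.

    lambda_nm - selective reflection wavelength (nm)
    n_avg     - average refractive index of the LC mixture
    c_wt      - chiral dopant weight fraction (e.g. 0.10 for 10 wt%)

    pitch p = lambda / n_avg; HTP = 1 / (p * c). All inputs below are
    hypothetical, not measurements from this study.
    """
    pitch_um = (lambda_nm / n_avg) / 1000.0   # nm -> um
    return 1.0 / (pitch_um * c_wt)

# e.g. green reflection at 550 nm, average index 1.6, 10 wt% dopant:
print(round(htp_from_reflection(550.0, 1.6, 0.10), 1))  # -> 29.1 um^-1
```

With these illustrative inputs the result lands in the same range as the reported values (27.7-31.8 μm-1), which is why reflection spectra at several dopant concentrations suffice to extract HTP.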

Keywords: cholesteric liquid crystal, color filter, display, HTP

Procedia PDF Downloads 267
1592 2D Convolutional Networks for Automatic Segmentation of Knee Cartilage in 3D MRI

Authors: Ananya Ananya, Karthik Rao

Abstract:

Accurate segmentation of knee cartilage in 3D magnetic resonance (MR) images for quantitative assessment of volume is crucial for studying and diagnosing osteoarthritis (OA) of the knee, one of the major causes of disability in elderly people. Radiologists generally perform this task slice by slice, taking 15-20 minutes per 3D image, which leads to high inter- and intra-observer variability. Hence, automatic methods for knee cartilage segmentation are desirable and are an active field of research. This paper presents the design and experimental evaluation of fully automated methods for knee cartilage segmentation in 3D MRI based on 2D convolutional neural networks. The architectures are validated on 40 test images and 60 training images from the SKI10 dataset. The proposed methods segment 2D slices one by one, which are then combined to give the segmentation of the whole 3D image. The proposed methods are modified versions of U-net and dilated convolutions, consisting of a single step that segments the given image into 5 labels: background, femoral cartilage, tibial cartilage, femoral bone, and tibial bone, the cartilages being the primary components of interest. U-net consists of a contracting path and an expanding path, to capture context and localization respectively. Dilated convolutions lead to an exponential expansion of the receptive field with only a linear increase in the number of parameters. A combination of modified U-net and dilated convolutions has also been explored. These architectures segment one 3D image in 8-10 seconds, giving average volumetric Dice Similarity Coefficients (DSC) of 0.950-0.962 for femoral cartilage and 0.951-0.966 for tibial cartilage, with manual segmentation as the reference.
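The exponential receptive-field growth of dilated convolutions can be illustrated with a short calculation; the layer settings below are illustrative, not this paper's exact architecture.

```python
def receptive_field(kernel_size, dilations):
    """Receptive field (in pixels, along one axis) of a stack of
    stride-1 dilated convolutions. Each layer with kernel k and
    dilation d adds (k - 1) * d to the field, so doubling the dilation
    per layer grows the field exponentially with depth while the
    parameter count grows only linearly."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Five 3x3 layers with exponentially increasing dilation vs. none:
print(receptive_field(3, [1, 2, 4, 8, 16]))  # -> 63
print(receptive_field(3, [1, 1, 1, 1, 1]))   # -> 11
```

The same five layers hold the same number of weights in both cases; only the dilation schedule differs, which is the property the architecture exploits to capture slice-wide context cheaply.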

Keywords: convolutional neural networks, dilated convolutions, 3 dimensional, fully automated, knee cartilage, MRI, segmentation, U-net

Procedia PDF Downloads 261
1591 A Multilayer Perceptron Neural Network Model Optimized by Genetic Algorithm for Significant Wave Height Prediction

Authors: Luis C. Parra

Abstract:

The prediction of significant wave height is of great interest in the field of coastal activities because of the non-linear behavior of wave height and the complexity of its prediction. This study presents a machine learning model to forecast the significant wave height at the oceanographic wave-measuring buoys anchored at Mooloolaba, using Queensland Government data. Modeling was performed with a multilayer perceptron neural network optimized by a genetic algorithm (GA-MLP), with ReLU as the activation function of the MLP. The GA is in charge of optimizing the MLP hyperparameters (learning rate, hidden layers, neurons, and activation functions) and performing wrapper feature selection for the window width size. Results are assessed using the Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE). The GA-MLP algorithm was run with a population size of thirty individuals for eight generations; for the 5-steps-ahead prediction it obtained 0.00104 MSE, 0.03222 RMSE, 0.02338 MAE, and 0.71163% MAPE. The results of the analysis suggest that the GA-MLP model is effective in predicting significant wave height in a one-step forecast with distant time windows, presenting 0.00014 MSE, 0.01180 RMSE, 0.00912 MAE, and 0.52500% MAPE, with a correlation factor of 0.99940. The GA-MLP algorithm was also compared with an ARIMA forecasting model and outperformed it on all performance criteria, validating the potential of this algorithm.
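The four performance criteria used to score the forecasts can be computed as below; the wave-height series are made-up numbers for demonstration only.

```python
import math

def forecast_metrics(y_true, y_pred):
    """MSE, RMSE, MAE and MAPE (%) for a forecast series.
    y_true must be nonzero for MAPE to be defined. The series in the
    example are illustrative, not the Mooloolaba buoy data."""
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errs) / n
    mae = sum(abs(e) for e in errs) / n
    mape = 100.0 * sum(abs(e / t) for e, t in zip(errs, y_true)) / n
    return mse, math.sqrt(mse), mae, mape

y_true = [1.20, 1.35, 1.10, 1.48, 1.25]   # significant wave heights (m)
y_pred = [1.18, 1.33, 1.15, 1.45, 1.27]   # model forecasts (m)
mse, rmse, mae, mape = forecast_metrics(y_true, y_pred)
print(round(mse, 5), round(rmse, 5), round(mae, 5), round(mape, 3))
```

Because wave heights are on the order of a metre, an MSE near 0.001 (as reported) corresponds to typical errors of a few centimetres, which is what the RMSE and MAE figures express directly.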

Keywords: significant wave height, machine learning optimization, multilayer perceptron neural networks, evolutionary algorithms

Procedia PDF Downloads 107
1590 Driving Forces of Bank Liquidity: Evidence from Selected Ethiopian Private Commercial Banks

Authors: Tadele Tesfay Teame, Tsegaye Abrehame, Hágen István Zsombor

Abstract:

Liquidity is one of the main concerns for banks, and thus achieving the optimum level of liquidity is critical. The main objective of this study is to identify the driving forces of selected private commercial banks’ liquidity. To achieve this objective, an explanatory research design and a quantitative research approach were used. Data were collected from secondary sources: the sampled Ethiopian private commercial banks’ financial statements, the National Bank of Ethiopia, and the Ministry of Finance, covering the period from 2011 to 2022. Bank-specific and macroeconomic variables were analyzed using a balanced panel fixed-effect regression model, with the banks’ liquidity ratio measured as total liquid assets to total deposits. The findings revealed that bank size, capital adequacy, loan growth rate, and non-performing loans, as well as the annual inflation rate and the interest rate margin, had a statistically significant impact on the liquidity of Ethiopian private commercial banks. Thus, banks in Ethiopia should be concerned not only about internal structures and policies/procedures but should consider the internal and macroeconomic environments together when developing strategies to efficiently manage their liquidity position; to maintain financial proficiency, private commercial banks should adopt a liquidity management policy that assimilates both bank-specific and macroeconomic variables.
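The fixed-effect (within) estimator used for the balanced panel removes time-invariant bank effects by demeaning each bank's series before regression. A minimal sketch of that transformation follows, applied to the liquidity ratio (liquid assets / total deposits); the numbers are hypothetical, not the study's data.

```python
def within_transform(panel):
    """Fixed-effects (within) transformation for a balanced panel:
    subtract each bank's own time mean from its series, so that
    time-invariant bank-specific effects drop out of the regression.
    `panel` maps bank name -> list of liquidity ratios over the years;
    the example values are hypothetical."""
    out = {}
    for bank, series in panel.items():
        mean = sum(series) / len(series)
        out[bank] = [round(x - mean, 4) for x in series]
    return out

# Hypothetical liquidity ratios (liquid assets / total deposits):
panel = {"Bank A": [0.30, 0.34, 0.26],
         "Bank B": [0.42, 0.40, 0.44]}
print(within_transform(panel))
```

The same transformation is applied to every regressor; OLS on the demeaned data then yields the fixed-effect coefficient estimates for the bank-specific and macroeconomic variables.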

Keywords: liquidity, Ethiopian private commercial banks, liquidity ratio, panel data regression analysis

Procedia PDF Downloads 99
1589 Effect of Islamic Finance on Jobs Generation in Punjab, Pakistan

Authors: B. Ashraf, A. M. Malik

Abstract:

The study was conducted at the Department of Economics and Agricultural Economics, Pir Mahar Ali Shah ARID Agriculture University, Punjab, Pakistan, during 2013-16, with the purpose of discovering the effect of Islamic finance/banking on employment in Punjab, Pakistan. The Islamic banking system is a sub-component of the conventional banking system in various countries of the world; in Pakistan, however, it has been established as a separate system. Islamic banking operates under the doctrine of Shariah; it is claimed that such banking is free of interest (Riba) and embodies the philosophy and basic values of Islam in finance, reducing uncertainty, risk, and other speculative activities. Two Islamic banks, Meezan Bank Limited (Pakistan) and Al-Baraka Bank Limited (Pakistan), were randomly selected for the research, covering north Punjab (Bahawalnagar), central Punjab (Lahore), and west Punjab (Gujrat), Pakistan. A total of 206 samples were collected from the defined areas and banks through a questionnaire, and the data were analyzed using the Statistical Package for the Social Sciences (SPSS) version 21.0. Multiple linear regression was applied to test the hypothesis. The results revealed that asset formation had a significant positive impact on employment generation in Islamic finance/banking in Punjab, Pakistan, whereas technology, length of business (experience), and business size had significant negative impacts. It is concluded that employment opportunities may be created in the country by extending finance to businesses/firms to start new businesses and by the Islamic banks increasing public awareness through intensive publicity. Islamic financial institutions may also be encouraged by the Government, as they enhance employment in the country.

Keywords: assets formation, borrowers, employment generation, Islamic banks, Islamic finance

Procedia PDF Downloads 325