Search results for: aerial targets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1317

147 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis

Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara

Abstract:

Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two of them use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis, together with artificial neural networks (ANN), was successfully applied to correlate the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model to calculate: (i) the number of neurons in the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the root mean squared error (RMSE), used to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions inferred from sixteen previously developed, well-known gas geothermometers, was statistically evaluated using an external database to avoid bias.
The statistical evaluation was performed through analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The most recent results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (México). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
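As an illustration of the calibration idea described above, the following sketch fits a log-linear gas geothermometer of the form T = a + b·ln(CO₂/H₂) by ordinary least squares and reports RMSE and R. The data are synthetic and the simple linear fit stands in for the paper's Levenberg-Marquardt ANN training; coefficients and values are assumptions for demonstration only.

```python
import numpy as np

# Hypothetical calibration data: ln(CO2/H2) ratio vs measured bottomhole T (degC)
rng = np.random.default_rng(0)
ln_ratio = rng.uniform(2.0, 8.0, 50)
bht_m = 150.0 + 20.0 * ln_ratio + rng.normal(0.0, 5.0, 50)  # synthetic "measured" T

# Fit a log-linear geothermometer T = a + b * ln(CO2/H2) by least squares
A = np.column_stack([np.ones_like(ln_ratio), ln_ratio])
(a, b), *_ = np.linalg.lstsq(A, bht_m, rcond=None)

# Evaluate prediction performance the way the abstract describes: RMSE and R
bht_pred = a + b * ln_ratio
rmse = np.sqrt(np.mean((bht_pred - bht_m) ** 2))
r = np.corrcoef(bht_pred, bht_m)[0, 1]
print(f"T = {a:.1f} + {b:.1f} ln(CO2/H2), RMSE = {rmse:.1f} degC, R = {r:.3f}")
```

In the actual study the coefficients come from an ANN trained on well data; the RMSE comparison step, however, is exactly this.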

Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy

Procedia PDF Downloads 351
146 Team Teaching versus Traditional Pedagogical Method

Authors: L. M. H. Mustonen, S. A. Heikkilä

Abstract:

The focus of this paper is to describe team teaching as HAMK's pedagogical method and its impact on teachers' work. Background: Traditionally, teaching is thought of as a job where one mostly works alone, and more and more teachers feel that their work is getting more stressful. Solutions to these problems have been sought at Häme University of Applied Sciences (hereafter HAMK). HAMK has made a strategic change towards group-oriented working among teachers: instead of isolated study courses, there are now larger 15-credit study modules. Implementation: As examples of the method, two cases are presented: a technical project module and a summer studies module, the latter integrated into the EU development project Energy Efficiency with Precise Control. In autumn 2017, the technical project module will be implemented for the third time. At least three teachers are involved in it, and it is the first module the new students take. Its main focus is learning the basic skills of project work; from a communication viewpoint, students learn the basics of written, oral, and video reporting. According to our quality control system, the need for development is evaluated at the end of the module. There are always some differences in each implementation, but the basics are the same. The other case, summer studies 2017, is new and part of a larger EU project. For the first time, we took a larger group of first- to third-year students from different study programmes into the summer studies. The students learned professional skills, as well as skills from other fields of study, international cooperation, and communication. Benefits and challenges: After three years, it is possible to consider what the changes mean in the everyday work of the teachers, and of course what they mean for students and the learning process. The perspective is that of HAMK's electrical and automation study programme: at first, the change always means more work.
The routines developed over many years and the course material used for years may no longer be valid. Teachers teach in modules simultaneously, often with some subjects overlapping, and finding the time to plan the modules together is often difficult. The essential benefit is that the learning outcomes have improved, as can be seen in the feedback given by both teachers and students. Conclusions: A new type of working environment is being born. A team of teachers designs a module that matches the objectives and ponders such questions as: What are the knowledge-based targets of the module? Which pedagogical solutions will achieve the desired results? At what point do multiple teachers instruct the class together? How is the module evaluated? How can the module be developed further for the next implementation? The team discusses openly and finds the solutions. Collegial responsibility and support are always present. These are strengthening factors of the new communal university teaching culture, and they are also strong sources of pleasure at work.

Keywords: pedagogical development, summer studies, team teaching, well-being at work

Procedia PDF Downloads 109
145 High Throughput Virtual Screening against NS3 Helicase of Japanese Encephalitis Virus (JEV)

Authors: Soma Banerjee, Aamen Talukdar, Argha Mandal, Dipankar Chaudhuri

Abstract:

Japanese Encephalitis is a major infectious disease, with nearly half the world's population living in areas where it is prevalent. Currently, treatment involves only supportive care and symptom management, while prevention relies on vaccination. Due to the lack of antiviral drugs against Japanese Encephalitis Virus (JEV), the quest for such agents remains a priority. For these reasons, simulation studies of drug targets against JEV are important. Towards this purpose, docking experiments with kinase inhibitors were performed against the chosen target, NS3 helicase, as it is a nucleoside-binding protein. Previous efforts at computational drug design against JEV revealed some lead molecules by virtual screening using public domain software. To be more specific and accurate in finding leads, in this study the proprietary software Schrödinger GLIDE has been used. The druggability of the pockets in the NS3 helicase crystal structure was first calculated with SITEMAP. The sites were then screened according to compatibility with ATP, and the site most compatible with ATP was selected as the target. Virtual screening was performed by acquiring ligands from the databases KinaseSARfari, KinaseKnowledgebase, and a published inhibitor set, using GLIDE. The 25 ligands with the best docking scores from each database were re-docked in XP mode. Protein structure alignment of NS3 was performed using VAST against MMDB, and similar human proteins were docked to all the best-scoring ligands. Ligands scoring low against the human proteins were chosen for further study, and the high-scoring ligands were screened out. Seventy-three ligands were listed as the best scoring after performing HTVS. Protein structure alignment of NS3 revealed 3 human proteins with RMSD values of less than 2 Å. Docking results with these three proteins revealed the inhibitors that can interfere with and inhibit human proteins; those inhibitors were screened out.
Among the remaining ligands, those with docking scores worse than a threshold value were also removed to obtain the final hits. Analysis of the docked complexes through 2D interaction diagrams revealed the amino acid residues that are essential for ligand binding within the active site. Interaction analysis will help to find a strongly interacting scaffold among the hits. This experiment yielded 21 hits with the best docking scores, which could be investigated further for their drug-like properties. Aside from providing suitable leads, specific NS3 helicase-inhibitor interactions were identified. Target modification strategies complementing the docking methodology, which can result in better lead compounds, are in progress; such enhanced leads can then proceed to in vitro testing.
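The screening logic described above (keep ligands that dock strongly to NS3 but not to the structurally similar human proteins) can be sketched as a simple filter. All scores, cutoffs, and ligand names below are hypothetical, not values from the study:

```python
# Hypothetical docking scores (kcal/mol; more negative = stronger binding).
# Each ligand has one score against NS3 helicase and three scores against
# the structurally similar human proteins found by VAST alignment.
docking = {
    "lig_A": {"ns3": -9.2, "human": [-5.1, -4.8, -5.5]},
    "lig_B": {"ns3": -8.7, "human": [-8.9, -6.0, -5.2]},  # hits a human protein
    "lig_C": {"ns3": -6.1, "human": [-4.0, -3.9, -4.2]},  # too weak against NS3
}

NS3_CUTOFF = -8.0    # must dock at least this strongly to NS3 to survive
HUMAN_CUTOFF = -7.0  # discard if any human off-target score is this strong

hits = [
    name for name, s in docking.items()
    if s["ns3"] <= NS3_CUTOFF and all(h > HUMAN_CUTOFF for h in s["human"])
]
print(hits)
```

Here only lig_A survives: lig_B is rejected as a likely human off-target binder and lig_C as too weak against NS3, mirroring the two screening-out steps in the abstract.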

Keywords: antivirals, docking, GLIDE, high-throughput virtual screening, Japanese encephalitis, NS3 helicase

Procedia PDF Downloads 230
144 The Use of Modern Technologies and Computers in the Archaeological Surveys of Sistan in Eastern Iran

Authors: Mahyar MehrAfarin

Abstract:

The Sistan region in eastern Iran is a significant archaeological area in Iran and the Middle East, encompassing 10,000 square kilometers. Previous archaeological field surveys have identified 1,662 ancient sites dating from prehistoric periods to the Islamic period. Research Aim: This article aims to explore the utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, and the benefits derived from their implementation. Methodology: The research employs a descriptive-analytical approach combined with field methods. New technologies and software, such as GPS, drones, magnetometers, equipped cameras, satellite images, and software programs like GIS, Map source, and Excel, were utilized to collect information and analyze data. Findings: The use of modern technologies and computers in archaeological field surveys proved to be essential. Traditional archaeological activities, such as excavation and field surveys, are time-consuming and costly. Employing modern technologies helps in preserving ancient sites, accurately recording archaeological data, reducing errors and mistakes, and facilitating correct and accurate analysis. Creating a comprehensive and accessible database, generating statistics, and producing graphic designs and diagrams are additional advantages derived from the use of efficient technologies in archaeology. Theoretical Importance: The integration of computers and modern technologies in archaeology contributes to interdisciplinary collaborations and facilitates the involvement of specialists from various fields, such as geography, history, art history, anthropology, laboratory sciences, and computer engineering. The utilization of computers in archaeology spans diverse areas, including database creation, statistical analysis, graphics, laboratory and engineering applications, and even artificial intelligence, which remains an unexplored area in Iranian archaeology.
Data Collection and Analysis Procedures: Information was collected using modern technologies and software, capturing geographic coordinates, aerial images, archaeogeophysical data, and satellite images. These data were then entered into various software programs for analysis, including GIS, Map source, and Excel. The research employed both descriptive and analytical methods to present findings effectively. Question Addressed: The primary question addressed in this research is how the use of modern technologies and computers in archaeological field surveys in Sistan, Iran, can enhance archaeological data collection, preservation, analysis, and accessibility. Conclusion: The utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, has proven to be necessary and beneficial. These technologies aid in preserving ancient sites, accurately recording archaeological data, reducing errors, and facilitating comprehensive analysis. The creation of accessible databases, statistics generation, graphic designs, and interdisciplinary collaborations are further advantages observed. It is recommended to explore the potential of artificial intelligence in Iranian archaeology as an unexplored area. The research has implications for cultural heritage organizations, archaeology students, and universities involved in archaeological field surveys in Sistan and Baluchistan province. Additionally, it contributes to enhancing the understanding and preservation of Iran's archaeological heritage.
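A minimal sketch of the kind of site database and GIS-style query the survey workflow supports: records keyed by GPS coordinates and period, with a great-circle distance helper of the sort GIS software computes internally. The site IDs, coordinates, and periods below are invented for illustration:

```python
import math

# Hypothetical site records as collected in a survey database:
# (site id, latitude, longitude, period)
sites = [
    ("S-0001", 30.95, 61.50, "prehistoric"),
    ("S-0002", 31.02, 61.47, "Islamic"),
    ("S-0003", 30.88, 61.62, "prehistoric"),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometres."""
    R = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Simple queries of the kind GIS/Excel workflows support:
prehistoric = [s[0] for s in sites if s[3] == "prehistoric"]
d = haversine_km(sites[0][1], sites[0][2], sites[1][1], sites[1][2])
print(prehistoric, f"{d:.1f} km")
```

Coordinates recorded once in the field can then feed statistics, maps, and period-by-period analyses without revisiting the sites.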

Keywords: Iran, Sistan, archaeological surveys, computer use, modern technologies

Procedia PDF Downloads 79
143 Applying the View of Cognitive Linguistics on Teaching and Learning English at UFLS - UDN

Authors: Tran Thi Thuy Oanh, Nguyen Ngoc Bao Tran

Abstract:

In the view of Cognitive Linguistics (CL), knowledge and experience of things and events are used by human beings in expressing concepts, especially in daily life. The human conceptual system is considered to be fundamentally metaphorical in nature. It is also said that the way we think, what we experience, and what we do every day is very much a matter of language. In fact, language is an integral factor of cognition, in that CL is a family of broadly compatible theoretical approaches sharing this fundamental assumption. The relationship between language and thought, of course, has been addressed by many scholars; CL, however, strongly emphasizes specific features of this relation. Through experience, we gain knowledge of life: familiar things serve as source domains, and we make use of all aspects of such a domain in metaphorically understanding abstract targets. The paper reports on applying this theory in pragmatics lessons for English majors at the University of Foreign Language Studies - The University of Da Nang (UFLS - UDN), Viet Nam. We conducted the study with two groups of third-year students taking English pragmatics lessons. To clarify this study, the data from these two classes were collected for analyzing linguistic perspectives from the viewpoint of CL and that of traditional concepts. Descriptive, analytic, synthetic, comparative, and contrastive methods were employed to analyze data from 50 students undergoing English pragmatics lessons. One group was taught how to transfer the meanings of expressions in daily life with the view of CL, while the other group used the traditional view. The research indicated that both ways had a significant influence on students' English translating and interpreting abilities. However, whereas the traditional way had little effect on students' understanding, the CL view had a considerable impact. The study compared the CL and traditional teaching approaches to identify benefits and challenges associated with incorporating CL into the curriculum.
It seeks to extend CL concepts by analyzing metaphorical expressions in daily conversations, offering insights into how CL can enhance language learning. The findings shed light on the effectiveness of applying CL in teaching and learning English pragmatics. They highlight the advantages of using metaphorical expressions from daily life to facilitate understanding, and they explore how CL can enhance cognitive processes in language learning in general, and in teaching English pragmatics to third-year students at the UFLS - UDN, Vietnam, in particular. The study contributes to the theoretical understanding of the relationship between language, cognition, and learning. By emphasizing the metaphorical nature of human conceptual systems, it offers insights into how CL can enrich language teaching practices and enhance students' comprehension of abstract concepts.

Keywords: cognitive linguistics, Lakoff and Johnson, pragmatics, UFLS

Procedia PDF Downloads 36
142 Molecular Docking Analysis of Flavonoids Reveal Potential of Eriodictyol for Breast Cancer Treatment

Authors: Nicole C. Valdez, Vincent L. Borromeo, Conrad C. Chong, Ahmad F. Mazahery

Abstract:

Breast cancer is the most prevalent cancer worldwide; the majority of cases are estrogen-receptor positive and involve two receptor proteins. The binding of estrogen to estrogen receptor alpha (ERα) promotes breast cancer growth, while its binding to estrogen receptor beta (ERβ) inhibits tumor growth. While natural products have been a promising source of chemotherapeutic agents, the challenge remains to find a bioactive compound that specifically targets cancer cells, minimizing side effects on normal cells. Flavonoids are natural products that act as phytoestrogens and induce the same response as estrogen. They are able to compete with estrogen for binding to ERα; however, they have a higher binding affinity for ERβ. Their abundance in nature and low toxicity make them potential candidates for breast cancer treatment. This study aimed to determine, through molecular docking, which particular flavonoids can specifically recognize ERβ and potentially be used for breast cancer treatment. A total of 206 flavonoids comprising 97 isoflavones and 109 flavanones were collected from ZINC15, while the 3D structures of ERβ and ERα were obtained from the Protein Data Bank. These flavonoid subclasses were chosen as they bind more strongly to ERs due to their chemical structure. The structures of the flavonoid ligands were converted using Open Babel, while the estrogen receptor protein structures were prepared using AutoDock MGL Tools. The optimal binding site was found using BIOVIA Discovery Studio Visualizer before docking all flavonoids on both ERβ and ERα through AutoDock Vina. Genistein is a flavonoid that exhibits anticancer effects by binding to ERβ, so its binding affinity was used as a baseline. Eriodictyol and 4”,6”-Di-O-Galloylprunin both exceeded genistein's binding affinity for ERβ while remaining below its binding affinity for ERα. Of the two, eriodictyol was pursued due to its antitumor properties on a lung cancer cell line and on glioma cells.
It is able to arrest the cell cycle at the G2/M phase by inhibiting the mTOR/PI3K/Akt cascade and is able to induce apoptosis via the PI3K/Akt/NF-κB pathway. Protein pathway and gene analyses were also conducted using ChEMBL and PANTHER, which suggested that eriodictyol might exert anticancer effects through the ROS1, CA7, KMO, and KDM1A genes, which are involved in cell proliferation in breast cancer, non-small cell lung cancer, and other diseases. The high binding affinity of eriodictyol for ERβ, together with its potentially affected genes and antitumor effects, therefore makes it a candidate for the development of new breast cancer treatments. Verification through in vitro experiments, such as checking the upregulation and downregulation of genes through qPCR and checking cell cycle arrest using a flow cytometry assay, is recommended.
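The baseline comparison against genistein can be sketched as a simple selectivity filter: keep flavonoids that bind ERβ more strongly than genistein while binding ERα more weakly than genistein. The affinity values below are hypothetical, not the study's actual docking scores:

```python
# Hypothetical Vina-style binding affinities (kcal/mol; more negative = stronger).
# Genistein's scores serve as the selectivity baseline, as in the study.
genistein = {"er_beta": -8.5, "er_alpha": -7.8}

candidates = {
    "eriodictyol": {"er_beta": -9.1, "er_alpha": -7.2},
    "flavanone_X": {"er_beta": -8.0, "er_alpha": -8.3},  # not ER-beta selective
}

def is_selective(scores, baseline):
    """True if the ligand beats the baseline on ER-beta affinity
    while staying below the baseline's ER-alpha affinity."""
    return (scores["er_beta"] < baseline["er_beta"]
            and scores["er_alpha"] > baseline["er_alpha"])

selected = [name for name, s in candidates.items()
            if is_selective(s, genistein)]
print(selected)
```

Under these assumed numbers only eriodictyol passes both conditions, which is the shape of the result reported in the abstract.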

Keywords: breast cancer, estrogen receptor, flavonoid, molecular docking

Procedia PDF Downloads 89
141 Advances in Design Decision Support Tools for Early-stage Energy-Efficient Architectural Design: A Review

Authors: Maryam Mohammadi, Mohammadjavad Mahdavinejad, Mojtaba Ansari

Abstract:

The main driving forces for the increasing movement towards the design of High-Performance Buildings (HPB) are building codes and rating systems that address the various components of the building and their impact on the environment and energy conservation through various methods, such as prescriptive methods or simulation-based approaches. The methods and tools developed to meet these needs, which are often based on building performance simulation tools (BPST), have limitations in terms of compatibility with the integrated design process (IDP) and HPB design, as well as use by architects in the early stages of design (when the most important decisions are made). To overcome these limitations, in recent years efforts have been made to develop design decision support systems, which are often based on artificial intelligence. Numerous needs and steps for designing and developing a Decision Support System (DSS) that complies with the early stages of energy-efficient architectural design, consisting of combinations of different methods in an integrated package, have been listed in the literature. While various review studies have been conducted in connection with each of these techniques (such as optimization, sensitivity and uncertainty analysis, etc.) and their integration with specific targets, this article is a critical and holistic review of the research that leads to the development of applicable systems or the introduction of a comprehensive framework for developing models that comply with the IDP. Information resources such as ScienceDirect and Google Scholar were searched using specific keywords, and the results are divided into two main categories: simulation-based DSSs and meta-simulation-based DSSs. The strengths and limitations of different models are highlighted, two general conceptual models are introduced for each category, and the degree of compliance of these models with the IDP framework is discussed.
The research shows a movement towards Multi-Level of Development (MOD) models that combine well with the early stages of integrated design (the schematic design and design development stages) and that are heuristic, hybrid, and meta-simulation-based, relying on big real-world data (such as Building Energy Management System data or web data). Obtaining, using, and combining these data with simulation data to create models that better handle uncertainty, that are more dynamic and more sensitive to context and culture, and that can generate economy- and energy-efficient design scenarios using local data (to be more harmonized with circular economy principles), are important research areas in this field. The results of this study are a roadmap for researchers and developers of these tools.

Keywords: integrated design process, design decision support system, meta-simulation based, early stage, big data, energy efficiency

Procedia PDF Downloads 162
140 Relationship Between Occupants' Behaviour and Indoor Air Quality in Malaysian Public Hospital Outpatient Department

Authors: Farha Ibrahim, Ely Zarina Samsudin, Ahmad Razali Ishak, Jeyanthini Sathasivam

Abstract:

Introduction: Indoor air quality (IAQ) has recently gained substantial traction as the airborne transmission of infectious respiratory disease has become an increasing public health concern. Public hospital outpatient department (OPD) IAQ warrants special consideration, as the OPD is the most visited department, in which patients and staff are all directly impacted by poor IAQ. However, there is limited evidence on IAQ in these settings. Moreover, occupants' behavior, such as occupant movement and the operation of doors, windows, and appliances, has been shown to significantly affect IAQ, yet the influence of these determinants on IAQ in such settings has not been established. Objectives: This study aims to examine IAQ in Malaysian public hospital OPDs and assess its relationships with occupants' behavior. Methodology: A multicenter cross-sectional study was conducted, in which Johor public hospital OPDs (n=6) were selected by stratified random sampling according to building age. IAQ measurements included indoor air temperature, relative humidity (RH), air velocity (AV), carbon dioxide (CO₂), total bacterial count (TBC), and total fungal count (TFC). Occupants' behaviors in the Malaysian public hospital OPDs were assessed using observation forms, and the results were analyzed. Descriptive statistics were performed to characterize all study variables, whereas non-parametric Spearman rank correlation analysis was used to assess the correlation between IAQ and occupants' behavior. Results: After adjusting for potential confounders, the study suggests that occupants' movement, such as being seated quietly, is significantly correlated with AV in the new building (r 0.642, p-value 0.010), CO₂ in the new (r 0.772, p-value <0.001) and old buildings (r -0.559, p-value 0.020), TBC in the new (r 0.747, p-value 0.001) and old buildings (r -0.559, p-value 0.020), and TFC in the new (r 0.777, p-value <0.001) and old buildings (r -0.485, p-value 0.049).
In addition, standing relaxed movement is correlated with indoor air temperature (r 0.823, p-value <0.001) in the new building, and with CO₂ (r 0.559, p-value 0.020), TBC (r 0.559, p-value 0.020), and TFC (r -0.485, p-value 0.049) in the old building, while walking is correlated with AV in the new building (r -0.642, p-value 0.001), CO₂ in the new (r -0.772, p-value <0.001) and old buildings (r 0.559, p-value 0.020), TBC in the new (r -0.747, p-value 0.001) and old buildings (r 0.559, p-value 0.020), and TFC in the old building (r -0.485, p-value 0.049). The indoor air temperature is significantly correlated with the number of doors kept open (r 0.522, p-value 0.046), the frequency of door adjustments (r 0.753, p-value 0.001), the number of windows kept open (r 0.522, p-value 0.046), the number of air-conditioners (AC) switched on (r 0.698, p-value 0.004), and the frequency of AC adjustment (r 0.753, p-value 0.001) in the new hospital OPD building. AV is significantly correlated with the number of doors kept open (r 0.642, p-value 0.01), the frequency of door adjustments (r 0.553, p-value 0.032), the number of windows kept open (r 0.642, p-value 0.01), and the frequency of AC adjustment, the number of fans switched on, and the frequency of fan adjustment (all with r 0.553, p-value 0.032) in the new building.
In the old hospital OPD building, the number of doors kept open is significantly correlated with CO₂ and TBC (both r -0.559, p-value 0.020) and TFC (r -0.495, p-value 0.049); the frequency of door adjustment with CO₂ and TBC (both r -0.559, p-value 0.020) and TFC (r -0.495, p-value 0.049); the number of windows kept open with CO₂ and TBC (both r 0.559, p-value 0.020) and TFC (r 0.495, p-value 0.049); the frequency of window adjustment with CO₂ and TBC (both r -0.559, p-value 0.020) and TFC (r -0.495, p-value 0.049); the number of AC switched on with CO₂ and TBC (both r -0.559, p-value 0.020) and TFC (r -0.495, p-value 0.049); the frequency of AC adjustment with CO₂ (r 0.559, p-value 0.020), TBC (r 0.559, p-value 0.020), and TFC (r -0.495, p-value 0.049); the number of fans switched on with CO₂ and TBC (both r 0.559, p-value 0.020) and TFC (r 0.495, p-value 0.049); and the frequency of fan adjustment with CO₂ and TBC (both r -0.559, p-value 0.020) and TFC (r -0.495, p-value 0.049). Conclusion: This study provides evidence on IAQ parameters in Malaysian public hospital OPDs and significant factors that may be effective targets of prospective intervention, thus enabling stakeholders to develop appropriate policies and programs to mitigate IAQ issues in Malaysian public hospital OPDs.
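The Spearman rank correlation used throughout these results can be sketched in a few lines: rank both variables, then compute the Pearson correlation of the ranks. The paired occupant-count and CO₂ readings below are hypothetical, and the rank helper omits tie correction for brevity:

```python
# A minimal Spearman rank correlation, of the kind used to relate
# occupant behaviour counts to IAQ readings.
def ranks(xs):
    """Assign ranks 1..n by sorted order (no tie correction in this sketch)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    """Pearson correlation computed on the ranks of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical readings: occupants seated quietly vs indoor CO2 (ppm)
seated = [2, 5, 8, 11, 14, 17, 21, 25]
co2 = [460, 480, 510, 530, 590, 610, 640, 700]
print(round(spearman(seated, co2), 3))  # perfectly monotone data -> 1.0
```

Because Spearman's r depends only on ranks, it captures monotone behaviour-IAQ relationships without assuming normality, which is why the study's analysis is non-parametric.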

Keywords: outpatient department, IAQ, occupants' practice, public hospital

Procedia PDF Downloads 93
139 Retrofitting Insulation to Historic Masonry Buildings: Improving Thermal Performance and Maintaining Moisture Movement to Minimize Condensation Risk

Authors: Moses Jenkins

Abstract:

Much of the focus when improving energy efficiency in buildings falls on raising standards within new-build dwellings. However, as a significant proportion of the building stock across Europe is of historic or traditional construction, there is also a pressing need to improve the thermal performance of structures of this sort. On average, around twenty percent of buildings across Europe are of historic masonry construction. In order to meet carbon reduction targets, these buildings will need to be retrofitted with insulation to improve their thermal performance. At the same time, there is also a need to balance this with maintaining the ability of historic masonry construction to allow moisture movement through the building fabric to take place. This moisture transfer, often referred to as 'breathable construction', is critical to the success, or otherwise, of retrofit projects. The significance of this paper is to demonstrate that substantial thermal improvements can be made to historic buildings whilst avoiding damage to building fabric through surface or interstitial condensation. The paper will analyze the results of a wide range of retrofit measures installed in twenty buildings as part of Historic Environment Scotland's technical research program. This program has been active for fourteen years and has seen interventions across a wide range of building types, using over thirty different methods and materials to improve the thermal performance of historic buildings. The first part of the paper will present the range of interventions which have been made. This includes insulating mass masonry walls both internally and externally, warm and cold roof insulation, and improvements to floors. The second part of the paper will present the results of monitoring work which has taken place in these buildings after being retrofitted.
This will be in terms of both thermal improvement, expressed as a U-value as defined in BS EN ISO 7345:1987, and also, crucially, the results of moisture monitoring both on the surface of masonry walls following retrofit and within the masonry itself. The aim of this moisture monitoring is to establish whether there are any problems with interstitial condensation. This monitoring utilizes Interstitial Hygrothermal Gradient Monitoring (IHGM) and similar methods to establish relative humidity on the surface of and within the masonry. The results of the testing are clear and significant for retrofit projects across Europe. Where a building is of historic construction, the use of materials for wall, roof, and floor insulation which are permeable to moisture vapor provides significant thermal improvements (achieving a U-value as low as 0.2 W/m²K) whilst avoiding problems of both surface and interstitial condensation. As the evidence which will be presented in the paper comes from monitoring work in buildings rather than theoretical modeling, there are many important lessons which can be learned and which can inform retrofit projects to historic buildings throughout Europe.
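A U-value of the kind reported here can be sketched from layer thicknesses and thermal conductivities as U = 1 / (Rsi + Σ d/λ + Rse). The wall build-up, material values, and surface resistances below are illustrative assumptions, not measurements from the monitored buildings:

```python
# A minimal U-value sketch for a mass masonry wall retrofitted with a
# vapour-permeable insulation layer. Layers are (thickness m, conductivity W/mK);
# all values are illustrative assumptions.
def u_value(layers, rsi=0.13, rse=0.04):
    """U = 1 / (Rsi + sum(d / lambda) + Rse), in W/m2K.
    rsi/rse are assumed internal/external surface resistances (m2K/W)."""
    r_total = rsi + sum(d / lam for d, lam in layers) + rse
    return 1.0 / r_total

wall = [
    (0.600, 1.5),    # 600 mm solid masonry
    (0.100, 0.038),  # 100 mm wood-fibre insulation (vapour permeable)
    (0.015, 0.19),   # 15 mm lime plaster finish
]
print(f"U = {u_value(wall):.2f} W/m2K")
```

Under these assumptions the retrofitted wall lands near 0.3 W/m²K; thicker or better-performing insulation moves the figure towards the 0.2 W/m²K achieved in the monitored cases, at the cost of a greater need to check interstitial condensation risk.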

Keywords: insulation, condensation, masonry, historic

Procedia PDF Downloads 173
138 Enhancing Institutional Roles and Managerial Instruments for Irrigation Modernization in Sudan: The Case of Gezira Scheme

Authors: Mohamed Ahmed Abdelmawla

Abstract:

To achieve the Millennium Development Goals (MDGs) engaged with agriculture, i.e., poverty alleviation targets, human resources involved in agricultural sectors, with special emphasis on irrigation, must receive a wealth of practical experience and training. Increased food production, including staple food, is needed to overcome the present and future threats to food security. This should happen within a framework of sustainable management of natural resources, elimination of unsustainable methods of production, and poverty reduction (i.e., the axes of modernization). A didactic tool to confirm the task of wise and maximum utility is best management and accurate measurement, as major requisites for the modernization process. The key component of modernization as a warranted goal is paying great attention to management and measurement issues via capacity building. As such, this paper stresses the issues of discharge management and measurement by Field Outlet Pipes (FOP) within the Gezira Scheme, where nine FOPs were randomly selected as representative locations. These FOPs extend along the Gezira Main Canal from the Kilo 57 area in the south up to Kilo 194 in the north. The following steps were followed during the field data collection and measurements: for each selected FOP, a 90° V-notch thin-plate weir was placed in such a way that the water was directed to pass only through the notch. An optical survey level was used to measure the water head over the notch and the FOP. Both the discharge rates calculated from the V-notch measurements, denoted [Qc], and the adopted discharges given by the Ministry of Irrigation and Water Resources (MOIWR), denoted [Qa], are tackled for the average of three replicated readings undertaken at each location. The study revealed that the FOP overestimates, and sometimes underestimates, the discharges.
This is attributed to the fact that the original design specifications are no longer met under present conditions, where water is allowed to flow day and night with high head fluctuation; the FOP is a non-modular structure, i.e. its flow depends on both upstream and downstream levels, as confirmed by the results of this study. It is therefore advisable to quantify FOP discharge with weirs or Parshall flumes. The cropping calendar should be clearly determined and agreed upon before the beginning of the season, in coordination between the Sudan Gezira Board (SGB) and the Ministry of Irrigation and Water Resources. Water indenting should then be based on actual Crop Water Requirements (CWRs), not on rules of thumb (420 m³/feddan, irrespective of crop or time of season).
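The V-notch measurement step can be sketched numerically. The standard thin-plate weir equation for a 90° notch is used below with a typical discharge coefficient of 0.58; the coefficient is an illustrative assumption, not a value calibrated in this study.

```python
import math

def v_notch_discharge(head_m: float, cd: float = 0.58) -> float:
    """Discharge (m^3/s) over a 90-degree V-notch thin-plate weir.

    Standard weir equation: Q = Cd * (8/15) * sqrt(2g) * tan(theta/2) * H^2.5,
    with theta = 90 degrees. cd = 0.58 is a typical discharge coefficient;
    a field calibration may use a slightly different value.
    """
    g = 9.81  # gravitational acceleration, m/s^2
    return cd * (8.0 / 15.0) * math.sqrt(2.0 * g) * math.tan(math.radians(45.0)) * head_m ** 2.5

# Example: a measured head of 10 cm gives roughly 4.3 litres per second
q = v_notch_discharge(0.10)
```

Because discharge scales with H^2.5, small errors in the measured head translate into much larger errors in [Qc], which is why an optical survey level is used to read the head precisely.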

Keywords: management, measurement, MDGs, modernization

Procedia PDF Downloads 251
137 Fabrication of High-Aspect Ratio Vertical Silicon Nanowire Electrode Arrays for Brain-Machine Interfaces

Authors: Su Yin Chiam, Zhipeng Ding, Guang Yang, Danny Jian Hang Tng, Peiyi Song, Geok Ing Ng, Ken-Tye Yong, Qing Xin Zhang

Abstract:

Brain-machine interfaces (BMI) are a field rich in exploration opportunities, in which neural activity is used to interconnect with a myriad of external devices. Research and development have expanded into areas ranging from medicine, gaming, and entertainment to safety and security. The technology has been extended to therapy for neurological disorders such as obsessive-compulsive disorder and Parkinson's disease by introducing current pulses to specific regions of the brain. Nonetheless, developing a BMI system that observes, records, and alters neural signals in real time without delayed response still requires a significant amount of effort. To date, the feature size of interface devices and the density of the electrode population remain limitations to seamless BMI performance: current BMI electrodes range from 10 to 100 microns in diameter. Hence, to accommodate precise monitoring at the single-cell level, smaller and denser nanoscale nanowire electrode arrays are vital. In this paper, we showcase the fabrication of high-aspect-ratio vertical silicon nanowire electrode arrays using microelectromechanical system (MEMS) methods. Nanofabrication of the nanowire electrodes involves deep reactive ion etching, thermal oxide thinning, electron-beam lithography patterning, sputtering of metal targets, and bottom anti-reflection coating (BARC) etching. Metallization of the nanowire electrode tip is a prominent process for optimizing the nanowire's electrical conductivity, and this step remains a challenge during fabrication. Metal electrodes were lithographically defined, yet these metal contacts outline a size scale larger than nanometer-scale building blocks, further limiting potential advantages.
Therefore, we present an integrated contact solution that overcomes this size constraint through a self-aligned nickel silicidation process on the tips of the vertical silicon nanowire electrodes. A 4 x 4 array of vertical silicon nanowire electrodes with a diameter of 290 nm and a height of 3 µm has been successfully fabricated.

Keywords: brain-machine interfaces, microelectromechanical systems (MEMS), nanowire, nickel silicide

Procedia PDF Downloads 435
136 Thermoplastic-Intensive Battery Trays for Optimum Electric Vehicle Battery Pack Performance

Authors: Dinesh Munjurulimana, Anil Tiwari, Tingwen Li, Carlos Pereira, Sreekanth Pannala, John Waters

Abstract:

With the rapid transition to electric vehicles (EVs) across the globe, car manufacturers need integrated and lightweight solutions for the battery packs of these vehicles. An integral part of a battery pack is the battery tray, which constitutes a significant portion of the pack's overall weight. Based on the functional requirements, cost targets, and packaging space available, a range of materials, from metals and composites to plastics, is often used to develop these battery trays. This paper considers the design and development of integrated thermoplastic-intensive battery trays using the available packaging space of a representative EV battery pack. Multiple concepts are proposed that integrate connected systems, such as the cooling plates and underbody impact protection parts of a multi-piece incumbent battery pack. The resulting digital prototype was evaluated for several mechanical performance measures, such as mechanical shock, drop, crush resistance, modal analysis, and torsional stiffness, and its performance was compared with the incumbent solution. In addition, insights are gleaned into how these novel approaches can be optimized to meet or exceed the performance of incumbent designs. The preliminary manufacturing feasibility of the optimal solution using injection molding and other manufacturing methods commonly used for thermoplastics is briefly explained. Numerical and analytical evaluations are then performed to show a representative Pareto front of cost vs. volume of the production parts. The proposed solution is observed to offer weight savings of up to 40% at the component level and the elimination of up to two systems in the battery pack of a typical battery EV, while offering the potential to meet the required performance measures highlighted above.
These conceptual solutions are also observed to offer potential secondary benefits, such as improved thermal and electrical isolation and the ability to achieve complex geometrical features, demonstrating that the complete packaging space available in the vehicle platform considered can be used. The detailed study presented in this paper serves as a valuable reference for researchers across the globe working on the development of EV battery packs, especially those interested in the potential of alternate solutions as part of a mixed-material system to help capture untapped opportunities to optimize performance and meet critical application requirements.

Keywords: thermoplastics, lightweighting, part integration, electric vehicle battery packs

Procedia PDF Downloads 205
135 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence

Authors: Gus Calderon, Richard McCreight, Tammy Schwartz

Abstract:

Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high spatial resolution (5” ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing combines object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community is divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat station imaging of fixed GPS locations to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe. 
To assist homeowners facing increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners' understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements which can be mitigated. Geospatial data from FireWatch's defensible space maps were combined with Black Swan's patented approach, which uses 39 other risk characteristics, into a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction, and it starts with high-quality data and education.
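The spectral vegetation index mapping mentioned above can be illustrated with the most common such index, NDVI; FireWatch's actual classification algorithms are proprietary, so this is a generic sketch rather than their method.

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index from near-infrared and red reflectance.

    Values near +1 indicate dense, healthy (i.e. flammable, if near structures)
    vegetation; values near 0 indicate bare soil, rock, or structures. Shown
    here as the standard representative vegetation index, not FireWatch's own.
    """
    return (nir - red) / (nir + red)

# Healthy vegetation reflects strongly in NIR and absorbs red light,
# so its NDVI is high; per-pixel NDVI maps flag fuels around structures.
vegetation = ndvi(0.50, 0.10)  # roughly 0.67
bare_soil = ndvi(0.25, 0.20)   # close to 0.1
```

Applied per pixel of a georeferenced orthomosaic, an index like this is what turns raw multispectral imagery into the vegetation condition maps described above.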

Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk

Procedia PDF Downloads 108
134 The Use of Political Savviness in Dealing with Workplace Ostracism: A Social Information Processing Perspective

Authors: Amy Y. Wang, Eko L. Yi

Abstract:

Can vicarious experiences of workplace ostracism affect employees' willingness to voice? Given the increasingly interdependent nature of the modern workplace, in which employees rely on social interactions to fulfill organizational goals, workplace ostracism, the extent to which an individual perceives that he or she is ignored or excluded by others in the workplace, has garnered significant interest from scholars and practitioners alike. Extending beyond conventional studies that largely focus on the perspectives and outcomes of ostracized targets, we address the indirect effects of workplace ostracism on third-party employees embedded in the same social context. Using a social information processing approach, we propose that the ostracism of coworkers acts as political information that influences third-party employees in their decisions to engage in risky and discretionary behaviors such as employee voice. To make sense of and navigate experiences of workplace ostracism, we posit that political understanding and political skill together allow third-party employees to minimize the risks and uncertainty of voicing. This conceptual model was tested in a study involving 154 supervisor-subordinate dyads of a publicly listed biotechnology firm located in Mainland China. Each supervisor and his or her direct subordinates composed a work team; each team had between two and four subordinates. Human resources used the master list to distribute ID-coded questionnaires to the matching names. All studied constructs were measured using existing scales proven effective in previous literature. Hypotheses were tested using confirmatory factor analysis and hierarchical multiple regression. All three hypotheses were supported, showing that employees were less likely to engage in voice behaviors when their coworkers reported having experienced ostracism in the workplace.
Results also showed a significant three-way interaction among coworkers' ostracism, political understanding, and political skill on employee voice, indicating that political savviness is a valuable resource for mitigating ostracism's negative and indirect effects. Our results illustrate that having ostracized coworkers indeed adversely impacts an employee's own voice behavior. However, not all individuals react passively to the social context; rather, we found that the voice behaviors of politically savvy individuals, those possessing both political understanding and political skill, were less impacted by ostracism in their work environment. At the same time, we found that having only political understanding or only political skill was significantly less effective in mitigating ostracism's negative effects, suggesting that a duality of political knowledge and political skill is necessary to combat ostracism. Organizational implications, recommendations, and future research ideas are also discussed.
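The hierarchical-regression logic behind testing such a three-way interaction can be sketched on simulated data; the variable names, effect sizes, and sample below are hypothetical illustrations, not the study's data or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
ostracism = rng.normal(size=n)       # coworkers' ostracism (standardized)
understanding = rng.normal(size=n)   # political understanding
skill = rng.normal(size=n)           # political skill

# Simulated pattern: ostracism suppresses voice, but the penalty shrinks
# when understanding AND skill are both high (a three-way interaction).
voice = (-0.5 * ostracism
         + 0.3 * ostracism * understanding * skill
         + rng.normal(scale=0.1, size=n))

# Final regression step: main effects plus all interactions up to three-way.
X = np.column_stack([
    np.ones(n), ostracism, understanding, skill,
    ostracism * understanding, ostracism * skill, understanding * skill,
    ostracism * understanding * skill,
])
beta, *_ = np.linalg.lstsq(X, voice, rcond=None)
# beta[1] recovers the negative main effect; beta[7] the three-way term.
```

A significant positive coefficient on the triple-product term is exactly the pattern reported above: the ostracism-voice link weakens when both political resources are present.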

Keywords: employee voice, organizational politics, social information processing, workplace ostracism

Procedia PDF Downloads 140
133 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned-gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against measured NOx, which limits the predictions of purely empirical models to the region in which they were calibrated. An alternative is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed from steady-state data collected across the entire operating region of the engine and from a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperatures of both the burned and unburned zones are considered over the combustion period, i.e. from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO), and the oxygen consumed in the burned zone and the trapped fuel mass are also accounted for. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work, with a substantial number of cases tested for different engine configurations over a large span of speed and load points.
Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions under different ambient conditions. Its advantages, such as high accuracy and robustness across operating conditions, low computational time, and the lower number of data points required for calibration, establish a platform on which the model-based approach can be used for the engine calibration and development process. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), and the NO2/NOx ratio.
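The semi-empirical character of such a model can be illustrated with the thermal (extended Zeldovich) scaling of NO formation from burned-zone temperature and oxygen concentration; the constants below are placeholders for illustration, not the calibrated values of the model described above.

```python
import math

def nox_index(burned_zone_temp_k: float, o2_frac: float,
              a: float = 1.0e6, activation_temp_k: float = 38000.0) -> float:
    """Illustrative semi-empirical NOx formation index.

    Thermal NO formation (extended Zeldovich mechanism) scales roughly as
    A * exp(-Ta/T) * sqrt([O2]). Here T is the burned-zone temperature and
    [O2] the unconsumed oxygen fraction; A and Ta are placeholder constants,
    which a real model would calibrate against measured engine-out NOx.
    """
    return a * math.exp(-activation_temp_k / burned_zone_temp_k) * math.sqrt(o2_frac)

# The exponential makes NOx extremely sensitive to burned-zone temperature,
# which is why EGR (lowering flame temperature) suppresses NOx so strongly.
low = nox_index(2200.0, 0.10)
high = nox_index(2400.0, 0.10)
```

Physical inputs like these (zone temperatures from IVC to EVO, burned-zone O2) are what let a semi-empirical model extrapolate beyond the calibrated region, unlike the purely empirical fits discussed above.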

Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical

Procedia PDF Downloads 114
132 Biodegradable Self-Supporting Nanofiber Membranes Prepared by Centrifugal Spinning

Authors: Milos Beran, Josef Drahorad, Ondrej Vltavsky, Martin Fronek, Jiri Sova

Abstract:

While most nanofibers are produced using electrospinning, this technique suffers from several drawbacks, such as the requirement for specialized equipment, high electrical potential, and electrically conductive targets. Consequently, recent years have seen the emergence of novel strategies for generating nanofibers at larger scale and higher throughput. Centrifugal spinning is a simple, cheap, and highly productive technology for nanofiber production. In principle, the drawing of a solution filament into nanofibers using centrifugal spinning is achieved through the controlled manipulation of the centrifugal force, viscoelasticity, and mass transfer characteristics of the spinning solutions. Engineering efforts by researchers of the Food Research Institute Prague and the Czech Technical University in the field of centrifugal nozzleless spinning led to the introduction of a pilot plant demonstrator, NANOCENT. The main advantages of the demonstrator are lower investment cost, thanks to a simpler construction compared to widely used electrospinning equipment, higher production speed, new application possibilities, and easy maintenance. Centrifugal nozzleless spinning is especially suitable for producing submicron fibers from polymeric solutions in highly volatile solvents, such as chloroform, DCM, THF, or acetone. To date, submicron fibers have been prepared from PS, PUR, and biodegradable polyesters such as PHB, PLA, PCL, and PBS. The products take the form of 3D structures or nanofiber membranes. Unique self-supporting nanofiber membranes were prepared from different mixtures of the biodegradable polyesters. The nanofiber membranes have been tested for different applications, and their filtration efficiencies for water solutions and aerosols in air were evaluated.
Different active inserts were added to the solutions before the spinning process, such as inorganic nanoparticles, organic precursors of metal oxides, antimicrobial and wound-healing compounds, or photocatalytic phthalocyanines. Sintering can subsequently be carried out to remove the polymeric material and convert the organic precursors to metal oxides, such as SiO2 or photocatalytic ZnO2 and TiO2, to obtain inorganic nanofibers. Electrospinning is a more suitable technology than centrifugal nozzleless spinning for producing filtration membranes, because it forms more homogeneous nanofiber layers and fibers with smaller diameters. The self-supporting nanofiber membranes prepared from the biodegradable polyesters are especially suitable for medical applications, such as wound or burn dressings and tissue engineering scaffolds. This work was supported by research grant TH03020466 of the Technology Agency of the Czech Republic.

Keywords: polymeric nanofibers, self-supporting nanofiber membranes, biodegradable polyesters, active inserts

Procedia PDF Downloads 165
131 Use of Computer and Peripherals in the Archaeological Surveys of Sistan in Eastern Iran

Authors: Mahyar Mehrafarin, Reza Mehrafarin

Abstract:

The Sistan region in eastern Iran is a significant archaeological area in Iran and the Middle East, encompassing 10,000 square kilometers. Previous archaeological field surveys have identified 1,662 ancient sites dating from prehistoric periods to the Islamic period. Research Aim: This article aims to explore the utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, and the benefits derived from their implementation. Methodology: The research employs a descriptive-analytical approach combined with field methods. New technologies and software, such as GPS, drones, magnetometers, equipped cameras, satellite images, and software programs like GIS, Map source, and Excel, were utilized to collect information and analyze data. Findings: The use of modern technologies and computers in archaeological field surveys proved to be essential. Traditional archaeological activities, such as excavation and field surveys, are time-consuming and costly. Employing modern technologies helps preserve ancient sites, record archaeological data accurately, reduce errors and mistakes, and facilitate correct and accurate analysis. Creating a comprehensive and accessible database, generating statistics, and producing graphic designs and diagrams are additional advantages derived from the use of efficient technologies in archaeology. Theoretical Importance: The integration of computers and modern technologies in archaeology contributes to interdisciplinary collaborations and facilitates the involvement of specialists from various fields, such as geography, history, art history, anthropology, laboratory sciences, and computer engineering. The utilization of computers in archaeology spans diverse areas, including database creation, statistical analysis, graphics, laboratory and engineering applications, and even artificial intelligence, which remains an unexplored area in Iranian archaeology.
Data Collection and Analysis Procedures: Information was collected using modern technologies and software, capturing geographic coordinates, aerial images, archaeogeophysical data, and satellite images. These data were then input into various software programs for analysis, including GIS, Map source, and Excel. The research employed both descriptive and analytical methods to present findings effectively. Question Addressed: The primary question addressed in this research is how the use of modern technologies and computers in archaeological field surveys in Sistan, Iran, can enhance archaeological data collection, preservation, analysis, and accessibility. Conclusion: The utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, has proven to be necessary and beneficial. These technologies aid in preserving ancient sites, accurately recording archaeological data, reducing errors, and facilitating comprehensive analysis. The creation of accessible databases, statistics generation, graphic design, and interdisciplinary collaborations are further advantages observed. It is recommended to explore the potential of artificial intelligence in Iranian archaeology as an unexplored area. The research has implications for cultural heritage organizations, archaeology students, and universities involved in archaeological field surveys in Sistan and Baluchistan province. Additionally, it contributes to enhancing the understanding and preservation of Iran's archaeological heritage.

Keywords: archaeological surveys, computer use, Iran, modern technologies, Sistan

Procedia PDF Downloads 78
130 Predictive Pathogen Biology: Genome-Based Prediction of Pathogenic Potential and Countermeasures Targets

Authors: Debjit Ray

Abstract:

Horizontal gene transfer (HGT) and recombination lead to the emergence of bacterial antibiotic resistance and pathogenic traits. HGT events can be identified by comparing a large number of fully sequenced genomes across a species or genus, defining the phylogenetic range of HGT, and finding potential sources of new resistance genes. In-depth comparative phylogenomics can also identify subtle genome or plasmid structural changes or mutations associated with phenotypic changes. Comparative phylogenomics requires accurately sequenced, complete, and properly annotated genomes of the organism. Assembling closed genomes requires additional mate-pair reads or “long read” sequencing data to accompany short-read paired-end data. To bring down the cost and time required to produce assembled genomes and annotate genome features that inform drug resistance and pathogenicity, we are analyzing the genome-assembly performance of data from the Illumina NextSeq, which has faster throughput than the Illumina HiSeq (~1-2 days versus ~1 week) and, compared to the Illumina MiSeq, shorter reads (150 bp paired-end versus 300 bp paired-end) but higher capacity (150-400M reads per run versus ~5-15M). Bioinformatics improvements are also needed to make rapid, routine production of complete genomes a reality. Modern assemblers such as SPAdes 3.6.0 running on a standard Linux blade are capable, in a few hours, of converting mixes of reads from different library preps into high-quality assemblies with only a few gaps. Remaining breaks in scaffolds are generally due to repeats (e.g., rRNA genes) and are addressed by our gap-closure software, which avoids custom PCR or targeted sequencing. Our goal is to improve the understanding of the emergence of pathogenesis using sequencing, comparative genomics, and machine learning analysis of ~1,000 pathogen genomes.
Machine learning algorithms will be used to digest the diverse features (changes in virulence genes, recombination, horizontal gene transfer, patient diagnostics). Temporal data and evolutionary models can thus determine whether a particular isolate is likely to have originated in the environment (could it have evolved from previous isolates?). This is useful for comparing differences in virulence along or across the tree. More intriguingly, it can test whether there is a direction to virulence strength. This would open new avenues in the prediction of uncharacterized clinical bugs, multidrug resistance evolution, and pathogen emergence.
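The kind of feature-based classification described above can be sketched with a toy nearest-centroid classifier; the feature names (virulence-gene count, HGT events, recombination rate) and all values are purely illustrative, not drawn from the project's genomes.

```python
# Toy nearest-centroid classifier over hypothetical per-genome feature
# vectors: [virulence-gene count, HGT events, recombination rate].

def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def classify(x, centroids):
    """Return the label whose centroid is closest (squared Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

training = {
    "pathogenic":     [[12, 5, 0.8], [10, 7, 0.9], [14, 6, 0.7]],
    "non-pathogenic": [[2, 1, 0.1], [3, 0, 0.2], [1, 2, 0.1]],
}
centroids = {label: centroid(rows) for label, rows in training.items()}
label = classify([11, 6, 0.75], centroids)  # "pathogenic"
```

A production pipeline over ~1,000 genomes would use far richer features and stronger models, but the principle is the same: genomes cluster by measurable traits, and new isolates are assigned the label of the nearest cluster.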

Keywords: genomics, pathogens, genome assembly, superbugs

Procedia PDF Downloads 197
129 Sea Level Rise and Sediment Supply Explain Large-Scale Patterns of Saltmarsh Expansion and Erosion

Authors: Cai J. T. Ladd, Mollie F. Duggan-Edwards, Tjeerd J. Bouma, Jordi F. Pages, Martin W. Skov

Abstract:

Salt marshes are valued for their role in coastal flood protection, carbon storage, and supporting biodiverse ecosystems. As a biogeomorphic landscape, marshes evolve through complex interactions among sea level rise, sediment supply, and wave/current forcing, as well as socio-economic factors. Climate change and direct human modification could lead to a global decline in marsh extent if left unchecked. Whilst the processes of saltmarsh erosion and expansion are well understood, empirical evidence on the key drivers of long-term lateral marsh dynamics is lacking. In a GIS, saltmarsh areal extent in 25 estuaries across Great Britain was calculated from historical maps and aerial photographs at intervals of approximately 30 years between 1846 and 2016. Data on the key perceived drivers of lateral marsh change (namely sea level rise rates, suspended sediment concentration, bedload sediment flux rates, and the frequency of both river flood and storm events) were collated from national monitoring centres. Continuous datasets did not extend beyond 1970; therefore, the predictor variables that best explained the rate of change of marsh extent between 1970 and 2016 were identified using a Partial Least Squares Regression model. Information about the spread of Spartina anglica (an invasive marsh plant responsible for marsh expansion around the globe) and coastal engineering works that may have impacted marsh extent was also recorded from historical documents, and their impacts on long-term, large-scale marsh extent change were assessed. Results showed that salt marshes in the northern regions of Great Britain expanded by an average of 2.0 ha/yr, whilst marshes in the south eroded by an average of -5.3 ha/yr. Spartina invasion and coastal engineering works could not explain these trends, since a trend of either expansion or erosion preceded these events.
Results from the Partial Least Squares Regression model indicated that the rate of relative sea level rise (RSLR) and the availability of suspended sediment (SSC) best explained the patterns of marsh change. RSLR increased from 1.6 to 2.8 mm/yr as SSC decreased from 404.2 to 78.56 mg/l along the north-to-south gradient of Great Britain, coinciding with the shift from marsh expansion to erosion. Regional differences in RSLR and SSC are due to isostatic rebound since deglaciation and to tidal amplitudes, respectively. Low RSLR combined with high SSC likely leads to sediment accumulation at the coast suitable for colonisation by marsh plants, and thus to lateral expansion. In contrast, high RSLR is likely not offset by deposition under low SSC, so the average water depth at the marsh edge increases, allowing larger wind-waves to trigger marsh erosion. Current global declines in sediment flux to the coast are likely to diminish the resilience of salt marshes to RSLR. Monitoring and managing suspended sediment supply is not commonplace but may be critical to mitigating coastal impacts of climate change.

Keywords: lateral saltmarsh dynamics, sea level rise, sediment supply, wave forcing

Procedia PDF Downloads 134
128 Profitability and Productivity Performance of the Selected Public Sector Banks in India

Authors: Sudipto Jana

Abstract:

Background and significance of the study: The banking industry acts as a catalyst for industrial and agricultural growth, and also underpins the livelihood and welfare of citizens. The banking system in India was characterized by unmatched growth in the pre-liberalization era. At the time of the financial sector reforms, the Reserve Bank of India issued regulatory norms concerning capital adequacy, income recognition, asset classification, and provisioning that have progressively converged with international best practices. Bank management continuously monitors the success, effectiveness, productivity, and performance of the bank, since good performance, high productivity, and efficiency enable the bank to meet its management targets and aims. Similarly, the performance of any economy depends upon the efficiency and effectiveness of its financial system, which shapes its economic growth indicators. Profitability and productivity are the most relevant parameters of any banking group. Keeping this in view, this study examines the profitability and productivity performance of selected public sector banks in India. Methodology: This study is based on secondary data obtained from the Reserve Bank of India database for the period between 2006 and 2015. The study purposively selects four commercial banks, namely State Bank of India, United Bank of India, Punjab National Bank, and Allahabad Bank. To analyze performance in relation to profitability and productivity, productivity indicators in terms of capital adequacy ratio, burden ratio, business per employee, spread per employee, and advances per employee, and profitability indicators in terms of return on assets, return on equity, return on advances, and return on branch, have been considered.
In the course of analysis, descriptive statistics, correlation statistics, and multiple regression have been used. Major findings: Descriptive statistics indicate that the productivity performance of State Bank of India is more satisfactory than that of the other public sector banks in India, but the management of productivity is unsatisfactory for all the public sector banks under study. Correlation statistics point out that the profitability of the public sector banks is strongly positively related to productivity performance for all the banks under study. Multiple regression results show that when profitability increases, profit per employee increases and net non-performing assets decrease. Concluding statements: The productivity and profitability performance of United Bank of India, Allahabad Bank, and Punjab National Bank is unsatisfactory due to poor management of asset quality and management efficiency. Government intervention is needed so that profitability and productivity performance improve in the near future.
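The correlation step can be sketched as follows; the yearly figures are made-up illustrations, not values from the Reserve Bank of India database.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical yearly figures for one bank (illustrative only):
business_per_employee = [5.2, 5.9, 6.4, 7.1, 7.8]  # productivity indicator
return_on_assets = [0.6, 0.7, 0.8, 0.9, 1.0]       # profitability indicator

# A value near +1 is the "strongly positively related" pattern reported above.
r = pearson_r(business_per_employee, return_on_assets)
```

Computing r for each productivity-profitability indicator pair, bank by bank, is what the correlation-statistics step amounts to; the regression step then quantifies how much profitability moves per unit of productivity.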

Keywords: India, productivity, profitability, public sector banks

Procedia PDF Downloads 429
127 Demographic Assessment and Evaluation of Degree of Lipid Control in High Risk Indian Dyslipidemia Patients

Authors: Abhijit Trailokya

Abstract:

Background: Cardiovascular diseases (CVDs) are the major cause of morbidity and mortality in both developed and developing countries. Many clinical trials have demonstrated that lowering low-density lipoprotein cholesterol (LDL-C) reduces the incidence of coronary and cerebrovascular events across a broad spectrum of patients at risk. Guidelines for the management of patients at risk have been established in Europe and North America. The guidelines have advocated progressively lower LDL-C targets and more aggressive use of statin therapy. For Indian patients, comprehensive data on dyslipidemia management and its treatment outcomes are inadequate. There is a lack of information on existing treatment patterns, the profile of patients being treated, and the factors that determine treatment success or failure in achieving desired goals. Purpose: The present study was planned to determine the lipid control status of high-risk dyslipidemic patients treated with lipid-lowering therapy in India. Methods: This cross-sectional, non-interventional, single-visit program was conducted across 483 sites in India, enrolling male and female patients with high-risk dyslipidemia aged 18 to 65 years who had visited their physician at a hospital or healthcare center for a routine health check-up. The percentage of high-risk dyslipidemic patients achieving an adequate LDL-C level (< 70 mg/dL) on lipid-lowering therapy and the association of lipid parameters with patient characteristics, comorbid conditions, and lipid-lowering drugs were analysed. Results: 3089 patients were enrolled in the study, of which 64% were males. LDL-C data were available for 95.2% of the patients; only 7.7% of these patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which may be due to an inability to follow therapeutic plans, poor compliance, or inadequate counselling by the physician.
A physician’s lack of awareness of recent treatment guidelines might also contribute to patients’ poor adherence, as might failing to explain adequately the benefits and risks of a medication or to give consideration to the patient’s lifestyle and the cost of medication. Statins were the most commonly used anti-dyslipidemic drugs across the population. A higher proportion of patients had the comorbid conditions of CVD and diabetes mellitus across all dyslipidemic patients. Conclusion: As per the European Society of Cardiology guidelines, the ideal LDL-C level in high-risk dyslipidemic patients should be less than 70 mg/dL. In the present study, only 7.7% of the patients achieved LDL-C levels < 70 mg/dL on lipid-lowering therapy, which is very low. Most high-risk dyslipidemic patients in India are on suboptimal dosages of statins, so more aggressive, higher-dosage statin therapy may be required to achieve target LDL-C levels in high-risk Indian dyslipidemic patients.
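A back-of-envelope check of the reported proportions; the derived counts below are not stated in the abstract, which reports only percentages:

```python
# Derived counts from the abstract's percentages (not reported directly).
enrolled = 3089
with_ldl_data = round(enrolled * 0.952)      # 95.2% had LDL-C data
at_target = round(with_ldl_data * 0.077)     # 7.7% reached LDL-C < 70 mg/dL
print(with_ldl_data, at_target)
```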

Keywords: cardiovascular disease, diabetes mellitus, dyslipidemia, LDL-C, lipid lowering drug, statins

Procedia PDF Downloads 201
126 The Effect of Social Media Influencer on Boycott Participation through Attitude toward the Offending Country in a Situational Animosity Context

Authors: Hsing-Hua Stella Chang, Mong-Ching Lin, Cher-Min Fong

Abstract:

Using surrogate boycotts as a coercive tactic to force an offending party to change its approach has become increasingly significant over the last several decades and is expected to increase in the future. Research shows that surrogate boycotts are often triggered by controversial international events, with particular foreign countries serving as the offending party in the international marketplace. In other words, multinational corporations are likely to become surrogate boycott targets in overseas markets because of the animosity between their home and host countries. Focusing on a surrogate boycott triggered by severe situational animosity, this research examines how social media influencers (SMIs), serving as electronic key opinion leaders (EKOLs) in an international crisis, facilitate and organize a boycott and persuade consumers to participate in it. This research suggests that under such a context, SMIs become a particularly important information source for individuals to enhance and update their understanding of the event because, unlike traditional media, social media serve as a platform for instant, round-the-clock information access and dissemination. The Xinjiang cotton event, viewed as an ongoing inter-country conflict reflecting a crisis that provokes animosity against the West, was adopted as the research context. Through online panel services, both studies recruited Mainland Chinese nationals as survey respondents. The findings show that: 1. social media influencer messages are positively related to a negative attitude toward the offending country; 2. attitude toward the offending country is positively related to boycott participation.
To address the unexplored question of the effect of social media influencers on consumer participation in boycotts, this research presents a finer-grained examination of boycott motivation, with a special focus on a situational animosity context. The research is split into two interrelated parts. In the first part, it shows that attitudes toward the offending country can be socially constructed by the influence of social media influencers in a situational animosity context: consumers perceive different strengths of social pressure from different levels of influencer messaging and thus exhibit different levels of attitude toward the offending country. In the second part, the research investigates the effect of attitude toward the offending country on boycott participation, finding that such attitudes amplified the effect of social media influencer messages on boycott participation in a situation of animosity.

Keywords: animosity, social media marketing, boycott, attitude toward the offending country

Procedia PDF Downloads 111
125 Harvesting Value-Added Products through Anodic Electrocatalytic Upgrading of Biomass-Derived Intermediate Compounds to Accelerate Hydrogen Evolution

Authors: Mehran Nozari-Asbemarz, Italo Pisano, Simin Arshi, Edmond Magner, James J. Leahy

Abstract:

Integrating electrolytic synthesis with renewable energy makes it feasible to address urgent environmental and energy challenges. Conventional water electrolyzers produce H₂ and O₂ concurrently, demanding additional gas-separation procedures to prevent contamination of the H₂ with O₂. Moreover, the oxygen evolution reaction (OER) is sluggish, has a low overall energy conversion efficiency, and does not deliver a significant value product at the electrode surface. Compared to conventional water electrolysis, integrating electrolytic hydrogen generation from water with thermodynamically more advantageous aqueous organic oxidation processes can increase energy conversion efficiency and create value-added compounds instead of oxygen at the anode. One strategy is to use renewable and sustainable carbon sources from biomass, which has a large annual production capacity and presents a significant opportunity to supplement carbon sourced from fossil fuels. Numerous catalytic techniques have been researched in order to utilize biomass economically. Because of its safe operating conditions, excellent energy efficiency, and reasonable control over production rate and selectivity through electrochemical parameters, electrocatalytic upgrading stands out as an appealing choice among the many biomass refinery technologies. We therefore propose a broad framework for coupling H₂ generation from water splitting with oxidative biomass upgrading processes. Representative biomass targets were considered for oxidative upgrading using a hierarchically porous CoFe-MOF/LDH @ Graphite Paper bifunctional electrocatalyst, including glucose, ethanol, benzyl alcohol, furfural, and 5-hydroxymethylfurfural (HMF). The potential required to support 50 mA cm⁻² is considerably lower (by ~380 mV) than the potential required for the OER. These compounds can be oxidized to yield liquid byproducts with economic benefit.
The electrocatalytic oxidation of glucose to the value-added products gluconic acid, glucuronic acid, and glucaric acid was examined in detail. The cell potential for combined H₂ production and glucose oxidation was substantially lower than for water splitting (1.44 V(RHE) vs. 1.82 V(RHE) at 50 mA cm⁻²), while the oxidation product at the anode was significantly more valuable than O₂, taking advantage of the more favorable glucose oxidation in comparison to the OER. Overall, such a combination of the HER and oxidative biomass valorization using electrocatalysts prevents the production of potentially explosive H₂/O₂ mixtures and produces high-value products at both electrodes with a lower voltage input, thereby increasing the efficiency and activity of electrocatalytic conversion.
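The practical value of the lower cell potential can be illustrated with Faraday's law. This is a rough sketch assuming 100% Faradaic efficiency and purely electrical energy input, neither of which is claimed by the abstract:

```python
F = 96485.0      # Faraday constant, C/mol
M_H2 = 2.016e-3  # molar mass of H2, kg/mol
n = 2            # electrons transferred per H2 molecule

def kwh_per_kg(cell_voltage):
    # Electrical energy per kg of H2 at a given cell voltage,
    # assuming 100% Faradaic efficiency.
    joules_per_mol = n * F * cell_voltage
    return joules_per_mol / M_H2 / 3.6e6  # J -> kWh

e_water = kwh_per_kg(1.82)    # conventional water splitting
e_glucose = kwh_per_kg(1.44)  # coupled with glucose oxidation
saving_pct = 100 * (1 - e_glucose / e_water)
print(round(e_water, 1), round(e_glucose, 1), round(saving_pct, 1))
```

At fixed current, the energy per kilogram of H₂ scales linearly with cell voltage, so the 0.38 V reduction translates directly into a proportional electricity saving of about 21%.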

Keywords: biomass, electrocatalytic, glucose oxidation, hydrogen evolution

Procedia PDF Downloads 97
124 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization

Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller

Abstract:

The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures for this include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires a restriction of the solution space to discrete choices of modernization measures, such as the sizing of heating systems. After calculating the operation of different energy systems in terms of the resulting final energy demands in simulation models in a first stage, the results serve as input for a second-stage MILP optimization, in which the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures due to the efficiency of MILP solvers but necessitates simplifying the building energy system operation.
Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from the results of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
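The budget-constrained prioritization both approaches solve can be illustrated in miniature. The buildings, costs, and CO₂ reductions below are invented; a brute-force subset search is used only because the toy portfolio is tiny, which is exactly why real modernization-pathway problems need MILP solvers:

```python
from itertools import combinations

# Hypothetical mini-portfolio: (measure, cost in k-euro, CO2 reduction in t/a).
# All numbers are illustrative, not from the study.
measures = [
    ("Bldg A: envelope insulation", 120, 14),
    ("Bldg A: heat pump",            90, 22),
    ("Bldg B: envelope insulation", 150, 18),
    ("Bldg B: PV system",            60,  9),
    ("Bldg C: heat pump",            80, 17),
]
budget = 250  # k-euro available in this planning period

def cost(s):      return sum(m[1] for m in s)
def reduction(s): return sum(m[2] for m in s)

# Exhaustive search over all subsets of measures within budget.
feasible = [s for r in range(len(measures) + 1)
            for s in combinations(measures, r) if cost(s) <= budget]
best = max(feasible, key=reduction)
print([m[0] for m in best], cost(best), reduction(best))
```

With 5 candidate measures there are only 32 subsets; a portfolio of a few hundred buildings makes enumeration hopeless, and the binary select-or-skip decisions become the integer variables of the MILP formulation discussed above.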

Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization

Procedia PDF Downloads 34
123 The Current Application of BIM - An Empirical Study Focusing on the BIM-Maturity Level

Authors: Matthias Stange

Abstract:

Building Information Modelling (BIM) is one of the most promising methods in the building design process and plays an important role in the digitalization of the Architectural, Engineering, and Construction (AEC) industry. The application of BIM is seen as the key enabler for increasing productivity in the construction industry. Model-based collaboration using the BIM method is intended to significantly reduce cost increases, schedule delays, and quality problems in the planning and construction of buildings. Numerous qualitative studies based on expert interviews support this theory and report perceived benefits from the use of BIM in terms of achieving project objectives related to cost, schedule, and quality. However, there is a large research gap in analysing quantitative data collected from real construction projects regarding the actual benefits of applying BIM, based on a representative sample size and different application regions as well as different project typologies. In particular, the influence of the project-related BIM maturity level is completely unexplored. This research project examines primary data from 105 construction projects worldwide using quantitative research methods. Projects from the areas of residential, commercial, and industrial construction as well as infrastructure and hydraulic engineering were examined in the application regions North America, Australia, Europe, Asia, the MENA region, and South America. First, a descriptive analysis of 6 independent project variables (BIM maturity level, application region, project category, project type, project size, and BIM level) was carried out using statistical methods. With the help of statistical data analyses, the influence of the project-related BIM maturity level on 6 dependent project variables (deviation in planning time, deviation in construction time, number of planning collisions, frequency of rework, number of RFIs, and number of changes) was investigated.
The study revealed that most of the benefits of using BIM perceived in numerous qualitative studies could not be confirmed. The results of the examined sample show that the application of BIM did not have an improving influence on the dependent project variables, especially regarding the quality of the planning itself and adherence to schedule targets. The quantitative research suggests that the BIM planning method in its current application has not (yet) produced a recognizable increase in productivity within the planning and construction process. The empirical findings indicate that this is due to the overall low level of BIM maturity in the projects of the examined sample. As a quintessence, the author suggests that further implementation of BIM should primarily focus on application-oriented and consistent development of the project-related BIM maturity level instead of implementing BIM for its own sake. Apparently, there are still significant difficulties in the interweaving of people, processes, and technology.

Keywords: AEC-process, building information modeling, BIM maturity level, project results, productivity of the construction industry

Procedia PDF Downloads 73
122 The Role of Social Media in the Rise of Islamic State in India: An Analytical Overview

Authors: Yasmeen Cheema, Parvinder Singh

Abstract:

The evolution of the Islamic State (IS) has the ultimate goal of restoring the caliphate. The IS threat to global security is a main concern of the international community, but it has also raised a factual concern for India about the steady radicalization of Indian youth by IS ideology. The case of Arif Ejaz Majeed, an Indian who joined IS as a ‘jihadist’, set off strident alarms in law enforcement agencies. On 07.03.2017, many people were injured in an Improvised Explosive Device (IED) blast on board the Bhopal-Ujjain Express. One perpetrator of this incident was killed in an encounter with police. The biggest shock, however, is that the conspiracy was pre-planned and that the assailants who carried out the blast were influenced by the ideology perpetrated by the Islamic State. This is the first time the name of IS has cropped up in a terror attack in India. It is a red indicator of the violent presence of IS in India, which is spreading through social media. IS has the capacity to influence the younger Muslim generation in India through its brutal and aggressive propaganda videos, social media apps and hate speech. It is a well-known fact that India is on the radar of IS, as well as on its ‘Caliphate Map’. IS uses Twitter, Facebook and other social media platforms constantly. It has used enticing videos, graphics, and articles on social media to persuade people in India and globally that its jihad is worthy. According to perpetrators of IS arrested in different cases in India, most Indian youths fall victim to the daydreams fondly shown by IS: that the Muslim empire as it existed before 1920 can return with all its power, and that the Caliph and his caliphate can be re-established. Indian Muslim youth are attracted to these euphemistic ideologies. The Islamic State has used social media for disseminating its poisonous ideology, for recruitment, for operational activities and for the future direction of attacks.
Through social media, IS inspires its recruits and lone wolves to rely on local networks to identify targets and access weaponry and explosives. Recently, a pro-IS media group on its Telegram platform showed the Taj Mahal as a target and suggested a Vehicle-Borne Improvised Explosive Device (VBIED) as the mode of attack. The Islamic State definitely has the potential to destroy Indian national security and peace if timely steps are not taken. No doubt, IS has used social media as a critical mechanism for the recruitment, planning and execution of terror attacks. This paper will therefore examine the specific characteristics of social media that have made it such a successful weapon for the Islamic State. The rise of IS in India should be viewed as a national crisis and handled at the central level with efficient use of modern technology.

Keywords: ideology, India, Islamic State, national security, recruitment, social media, terror attack

Procedia PDF Downloads 230
121 SLAPP Suits: An Encroachment On Human Rights Of A Global Proportion And What Can Be Done About It

Authors: Laura Lee Prather

Abstract:

A functioning democracy is defined by various characteristics, including freedom of speech, equality, human rights, the rule of law and many more. Lawsuits brought to intimidate speakers, drain the resources of community members, and silence journalists and others who speak out in support of matters of public concern are an abuse of the legal system and an encroachment on human rights. The impact can have a broad chilling effect, deterring others from speaking out against abuse. This article aims to suggest ways to address this form of judicial harassment. In 1988, University of Denver professors George Pring and Penelope Canan coined the term “SLAPP” when they brought to light a troubling trend of people getting sued for speaking out about matters of public concern. Their research demonstrated that thousands of people engaging in public debate and citizen involvement in government have been, and will be, the targets of multi-million-dollar lawsuits aimed at silencing them and dissuading others from speaking out in the future. SLAPP actions chill information and harm the public at large. Professors Pring and Canan catalogued a tsunami of SLAPP suits filed by public officials, real estate developers and businessmen against environmentalists, consumers, women’s rights advocates and more. SLAPPs are now seen in every region of the world as a means to intimidate people into silence and are viewed as a global affront to human rights. Anti-SLAPP laws are the antidote to SLAPP suits and, while commonplace in the United States, are only recently being considered in the EU and the UK.
This researcher studied more than thirty years of anti-SLAPP legislative policy in the U.S.; the call for evidence and the resultant EU Commission Anti-SLAPP Directive and Member State Recommendations; and the call for evidence by the UK Ministry of Justice, the response, and the Model Anti-SLAPP Law presented to the UK Parliament; and conducted dozens of interviews with NGOs throughout the EU, UK, and US to identify varying approaches to SLAPP lawsuits, public policy, and support for SLAPP victims. This paper identifies best practices taken from the US, EU and UK that can be implemented globally to help combat SLAPPs by: (1) raising awareness about SLAPPs, how to identify them, and how to recognize habitual abusers of the court system; (2) engaging governments in the policy discussion on combatting SLAPPs and supporting SLAPP victims; (3) educating judges in recognizing SLAPPs and providing general training on encroachments on human rights; and (4) holding lawyers accountable for ravaging the rule of law.

Keywords: Anti-SLAPP Laws and Policy, Comparative media law and policy, EU Anti-SLAPP Directive and Member Recommendations, International Human Rights of Freedom of Expression

Procedia PDF Downloads 68
120 Essential Oils of Polygonum L. Plants Growing in Kazakhstan and Their Antibacterial and Antifungal Activity

Authors: Dmitry Yu. Korulkin, Raissa A. Muzychkina

Abstract:

Bioactive substances of plant origin can be an advanced means of addressing combination therapy for inflammation. The main advantages of medicinal plants are the mildness and breadth of their therapeutic effect on an organism, the absence of side effects and complications even with continuous use, and high tolerability by patients. Moreover, medicinal plants are often the only and/or most cost-effective sources of natural biologically active substances and medicines. Along with other biologically active groups of chemical compounds, essential oils, with their wide range of pharmacological effects, have become well established in medical practice. Essential oil was obtained by hydrodistillation of the air-dried aerial parts of Polygonum L. plants using a Clevenger apparatus. The qualitative composition of the essential oils was analyzed by chromatography-mass-spectrometry using an Agilent 6890N instrument. The qualitative analysis is based on comparison of retention times and full mass spectra with the respective data for components of reference oils and pure compounds, where available, and with the data of the mass-spectra libraries Wiley 7th edition and NIST 02. The main components of the essential oils are, for Polygonum amphibium L.: γ-terpinene, borneol, piperitol, 1,8-cineole, α-pinene, linalool, terpinolene and sabinene; for Polygonum minus Huds. Fl. Angl.: linalool, terpinolene, camphene, borneol, 1,8-cineole, α-pinene, 4-terpineol and 1-octen-3-ol; for Polygonum alpinum All.: camphene, sabinene, 1-octen-3-ol, 4-carene, p- and o-cymol, γ-terpinene, borneol and terpineol; for Polygonum persicaria L.: α-pinene, sabinene, terpinene, 4-carene, 1,8-cineole, borneol and 4-terpineol. Antibacterial activity was investigated against strains of the gram-positive bacteria Staphylococcus aureus, Bacillus subtilis and Streptococcus agalactiae, against the gram-negative strain Escherichia coli, and against the yeast fungus Candida albicans using the agar diffusion method.
The reference medicines were gentamicin for bacteria and nystatin for the yeast fungus Candida albicans. It was shown that Polygonum L. essential oils have a moderate antibacterial effect on gram-positive microorganisms and weak antifungal activity against the Candida albicans yeast fungus. In the second stage of our research, the wound-healing properties of an ointment containing 3% essential oil were studied on a model of flat dermal wounds. The speed of wound healing in rats of the different groups was judged by assessing the wound area over time. During the wound-healing study, no disturbances were observed in either group in the animals' general condition and behavior, food intake, or excretion. The wound-healing action of the 3% ointment based on Polygonum L. essential oil and polyethylene glycol is comparable with that of the reference substances. As more favorable healing dynamics were observed in the experimental group than in the control group, the tested ointment can be deemed promising for further detailed study as a wound-healing agent.
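The area-based assessment of healing speed mentioned above reduces to a percent-closure calculation; the day/area values below are illustrative placeholders, not data from the study:

```python
# Percent wound closure from planimetric area measurements.
def closure_pct(initial_area, current_area):
    return 100.0 * (initial_area - current_area) / initial_area

areas_mm2 = {0: 150.0, 3: 120.0, 7: 60.0, 14: 15.0}  # day -> wound area (made up)
for day, area in areas_mm2.items():
    print(f"day {day}: {closure_pct(areas_mm2[0], area):.1f}% closed")
```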

Keywords: antibacterial, antifungal, bioactive substances, essential oils, isolation, Polygonum L.

Procedia PDF Downloads 532
119 From Primer Generation to Chromosome Identification: A Primer Generation Genotyping Method for Bacterial Identification and Typing

Authors: Wisam H. Benamer, Ehab A. Elfallah, Mohamed A. Elshaari, Farag A. Elshaari

Abstract:

A challenge for laboratories is to provide bacterial identification and antibiotic sensitivity results within a short time. Hence, advancement in the required technology is desirable to improve timing, accuracy and quality. Even with the current advances in the methods used for both phenotypic and genotypic identification of bacteria, there remains a need to develop methods that enhance the outcome of bacteriology laboratories in accuracy and time. The hypothesis introduced here is based on the assumption that the chromosome of any bacterium contains unique sequences that can be used for its identification and typing. The outcome of a pilot study designed to test this hypothesis is reported in this manuscript. Methods: The complete chromosome sequences of several bacterial species were downloaded for use as search targets for unique sequences. Visual Basic and SQL Server (2014) were used to generate a complete set of 18-base-long primers, a process that started with reverse translation of six randomly chosen amino acids to limit the number of generated primers. In addition, software was designed to scan the downloaded chromosomes for similarities using the generated primers, and the resulting hits were classified according to the number of similar chromosomal sequences, i.e., unique or otherwise. Results: All primers that had identical/similar sequences in the selected genome sequence(s) were classified according to the number of hits in the chromosome search. Those that were identical to a single site on a single bacterial chromosome were referred to as unique; most generated primer sequences, on the other hand, were identical to multiple sites on a single chromosome or on multiple chromosomes. Following scanning, the generated primers were classified based on their ability to differentiate between medically important bacteria, and the initial results look promising.
Conclusion: A simple strategy was introduced that starts by generating primers; the primers were used to screen bacterial genomes for matches. Primers that were uniquely identical to a specific DNA sequence on a specific bacterial chromosome were selected. The identified unique sequences can be used in different molecular diagnostic techniques, possibly to identify bacteria. In addition, a single primer that matches multiple sites on a single chromosome can be exploited for region or genome identification. Although draft genome sequences of isolates enable high-throughput primer design using alignment strategies, which enhances diagnostic performance in comparison to traditional molecular assays, in the present method the generated primers can be used to identify an organism before a draft sequence is completed. Furthermore, the generated primers can be used to build a bank for easy access to primers that can be used to identify bacteria.
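The generate-and-scan idea can be sketched in miniature. The codon table below is truncated to the residues used, and the peptide and "chromosome" are invented for illustration (a real run would use complete downloaded chromosome sequences):

```python
from itertools import product

# Reverse-translate a 6-residue peptide into every possible 18-base primer.
# Codon table truncated to the residues used in this toy example.
CODONS = {
    "M": ["ATG"], "W": ["TGG"],
    "D": ["GAT", "GAC"], "E": ["GAA", "GAG"],
    "K": ["AAA", "AAG"], "N": ["AAT", "AAC"],
}
peptide = "MWDEKN"  # 6 amino acids -> 18-base primers

primers = ["".join(c) for c in product(*(CODONS[aa] for aa in peptide))]
print(len(primers))  # 1*1*2*2*2*2 = 16 candidate primers

# Invented "chromosome" containing exactly one of the candidates.
chromosome = "CCGTA" + "ATGTGGGATGAAAAAAAT" + "GGCTTA"

# A primer is classified "unique" if it matches exactly one chromosomal site.
unique = [p for p in primers if chromosome.count(p) == 1]
print(unique)
```

Picking residues with few synonymous codons keeps the candidate set small; residues such as leucine or serine (six codons each) would multiply the set rapidly, which is why the described method limits the number of generated primers at the reverse-translation step.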

Keywords: bacteria chromosome, bacterial identification, sequence, primer generation

Procedia PDF Downloads 193
118 Regional Disparities in Microfinance Distribution: Evidence from Indian States

Authors: Sunil Sangwan, Narayan Chandra Nayak

Abstract:

Over the last few decades, the Indian banking system has achieved remarkable growth in its credit volume. However, one of the most disturbing facts about this growth is the uneven distribution of financial services across regions. Having witnessed limited success from all the earlier efforts towards financial inclusion targeting the rural poor and the underprivileged, the provision of microfinance has, of late, emerged as a supplementary mechanism. There are two prominent modes of microfinance distribution in India, namely the bank-SHG linkage programme (SBLP) and private microfinance institutions (MFIs). Ironically, such efforts also seem to have failed to achieve the desired targets, as microfinance services have witnessed a skewed distribution across the states of the country. This study attempts a comparative analysis of the geographical skew of SBLP and MFIs in India and examines the factors influencing their regional distribution. The results indicate that microfinance services are largely concentrated in the southern region, which accounts for about 50% of all microfinance clients and 49% of all microfinance loan portfolios. This is distantly followed by the eastern region, where client outreach is close to 25%. The north-eastern, northern, central, and western regions lag far behind, accounting for only 4%, 4%, 10%, and 7% of client outreach respectively. The penetration of SHGs is equally skewed, with the southern region accounting for 46% of client outreach and 70% of loan portfolios, followed by the eastern region with 21% of client outreach and 13% of the loan portfolio. Contrarily, the north-eastern, northern, central, and western regions account for 5%, 5%, 10%, and 13% of client outreach and 3%, 3%, 7%, and 4% of loan portfolios respectively.
The study examines the impact of literacy rate, rural poverty, population density, primary sector share, non-farm activities, loan default behavior and bank penetration on microfinance penetration. The study is limited to 17 major states of the country over the period 2008-2014. The results of the GMM estimation indicate a significant positive impact of literacy rate, non-farm activities and population density on microfinance penetration across the states, while a rise in loan default seems to deter it. Rural poverty shows a significant negative impact on the spread of SBLP, while it has a positive impact on MFI penetration, indicating a policy of exclusion adhered to by the formal financial system, especially towards the poor; MFIs, however, seem to work as substitute mechanisms for banks, filling the gap. The findings of the study point towards enhancing financial literacy, non-farm activities and rural bank penetration, and containing loan default, as means of achieving greater microfinance prevalence.
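A quick consistency check of the regional client-outreach shares quoted above for overall microfinance (the six shares should account for the full client base):

```python
# Regional client-outreach shares for overall microfinance, as quoted
# in the abstract (approximate percentages).
outreach_share = {"southern": 50, "eastern": 25, "north-eastern": 4,
                  "northern": 4, "central": 10, "western": 7}
total = sum(outreach_share.values())
print(total)
```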

Keywords: bank penetration, literacy rate, microfinance, primary sector share, rural non-farm activities, rural poverty

Procedia PDF Downloads 232