Search results for: automatic test system
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25181

2981 Bayesian System and Copula for Event Detection and Summarization of Soccer Videos

Authors: Dhanuja S. Patil, Sanjay B. Waykar

Abstract:

Event detection is one of the most important components of many types of video data applications. Recently, it has attracted considerable interest from practitioners and academics in different areas. While detecting video events has been the subject of broad study, considerably fewer existing methodologies have considered multi-modal data and efficiency-related issues. During soccer matches, many doubtful situations arise that cannot easily be judged by the referee committee. A framework that objectively checks image sequences would prevent incorrect interpretations caused by errors or by the high velocity of events. Bayesian networks provide a structure for dealing with this uncertainty using an intuitive graphical representation together with probability calculus. We propose an efficient framework for the analysis and summarization of soccer videos utilizing object-based features. The proposed work utilizes the t-cherry junction tree, a very recent advancement in probabilistic graphical models, to create a compact representation and a good approximation of otherwise intractable models, as previously applied to users' relationships in social networks. There are several advantages to this approach: firstly, the t-cherry tree gives the best approximation within the class of junction trees; secondly, construction of a t-cherry junction tree can be largely parallelized; and lastly, inference can be performed using distributed computation. Experimental results demonstrate the effectiveness, adequacy, and robustness of the proposed work over a far-reaching data set comprising several soccer videos captured at different venues.

Keywords: summarization, detection, Bayesian network, t-cherry tree

Procedia PDF Downloads 308
2980 Molecular Insights into the Genetic Integrity of Long-Term Micropropagated Clones Using Start Codon Targeted (SCoT) Markers: A Case Study with Ansellia africana, an Endangered, Medicinal Orchid

Authors: Paromik Bhattacharyya, Vijay Kumar, Johannes Van Staden

Abstract:

Micropropagation is an important tool for the conservation of threatened and commercially important plant species, of which orchids deserve special attention. Ansellia africana is one such medicinally important orchid species with much commercial significance. Thus, the development of regeneration protocols for producing clonally stable regenerates using axillary buds is of much importance. However, for large-scale micropropagation to become not only successful but also acceptable to end-users, somaclonal variations occurring in the plantlets need to be eliminated. In the light of the various factors (genotype, ploidy level, in vitro culture age, explant and culture type, etc.) that may account for somaclonal variation and divergent genetic changes at the cellular and molecular levels, genetic analysis of micropropagated plants using a multidisciplinary approach is of utmost importance. In the present study, the clonal integrity of the long-term micropropagated A. africana plants was assessed using an advanced molecular marker system, i.e., Start Codon Targeted (SCoT) polymorphism. Our studies recorded a clonally stable regeneration protocol for A. africana with a very high degree of clonal fidelity amongst the regenerates. The results obtained from these molecular analyses could help in modifying the regeneration protocols for obtaining clonally stable, true-to-type plantlets for sustainable commercial use.

Keywords: medicinal orchid micropropagation, start codon targeted polymorphism (SCoT), traditional African pharmacopoeia, genetic fidelity

Procedia PDF Downloads 418
2979 Bi-Layer Electro-Conductive Nanofibrous Conduits for Peripheral Nerve Regeneration

Authors: Niloofar Nazeri, Mohammad Ali Derakhshan, Reza Faridi Majidi, Hossein Ghanbari

Abstract:

Injury to the peripheral nervous system (PNS) can lead to loss of sensation or movement. To date, one of the challenges for surgeons is repairing large gaps in the PNS. To solve this problem, nerve conduits have been developed. Conduits produced by means of electrospinning can mimic the extracellular matrix and provide enough surface for further functionalization. In this research, a conductive bilayer nerve conduit made of polycaprolactone (PCL), poly(lactic-co-glycolic acid) (PLGA), and MWCNTs was fabricated for promoting peripheral nerve regeneration. The conduit was made of longitudinally aligned PLGA nanofibrous sheets in the lumen to promote nerve regeneration and randomly oriented PCL nanofibers on the outer surface for mechanical support. The intra-luminal guidance channel was made of conductive aligned nanofibrous rolled sheets coated with laminin via dopamine. Different properties of the electrospun scaffolds were investigated using contact angle measurements, mechanical testing, degradation time, scanning electron microscopy (SEM), and X-ray photoelectron spectroscopy (XPS). The SEM analysis showed that the fibers in the nanofibrous mats were about 600-750 nm in diameter and that MWCNTs were deposited between the nanofibers. The XPS results showed that laminin attached to the nanofiber surface successfully. The contact-angle and tensile tests revealed that the scaffolds have good hydrophilicity and sufficient mechanical strength. In vitro studies demonstrated that this conductive surface was able to enhance the attachment and proliferation of PC12 and Schwann cells. We conclude that this bilayer composite conduit has good potential for nerve regeneration.

Keywords: conductive, conduit, laminin, MWCNT

Procedia PDF Downloads 189
2978 A Review on Potential Utilization of Water Hyacinth (Eichhornia crassipes) as Livestock Feed with Particular Emphasis to Developing Countries in Africa

Authors: Shigdaf Mekuriaw, Firew Tegegne, A. Tsunekawa, Dereje Tewabe

Abstract:

The purpose of this paper is to provide a comprehensive review of the use of water hyacinth (Eichhornia crassipes) as a potential livestock feed and to argue for its utilization as a strategy complementary to other control methods. Water hyacinth is one of the most noxious plant invaders of rivers and lakes. Such weeds cause environmental disasters and interfere with economic and recreational activities such as water transportation and fishing. Economic impacts of the weed in seven African countries have been estimated at between 20-50 million US$ every year. It would, therefore, be prudent to suggest utilization as a complementary control method. The majority of people in developing countries depend on traditional and inefficient crop-livestock production systems that constrain their ability to enhance economic productivity and quality of life. Livestock in developing countries face a shortage of feed, especially during the long dry seasons. Existing literature documents the use of water hyacinth as livestock and fish feed. The chemical composition of water hyacinth varies considerably. Due to its relatively high crude protein (CP) content (5.8-20.0%), water hyacinth can be considered a potential protein supplement for livestock that are commonly fed cereal crop residues, whose contribution as a feed source is increasing in Africa. Although the effects of the anti-nutritional factors (ANFs) present in water hyacinth have not been fully investigated, their concentrations are not above the thresholds that would hinder its utilization as livestock feed. In conclusion, water hyacinth could provide large quantities of nutritious feed for animals. Like other feeds, water hyacinth should not be offered as a sole feed, and based on the existing literature its optimum inclusion level reaches 50%.

Keywords: Africa, livestock feed, water bodies, water hyacinth, weed control method

Procedia PDF Downloads 373
2977 Design of Hybrid Auxetic Metamaterials for Enhanced Energy Absorption under Compression

Authors: Ercan Karadogan, Fatih Usta

Abstract:

Auxetic materials have a negative Poisson’s ratio (NPR), which is not often found in nature. They are metamaterials that have potential applications in many engineering fields. Mechanical metamaterials are synthetically designed structures with unusual mechanical properties, and these properties depend on the properties of the matrix structure. They have the following special characteristics: improved shear modulus, increased energy absorption, and high fracture toughness. Non-auxetic materials compress transversely when they are stretched; the system naturally tends to keep its density constant, so the transverse compression increases the density to balance the loss in the longitudinal direction. This study proposes to improve the crushing performance of hybrid auxetic materials. The re-entrant honeycomb structure has been combined with a star honeycomb, an S-shaped unit cell, a double arrowhead, and a structurally hexagonal re-entrant honeycomb in 9 x 9 cell arrangements, i.e., 9 cells in the lateral direction and 9 in the vertical direction. Finite element (FE) and experimental methods have been used to determine the compression behavior of the developed hybrid auxetic structures. The FE models have been developed using Abaqus software. The specimens, made of polymer plastic materials, have been 3D printed and subjected to compression loading. The results are compared in terms of specific energy absorption and strength. This paper describes the quasi-static crushing behavior of two types of hybrid lattice structures (auxetic + auxetic and auxetic + non-auxetic). The results show that the developed hybrid structures can be useful for controlling collapse mechanisms and provide larger energy absorption compared to conventional re-entrant auxetic structures.
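
As a point of reference for readers, specific energy absorption under quasi-static compression is commonly computed as the area under the force-displacement curve divided by specimen mass. The short sketch below illustrates that calculation; the force curve, displacement range, and mass are illustrative placeholders, not data from this study.

```python
import numpy as np

# Illustrative force-displacement response (placeholders, not study data).
displacement = np.linspace(0.0, 0.02, 200)                 # crush displacement up to 20 mm
force = 1500.0 * (1.0 - np.exp(-displacement / 0.004))     # idealised crushing force in N

mass_kg = 0.035                                             # assumed specimen mass

absorbed_energy = np.trapz(force, displacement)             # J, area under the F-d curve
specific_energy_absorption = absorbed_energy / mass_kg      # J/kg

print(f"Energy absorbed: {absorbed_energy:.1f} J")
print(f"Specific energy absorption: {specific_energy_absorption:.1f} J/kg")
```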

Keywords: auxetic materials, compressive behavior, metamaterials, negative Poisson’s ratio

Procedia PDF Downloads 87
2976 Retrospective/Prospective Analysis of Guideline Implementation and Transfusion Rates

Authors: B. Kenny

Abstract:

The complications associated with transfusions are well documented, with the Serious Hazards of Transfusion (SHOT) reporting system continuing to report deaths and serious morbidity due to the transfusion of allogenic blood. Many different sources, including the TRICC trial, NHMRC and Cochrane, recommend similar transfusion triggers/guidelines. Recent studies found that the rate of infection (deep infection, wound infection, chest infection, urinary tract infection, and others) followed a dose-response relationship, increasing the relative risk by 3.44. It was also noted that each transfused patient stayed in hospital for one additional day. We hypothesise that providing an approved, standardised guideline with a graphical summary of decision pathways for anaemic patients will reduce unnecessary transfusions. We retrospectively assessed 1459 patients undergoing primary knee or hip arthroplasties over a 4-year period. Of these, 339 (23.24%) patients received allogenic blood transfusions and 858 units of blood were transfused; 9.14% of patients transfused had haemoglobin levels above 100 g/L, 7.67% of patients were transfused without a known haemoglobin level in the 24 hours prior to transfusion initiation, and 4.5% had possible transfusion reactions. Overall, 17% of allogenic transfusions to patients admitted to the Orthopaedic department within the 4-year period were outside NHMRC and Cochrane guidelines/recommendations. When our transfusion frequency is compared with that of other authors/hospitals, our transfusion rates are consistently high. We subsequently implemented a simple guideline for transfusion initiation, which was then assessed. We found the transfusion rate after guideline implementation to be significantly lower, without an increase in patient morbidity or mortality (p < 0.001). Transfusion rates and patient outcomes can be optimized by a simple graphical aid for decision making.

Keywords: transfusion, morbidity, mortality, neck of femur, fracture, arthroplasty, rehabilitation

Procedia PDF Downloads 231
2975 Evaluating the Implementation of a Quality Management System in the COVID-19 Diagnostic Laboratory of a Tertiary Care Hospital in Delhi

Authors: Sukriti Sabharwal, Sonali Bhattar, Shikhar Saxena

Abstract:

Introduction: The COVID-19 molecular diagnostic laboratory is the cornerstone of COVID-19 diagnosis, as the patient’s treatment and management protocol depend on the molecular results. For this purpose, it is extremely important that the laboratory producing these results adheres to quality management processes to increase the accuracy and validity of the reports generated. We started our own molecular diagnostic setup at the onset of the pandemic and therefore conducted this study to generate our quality management data to help us improve on our weak points. Materials and Methods: A total of 14561 samples were evaluated by a retrospective observational method. The quality variables analysed were classified into pre-analytical, analytical, and post-analytical variables, and the results were presented as percentages. Results: Among the pre-analytical variables, sample leaking was the most common cause of rejection of samples (134/14561, 0.92%), followed by non-generation of SRF ID (76/14561, 0.52%) and non-compliance with triple packaging (44/14561, 0.3%). The other pre-analytical aspects assessed were incomplete patient identification (17/14561, 0.11%), insufficient quantity of samples (12/14561, 0.08%), missing forms/samples (7/14561, 0.04%), samples in the wrong vials/empty VTM tubes (5/14561, 0.03%) and LIMS entry not done (2/14561, 0.01%). We were unable to obtain internal quality control in 0.37% of samples (55/14561). We also experienced two incidences of cross-contamination among the samples, resulting in false-positive results. Among the post-analytical factors, a total of 0.07% of samples (11/14561) could not be dispatched within the stipulated time frame. Conclusion: Adherence to quality control processes is paramount for the smooth running of any diagnostic laboratory, especially those involved in critical reporting. Not only do the indicators help keep laboratory parameters in check, but they also allow comparison with other laboratories.

Keywords: laboratory quality management, COVID-19, molecular diagnostics, healthcare

Procedia PDF Downloads 153
2974 Fabrication of 3D Scaffold Consisting of Spiral-Like Micro-Sized PCL Struts and Selectively Deposited Nanofibers as a Tissue Regenerative Material

Authors: Gi-Hoon Yang, JongHan Ha, MyungGu Yeo, JaeYoon Lee, SeungHyun Ahn, Hyeongjin Lee, HoJun Jeon, YongBok Kim, Minseong Kim, GeunHyung Kim

Abstract:

Tissue engineering scaffolds must be biocompatible and biodegradable and must provide adequate mechanical strength and cell attachment sites for proliferation and differentiation. Furthermore, the scaffold morphology (such as pore size, porosity and pore interconnectivity) plays an important role. The electrospinning process has been widely used to fabricate micro/nano-sized fibers. Electrospinning allows for the fabrication of non-woven meshes containing micro- to nano-sized fibers, providing a high surface-to-volume area for cell attachment. Due to these advantageous characteristics, electrospinning is a useful method for skin, cartilage, bone, and nerve regeneration. In this study, we fabricated PCL scaffolds (SP) consisting of spiral-like struts using a 3D melt-plotting system and micro/nanofibers using direct electrospinning writing. By altering the conditions of the conventional melt-plotting method, spiral-like struts were generated. Then, micro/nanofibers were deposited selectively. A control scaffold composed of perpendicular PCL struts was fabricated using the conventional melt-plotting method to compare the cellular activities. The effect on the attached cells (osteoblast-like cells, MG63) was evaluated depending on the bending instability of the struts. The SP scaffolds showed enhanced biological properties such as initial cell attachment, proliferation and osteogenic differentiation. These results suggest that the SP scaffolds have potential as a bioengineered substitute for soft and hard tissue regeneration.

Keywords: cell attachment, electrospinning, mechanical strength, melt-plotting

Procedia PDF Downloads 309
2973 Scalable CI/CD and Scalable Automation: Assisting in Optimizing Productivity and Fostering Delivery Expansion

Authors: Solanki Ravirajsinh, Kudo Kuniaki, Sharma Ankit, Devi Sherine, Kuboshima Misaki, Tachi Shuntaro

Abstract:

In software development life cycles, the absence of scalable CI/CD significantly impacts organizations, leading to increased overall maintenance costs, prolonged release delivery times, heightened manual efforts, and difficulties in meeting tight deadlines. Implementing CI/CD with standard serverless technologies using cloud services overcomes all the above-mentioned issues and helps organizations improve efficiency and faster delivery without the need to manage server maintenance and capacity. By integrating scalable CI/CD with scalable automation testing, productivity, quality, and agility are enhanced while reducing the need for repetitive work and manual efforts. Implementing scalable CI/CD for development using cloud services like ECS (Container Management Service), AWS Fargate, ECR (to store Docker images with all dependencies), Serverless Computing (serverless virtual machines), Cloud Log (for monitoring errors and logs), Security Groups (for inside/outside access to the application), Docker Containerization (Docker-based images and container techniques), Jenkins (CI/CD build management tool), and code management tools (GitHub, Bitbucket, AWS CodeCommit) can efficiently handle the demands of diverse development environments and are capable of accommodating dynamic workloads, increasing efficiency for faster delivery with good quality. CI/CD pipelines encourage collaboration among development, operations, and quality assurance teams by providing a centralized platform for automated testing, deployment, and monitoring. Scalable CI/CD streamlines the development process by automatically fetching the latest code from the repository every time the process starts, building the application based on the branches, testing the application using a scalable automation testing framework, and deploying the builds. Developers can focus more on writing code and less on managing infrastructure as it scales based on the need. Serverless CI/CD eliminates the need to manage and maintain traditional CI/CD infrastructure, such as servers and build agents, reducing operational overhead and allowing teams to allocate resources more efficiently. Scalable CI/CD adjusts the application's scale according to usage, thereby alleviating concerns about scalability, maintenance costs, and resource needs. Creating scalable automation testing using cloud services (ECR, ECS Fargate, Docker, EFS, Serverless Computing) helps organizations run more than 500 test cases in parallel, aiding in the detection of race conditions, performance issues, and reducing execution time. Scalable CI/CD offers flexibility, dynamically adjusting to varying workloads and demands, allowing teams to scale resources up or down as needed. It optimizes costs by only paying for the resources as they are used and increases reliability. Scalable CI/CD pipelines employ automated testing and validation processes to detect and prevent errors early in the development cycle.
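
To make the parallel-execution idea concrete, the sketch below shows one way a test suite could be fanned out as parallel Fargate tasks with boto3; this is a minimal illustration, not the authors' implementation, and the region, cluster, task definition, subnet, container name, and environment variables are assumed placeholders.

```python
import boto3

ecs = boto3.client("ecs", region_name="ap-northeast-1")  # placeholder region

# Hypothetical names; replace with your own cluster, task definition, and network settings.
CLUSTER = "test-automation-cluster"
TASK_DEF = "scalable-test-runner:1"
SUBNETS = ["subnet-0123456789abcdef0"]

def run_test_shard(shard_index: int, total_shards: int) -> str:
    """Launch one Fargate task that runs a single shard of the test suite."""
    response = ecs.run_task(
        cluster=CLUSTER,
        taskDefinition=TASK_DEF,
        launchType="FARGATE",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {"subnets": SUBNETS, "assignPublicIp": "ENABLED"}
        },
        overrides={
            "containerOverrides": [{
                "name": "test-runner",  # assumed container name in the task definition
                "environment": [
                    {"name": "SHARD_INDEX", "value": str(shard_index)},
                    {"name": "TOTAL_SHARDS", "value": str(total_shards)},
                ],
            }]
        },
    )
    return response["tasks"][0]["taskArn"]

# Fan the suite out across many short-lived containers that scale with demand.
task_arns = [run_test_shard(i, 20) for i in range(20)]
print(f"Launched {len(task_arns)} parallel test shards")
```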

Keywords: achieve parallel execution, cloud services, scalable automation testing, scalable continuous integration and deployment

Procedia PDF Downloads 30
2972 Evaluation of Antibiotic Resistance Profiles of Staphylococci Isolated from Various Clinical Specimens

Authors: Recep Kesli, Merih Simsek, Cengiz Demir, Onur Turkyilmaz

Abstract:

Objective: The goal of this study was to determine the antibiotic resistance of Staphylococcus aureus (S. aureus) and methicillin-resistant S. aureus (MRSA) strains isolated at the Medical Microbiology Laboratory of ANS Application and Research Hospital, Afyon Kocatepe University, Turkey. Methods: S. aureus strains isolated between October 2012 and September 2016 from various clinical specimens were evaluated retrospectively. S. aureus strains were identified by both conventional methods and an automated identification system, VITEK 2 (bioMérieux, Marcy l’Étoile, France), and methicillin resistance was verified with the oxacillin disk-diffusion method. Antibiotic resistance testing was performed by the Kirby-Bauer disc diffusion method according to CLSI criteria, and intermediately susceptible strains were considered resistant. Results: Seven hundred S. aureus strains isolated from various clinical specimens were included in this study. These strains were mostly isolated from blood cultures, tissue, wounds and bronchial aspirates. Of these, 306 (43.7%) were oxacillin resistant. While all the S. aureus strains were found to be susceptible to vancomycin, teicoplanin, daptomycin and linezolid, 38 (9.6%), 77 (19.5%), 116 (29.4%), 152 (38.6%) and 28 (7.1%) were found to be resistant to clindamycin, erythromycin, gentamicin, tetracycline and sulfamethoxazole/trimethoprim, respectively. Conclusions: Compared to the methicillin-sensitive S. aureus (MSSA) strains, increased resistance rates to trimethoprim-sulfamethoxazole, clindamycin, erythromycin, gentamicin, and tetracycline were observed among the MRSA strains. In this study, the most effective antibiotic across all strains was trimethoprim-sulfamethoxazole, and the least effective was tetracycline.

Keywords: antibiotic resistance, MRSA, Staphylococcus aureus, VITEK 2

Procedia PDF Downloads 243
2971 Eudesmane-Type Sesquiterpenes from Laggera alata Inhibiting Angiogenesis

Authors: Liang Ning, Chung Hau Yin

Abstract:

Angiogenesis is the process of new blood vessel development. It was recognized as a therapeutic target for blocking cancer growth four decades ago. Vascular sprouting is initiated by pro-angiogenic factors. Vascular endothelial growth factor (VEGF) plays a central role in angiogenic initiation, and many patients with cancer or ocular neovascularization have benefited from anti-VEGF therapy. Emerging approaches acting on the later stages of vessel remodeling and maturation are expected to improve clinical efficacy. The TIE receptor, together with the corresponding angiopoietin ligands, was identified as another endothelial-cell-specific receptor tyrosine kinase signaling system, and much effort has been made to reduce the activity of the angiopoietin-TIE receptor axis. Two eudesmane-type sesquiterpenes from Laggera alata, namely 15-dihydrocostic acid and ilicic acid, were found to have strong anti-angiogenic properties in a zebrafish model. Meanwhile, the mRNA expression levels of VEGFR2 and TIE2 pathway related genes were down-regulated in sesquiterpene-treated zebrafish embryos. Besides, in human umbilical vein endothelial cells (HUVECs), the sesquiterpenes were able to inhibit VEGF-induced HUVEC proliferation and migration at non-toxic concentrations. Moreover, angiopoietin-2-induced TIE2 phosphorylation was inhibited by the sesquiterpenes, and an inhibitory effect was also detected on angiopoietin-1-induced HUVEC proliferation. Thus, we hypothesize that the anti-angiogenic activity of the compounds may act via inhibition of the VEGF and TIE2 related pathways. How the compounds act as inhibitors of these pathways needs to be evaluated in the future.

Keywords: Laggera alata, eudesmane-type sesquiterpene, anti-angiogenesis, VEGF, angiopoietin, TIE2

Procedia PDF Downloads 195
2970 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling

Authors: M. Almutairi, S. Hadjiloucas

Abstract:

The harmonic distortion of voltage is important in relation to power quality due to the interaction between the large diffusion of non-linear and time-varying single-phase and three-phase loads and power supply systems. However, harmonic distortion levels can be reduced by improving the design of polluting loads or by applying arrangements and adding filters. The application of passive filters is an effective solution that can be used to achieve harmonic mitigation, mainly because filters offer high efficiency, simplicity, and low cost. Additionally, their different possible frequency response characteristics can be used to achieve specific required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single tuned passive filter that works best in distribution networks, in order to economically limit violations caused at a given point of common coupling (PCC). This article suggests that a single tuned passive filter can be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level and thus improve the load power factor. The optimization technique works to minimize voltage total harmonic distortion (VTHD) and current total harmonic distortion (ITHD) while maintaining a given power factor within a specified range. According to IEEE Standard 519, both indices are viewed as constraints for the optimal passive filter design problem. The performance of this technique is discussed using numerical examples taken from previous publications.
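
For orientation, one plausible way to write this constrained sizing problem is sketched below; the symbols (filter reactive power Q_f and tuning harmonic order h_t as decision variables, weights w_1 and w_2) are generic illustrations rather than notation taken from the paper.

```latex
\begin{aligned}
\min_{Q_f,\; h_t} \quad & w_1\,\mathrm{VTHD}(Q_f, h_t) + w_2\,\mathrm{ITHD}(Q_f, h_t) \\
\text{subject to} \quad & \mathrm{VTHD}(Q_f, h_t) \le \mathrm{VTHD}_{\max}, \qquad
\mathrm{ITHD}(Q_f, h_t) \le \mathrm{ITHD}_{\max} \quad \text{(IEEE 519 limits)}, \\
& \mathrm{PF}_{\min} \le \mathrm{PF}(Q_f) \le \mathrm{PF}_{\max}.
\end{aligned}
```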

Keywords: harmonics, passive filter, power factor, power quality

Procedia PDF Downloads 299
2969 Investigating the Relationship between Service Quality and Amount of Violations in Community Pharmacies with Their Type of Ownership

Authors: Afshin Azari, Farzad Peiravian, Nazila Yousefi

Abstract:

Introduction: Community pharmacies have always played an important role in public health. Therefore, having decent services provided by these pharmacies is of paramount importance for the healthcare system. The issue of pharmacy ownership and its possible impact on the quality of services and the amount of violations has been argued for many years, and there are different opinions around this debate. Since no scientific research has so far been performed to investigate this issue in Iran, this study aimed to examine the differences between these two types of pharmacy ownership in terms of violations and service quality. Method: This study investigates the impact of two different kinds of pharmacy ownership (pharmacist and non-pharmacist ownership) on the pharmacies’ amount of violations and service quality. The pharmacies’ violations were examined using “pharmacy inspection reports” between September 2018 and September 2019, in their distinguishable categories: minor, major and critical violations. Then, service quality was examined using a questionnaire from the perspective of pharmacy customers. Results: Considering violations, there was no evidence of a significant relationship between critical or major violations and the type of pharmacy ownership. However, for minor violations, the average number of violations was higher in pharmacies owned by pharmacists in comparison to their non-pharmacist-owned counterparts. Regarding service quality, the results showed that there is no significant relationship between the quality of service and the type of pharmacy ownership. Discussion and Conclusion: In this study, no significant relationship was found between the amount of violations and the type of pharmacy ownership. This could indicate that pharmacy ownership does not influence the rate of violations. Considering that more inspections have been carried out in non-pharmacist-owned pharmacies, it can be concluded that these pharmacies are more closely monitored, and this monitoring has in fact reduced violations in these pharmacies. The quality of services in the two types of pharmacies was not significantly different, which shows that non-pharmacist-owned pharmacies also try to maintain the desired level of service in competition with their competitors.

Keywords: pharmacy ownership, quality of service, violation, community pharmacy

Procedia PDF Downloads 160
2968 Factors Influencing Consumer Adoption of Digital Banking Apps in the UK

Authors: Sevelina Ndlovu

Abstract:

Financial technology (fintech) advancement is recognised as one of the most transformational innovations in the financial industry. Fintech has given rise to internet-only digital banking, a novel financial technology advancement and innovation that allows banking services through internet applications with no need for physical branches. This technology is becoming a new banking normal among consumers for its ubiquitous and real-time access advantages. There is evident switching and migration from traditional banking towards these fintech facilities, which could possibly pose a systemic risk if not properly understood and monitored. Fintech advancement has also brought about the emergence and escalation of financial technology consumption themes such as trust, security, perceived risk, and sustainability within the banking industry, themes scarcely covered in the existing theoretic literature. To that end, the objective of this research is to investigate the factors that determine fintech adoption and propose an integrated adoption model. This study aims to establish what the significant drivers of adoption are and to develop a conceptual model that integrates technological, behavioural, and environmental constructs by extending the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2). It proposes integrating constructs that influence financial consumption themes such as trust, perceived risk, security, financial incentives, micro-investing opportunities, and environmental consciousness to determine the impact of these factors on the adoption of and intention to use digital banking apps. The main advantage of this conceptual model is the consolidation of a greater number of predictor variables that can provide a fuller explanation of consumers' adoption of digital banking apps. Moderating variables of age, gender, and income are incorporated. To the best of the author’s knowledge, this study is the first to extend the UTAUT2 model with this combination of constructs to investigate users' intention to adopt internet-only digital banking apps in the UK context. By investigating factors that are not included in the existing theories but are highly pertinent to the adoption of internet-only banking services, this research adds to existing knowledge and extends the generalisability of UTAUT2 in a financial services adoption context. This fills a gap in knowledge, as the need for further research on UTAUT2 was highlighted when the theory was reviewed in 2016, following its original 2003 version. To achieve the objectives of this study, this research adopts a quantitative approach to empirically test the hypotheses derived from the existing literature and pilot studies, giving statistical support to generalise the research findings for further possible applications in theory and practice. This research is explanatory or causal in nature and uses cross-sectional primary data collected through a survey method. Convenience and purposive sampling with structured self-administered online questionnaires are used for data collection. The proposed model is tested using structural equation modelling (SEM), and the analysis of primary data collected through an online survey is processed using SmartPLS software with a sample size of 386 digital bank users. The results are expected to establish whether there are significant relationships between the dependent and independent variables and what the most influential factors are.

Keywords: banking applications, digital banking, financial technology, technology adoption, UTAUT2

Procedia PDF Downloads 56
2967 Accessibility Analysis of Urban Green Space in Zadar Settlement, Croatia

Authors: Silvija Šiljeg, Ivan Marić, Ante Šiljeg

Abstract:

The accessibility of urban green spaces (UGS) is an integral element in the quality of life. Due to rapid urbanization, UGS studies have become a key element in urban planning. The potential benefits of space for its inhabitants are frequently analysed. A functional transport network system and the optimal spatial distribution of urban green surfaces are the prerequisites for maintaining the environmental equilibrium of the urban landscape. An accessibility analysis was conducted as part of the Urban Green Belts Project (UGB). The development of a GIS database for Zadar was the first step in generating the UGS accessibility indicator. Data were collected using the supervised classification method of multispectral LANDSAT images and manual vectorization of digital orthophoto images (DOF). An analysis of UGS accessibility according to the ANGst standard was conducted in the first phase of research. The accessibility indicator was generated on the basis of seven objective measurements, which included average UGS surface per capita and accessibility according to six functional levels of green surfaces. The generated indicator was compared with subjective measurements obtained by conducting a survey (718 respondents) within statistical units. The collected data reflected individual assessments and subjective evaluations of UGS accessibility. This study highlighted the importance of using objective and subjective measures in the process of understanding the accessibility of urban green surfaces. It may be concluded that when evaluating UGS accessibility, residents emphasize the immediate residential environment, ignoring higher UGS functional levels. It was also concluded that large areas of UGS within a city do not necessarily generate similar satisfaction with accessibility. The heterogeneity of output results may serve as guidelines for the further development of a functional UGS city network.
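
As a generic illustration of how several normalized objective measurements can be combined into a composite accessibility indicator and compared against survey-based subjective scores, the sketch below uses a simple equal-weighted sum and a Pearson correlation; the values, column meanings, and equal-weighting choice are assumptions for illustration, not the indicator construction used in this study.

```python
import numpy as np
from scipy import stats

# Placeholder values for seven objective measurements per statistical unit
# (rows = units; columns could be UGS area per capita plus accessibility at
# six functional levels). These are not data from the study.
objective = np.array([
    [12.0, 0.9, 0.8, 0.7, 0.5, 0.4, 0.2],
    [25.0, 1.0, 0.9, 0.8, 0.7, 0.6, 0.5],
    [ 8.0, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1],
    [18.0, 0.8, 0.8, 0.6, 0.6, 0.5, 0.3],
])
survey_score = np.array([3.4, 4.1, 2.6, 3.8])   # mean subjective rating per unit (placeholder)

# Min-max normalize each measurement, then combine with equal weights (an assumption).
normalized = (objective - objective.min(axis=0)) / (objective.max(axis=0) - objective.min(axis=0))
indicator = normalized.mean(axis=1)

# Compare the objective composite indicator with the subjective survey scores.
r, p = stats.pearsonr(indicator, survey_score)
print(f"objective indicator: {np.round(indicator, 2)}, correlation with survey r = {r:.2f}")
```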

Keywords: urban green spaces (UGS), accessibility indicator, subjective and objective measurements, Zadar

Procedia PDF Downloads 243
2966 Equity and Diversity in Bangladesh’s Primary Education: Struggling Indigenous Children

Authors: Md Rabiul Islam, Ben Wadham

Abstract:

This paper describes how indigenous students face challenges with various school activities due to inadequate equity and diversity principles in mainstream primary schools in Bangladesh. The study focuses on indigenous students’ interactions with mainstream class teachers and students through teaching-learning activities at public primary schools. Ethnographic research methods guided data collection under a case study methodology in the Chittagong Hill Tracts (CHTs) region, which is home to most of the country's indigenous peoples. The participants (class teachers) shared information through in-depth interviews about their experiences in the four selected schools. The authors also observed the effects of school activities on indigenous students’ situations in those schools through an equity and diversity lens. The authors argue that the socio-economic situations of indigenous families are not supportive of the educational development of their children. Similarly, the Bangladesh government does not have enough initiatives based on equity and diversity principles for the fundamental education of indigenous children at the rural school level. Besides this, the conventional teaching system cannot improve diversification among the students in classrooms. The principles of equity and diversity are not well embedded in the professional development of teachers or in the use of teaching materials in classrooms. The findings suggest that implementing equitable education requires teacher education that incorporates equitable knowledge, the introduction of diversified teaching materials, and teaching through student-centered activities that promote diversification among multicultural students.

Keywords: case study research, chittagong hill tracts, equity and diversity, Indigenous children

Procedia PDF Downloads 308
2965 Low-Cost Image Processing System for Evaluating Pavement Surface Distress

Authors: Keerti Kembhavi, M. R. Archana, V. Anjaneyappa

Abstract:

Most asphalt pavement condition evaluations use rating frameworks in which asphalt pavement distress is estimated by type, extent, and severity. Rating is carried out through the pavement condition rating (PCR), which is tedious and expensive. This paper presents the development of a low-cost technique for pavement distress image analysis that permits the identification of potholes and cracks. The paper explores the application of image processing tools for the detection of potholes and cracks. Longitudinal cracking and potholes are detected using Fuzzy C-Means (FCM) clustering followed by a spectral theory algorithm. The framework comprises three phases: image acquisition, processing, and extraction of features. A digital camera (GoPro) with a holder mounted on a moving vehicle is used to capture pavement distress images. The FCM classifier and spectral theory algorithms are used to compute features and classify longitudinal cracking and potholes. The MATLAB 2016Ra image processing toolkit is used for performance analysis to identify pavement distress on selected urban stretches of Bengaluru city, India. The outcomes of image evaluation with the semi-automated image processing framework captured the features of longitudinal cracks and potholes with an accuracy of about 80%. Further, the detected images were validated against the actual dimensions, and the dimensional variability was found to be about 0.46. A linear regression model, y = 1.171x - 0.155, was obtained from the existing (field-measured) and experimental (image processing) areas. The R-squared value obtained from the best-fit line is 0.807, which in the linear regression model is considered a ‘large positive linear association’.
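
As an illustration of the validation step, the sketch below fits a least-squares line between image-derived and field-measured distress areas and reports R², the same form as the reported model y = 1.171x - 0.155; the arrays are placeholder values for demonstration (not the study's measurements), and which quantity serves as x versus y is an assumption here.

```python
import numpy as np
from scipy import stats

# Placeholder areas in m^2: field-measured vs. image-derived distress dimensions.
image_derived = np.array([0.50, 0.70, 1.20, 1.60, 1.90, 2.70])
field_measured = np.array([0.42, 0.75, 1.10, 1.55, 2.00, 2.60])

# Least-squares fit of field_measured (y) against image_derived (x): y = slope*x + intercept
result = stats.linregress(image_derived, field_measured)
print(f"y = {result.slope:.3f}x + {result.intercept:.3f}, R^2 = {result.rvalue**2:.3f}")
```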

Keywords: crack detection, pothole detection, spectral clustering, fuzzy-c-means

Procedia PDF Downloads 172
2964 Energy Production with Closed Methods

Authors: Bujar Ismaili, Bahti Ismajli, Venhar Ismaili, Skender Ramadani

Abstract:

In Kosovo, the problem with the electricity supply is significant, and supply does not meet consumer demand. Most of the energy is produced by older thermal power plants, which are regarded as major environmental polluters. Our experiment is based on the production of electricity using a closed method that avoids environmental pollution while using waste, itself considered an environmental pollutant, as fuel. The experiment was carried out in the village of Godanc, municipality of Shtime, Kosovo. In the experiment, a production line was designed for electricity generation and central heating at the same time. The results are the benefits of electricity as well as the release of heat for heating with minimal expense and with the release of 0% gases into the atmosphere. During this experiment, coal, plastic, waste from wood processing, and agricultural wastes were used as raw materials. The method utilized in the experiment allows the gas released during the top-to-bottom combustion of the raw material in the boiler to pass through pipes and filters, followed by filtration of the gas through waste from wood processing (sawdust). During this process, the final product, gas, is obtained; it passes through the carburetor, which enables the gas combustion process, drives the internal combustion engine and the generator, and produces electricity without releasing gases into the atmosphere. The obtained results show that the system provides stable energy without environmental pollution from toxic substances and waste, and with low production costs. The final results show that, when coal was used as fuel, we obtained more electricity and a higher release of heat, followed by plastic waste, which also gave good results. The results obtained during these experiments demonstrate that the current problems of electricity and heating shortages can be addressed at a lower cost while keeping the environment clean and managing waste.

Keywords: energy, heating, atmosphere, waste, gasification

Procedia PDF Downloads 225
2963 Adjusting Electricity Demand Data to Account for the Impact of Loadshedding in Forecasting Models

Authors: Migael van Zyl, Stefanie Visser, Awelani Phaswana

Abstract:

The electricity landscape in South Africa is characterized by frequent occurrences of loadshedding, a measure implemented by Eskom to manage electricity generation shortages by curtailing demand. Loadshedding, classified into stages ranging from 1 to 8 based on severity, involves the systematic rotation of power cuts across municipalities according to predefined schedules. However, this practice introduces distortions in recorded electricity demand, posing challenges to accurate forecasting essential for budgeting, network planning, and generation scheduling. Addressing this challenge requires the development of a methodology to quantify the impact of loadshedding and integrate it back into metered electricity demand data. Fortunately, comprehensive records of loadshedding impacts are maintained in a database, enabling the alignment of loadshedding effects with hourly demand data. This adjustment ensures that forecasts accurately reflect true demand patterns, independent of loadshedding's influence, thereby enhancing the reliability of electricity supply management in South Africa. This paper presents a methodology for determining the hourly impact of loadshedding and subsequently adjusting historical demand data to account for it. Furthermore, two forecasting models are developed: one utilizing the original dataset and the other using the adjusted data. A comparative analysis is conducted to evaluate forecast accuracy improvements resulting from the adjustment process. By implementing this methodology, stakeholders can make more informed decisions regarding electricity infrastructure investments, resource allocation, and operational planning, contributing to the overall stability and efficiency of South Africa's electricity supply system.
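
A minimal sketch of the adjustment step is shown below, assuming an hourly metered-demand series and a table of curtailed load per hour taken from the loadshedding records; the file names and column names are assumptions for illustration, not the authors' implementation.

```python
import pandas as pd

# Hypothetical inputs: hourly metered demand and hourly curtailed load (MW) from the
# loadshedding records. File and column names are placeholders.
demand = pd.read_csv("metered_demand.csv", parse_dates=["timestamp"])
curtailed = pd.read_csv("loadshedding_impact.csv", parse_dates=["timestamp"])

# Align the loadshedding impact with the metered series; hours without loadshedding
# contribute zero curtailment.
merged = demand.merge(curtailed, on="timestamp", how="left")
merged["curtailed_mw"] = merged["curtailed_mw"].fillna(0.0)

# Adjusted (true) demand = what was metered plus what was shed in that hour.
merged["adjusted_demand_mw"] = merged["metered_demand_mw"] + merged["curtailed_mw"]

merged.to_csv("adjusted_demand.csv", index=False)
```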

Keywords: electricity demand forecasting, load shedding, demand side management, data science

Procedia PDF Downloads 52
2962 Utilizing the Analytic Hierarchy Process in Improving Performances of Blind Judo

Authors: Hyun Chul Cho, Hyunkyoung Oh, Hyun Yoon, Jooyeon Jin, Jae Won Lee

Abstract:

Identifying, structuring, and ranking the most important factors related to improving athletes’ performance could pave the way for improved training systems. The purpose of this study was to identify the relative importance of factors for improving the performance of judo athletes with visual impairments, including blindness, by using the Analytic Hierarchy Process (AHP). After reviewing the literature, factors affecting performance in blind judo were selected. A group of experts reviewed the first draft of the questionnaires, and the finally selected performance factors were classified into the major categories of technique, physical fitness, and psychology. Later, a pre-selected group of experts was asked to review the final version of the questionnaire and confirm the priorities of the performance factors. The order of priority was determined by performing pairwise comparisons using Expert Choice 2000. Results indicated that “grappling” (.303) and “throwing” (.234) were the most important lower-hierarchy factors for blind judo skills. In addition, the most important physical factor affecting performance was “muscular strength and endurance” (.238). Further, among the psychological factors, “competitive anxiety” (.393) was an important factor affecting performance. It is important to offer psychological skills training to reduce the anxiety of judo athletes with visual impairments and blindness so they can compete in their optimal states. These findings offer insights into what should be considered when determining factors to improve the performance of judo athletes with visual impairments and blindness.
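
For readers unfamiliar with AHP, the sketch below shows the standard eigenvector calculation of priority weights and the consistency ratio from a pairwise comparison matrix; the example 3x3 matrix (labelled here as technique, physical fitness, and psychology) is purely illustrative and does not reproduce the experts' judgements from this study.

```python
import numpy as np

def ahp_priorities(pairwise: np.ndarray):
    """Principal-eigenvector priority weights and consistency ratio for an AHP matrix."""
    n = pairwise.shape[0]
    eigenvalues, eigenvectors = np.linalg.eig(pairwise)
    k = np.argmax(eigenvalues.real)
    weights = np.abs(eigenvectors[:, k].real)
    weights /= weights.sum()

    lambda_max = eigenvalues[k].real
    consistency_index = (lambda_max - n) / (n - 1)
    random_index = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random indices
    consistency_ratio = consistency_index / random_index
    return weights, consistency_ratio

# Illustrative pairwise comparison of technique / physical fitness / psychology.
matrix = np.array([[1.0, 2.0, 3.0],
                   [1/2, 1.0, 2.0],
                   [1/3, 1/2, 1.0]])
weights, cr = ahp_priorities(matrix)
print("priority weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```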

Keywords: analytic hierarchy process, blind athlete, judo, sport performance

Procedia PDF Downloads 210
2961 3D Printing of Polycaprolactone Scaffold with Multiscale Porosity Via Incorporation of Sacrificial Sucrose Particles

Authors: Mikaela Kutrolli, Noah S. Pereira, Vanessa Scanlon, Mohamadmahdi Samandari, Ali Tamayol

Abstract:

Bone tissue engineering has drawn significant attention, and various biomaterials have been tested. Polymers such as polycaprolactone (PCL) offer excellent biocompatibility, reasonable mechanical properties, and biodegradability. However, PCL scaffolds suffer a critical drawback: a lack of micro/mesoporosity, affecting cell attachment, tissue integration, and mineralization. It also results in a slow degradation rate. While 3D printing has addressed the issue of macroporosity through CAD-guided fabrication, PCL scaffolds still exhibit poor smaller-scale porosity. To overcome this, we generated composites of PCL, hydroxyapatite (HA), and powdered sucrose (PS). The latter serves as a sacrificial material that generates porous particles after sucrose dissolution. Additionally, we have incorporated dexamethasone (DEX) to boost the osteogenic properties of PCL. The resulting scaffolds maintain controlled macroporosity from the lattice print structure but also develop micro/mesoporosity within the PCL fibers when exposed to aqueous environments. The study involved mixing PS into solvent-dissolved PCL at different weight ratios of PS to PCL (70:30, 50:50, and 30:70 wt%). The resulting composite was used for 3D printing of scaffolds at room temperature. Printability was optimized by adjusting pressure, speed, and layer height through filament collapse and fusion tests. Enzymatic degradation, porogen leaching, and DEX release profiles were characterized. Physical properties were assessed using wettability, SEM, and micro-CT to quantify the porosity (percentage, pore size, and interconnectivity). Raman spectroscopy was used to verify the absence of sugar after leaching. Mechanical characteristics were evaluated via compression testing before and after porogen leaching. Bone marrow stromal cell (BMSC) behavior in the printed scaffolds was studied by assessing viability, metabolic activity, osteo-differentiation, and mineralization. The scaffolds with a 70% sugar concentration exhibited superior printability and reached the highest porosity of 80%, but performed poorly during mechanical testing. A 50% PS concentration yielded 70% porosity, with an average pore size of 25 µm, favoring cell attachment. No trace of sugar was found by Raman spectroscopy after leaching the sugar for 8 hours. Water contact angle results show improved hydrophilicity as the sugar concentration increased, making the scaffolds more conducive to cell adhesion. BMSCs showed positive viability and proliferation results, with an increasing trend of mineralization and osteo-differentiation as the sucrose concentration increased. The addition of HA and DEX also promoted mineralization and osteo-differentiation in the cultures. The integration of PS as a porogen at a concentration of 50 wt% within PCL scaffolds presents a promising approach to address the poor cell attachment and tissue integration issues of PCL in bone tissue engineering. The method allows for the fabrication of scaffolds with tunable porosity and mechanical properties, suitable for various applications. The addition of HA and DEX further enhanced the scaffolds. Future studies will apply the scaffolds in an in vivo model to thoroughly investigate their performance.

Keywords: bone, PCL, 3D printing, tissue engineering

Procedia PDF Downloads 44
2960 High School Gain Analytics From National Assessment Program – Literacy and Numeracy and Australian Tertiary Admission Rank Linkage

Authors: Andrew Laming, John Hattie, Mark Wilson

Abstract:

Nine Queensland Independent high schools provided deidentified student-matched ATAR and NAPLAN data for all 1217 ATAR graduates since 2020 who also sat NAPLAN at the school. Graduating cohorts from the nine schools contained a mean of 100 ATAR graduates with previous NAPLAN data from their school. Excluded were vocational students (mean=27) and any ATAR graduates without NAPLAN data (mean=20). Based on Index of Community Socio-Educational Advantage (ICSEA) prediction, all schools had larger than predicted proportions of their students graduating with ATARs. There were an additional 173 students not releasing their ATARs to their school (14%), requiring this data to be inferred by schools. Gain was established by first converting each student’s strongest NAPLAN domain to a statewide percentile, then subtracting this result from the final ATAR. The resulting ‘percentile shift’ was corrected for plausible ATAR participation at each NAPLAN level. Strongest NAPLAN domain had the highest correlation with ATAR (R2=0.58). RESULTS School mean NAPLAN scores fitted ICSEA closely (R2=0.97). Schools achieved a mean cohort gain of two ATAR rankings, but only 66% of students gained. This ranged from 46% of top-NAPLAN decile students gaining, rising to 75% achieving gains outside the top decile. The 54% of top-decile students whose ATAR fell short of prediction lost a mean 4.0 percentiles (or 6.2 percentiles prior to correction for regression to the mean). 71% of students in smaller schools gained, compared to 63% in larger schools. NAPLAN variability in each of the 13 ICSEA1100 cohorts was 17%, with both intra-school and inter-school variation of these values extremely low (0.3% to 1.8%). Mean ATAR change between years in each school was just 1.1 ATAR ranks. This suggests consecutive school cohorts and ICSEA-similar schools share very similar distributions and outcomes over time. Quantile analysis of the NAPLAN/ATAR relationship revealed heteroscedasticity, but splines offered little additional benefit over simple linear regression. The NAPLAN/ATAR R2 was 0.33. DISCUSSION Standardised data like NAPLAN and ATAR offer educators a simple no-cost progression metric to analyse performance in conjunction with their internal test results. Change is expressed in percentiles, or ATAR shift per student, which is layperson intuitive. Findings may also reduce ATAR/vocational stream mismatch, reveal proportions of cohorts meeting or falling short of expectation and demonstrate by how much. Finally, ‘crashed’ ATARs well below expectation are revealed, which schools can reasonably work to minimise. The percentile shift method is neither value-add nor a growth percentile. In the absence of exit NAPLAN testing, this metric is unable to discriminate academic gain from legitimate ATAR-maximizing strategies. But by controlling for ICSEA, ATAR proportion variation and student mobility, it uncovers progression-to-ATAR metrics which are not currently publicly available. However achieved, ATAR maximisation is a sought-after private good. So long as standardised nationwide data is available, this analysis offers useful analytics for educators and reasonable predictivity when counselling subsequent cohorts about their ATAR prospects.
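
A minimal sketch of the percentile-shift calculation described above is given below, assuming each student's strongest NAPLAN domain score can be converted to a statewide percentile and subtracted from the final ATAR; the variable names and all values are placeholders rather than the schools' data, and the participation-rate correction is omitted.

```python
import numpy as np
from scipy import stats

# Placeholder statewide distribution of strongest-domain NAPLAN scores, and one small cohort.
statewide_scores = np.random.default_rng(0).normal(600, 70, 50_000)
cohort_naplan = np.array([640.0, 701.0, 588.0])
cohort_atar = np.array([82.0, 95.5, 71.3])      # ATAR is itself a percentile-like rank

# Step 1: convert each student's strongest NAPLAN domain score to a statewide percentile.
naplan_percentile = np.array(
    [stats.percentileofscore(statewide_scores, s) for s in cohort_naplan])

# Step 2: "percentile shift" = final ATAR minus the prior NAPLAN percentile
# (before any correction for ATAR participation rates at each NAPLAN level).
gain = cohort_atar - naplan_percentile
print(np.round(gain, 1))
```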

Keywords: NAPLAN, ATAR, analytics, measurement, gain, performance, data, percentile, value-added, high school, numeracy, reading comprehension, variability, regression to the mean

Procedia PDF Downloads 61
2959 A Neuroscience-Based Learning Technique: Framework and Application to STEM

Authors: Dante J. Dorantes-González, Aldrin Balsa-Yepes

Abstract:

Existing learning techniques such as problem-based learning, project-based learning, or case study learning focus mainly on technical details but give no specific guidelines on the learner's experience and on emotional learning aspects such as arousal, salience and valence, even though emotional states are important factors affecting engagement and retention. Some approaches involving emotion in educational settings, such as social and emotional learning, lack neuroscientific rigor and the use of specific neurobiological mechanisms. On the other hand, neurobiology approaches lack educational applicability, and educational approaches mainly focus on cognitive aspects and disregard conditioning learning. First, the authors explain the reasons why it is hard to learn thoughtfully; then they use the method of neurobiological mapping to track the main limbic system functions, such as the reward circuit, and their relations with perception, memories, motivations, sympathetic and parasympathetic reactions, and sensations, as well as the brain cortex. The authors conclude by explaining the major finding: the mechanisms of nonconscious learning and the triggers that guarantee long-term memory potentiation. Afterward, the educational framework for practical application and the instructors’ guidelines are established. An implementation example in engineering education is given, namely, the study of tuned-mass dampers for the attenuation of earthquake oscillations in skyscrapers. This work represents an original learning technique based on nonconscious learning mechanisms to enhance long-term memories, complementing existing cognitive learning methods.

Keywords: emotion, emotion-enhanced memory, learning technique, STEM

Procedia PDF Downloads 83
2958 Anti-Phosphorylcholine T Cell Dependent Antibody

Authors: M. M. Rahman, A. Liu, A. Frostegard, J. Frostegard

Abstract:

The human immune system plays an essential role in cardiovascular disease (CVD) and atherosclerosis. Our earlier studies showed that major immunocompetent cells, including T cells, are activated by the phosphorylcholine epitope. Further, we have determined for the first time in a clinical cohort that antibodies against phosphorylcholine (anti-PC) are negatively and independently associated with the development of atherosclerosis and thus with a low risk of cardiovascular disease. It is still unknown whether activated T cells play a role in anti-PC production. Here we aim to clarify the role of T cells in anti-PC production. B cells alone, or together with CD3, CD4 or CD8 T cells, were cultured in polystyrene plates to examine anti-PC IgM production. In addition to the mixed B cell and CD3 T cell culture, B cells with CD3 T cells were also cultured in transwell co-culture plates. Further, B cells alone and mixed B cell and CD3 T cell cultures, with or without anti-HLA 2 antibody, were cultured for 6 days. Anti-PC IgM was detected by ELISA in independent experiments. More than 8-fold higher levels of anti-PC IgM were detected by ELISA in mixed B and CD3 T cell cultures in comparison to B cells alone. After co-culture of B and CD3 T cells in transwell plates, there were no increased antibody levels, indicating that B and T cells need to interact to augment anti-PC IgM production. Furthermore, anti-PC IgM was abolished by anti-HLA 2 blocking antibody in the mixed B and CD3 T cell culture. In addition, the lack of increased anti-PC IgM in mixed B and CD8 T cell cultures and the increased levels of anti-PC in mixed B and CD4 T cell cultures support the role of helper T cells in anti-PC IgM production. Atherosclerosis is a major cause of cardiovascular disease, and anti-PC IgM is a protective marker against atherosclerosis development. Understanding the mechanism involved in anti-PC IgM regulation could play an important role in strategies to raise anti-PC IgM. Studies suggest that anti-PC is a T-cell-independent antibody, but our study shows a major role of T cells in anti-PC IgM production. Activation of helper T cells by immunization could be a possible mechanism for raising anti-PC levels.

Keywords: anti-PC, atherosclerosis, cardiovascular diseases, phosphorylcholine

Procedia PDF Downloads 334
2957 Design and Optimisation of 2-Oxoglutarate Dioxygenase Expression in Escherichia coli Strains for Production of Bioethylene from Crude Glycerol

Authors: Idan Chiyanzu, Maruping Mangena

Abstract:

Crude glycerol, a major by-product of the transesterification of triacylglycerides with alcohol to biodiesel, is known to have a broad range of applications. For example, its bioconversion can afford a wide range of chemicals, including alcohols, organic acids, hydrogen, solvents and intermediate compounds. In bacteria, the 2-oxoglutarate dioxygenase (2-OGD) enzymes are widely found among Pseudomonas syringae species and have been recognized as having emerging importance in ethylene formation. However, the use of optimized enzyme function in recombinant systems for the conversion of crude glycerol to ethylene has still not been reported. The present study investigated the production of ethylene from crude glycerol using engineered E. coli MG1655 and JM109 strains. Ethylene production with an optimized expression system for 2-OGD in E. coli, using a codon-optimized construct of the ethylene-forming gene, was studied. The codon optimization resulted in a 20-fold increase in protein production and thus an enhanced production of ethylene gas. For reliable bioreactor performance, the effects of temperature, fermentation time, pH, substrate concentration, methanol concentration, potassium hydroxide concentration and media supplements on ethylene yield were investigated. The results demonstrate that the recombinant enzyme can be used in future studies to exploit the conversion of low-priced crude glycerol into higher-value products such as light olefins, and that tools including DNA recombineering techniques, molecular biology, and bioengineering can be used to allow the production of ethylene directly from the fermentation of crude glycerol. It can be concluded that recombinant E. coli production systems represent a significantly more secure, renewable and environmentally safe alternative to the thermochemical approach to ethylene production.

Keywords: crude glycerol, bioethylene, recombinant E. coli, optimization

Procedia PDF Downloads 276
2956 Biorisk Management Education for Undergraduates Studying Clinical Microbiology at University in Japan

Authors: Shuji Fujimoto, Fumiko Kojima, Mika Shigematsu

Abstract:

Biorisk management (biosafety/biosecurity) is required of anyone working in a clinical laboratory (including medical/clinical research laboratories) where infectious agents and potentially hazardous biological materials are examined or stored. Proper education and training based on international standards of biorisk management should be provided not only as part of a laboratory safety program in the workplace but also as part of introductory training at educational institutions, to ensure continuity and to raise the overall baseline of biorisk management. We previously reported the results of a pilot study of biorisk management education for graduate students majoring in laboratory diagnostics. However, postgraduate education comes late in their professional training, and interviews with participants also revealed the importance of, and demand for, earlier biorisk management education for undergraduates. The aim of this study is to identify the need for a biosafety/biosecurity education and training program designed for undergraduate students entering the clinical microbiology profession. We modified the previous program to include more basic topics and explanations (risk management, principles of safe clinical laboratory practice, personal protective equipment, disinfection, and disposal of biological substances) and incorporated it into the routine educational system of the Faculty of Medical Sciences at Kyushu University. The results of the pre- and post-examinations showed that the students' knowledge of biorisk control developed effectively, demonstrating the effectiveness of the program even for undergraduate students. Our study indicates that administering a basic biorisk management program at an earlier stage of learning has a positive impact on the understanding of biosafety in health professional education.
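
Pre/post examination gains of the kind reported above are commonly checked with a paired test. The sketch below uses invented scores purely to illustrate the comparison; it is not the study's data or analysis code.

```python
# Illustrative sketch of comparing pre- and post-test scores with a paired t-test.
# The scores below are hypothetical, not the study's examination results.
from scipy import stats

pre_scores = [55, 60, 48, 70, 62, 58, 65, 52]    # hypothetical pre-test scores (%)
post_scores = [78, 82, 70, 88, 80, 76, 85, 74]   # hypothetical post-test scores (%)

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
mean_gain = sum(p - q for p, q in zip(post_scores, pre_scores)) / len(pre_scores)

print(f"mean gain = {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.4f}")
```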

Keywords: biorisk management, biosafety, biosecurity, clinical microbiology, education for undergraduates

Procedia PDF Downloads 206
2955 Solvent-Aided Dispersion of Tannic Acid to Enhance Flame Retardancy of Epoxy

Authors: Matthew Korey, Jeffrey Youngblood, John Howarter

Abstract:

Background and Significance: Tannic acid (TA) is a bio-based, high-molecular-weight aromatic organic molecule that has been found to increase the thermal stability and flame retardancy of many polymer matrices when used as an additive. Although it is biologically sourced, TA is a pollutant in industrial wastewater streams, and there is a desire to find applications in which to downcycle this molecule after extraction from these streams. Additionally, epoxy thermosets have revolutionized many industries but are too flammable to be used in many applications without additives that augment their flame retardancy (FR). Many flame retardants used in epoxy thermosets are synthesized from petroleum-based monomers, leading to significant environmental impacts at the industrial scale. Many of these compounds also have significant impacts on human health. Various bio-based modifiers have been developed to improve the FR of epoxy resin; however, increasing the FR of the system without trade-offs in other properties has proven challenging, especially for TA. Methodologies: In this work, TA was incorporated into the thermoset by solvent exchange using methyl ethyl ketone, a co-solvent for TA and the epoxy resin. Samples were then characterized optically (UV-vis spectroscopy and optical microscopy), thermally (thermogravimetric analysis and differential scanning calorimetry), and for their flame retardancy (mass loss calorimetry). Major Findings: Compared to control samples, all samples were found to have increased thermal stability. Further, adding tannic acid to the polymer matrix by means of the solvent greatly increased the compatibility of the additive in epoxy thermosets. By using solvent exchange, the highest TA loading level reported in the literature (40 wt%) was achieved in this work. Conclusions: The use of solvent exchange shows promise for circumventing the limitations of TA in epoxy.
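
For readers unfamiliar with how flame retardancy is quantified from mass loss calorimetry, the sketch below shows how a peak heat release rate and total heat released could be extracted from a heat release rate curve; the time and HRR values are invented and do not represent this study's measurements.

```python
# Hypothetical sketch: extracting peak heat release rate (pHRR) and total heat
# released (THR) from a mass loss calorimetry curve. Values are illustrative only.

time_s = [0, 30, 60, 90, 120, 150, 180]        # time, seconds
hrr_kw_m2 = [0, 120, 410, 650, 380, 150, 40]   # heat release rate, kW/m^2

peak_hrr = max(hrr_kw_m2)

# Trapezoidal integration of HRR over time gives total heat released (kJ/m^2).
thr_kj_m2 = sum(
    0.5 * (hrr_kw_m2[i] + hrr_kw_m2[i + 1]) * (time_s[i + 1] - time_s[i])
    for i in range(len(time_s) - 1)
)

print(f"pHRR = {peak_hrr} kW/m^2, THR = {thr_kj_m2 / 1000:.1f} MJ/m^2")
```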

Keywords: sustainable, flame retardant, epoxy, tannic acid

Procedia PDF Downloads 121
2954 Linkages of Environment with the Health Condition of Poor Women and Children in the Urban Areas of India

Authors: Barsharani Maharana

Abstract:

India is the country that shelters the largest number of poor. One of the major areas of concern in India is the unsatisfactory situation of the poor in social development and health parameters, not only in rural areas, which are partly devoid of facilities, but also in urban areas, where the facilities are insufficient to provide services of satisfactory quality. Objectives: 1) to examine the association between environmental conditions and health conditions among poor women in urban areas; 2) to assess the significance of the effect of the environment on child health among poor children; 3) to present the situation of the poor in highly urbanized and less urbanized states with respect to health and environment. Data: Data from the National Family Health Survey-3 and the Census are used to fulfill these objectives. Methodology: In this study, the standard of living of people in urban areas is computed from selected household characteristics and assets, and people with a low standard of living are considered poor. Bivariate and multivariate analyses are employed to examine the effect of the environment on poor women and children. A geographical information system is used to present the health and environmental conditions of the poor in highly and less urbanized states. Results: The findings reveal that poor women without access to an improved water source and sanitation facilities face more health problems. The prevalence of diarrhea and fever is high among children living in an unclean environment without access to an improved source of drinking water. Likewise, the health condition of the poor in highly urbanized states is dreadful. Policy implications: The government should emphasize the implementation of programs to improve infrastructural facilities and health care for the urban poor.
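
A bivariate check of the kind described above can be sketched as follows; the household records and variable names are assumptions for illustration, not NFHS-3 fields or results.

```python
# Illustrative sketch of a bivariate comparison: child diarrhea prevalence by
# access to an improved drinking water source, using hypothetical records.
import pandas as pd

df = pd.DataFrame({
    "improved_water": [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
    "child_diarrhea": [0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0],
})

# Prevalence of diarrhea in households with (1) and without (0) improved water.
prevalence = df.groupby("improved_water")["child_diarrhea"].mean()
print(prevalence)

# A multivariate extension (e.g. logistic regression adding sanitation,
# mother's education, and state of residence) would follow the same pattern.
```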

Keywords: environment, urban poor, health, sanitation

Procedia PDF Downloads 271
2953 Acceptance of Health Information Application in Smart National Identity Card (SNIC) Using a New I-P Framework

Authors: Ismail Bile Hassan, Masrah Azrifah Azmi Murad

Abstract:

This study proposes a novel framework of individual-level technology adoption, known as I-P (Individual-Privacy), for the Smart National Identity Card health information application. Many countries have introduced a smart national identity card (SNIC) with various applications, such as a health information application, embedded inside it. However, the degree to which citizens accept and use some of the embedded applications in the smart national identity card remains unknown to many governments and application providers. Moreover, previous studies revealed that the factors of trust, perceived risk, privacy concern, and perceived credibility need to be incorporated into more comprehensive models such as the extended Unified Theory of Acceptance and Use of Technology, known as UTAUT2. UTAUT2 is currently one of the most widespread and leading theories in the information systems literature. This research identifies factors affecting citizens' behavioural intention to use the health information application embedded in the SNIC and provides a better understanding of the relevant factors that governments and application providers would need to consider in predicting citizens' acceptance of new technology in the future. We propose a conceptual framework that combines UTAUT2 and Privacy Calculus Model constructs and adds perceived credibility as a new variable. The proposed framework may assist government planners, decision makers, and policy makers involved in e-government projects. An empirical study may be conducted in the future to empirically validate this I-P framework.
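
When such a framework is empirically validated, each construct (e.g. performance expectancy, privacy concern, perceived credibility) is usually measured with several survey items whose internal consistency is checked first. The sketch below computes Cronbach's alpha on invented Likert responses; the items and values are assumptions for illustration only, not survey data from this study.

```python
# Hypothetical sketch: Cronbach's alpha for one construct measured by three
# Likert items (rows = respondents, columns = items, scale 1-5).
import numpy as np

items = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
    [3, 2, 3],
])

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)
total_variance = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(f"Cronbach's alpha = {alpha:.2f}")  # values above ~0.7 are usually considered acceptable
```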

Keywords: unified theory of acceptance and use of technology (UTAUT) model, UTAUT2 model, smart national identity card (SNIC), health information application, privacy calculus model (PCM)

Procedia PDF Downloads 456
2952 Tokyo Skyscrapers: Technologically Advanced Structures in Seismic Areas

Authors: J. Szolomicki, H. Golasz-Szolomicka

Abstract:

The architectural and structural analysis of selected high-rise buildings in Tokyo is presented in this paper. The capital of Japan is the most densely populated city in the world and, moreover, is located in one of the most active seismic zones. The combination of these factors has resulted in sophisticated designs and innovative engineering solutions, especially in the field of design and construction of high-rise buildings. Foreign architectural studios (such as Jean Nouvel, Kohn Pedersen Fox Associates, and Skidmore, Owings & Merrill), which specialize in the design of skyscrapers, played a major role in the development of technological ideas and architectural forms for such extraordinary engineering structures. Among the projects completed by them, there are examples of high-rise buildings that set precedents for future development. An essential aspect influencing the design of high-rise buildings is the need to take into consideration their dynamic response to earthquakes and to wind vortices. The need to control the motions of these buildings, induced by earthquake and wind forces, led to the development of various methods and devices for dissipating the energy that occurs during such phenomena. Currently, Japan is a global leader in seismic technologies that mitigate seismic effects on high-rise structures. Thanks to these achievements, the most modern skyscrapers in Tokyo are able to withstand earthquakes with a magnitude of over seven on the Richter scale. The damping devices applied are either passive, which do not require an additional power supply, or active, which suppress the response with the input of extra energy. In recent years, hybrid dampers have also been used, with an additional active element to improve the efficiency of passive damping.
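
To illustrate why added damping matters for motion control, the following minimal sketch simulates a single-degree-of-freedom structure under harmonic ground acceleration and compares peak displacement for three damping ratios. All parameter values are assumptions chosen for illustration; this is not a model of any specific Tokyo building or damping device.

```python
# Minimal sketch: SDOF structure under resonant harmonic ground acceleration.
# Increasing the damping ratio (as supplemental dampers do) reduces peak drift.
import numpy as np
from scipy.integrate import solve_ivp

omega_n = 2 * np.pi * 0.2  # natural frequency, 0.2 Hz (tall, flexible building)

def ground_acc(t):
    """Illustrative harmonic ground acceleration at the resonant frequency, m/s^2."""
    return 0.1 * np.sin(2 * np.pi * 0.2 * t)

def peak_displacement(zeta):
    """Peak relative displacement for a given damping ratio zeta."""
    def rhs(t, y):
        x, v = y
        return [v, -2 * zeta * omega_n * v - omega_n**2 * x - ground_acc(t)]
    t = np.linspace(0, 120, 2000)
    sol = solve_ivp(rhs, (0, 120), [0.0, 0.0], t_eval=t)
    return np.max(np.abs(sol.y[0]))

for zeta in (0.02, 0.05, 0.20):  # bare structure vs. increasing added damping
    print(f"damping ratio {zeta:.2f}: peak displacement {peak_displacement(zeta):.2f} m")
```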

Keywords: core structures, damping system, high-rise building, seismic zone

Procedia PDF Downloads 165