Search results for: database
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1581

321 Testing the Life Cycle Theory on the Capital Structure Dynamics of Trade-Off and Pecking Order Theories: A Case of Retail, Industrial and Mining Sectors

Authors: Freddy Munzhelele

Abstract:

Setting: Empirical research has shown that the life cycle theory affects firms’ financing decisions, particularly their dividend pay-outs. The life cycle theory posits that as a firm matures, it reaches a level and capacity at which it distributes more cash as dividends, whereas young firms prioritise investment opportunity sets and their financing and thus pay little or no dividends. Research on firms’ financing decisions has also demonstrated, among other things, the relevance of the trade-off and pecking order theories to the dynamics of firms’ capital structure. The trade-off theory holds that firms seek a favourable debt structure by weighing the costs and benefits of debt; the pecking order theory holds that firms follow a hierarchical order when choosing financing sources. The possibility that the life cycle hypothesis explains financial managers’ decisions regarding capital structure dynamics is an interesting link, yet it has been neglected in corporate finance research. Exploring this link empirically would enhance financial decision-making alternatives immensely, since no conclusive evidence has yet been found on the dynamics of capital structure. Aim: The aim of this study is to examine the impact of the life cycle theory on the capital structure dynamics, in terms of the trade-off and pecking order theories, of firms listed in the retail, industrial and mining sectors of the JSE. These sectors are among the key contributors to GDP in the South African economy. Design and methodology: Following the postpositivist research paradigm, the study is quantitative in nature and utilises secondary data from the financial statements of sampled firms for the period 2010 to 2022. The firms’ financial statements will be extracted from the IRESS database.
Since the data will be in panel form, a combination of static and dynamic panel data estimators will be used to analyse the data. The overall data analysis will be done using the STATA program. Value add: this study directly investigates the link between the life cycle theory and the dynamics of capital structure decisions, particularly the trade-off and pecking order theories.
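Since the abstract names static and dynamic panel estimators without detail, a minimal sketch of the static within (fixed-effects) estimator may help. The firm data, variable names, and single-regressor setup below are all hypothetical; the actual study would run such models in STATA on IRESS data.

```python
# Minimal within (fixed-effects) panel estimator: demean each variable
# within each firm, then fit the slope by pooled OLS on the demeaned data.
def within_estimator(panel):
    """panel maps firm_id -> list of (x, y) firm-year observations."""
    xs, ys = [], []
    for obs in panel.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        xs.extend(x - mx for x, _ in obs)
        ys.extend(y - my for _, y in obs)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Hypothetical firm-year data: x = profitability, y = leverage
panel = {
    "firm_a": [(0.10, 0.40), (0.12, 0.38), (0.15, 0.35)],
    "firm_b": [(0.05, 0.60), (0.06, 0.58), (0.08, 0.55)],
}
slope = within_estimator(panel)  # negative: a pecking-order-style pattern
```

Demeaning within each firm removes time-invariant firm effects, which is why the within estimator is a standard static baseline before moving to dynamic (lagged-dependent-variable) panel estimators.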

Keywords: life cycle theory, trade-off theory, pecking order theory, capital structure, JSE listed firms

Procedia PDF Downloads 39
320 A Review: An Empirical Review of Critical Factors on the Causes of Delay in Road Construction Projects in the GCC Countries

Authors: Sulaiman Al-Hinai, Setyawan Widyarto

Abstract:

The aim of this study is to identify the critically dominant factors in the delay of road construction in the GCC countries and their effects on project delivery in Arab countries. To achieve these objectives, the study drew on the empirical literature from all relevant online sources and databases available. The findings summarize and shortlist the factors, in two categories, internal and external, that influence delay in road construction in the Arab region. In the internal category, 63 factors were shortlisted from seven groups of factors found to affect the delay of road construction: consultant-related, contractor-related, design-related, client-related, labour-related, material-related and equipment-related factors. The external factors summarized include natural disasters (floods, hurricanes, cyclones, etc.), conflict and war, the global financial crisis, delayed compensation to affected property owners, price fluctuation, unexpected ground conditions (soil and high water level), changes in government regulations and laws, delays in obtaining permission from municipalities, time lost to traffic control and restrictions at the job site, problems with inhabitants of the community, delays in service provision from utilities (water and electricity), and accidents during construction. The study also concludes that these factors delay road construction by increasing and overrunning costs, consuming overtime, creating disputes, leading to lawsuits and, ultimately, to the abandonment of projects.
Accordingly, the present study recommends overcoming these problems through more detailed site investigations, careful monitoring and regular meetings, effective site management, collaborative working and effective coordination, proper and comprehensive planning and scheduling, and full and intensive commitment from all parties.

Keywords: Arab GCC countries, critical success factors, road constructions delay, project management

Procedia PDF Downloads 97
319 Quantification of the Erosion Effect on Small Caliber Guns: Experimental and Numerical Analysis

Authors: Dhouibi Mohamed, Stirbu Bogdan, Chabotier André, Pirlot Marc

Abstract:

Effects of erosion and wear on the performance of small caliber guns have been analyzed through numerical and experimental studies. Mainly, qualitative observations were performed; correlations between the volume change of the chamber and the maximum pressure remain limited. This paper focuses on the development of a numerical model to predict the evolution of the maximum pressure as the interior shape of the chamber changes over the weapon’s life phases. To fulfill this goal, an experimental campaign, followed by a numerical simulation study, is carried out. Two test barrels, 5.56x45mm NATO and 7.62x51mm NATO, are considered. First, a Coordinate Measuring Machine (CMM) with a contact scanning probe is used to measure the interior profile of the barrels after each 300-shot cycle until they are worn out. Simultaneously, the EPVAT (Electronic Pressure Velocity and Action Time) method with a WEIBEL radar is used to measure (i) the chamber pressure, (ii) the action time, and (iii) the bullet velocity in each barrel. Second, a numerical simulation study is carried out: a coupled interior ballistic model is developed using the dynamic finite element program LS-DYNA. In this work, two different models are elaborated: (i) a coupled Eulerian-Lagrangian model using fluid-structure interaction (FSI) techniques, and (ii) a coupled thermo-mechanical finite element model using a lumped parameter model (LPM) as a subroutine. These numerical models are validated against three experimental results: (i) the muzzle velocity, (ii) the chamber pressure, and (iii) the surface morphology of fired projectiles. Results show good agreement between experiments and numerical simulations. Next, a comparison between the two models is conducted; the projectile motions, the dynamic engraving resistances and the maximum pressures are compared and analyzed.
Finally, using this obtained database, a statistical correlation between the muzzle velocity, the maximum pressure and the chamber volume is established.
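As a rough illustration of the kind of statistical correlation described above, a Pearson coefficient between chamber volume and muzzle velocity can be computed as follows. The wear data here are invented for the example and are not the paper's measurements.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical wear series: chamber volume (cm^3) vs. muzzle velocity (m/s)
volume = [1.70, 1.72, 1.75, 1.79, 1.84]
velocity = [920, 915, 905, 890, 870]
r = pearson_r(volume, velocity)  # strongly negative: velocity drops as wear grows
```

A correlation this strong would justify fitting a regression of muzzle velocity on chamber volume, which is the practical use of such a database.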

Keywords: engraving process, finite element analysis, gun barrel erosion, interior ballistics, statistical correlation

Procedia PDF Downloads 176
318 Resilient Design Solutions for Megathermal Climates of the Global South

Authors: Bobuchi Ken-Opurum

Abstract:

The impacts of climate change on urban settlements are growing. In the global south, communities are even more vulnerable to climate change disasters such as flooding and high temperatures, primarily due to high-intensity rainfall, low-lying coasts, inadequate infrastructure, and limited resources. According to the Emergency Events Database, floods were the leading cause of disaster-based deaths in the global south between 2006 and 2015. This figure includes deaths from heat-stress-related health outcomes. Adapting to climate vulnerabilities is paramount in reducing the significant redevelopment costs of climate disasters. Governments and urban planners provide top-down approaches such as evacuation and disaster and emergency communication. While these address infrastructure and public services, they are not always able to address the immediate, critical day-to-day needs of poor and vulnerable populations. There is growing evidence that some bottom-up strategies and grassroots initiatives of self-build housing, such as in urban informal settlements, succeed in coping with and adapting to hydroclimatic impacts. However, these research findings are not consolidated, and evaluation of the resilience outcomes of the bottom-up strategies is limited. Using self-build housing as a model for sustainable and resilient urban planning, this research aimed to consolidate flood and heat stress resilient design solutions, analyze the effectiveness of these solutions, and develop guidelines and methods for adopting them into mainstream housing in megathermal climates. The methodological approach comprised analyses of over 40 ethnographic peer-reviewed articles, white papers, and reports published between 2000 and 2019 to identify coping strategies and grassroots initiatives applied by occupants and communities of the global south.
The results of the research provide a consolidated source and prioritized list of the best bottom-up strategies for communities in megathermal climates to improve the lives of people in some of the most vulnerable places in the world.

Keywords: resilient, design, megathermal, climate change

Procedia PDF Downloads 95
317 An Automatic Large Classroom Attendance Conceptual Model Using Face Counting

Authors: Sirajdin Olagoke Adeshina, Haidi Ibrahim, Akeem Salawu

Abstract:

Large lecture theatres cannot be covered by a single camera because of their size, shape, and seating arrangements, although an ordinary classroom can be captured with one camera; a multicamera setup is therefore required. Accordingly, the design and implementation of a multicamera setup for a large lecture hall were considered. Researchers have emphasized the impact of class attendance on the academic performance of students. However, the traditional method of taking attendance is below standard, especially for large lecture theatres, because of the student population, the time required, the sophistication and exhaustiveness involved, and the possibility of manipulation. An automated large classroom attendance system is, therefore, imperative. The common approach in such systems is face detection and recognition, where known student faces are captured and stored for recognition purposes. This approach requires constant face database updates due to constant changes in facial features. Alternatively, face counting can be performed by cropping the localized faces in the video or image into a folder and then counting them. This research aims to develop a face-localization-based approach to detect student faces in classroom images captured with a multicamera setup. A selected Haar-like-feature cascade face detector, trained with an asymmetric goal to minimize the False Rejection Rate (FRR) relative to the False Acceptance Rate (FAR), was applied on a Raspberry Pi 4B. A relationship between the two factors (FRR and FAR) was established using a constant (λ) as a trade-off between them for automatic adjustment during training. An evaluation of the proposed approach and the conventional AdaBoost on classroom datasets shows an improvement of 8% in TPR (an output of low FRR) and a 7% reduction in FRR. The learning speed of the proposed approach was also improved, with an execution time of 1.19 s per image compared to 2.38 s for the improved AdaBoost.
Consequently, the proposed approach achieved 97% TPR with an overhead constraint time of 22.9 s, compared to 46.7 s for the improved AdaBoost, when evaluated on images obtained from a large lecture hall (DK5) at USM.
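The λ trade-off between FRR and FAR described above can be illustrated as choosing the detector operating point that minimizes an asymmetric cost FRR + λ·FAR. The ROC points below are hypothetical, not the study's measurements.

```python
def best_operating_point(points, lam):
    """points: list of (threshold, FRR, FAR) tuples from an ROC sweep.
    Returns the threshold minimizing the asymmetric cost FRR + lam * FAR,
    so a larger lam tolerates more false rejections to suppress false
    acceptances, and a smaller lam does the opposite."""
    return min(points, key=lambda p: p[1] + lam * p[2])[0]

# Hypothetical ROC sweep for a Haar-like cascade face detector
roc = [
    (0.2, 0.02, 0.30),  # lenient: few missed faces, many false detections
    (0.5, 0.05, 0.10),
    (0.8, 0.15, 0.02),  # strict: misses faces, almost no false detections
]
frr_weighted = best_operating_point(roc, lam=0.5)  # favours low FRR
far_weighted = best_operating_point(roc, lam=5.0)  # favours low FAR
```

In the paper's setting, λ is adjusted automatically during training rather than swept after the fact, but the cost structure is the same.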

Keywords: automatic attendance, face detection, haar-like cascade, manual attendance

Procedia PDF Downloads 48
316 Companies and Transplant Tourists to China

Authors: Pavel Porubiak, Lukas Kudlacek

Abstract:

Introduction: Transplant tourism is a controversial method of obtaining an organ, all the more so in a country such as China, where evidence points to the possibility of organs being harvested illegally. This research aimed to list the countries these tourists come from and the medical companies that sell transplant-related products there, using China as an example. Materials and methods: The methodology of a scoping study was used for both parts of the research. The countries from which transplant tourists come to China were identified by searching existing medical studies in the NCBI PubMed database under the keyword ‘transplantation in China’. The search was not limited by any other criteria, but only studies available for free, directly on PubMed or via a linked source, were used. Other research studies on this topic were considered as well. The companies were identified through multiple methods. The first was an online search focused on medical companies and their products. The Bloomberg service, used by stockbrokers worldwide, was then used to identify the revenue of these companies in individual countries, where data were available, as well as their business presence in China. A search through U.S. Securities and Exchange Commission filings was done in the same way. A search of the Chinese internet was also conducted, and to obtain more results, a second online search was carried out. Results and discussion: The extensive search identified 14 countries with transplant tourists to China; a search for similar studies or reports yielded an additional six countries. The companies identified by our research amounted to 20.
Eight of them supply China with organ preservation products (one of which is only now trying to enter the Chinese market), six with immunosuppressive drugs, four with transplant diagnostics, one with medical robots that Chinese doctors also use for transplantation, and one trying to enter the Chinese market with a consumable-type product related to transplantation. Conclusion: The question of the ethicality of transplant tourism is pressing, since, as the research shows, the number of participating countries sourcing transplant tourists to a single destination amounts to 20. The identified companies face risks due to the nature of the transplantation business in China, as officially executed prisoners are used as organ sources, and widely cited evidence points to illegal organ harvesting. Similar risks and ethical questions are also relevant to the countries sourcing transplant tourists to China.

Keywords: China, illegal organ harvesting, transplant tourism, organ harvesting technology

Procedia PDF Downloads 107
315 The Role and Effects of Communication on Occupational Safety: A Review

Authors: Pieter A. Cornelissen, Joris J. Van Hoof

Abstract:

Interest in improving occupational safety started almost simultaneously with the beginning of the Industrial Revolution. Yet it was not until the late 1970s that the role of communication was considered in scientific research on occupational safety. In recent years, the importance of communication as a means to improve occupational safety has increased, not only because communication might have a direct effect on safety performance and safety outcomes, but also because it can be viewed as a major component of other important safety-related elements (e.g., training, safety meetings, leadership). And while safety communication is an increasingly important research topic, its operationalization is often vague and differs among studies. This is problematic not only when comparing results but also when applying those results in practice on the work floor. By means of an in-depth analysis building on an existing dataset, this review aims to overcome these problems. The initial database search yielded 25,527 articles, which was reduced to a research corpus of 176 articles. Focusing on the 37 articles of this corpus that addressed communication (related to safety outcomes and safety performance), the current study provides a comprehensive overview of the role and effects of safety communication and outlines the conditions under which communication contributes to a safer work environment. The study shows that the literature commonly distinguishes between safety communication (i.e., the exchange or dissemination of safety-related information) and feedback (i.e., a reactive form of communication). And although there is a consensus among researchers that both communication and feedback positively affect safety performance, there is debate about the directness of this relationship: whereas some researchers assume a direct relationship between safety communication and safety performance, others state that this relationship is mediated by safety climate.
One of the key findings is that despite the strongly present view that safety communication is a formal and top-down safety management tool, researchers stress the importance of open communication that encourages and allows employees to express their worries, experiences, views, and share information. This raises questions with regard to other directions (e.g., bottom-up, horizontal) and forms of communication (e.g., informal). The current review proposes a framework to overcome the often vague and different operationalizations of safety communication. The proposed framework can be used to characterize safety communication in terms of stakeholders, direction, and characteristics of communication (e.g., medium usage).

Keywords: communication, feedback, occupational safety, review

Procedia PDF Downloads 267
314 Uncertainty Evaluation of Erosion Volume Measurement Using Coordinate Measuring Machine

Authors: Mohamed Dhouibi, Bogdan Stirbu, Chabotier André, Marc Pirlot

Abstract:

Internal barrel wear is a major factor affecting the performance of small caliber guns in their different life phases. Wear analysis is, therefore, a very important process for understanding how wear occurs, where it takes place, and how it spreads, with the aim of improving the accuracy and effectiveness of small caliber weapons. This paper discusses the measurement and analysis of combustion chamber wear for small-caliber guns using a Coordinate Measuring Machine (CMM). Two different NATO small caliber guns are considered: 5.56x45mm and 7.62x51mm. A Zeiss Micura CMM equipped with the VAST XTR gold high-end sensor is used to measure the inner profile of the two guns every 300-shot cycle. The CMM parameters, such as (i) the measuring force, (ii) the number of measured points, (iii) the masking time, and (iv) the scanning velocity, are investigated, and a statistical analysis is adopted to select the combination of CMM parameters that ensures minimum measurement error. Next, two measurement strategies are developed to capture the shape and the volume of each gun chamber, and a task-specific measurement uncertainty (TSMU) analysis is carried out for each measurement plan. Different approaches to TSMU evaluation have been proposed in the literature; this paper discusses two. The first is the substitution method described in ISO 15530 part 3, based on the use of calibrated workpieces with a shape and size similar to the measured part. The second is the Monte Carlo simulation method presented in ISO 15530 part 4, in which uncertainty evaluation software (UES), also known as the Virtual Coordinate Measuring Machine (VCMM), performs a point-by-point simulation of the measurements. To conclude, a comparison between the two approaches is performed.
Finally, the results of the measurements are verified through calibrated gauges of several dimensions specially designed for the two barrels. On this basis, an experimental database is developed for further analysis aiming to quantify the relationship between the volume of wear and the muzzle velocity of small caliber guns.
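The ISO 15530 part 4 approach sketched above amounts to simulating many virtual measurements with perturbed error sources and reading the uncertainty off the resulting distribution. A toy version, with invented error magnitudes rather than the VCMM's calibrated error model, might look like:

```python
import random
import statistics

def simulate_volume_measurement(true_volume, probe_sigma, temp_sigma, n=20000):
    """Monte Carlo in the spirit of ISO 15530-4: perturb each simulated
    measurement with probing noise and a thermal-expansion error, then
    report the mean, the standard uncertainty, and the k=2 expanded
    uncertainty. Error magnitudes here are illustrative only."""
    rng = random.Random(42)
    samples = []
    for _ in range(n):
        probing_error = rng.gauss(0.0, probe_sigma)
        thermal_error = true_volume * rng.gauss(0.0, temp_sigma)
        samples.append(true_volume + probing_error + thermal_error)
    u = statistics.stdev(samples)  # standard uncertainty
    return statistics.mean(samples), u, 2 * u

mean, u, U = simulate_volume_measurement(true_volume=2.50,   # cm^3, hypothetical
                                         probe_sigma=0.002,
                                         temp_sigma=0.0005)
```

A real VCMM run would simulate the full point cloud and the volume-evaluation algorithm point by point; the principle of sampling the error model repeatedly is the same.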

Keywords: coordinate measuring machine, measurement uncertainty, erosion and wear volume, small caliber guns

Procedia PDF Downloads 120
313 Trends in Endoscopic Versus Open Treatment of Carpal Tunnel Syndrome in Rheumatoid Arthritis Patients

Authors: Arman Kishan, Sanjay Kubsad, Steve Li, Mark Haft, Duc Nguyen, Dawn Laporte

Abstract:

Objective: Carpal tunnel syndrome (CTS) can be managed surgically with endoscopic or open carpal tunnel release (ECTR or OCTR). Rheumatoid arthritis (RA) is a known risk factor for CTS and is believed to act through compression of the median nerve secondary to inflammation. We aimed to analyze national trends, outcomes, and patient-specific comorbidities associated with ECTR and OCTR in patients with RA. Methods: A retrospective cohort study was conducted using the PearlDiver database, identifying 683 RA patients undergoing ECTR and 4234 undergoing OCTR between 2010 and 2014. Demographic data, comorbidities, and complication rates were analyzed; univariate and multivariable analyses assessed differences between the treatment methods. Results: Patients with RA undergoing ECTR, in comparison to OCTR, had no significant differences in medical comorbidities such as hypertension, obesity, chronic kidney disease, hypothyroidism, and diabetes mellitus. Patients in the ECTR group had a risk ratio of 1.44 (95% CI: 1.10-1.89, p=0.01) of requiring repeat procedures within 90 days of the initial procedure. Five-year trends showed compound annual growth rates of 5.6% for ECTR and 13.1% for OCTR procedures. Conclusion: Endoscopic and open approaches to CTR are important considerations in surgical planning. RA and ECTR have previously been identified as independent risk factors for revision CTR. Our study identified an elevated 90-day risk of repeat procedures in the ECTR group in comparison to the OCTR group. Additionally, the growth of OCTR procedures has outpaced that of ECTR procedures over the same period, likely in response to the trend of ECTR leading to higher rates of repeat procedures. The need for revision following ECTR in patients with RA could be related to chronic inflammation leading to transverse carpal ligament thickening and concomitant tenosynovitis.
Future directions could include further characterization of repeat procedures performed in this subset of patients. 
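For readers unfamiliar with the statistic, a risk ratio and its 95% confidence interval follow the standard log-scale formula. The event counts below are hypothetical values chosen only to reproduce figures of the same magnitude as those reported above (RR 1.44, 95% CI 1.10-1.89); they are not the PearlDiver data.

```python
import math

def risk_ratio_ci(a, n1, c, n2, z=1.96):
    """Risk ratio of an event in the exposed group (a events out of n1)
    vs. the unexposed group (c events out of n2), with a 95% CI from
    the log-normal approximation of the risk ratio."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts of repeat procedures within 90 days
rr, lo, hi = risk_ratio_ci(a=60, n1=683, c=258, n2=4234)
```

With these invented counts the formula yields roughly RR 1.44 with a CI near 1.10-1.89, illustrating how such figures are derived from a two-by-two table.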

Keywords: endoscopic treatment of carpal tunnel syndrome, open treatment of carpal tunnel syndrome, rheumatoid arthritis, trends analysis, carpal tunnel syndrome

Procedia PDF Downloads 33
312 Transformation of the ectA Gene from Halomonas elongata into Tomato Plant

Authors: Narayan Moger, Divya B., Preethi Jambagi, Krishnaveni C. K., Apsana M. R., B. R. Patil, Basvaraj Bagewadi

Abstract:

Salinity is one of the major threats to world food security. Considering the requirement for salt-tolerant crop plants, the present study was undertaken to clone the salt-tolerance gene ectA from a marine organism and transfer it into an agricultural crop to impart salinity tolerance. Ectoine is a compatible solute that accumulates in the cell and is known to be involved in the salt tolerance of most halophiles. The present situation calls for the development of salt-tolerant transgenic lines to combat abiotic stress. Against this background, the investigation was conducted to develop transgenic tomato lines by cloning and transferring the ectA gene, whose product catalyses the production of acetyl-diaminobutyric acid, a precursor of ectoine; the gene is thereby involved in maintaining the osmotic balance of plants. The PCR-amplified ectA gene (579 bp) was cloned into the T/A cloning vector (pTZ57R/T). The construct pDBJ26 containing the ectA gene was sequenced using gene-specific forward and reverse primers, and the sequence was analyzed using the BLAST algorithm to check the similarity of the ectA gene with other isolates. The highest homology, 99.66 per cent, was found with ectA gene sequences of Halomonas elongata isolates among the sequence information available in the NCBI database. The ectA gene was further subcloned into the pRI101-AN plant expression vector and transferred into E. coli DH5α for maintenance. pDNM27 was then mobilized into A. tumefaciens LBA4404 through a tri-parental mating system, and the recombinant Agrobacterium containing pDNM27 was transferred into tomato plants through the in planta transformation method. Of the 300 seedlings co-cultivated, only twenty-seven plants became well established under greenhouse conditions, and among these twenty-seven transformants, only twelve plants showed amplification with gene-specific primers.
Further work must be extended to evaluate the transformants at T1 and T2 generations for ectoine accumulation, salinity tolerance, plant growth and development and yield.

Keywords: salinity, compatible solutes, ectA, transgenic, in planta transformation

Procedia PDF Downloads 56
311 Evaluation of Remanufacturing for Lithium Ion Batteries from Electric Cars

Authors: Achim Kampker, Heiner H. Heimes, Mathias Ordung, Christoph Lienemann, Ansgar Hollah, Nemanja Sarovic

Abstract:

Electric cars, with their fast innovation cycles and disruptive character, offer a high degree of freedom for innovative design for remanufacturing. Remanufacturing increases not only resource but also economic efficiency through a prolonged product lifetime. The reduced power train wear of electric cars, combined with high manufacturing costs for batteries, allows new business models and even second-life applications. Modular and interchangeable battery pack designs enable the replacement of defective or outdated battery cells, allowing additional cost savings and a prolonged lifetime. This paper discusses opportunities for future remanufacturing value chains for electric cars and their battery components and how to address their potentials with elaborate designs. Based on a brief overview of remanufacturing structures implemented in different industries, their transferability is evaluated. In addition to an analysis of current and upcoming challenges, promising perspectives for a sustainable electric car circular economy enabled by design for remanufacturing are deduced. Two mathematical models describe the feasibility of pursuing a circular economy for lithium ion batteries and evaluate remanufacturing in terms of sustainability and economic efficiency. Taking into consideration not only labor and material costs but also capital costs for equipment and factory facilities to support the remanufacturing process, the cost-benefit analysis indicates that a remanufactured battery can be produced more cost-efficiently. The ecological benefits were calculated on a broad database from different research projects focusing on the recycling, second use, and assembly of lithium ion batteries. The results of these calculations show a significant improvement from remanufacturing in all relevant factors, especially in resource consumption and global warming potential.
Suitable design guidelines for future remanufacturable lithium ion batteries, which consider modularity, interfaces and disassembly, are used as examples to illustrate the findings. For one guideline, potential cost improvements were calculated, and upcoming challenges are pointed out.
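The cost-benefit logic described above, labor and material costs plus annualized capital costs for equipment and factory facilities, can be sketched as follows. All figures are hypothetical, and the straight-line annualization deliberately ignores discounting.

```python
def unit_cost(material, labor, capex, throughput_per_year, years):
    """Unit cost of one battery pack, spreading the capital cost for
    equipment and facilities evenly over total throughput (no discounting)."""
    return material + labor + capex / (throughput_per_year * years)

# Hypothetical figures (currency units per pack / total capex)
new_pack = unit_cost(material=5000, labor=800, capex=20e6,
                     throughput_per_year=10000, years=8)
reman_pack = unit_cost(material=1500, labor=1200, capex=12e6,
                       throughput_per_year=10000, years=8)
saving = new_pack - reman_pack  # positive: remanufacturing is cheaper here
```

In this toy setting remanufacturing wins because reused cells slash the material cost even though disassembly raises the labor cost, which mirrors the trade-off the paper's models quantify.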

Keywords: circular economy, electric mobility, lithium ion batteries, remanufacturing

Procedia PDF Downloads 314
310 In Silico Screening, Identification and Validation of Cryptosporidium hominis Hypothetical Protein and Virtual Screening of Inhibitors as Therapeutics

Authors: Arpit Kumar Shrivastava, Subrat Kumar, Rajani Kanta Mohapatra, Priyadarshi Soumyaranjan Sahu

Abstract:

Computational approaches to predicting the structure, function and other biological characteristics of proteins are becoming more common in drug discovery, in comparison to traditional methods. Cryptosporidiosis is a major zoonotic diarrheal disease, particularly in children, caused primarily by Cryptosporidium hominis and Cryptosporidium parvum. Currently, there are no vaccines for cryptosporidiosis, and the recommended drugs are not effective. With the availability of the complete genome sequence of C. hominis, new targets have been recognized for the development of effective and better drugs and/or vaccines. We identified a unique hypothetical epitopic protein in the C. hominis genome through BLASTP analysis. A 3D model of the hypothetical protein was generated using the I-TASSER server through a threading methodology, and the quality of the model was validated through a Ramachandran plot by the PROCHECK server. Functional annotation of the hypothetical protein through the DALI server revealed structural similarity with human transportin 3. Phylogenetic analysis also showed that the C. hominis hypothetical protein (CUV04613) is most closely related to the human transportin 3 protein. The 3D protein model was further subjected to a virtual screening study against inhibitors from the ZINC database using DOCK Blaster software. The docking study reported N-(3-chlorobenzyl) ethane-1,2-diamine as the best inhibitor in terms of docking score, and docking analysis elucidated that Leu 525, Ile 526, Glu 528 and Glu 529 are critical residues for ligand–receptor interactions. A molecular dynamics simulation was performed to assess the reliability of the binding pose of the inhibitor–protein complex using GROMACS software to a 10 ns time point. Trajectories were analyzed at 2.5 ns intervals, in which H-bonds with LEU-525 and GLY-530 are significantly present. Furthermore, antigenic determinants of the protein were determined with the help of DNASTAR software.
Our study findings show great potential to provide insights for the development of new drugs or vaccines for the control and prevention of cryptosporidiosis among humans and animals.

Keywords: cryptosporidium hominis, hypothetical protein, molecular docking, molecular dynamics simulation

Procedia PDF Downloads 340
309 Probabilistic Life Cycle Assessment of the Nano Membrane Toilet

Authors: A. Anastasopoulou, A. Kolios, T. Somorin, A. Sowale, Y. Jiang, B. Fidalgo, A. Parker, L. Williams, M. Collins, E. J. McAdam, S. Tyrrel

Abstract:

Developing countries are nowadays confronted with great challenges related to domestic sanitation services in view of imminent water scarcity. Contemporary sanitation technologies established in these countries are likely to pose health risks unless waste management standards are followed properly. This paper provides a solution for sustainable sanitation through the development of an innovative toilet system, the Nano Membrane Toilet (NMT), which has been developed by Cranfield University and sponsored by the Bill & Melinda Gates Foundation. This technology converts human faeces into energy through gasification and provides treated wastewater from urine through membrane filtration. In order to evaluate the environmental profile of the NMT system, a deterministic life cycle assessment (LCA) has been conducted in SimaPro software employing the Ecoinvent v3.3 database. This study has determined the factors contributing most to the environmental footprint of the NMT system. However, as sensitivity analysis has identified certain operating parameters as critical for the robustness of the LCA results, adopting a stochastic approach to the life cycle inventory (LCI) will comprehensively capture the input data uncertainty and enhance the credibility of the LCA outcome. For that purpose, Monte Carlo simulations, in combination with an artificial neural network (ANN) model, have been conducted for the input parameters of raw material, produced electricity, NOx emissions, amount of ash, and transportation of fertilizer. This analysis has provided the distributions and confidence intervals of the selected impact categories, and, in turn, more credible conclusions are drawn on the respective life cycle impact assessment (LCIA) profile of the NMT system.
Last but not least, the specific study will also yield essential insights into the methodological framework that can be adopted in the environmental impact assessment of other complex engineering systems subject to a high level of input data uncertainty.
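The stochastic LCI approach described above can be sketched in a few lines: sample each uncertain inventory input from an assumed distribution, propagate it through a linear characterization step, and read the confidence interval off the empirical distribution. The characterization factors and input distributions below are illustrative placeholders, not Ecoinvent or NMT values.

```python
import random
import statistics

random.seed(42)

# Hypothetical characterization factors (kg CO2-eq per unit of each
# inventory flow); illustrative placeholders, not Ecoinvent values.
CF = {"electricity_kwh": 0.65, "nox_kg": 8.0, "ash_kg": 0.05, "transport_tkm": 0.12}

# Uncertain inventory inputs per functional unit, modelled as normal
# distributions (mean, standard deviation); assumed values.
INVENTORY = {
    "electricity_kwh": (1.8, 0.3),
    "nox_kg": (0.004, 0.001),
    "ash_kg": (0.20, 0.05),
    "transport_tkm": (0.5, 0.1),
}

def sample_impact():
    """One Monte Carlo draw of the impact score (negative draws clipped)."""
    return sum(CF[k] * max(0.0, random.gauss(mu, sd))
               for k, (mu, sd) in INVENTORY.items())

N = 10_000
scores = sorted(sample_impact() for _ in range(N))
mean = statistics.mean(scores)
ci_low, ci_high = scores[int(0.025 * N)], scores[int(0.975 * N)]
print(f"mean = {mean:.3f} kg CO2-eq, 95% CI = [{ci_low:.3f}, {ci_high:.3f}]")
```

In the actual study, each sampled inventory would feed the SimaPro/Ecoinvent model (through the ANN surrogate) rather than a simple linear sum; the statistical post-processing is the same.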

Keywords: sanitation systems, nano-membrane toilet, lca, stochastic uncertainty analysis, Monte Carlo simulations, artificial neural network

Procedia PDF Downloads 200
308 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea

Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim

Abstract:

Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. These blooms have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict algae concentrations in the ocean using bio-optical algorithms applied to satellite color images. However, accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the marine water environment of Korea. The method employed GOCI images of the water-leaving radiances centered at 443 nm, 490 nm, and 660 nm, as well as observed weather data (i.e., humidity, temperature, and atmospheric pressure), as the database to capture the optical characteristics of algae and train the deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the concentration of algae from the extracted features. A backpropagation learning strategy was used to train the deep learning model. The established method was tested and compared with the performance of the GOCI data processing system (GDPS), which is based on standard image processing and optical algorithms. The model estimated algae concentration better than GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration in spite of the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing.
Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
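The estimation pipeline (spectral features in, concentration out, trained by backpropagation) can be illustrated with a deliberately small sketch. Here the three radiance bands stand in for CNN-extracted features and feed a single linear neuron; the synthetic data, coefficients, and learning rate are assumptions for illustration only.

```python
import random
random.seed(0)

# Synthetic stand-in for GOCI pixels: water-leaving radiances at
# 443, 490 and 660 nm. True concentration is a hidden linear mix plus noise.
def make_pixel():
    r443, r490, r660 = (random.uniform(0.1, 1.0) for _ in range(3))
    conc = 2.0 * r660 - 0.5 * r443 + 0.3 * r490 + random.gauss(0, 0.01)
    return [r443, r490, r660], conc

data = [make_pixel() for _ in range(500)]

# "ANN" head: one linear neuron trained by backpropagation, i.e.
# stochastic gradient descent on squared error. A real system would first
# pass image patches through a CNN; the 3 radiances stand in for
# CNN-extracted features here.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.05
for epoch in range(200):
    for x, y in data:
        pred = sum(wi * xi for wi, xi in zip(w, x)) + b
        err = pred - y                      # gradient of 0.5*err^2 w.r.t. pred
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

mse = sum((sum(wi * xi for wi, xi in zip(w, x)) + b - y) ** 2
          for x, y in data) / len(data)
print(f"training MSE = {mse:.5f}")
```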

Keywords: deep learning, algae concentration, remote sensing, satellite

Procedia PDF Downloads 152
307 Web Map Service for Fragmentary Rockfall Inventory

Authors: M. Amparo Nunez-Andres, Nieves Lantada

Abstract:

One of the most harmful geological risks is rockfalls. They cause both economic losses, through damage to buildings and infrastructure, and personal injuries. Therefore, in order to estimate the risk to exposed elements, it is necessary to understand the mechanism of this kind of event, from the characteristics of the rock walls to the propagation of the fragments generated by the initially detached rock mass. In the framework of the RockModels research project, several inventories of rockfalls were carried out along the northeast of the Iberian Peninsula and on the island of Mallorca. These inventories contain general information about the events and, importantly, detailed information about fragmentation. Specifically, the IBSD (In-situ Block Size Distribution) is obtained by photogrammetry from drone or TLS (Terrestrial Laser Scanner) surveys, and the RBSD (Rock Block Size Distribution) from the volumes of the fragments in the deposit, measured by hand. In order to share all this information with other scientists, engineers, members of civil protection, and stakeholders, a platform accessible from the internet and following interoperability standards is necessary. Open-source software was used throughout: PostGIS 2.1, GeoServer, and the OpenLayers library. In the first step, a spatial database was implemented to manage all the information, following the INSPIRE data specifications for natural risks and adding specific, detailed data about fragment distribution. The next step was to develop a WMS (Web Map Service) with GeoServer. A preliminary phase was the creation of several views in PostGIS to show the information at different visualization scales and with different degrees of detail. In the first view, the sites are identified with a point, and basic information about the rockfall event is provided. At the next zoom level, at medium scale, the convex hull of the rockfall appears in its real shape, and the source of the event and the fragments are represented by symbols. Queries at this level offer greater detail about the movement. Finally, the third level shows all elements, deposit, source, and blocks, at their real size where possible and in their real location. The last task was the publication of all the information on a web mapping site (www.rockdb.upc.edu), with data classified by levels using JavaScript libraries such as OpenLayers.
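A client such as OpenLayers retrieves each visualization level from the WMS through standard GetMap requests. The sketch below builds such a request URL; the endpoint path, layer name, and bounding box are hypothetical, while the request parameters themselves follow the WMS 1.3.0 standard.

```python
from urllib.parse import urlencode

# Hypothetical GeoServer endpoint; illustrative only.
BASE_URL = "https://www.rockdb.upc.edu/geoserver/wms"

def getmap_url(layer, bbox, width=800, height=600, crs="EPSG:25831"):
    """Build a standard WMS 1.3.0 GetMap request URL."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,                          # ETRS89 / UTM 31N (NE Spain)
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
        "TRANSPARENT": "TRUE",
    }
    return f"{BASE_URL}?{urlencode(params)}"

# Hypothetical overview layer showing one point per rockfall site.
url = getmap_url("rockfalls:events_overview", (400000, 4600000, 420000, 4620000))
print(url)
```

An OpenLayers `ImageWMS` source would issue equivalent requests automatically, switching layers as the user zooms between the three levels of detail.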

Keywords: geological risk, web mapping, WMS, rockfalls

Procedia PDF Downloads 133
306 A Semi-Markov Chain-Based Model for the Prediction of Deterioration of Concrete Bridges in Quebec

Authors: Eslam Mohammed Abdelkader, Mohamed Marzouk, Tarek Zayed

Abstract:

Infrastructure systems are crucial to every aspect of life on Earth. Existing infrastructure is subject to degradation, while demands are growing for better infrastructure in response to high standards of safety, health, population growth, and environmental protection. Bridges play a crucial role in urban transportation networks, and they are subject to a high level of deterioration because of variable traffic loading, extreme weather conditions, cycles of freeze and thaw, etc. The development of Bridge Management Systems (BMSs) has become a fundamental imperative, especially in large transportation networks, due to the huge variance between the need for maintenance actions and the funds available to perform such actions. Deterioration models are a very important aspect of the effective use of BMSs. This paper presents a probabilistic time-based model capable of predicting the condition ratings of concrete bridge decks over their service life. The deterioration process of the concrete bridge decks is modeled using a semi-Markov process. One of the main challenges of Markov chain-based models is the construction of the transition probability matrix; the proposed model overcomes this issue by modeling the sojourn times with probability density functions. The sojourn times of each condition state are fitted to probability density functions based on goodness-of-fit tests such as the Kolmogorov-Smirnov, Anderson-Darling, and chi-squared tests. The parameters of the probability density functions are obtained using maximum likelihood estimation (MLE). The condition ratings obtained from the Ministry of Transportation in Quebec (MTQ) are used as the database to construct the deterioration model. Finally, a comparison is conducted between the Markov chain and semi-Markov chain models to select the most suitable prediction model.
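The core idea, fitting sojourn-time distributions by MLE and then simulating the deterioration path, can be sketched as follows. For brevity, the sketch assumes exponential sojourn times, whose MLE is simply the reciprocal of the sample mean, and uses invented condition ratings and data; the paper fits several candidate densities and selects among them with goodness-of-fit tests.

```python
import random
import statistics

random.seed(1)

# Hypothetical observed sojourn times (years) spent in each condition
# state before dropping to the next lower one; illustrative data only.
observed = {
    5: [6.1, 7.4, 5.2, 8.0, 6.6],
    4: [4.9, 5.5, 4.1, 6.2],
    3: [3.8, 4.4, 3.1],
    2: [2.5, 3.0, 2.2],
}

# MLE for an exponential sojourn-time model: rate = 1 / sample mean.
rates = {s: 1.0 / statistics.mean(ts) for s, ts in observed.items()}

def simulate_service_life(start=5, worst=1):
    """Draw one semi-Markov deterioration path; return years to worst state."""
    t, state = 0.0, start
    while state > worst:
        t += random.expovariate(rates[state])  # random sojourn in this state
        state -= 1                             # sequential downward transition
    return t

lives = [simulate_service_life() for _ in range(5000)]
mean_life = statistics.mean(lives)
print(f"mean simulated service life: {mean_life:.1f} years")
```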

Keywords: bridge management system, bridge decks, deterioration model, Semi-Markov chain, sojourn times, maximum likelihood estimation

Procedia PDF Downloads 180
305 Recommendations for Data Quality Filtering of Opportunistic Species Occurrence Data

Authors: Camille Van Eupen, Dirk Maes, Marc Herremans, Kristijn R. R. Swinnen, Ben Somers, Stijn Luca

Abstract:

In ecology, species distribution models are commonly implemented to study species-environment relationships. These models increasingly rely on opportunistic citizen science data when high-quality species records collected through standardized recording protocols are unavailable. While these opportunistic data are abundant, uncertainty is usually high, e.g., due to observer effects or a lack of metadata. Data quality filtering is often used to reduce these types of uncertainty in an attempt to increase the value of studies relying on opportunistic data. However, filtering should not be performed blindly. In this study, recommendations are built for data quality filtering of opportunistic species occurrence data that are used as input for species distribution models. Using an extensive database of 5.7 million citizen science records from 255 species in Flanders, the impact on model performance was quantified by applying three data quality filters, and these results were linked to species traits. More specifically, presence records were filtered based on record attributes that provide information on the observation process or post-entry data validation, and changes in the area under the receiver operating characteristic (AUC), sensitivity, and specificity were analyzed using the Maxent algorithm with and without filtering. Controlling for sample size enabled us to study the combined impact of data quality filtering, i.e., the simultaneous impact of an increase in data quality and a decrease in sample size. Further, the variation among species in their response to data quality filtering was explored by clustering species based on four traits often related to data quality: commonness, popularity, difficulty, and body size. 
Findings show that model performance is affected by i) the quality of the filtered data, ii) the proportional reduction in sample size caused by filtering and the remaining absolute sample size, and iii) a species ‘quality profile’, resulting from a species classification based on the four traits related to data quality. The findings resulted in recommendations on when and how to filter volunteer-generated and opportunistically collected data. This study confirms that correctly processed citizen science data can make a valuable contribution to ecological research and species conservation.
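The performance metrics used in the analysis can be computed directly from model scores at presence and background points. The sketch below uses the rank-based (Mann-Whitney) form of the AUC with invented scores to illustrate how filtering out low-quality records can shift AUC, sensitivity, and specificity.

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def sens_spec(scores_pos, scores_neg, threshold):
    """Sensitivity and specificity at a fixed suitability threshold."""
    sens = sum(p >= threshold for p in scores_pos) / len(scores_pos)
    spec = sum(n < threshold for n in scores_neg) / len(scores_neg)
    return sens, spec

# Hypothetical habitat-suitability scores at presence (pos) and
# background (neg) locations, before and after data quality filtering.
pos_raw  = [0.9, 0.8, 0.75, 0.4, 0.35, 0.2]   # noisy records included
neg_raw  = [0.7, 0.5, 0.45, 0.3, 0.25, 0.1]
pos_filt = [0.9, 0.8, 0.75, 0.7]              # low-quality records removed
neg_filt = [0.5, 0.45, 0.3, 0.25, 0.1]

print("AUC raw:     ", auc(pos_raw, neg_raw))
print("AUC filtered:", auc(pos_filt, neg_filt))
print("sens/spec at 0.5:", sens_spec(pos_filt, neg_filt, 0.5))
```

In the study these metrics come from Maxent predictions; the point of the sketch is only the metric arithmetic, and that filtering trades sample size for score quality.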

Keywords: citizen science, data quality filtering, species distribution models, trait profiles

Procedia PDF Downloads 167
304 Optimization of Fermentation Conditions for Extracellular Production of the Oncolytic Enzyme, L-Asparaginase, by New Subsp. Streptomyces Rochei Subsp. Chromatogenes NEAE-K Using Response Surface Methodology under Solid State Fermentation

Authors: Noura El-Ahmady El-Naggar

Abstract:

L-asparaginase is an important therapeutic enzyme used in combination therapy with other drugs in the treatment of acute lymphoblastic leukemia in children. An L-asparaginase-producing actinomycete strain, NEAE-K, was isolated from a soil sample and identified on the basis of morphological, cultural, physiological and biochemical properties, together with its 16S rDNA sequence, as the new subspecies Streptomyces rochei subsp. chromatogenes NEAE-K; the sequencing product (1532 bp) was deposited in the GenBank database under accession number KJ200343. The study was conducted to screen the parameters affecting the production of L-asparaginase by Streptomyces rochei subsp. chromatogenes NEAE-K under solid state fermentation using a Plackett-Burman experimental design. Sixteen independent variables, including incubation time, moisture content, inoculum size, temperature, pH, soybean meal + wheat bran, dextrose, fructose, L-asparagine, yeast extract, KNO3, K2HPO4, MgSO4.7H2O, NaCl, FeSO4.7H2O and CaCl2, together with three dummy variables, were screened in a Plackett-Burman design of 20 trials. The most significant independent variables affecting enzyme production (dextrose, L-asparagine and K2HPO4) were further optimized by a central composite design. As a result, a medium of the following formula is optimal for producing extracellular L-asparaginase by Streptomyces rochei subsp. chromatogenes NEAE-K under solid state fermentation: g/L (soybean meal + wheat bran 15, dextrose 3, fructose 4, L-asparagine 8, yeast extract 2, KNO3 1, K2HPO4 2, MgSO4.7H2O 0.5, NaCl 0.1, FeSO4.7H2O 0.02, CaCl2 0.01), incubation time 7 days, moisture content 50%, inoculum size 3 mL, temperature 30°C, pH 8.5.
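A Plackett-Burman screening design is built from cyclic shifts of a generating row plus a final all-minus-one row. The sketch below constructs the classic 12-run design and verifies its defining balance and orthogonality properties; the study itself used the analogous 20-run design to screen its 16 factors plus 3 dummy variables.

```python
# First row of the 12-run Plackett-Burman design (Plackett & Burman, 1946).
GEN = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]

def plackett_burman(generator):
    """Build the design: all cyclic shifts of the generator + an all -1 row."""
    k = len(generator)
    rows = [generator[i:] + generator[:i] for i in range(k)]
    rows.append([-1] * k)
    return rows

design = plackett_burman(GEN)

# Check the defining properties: each factor tested at +1 and -1 equally
# often (balance), and every pair of factor columns uncorrelated
# (orthogonality), so main effects can be estimated independently.
k = len(GEN)
cols = list(zip(*design))
balanced = all(sum(c) == 0 for c in cols)
orthogonal = all(sum(a * b for a, b in zip(cols[i], cols[j])) == 0
                 for i in range(k) for j in range(i + 1, k))
print(f"{len(design)} runs x {k} factors, balanced={balanced}, orthogonal={orthogonal}")
```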

Keywords: Streptomyces rochei subsp. chromatogenes NEAE-K, 16S rRNA, identification, solid state fermentation, L-asparaginase production, Plackett-Burman design, central composite design

Procedia PDF Downloads 381
303 Use of Telehealth for Facilitating the Diagnostic Assessment of Autism Spectrum Disorder: A Scoping Review

Authors: Manahil Alfuraydan, Jodie Croxall, Lisa Hurt, Mike Kerr, Sinead Brophy

Abstract:

Autism Spectrum Disorder (ASD) is a developmental condition characterised by impairment in social communication and social interaction and by a repetitive or restricted pattern of interests, behaviours, and activities. There is a significant delay between seeking help and receiving a confirmed diagnosis of ASD. This may delay receipt of early intervention services, which are critical for positive outcomes. The long wait times also cause stress for individuals and their families. Telehealth potentially offers a way of improving the diagnostic pathway for ASD. This review of the literature aims to examine which telehealth approaches have been used in the diagnosis and assessment of autism in children and adults, whether they are feasible and acceptable, and how they compare with face-to-face diagnosis and assessment methods. A comprehensive search of the following databases was conducted: MEDLINE, CINAHL Plus with Full Text, Business Source Complete, Web of Science, Scopus, and PsycINFO, together with trial and systematic review databases including the Cochrane Library, Health Technology Assessment, Database of Abstracts of Reviews of Effects, and NHS Economic Evaluation Database, combining the terms autism and telehealth, from 2000 to 2018. A total of 10 studies were identified for inclusion in the review. The review found two methods of using telehealth: (a) video conferencing, enabling teams in different areas to consult with families and assess the child or adult in real time, and (b) video upload to a web portal, enabling clinical assessment of behaviours in the family home. The findings were positive: there was high agreement between remote and face-to-face diagnoses, with high levels of satisfaction among families and clinicians.
This field is at a very early stage, and only studies with small sample sizes were identified, but the findings suggest that telehealth methods, used in conjunction with existing methods, have the potential to improve the assessment and diagnosis of autism, especially for those with clear autism traits and for adults with autism. Larger randomised controlled trials of this technology are warranted.

Keywords: assessment, autism spectrum disorder, diagnosis, telehealth

Procedia PDF Downloads 101
302 The Diurnal and Seasonal Relationships of Pedestrian Injuries Secondary to Motor Vehicles in Young People

Authors: Amina Akhtar, Rory O'Connor

Abstract:

Introduction: Significant morbidity and mortality remain among young pedestrians hit by motor vehicles, even in the era of pedestrian crossings and speed limits. The aim of this study was to compare the incidence and severity of motor vehicle-related pedestrian trauma according to time of day and season in a young population, based on the supposition that injuries would be more prevalent at dusk and dawn and during autumn and winter. Methods: Data were retrieved from the National Trauma Audit and Research Network (TARN) database for patients aged 10-25 years who had been involved as pedestrians in motor vehicle accidents between 2015 and 2020. The incidence of injuries, their severity (using the Injury Severity Score [ISS]), hospital transfer time, and mortality were analysed according to the hours of daylight and darkness and the season. Results: The study identified a seasonal pattern: autumn was the predominant season, accounting for 34.9% of injuries, with a further 25.4% in winter, compared with 21.4% in spring and 18.3% in summer. However, visibility alone was not a sufficient explanatory factor, as 49.5% of injuries occurred during darkness and 50.5% during daylight. Importantly, the greatest injury rate (number of injuries per hour) occurred between 15:00 and 16:30, corresponding to school pick-up times. A further significant relationship between ISS and daylight was demonstrated (p = 0.0124), with moderate injuries (ISS 9-14) occurring most commonly during the day (72.7%) and more severe injuries (ISS > 15) occurring most commonly at night (55.8%). Conclusion: We have identified a relationship between time of day and the frequency and severity of pedestrian trauma in young people. In addition, particular time groupings correspond to the greatest injury rate, suggesting that reduced visibility coupled with school pick-up times may play a significant role.
This could be addressed through a targeted public health approach to implementing change. We recommend targeted public health measures to improve road safety that focus on these times, increase the visibility of children, and are combined with education for drivers.

Keywords: major trauma, paediatric trauma, road traffic accidents, diurnal pattern

Procedia PDF Downloads 70
301 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System

Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa

Abstract:

Speaker identification (SI) is the task of establishing the identity of an individual based on his or her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still a need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification (CISI) system based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of vector quantization (VQ) and a Gaussian Mixture Model (GMM), together with the Expectation-Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. Using the Linde-Buzo-Gray (LBG) clustering algorithm to initialize the GMM in the EM step, for estimating the underlying parameters, also improved the convergence rate and system performance. In addition, a relative index is used as a confidence measure in cases where the GMM and VQ classifiers contradict each other in the identification process. Simulation results carried out on the VoxForge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
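The LBG initialization mentioned above grows a codebook by repeatedly splitting codewords and refining them with k-means (Lloyd) iterations. A minimal sketch on toy 2-D data (standing in for MFCC frames) follows; in the full system the resulting codebook would seed the GMM means before EM training.

```python
import random
random.seed(3)

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(vectors):
    n = len(vectors)
    return [sum(v[d] for v in vectors) / n for d in range(len(vectors[0]))]

def lbg(vectors, size, eps=0.01, iters=20):
    """Linde-Buzo-Gray: grow a codebook by splitting, refine by k-means."""
    book = [centroid(vectors)]
    while len(book) < size:
        # Split every codeword into a slightly perturbed pair.
        book = [[c * (1 + s) for c in cw] for cw in book for s in (eps, -eps)]
        for _ in range(iters):  # Lloyd refinement
            cells = [[] for _ in book]
            for v in vectors:
                nearest = min(range(len(book)), key=lambda i: dist2(v, book[i]))
                cells[nearest].append(v)
            # Recompute centroids; keep the old codeword if a cell is empty.
            book = [centroid(c) if c else cw for c, cw in zip(cells, book)]
    return book

# Toy 2-D "feature" data drawn around four cluster centres.
centres = [(0, 0), (5, 5), (0, 5), (5, 0)]
data = [(cx + random.gauss(0, 0.3), cy + random.gauss(0, 0.3))
        for cx, cy in centres for _ in range(50)]
codebook = lbg(data, 4)
avg_dist = sum(min(dist2(v, cw) for cw in codebook) for v in data) / len(data)
print(f"codebook size {len(codebook)}, avg distortion {avg_dist:.3f}")
```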

Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set text-independent speaker identification system (CISI)

Procedia PDF Downloads 280
300 Exploratory Case Study: Judicial Discretion and Political Statements Transforming the Actions of the Commissioner for the South African Revenue Service

Authors: Werner Roux Uys

Abstract:

The Commissioner for the South African Revenue Service (SARS) holds a high position of trust in South African society, and a lack of trust by taxpayers in the Commissioner’s actions or conduct could compromise SARS’ management of public finances. Tax morality, which is implicit in the social contract between taxpayers and the state, includes distinct phenomena that can cause a breakdown if there is a perceived lack of action on the part of the Commissioner to keep public finances safe. To promote tax morality, the Commissioner must support the judiciary in the exercise of its discretion to punish fraudulent tax activities and corrupt tax practices. For several years, political meddling in the Commissioner’s actions and conduct has created a perception of abuse of power at SARS, and taxpayers believed their hard-earned income paid over to SARS would become fruitless and wasteful expenditure. The purpose of this article is to identify and analyse previous decisions of the South African judiciary regarding the Commissioner’s actions and conduct in tax matters, as well as to consider important political statements and newspaper bulletins relevant to this research. The study applies a qualitative research approach and an exploratory case study technique. Keywords were selected and entered into the LexisNexis electronic database to systematically identify applicable case law in which the ratio decidendi of the court referred to the actions and/or conduct of the Commissioner. Specific real-life statements, including political statements and newspaper bulletins, were selected to support the topic at hand. The purpose of the study is to educate the public about the perceptions that have transformed taxpayers’ behaviour towards the Commissioner for SARS since South Africa’s fledgling constitutional democracy was inaugurated in 1994.
The study adds to the literature by identifying key characteristics or distinct phenomena regarding the actions and conduct of the Commissioner affecting taxpayers’ behaviour, including discretionary decision-making. From the findings, it emerged that SARS must abide by its (own) laws and that there is a need to educate not only South African taxpayers about tax morality, but also the public in general.

Keywords: Commissioner, SARS, actions and conduct, judiciary, discretionary decision-making

Procedia PDF Downloads 46
299 Orthopedic Trauma in Newborn Babies

Authors: Joanna Maj, Awais Hussain, Lyndsey Vu, Catherine Roxas

Abstract:

Background: Bone injuries in babies are common conditions that arise during delivery. Fractures of the clavicle, humerus, femur, and skull are the most common neonatal bone injuries sustained during labor and delivery. During operative deliveries, excessive traction, ineffective delivery techniques, improper uterine incision, and inadequate relaxation of the uterus can lead to bone fractures in the newborn. Neonatal anatomy is unique: just as children are not miniature adults, newborns are not miniature children, and a newborn’s anatomy and physiology differ significantly from a pediatric patient’s. In this paper, we describe common orthopedic trauma in newborn babies and provide a comprehensive overview of the different types of bone injuries in newborns. We hypothesize that the rate of bone fractures sustained at birth is higher in operative deliveries. Methods: Relevant literature was selected using the PubMed database. Search terms included orthopedic conditions in newborns, neonatal anatomy, and bone fractures in neonates during operative deliveries. Inclusion criteria covered age, gender, race, type of bone injury, and progression of bone injury. Exclusion criteria were limited medical history in the cases reviewed and comorbidities. Results: This review finds that a clavicle fracture is the most common type of neonatal orthopedic injury sustained at birth in both operative and non-operative deliveries. We confirm the hypothesis that infants born via operative deliveries have a significantly higher rate of bone fractures than infants born via non-cesarean deliveries. Conclusion: Newborn babies born via operative deliveries have a higher rate of fractures of the clavicle, humerus, and femur. A clavicle fracture in newborns is most common during emergency operative deliveries in new mothers. We conclude that infants born via operative delivery sustain more bone injuries than infants born via non-cesarean deliveries.

Keywords: clavicle fracture, humerus fracture, neonates, newborn orthopedics, orthopedic surgery, pediatrics, orthopedic trauma, orthopedic trauma during delivery, cesarean section, obstetrics, neonatal anatomy, neonatal fractures, operative deliveries, labor and delivery, bone injuries in neonates

Procedia PDF Downloads 74
298 Transient Level in the Surge Chamber at the Robert-bourassa Generating Station

Authors: Maryam Kamali Nezhad

Abstract:

The Robert-Bourassa development (LG-2), the first to be built on the Grande Rivière, comprises two powerhouses of eight turbine-generator units each, the East and West powerhouses. Each powerhouse has two tailrace tunnels with an average length of about 1178 m. The LG-2A powerhouse houses 6 turbine-generator units, and its water is discharged through two tailrace tunnels with a length of about 1330 m. The objectives of this work at RB (LG-2) are: 1) to establish a new maximum transient level in the surge chamber; 2) to define the new maximum equipment flow rate for the future turbine-generator units; and 3) to ensure safe access to various intervention locations in the surge chamber. The transient levels under normal operating conditions at the RB plant were determined in 2001 by the Hydraulics Unit of HQE using the 'Chamber' software, a one-dimensional mass-oscillation calculation program. It is used to determine the variation of the water level in the surge chamber located downstream of a power plant during load rejection of the plant's units; it can also be used for a surge tank upstream of a power plant. The RB (LG-2) study is based on the theoretical nominal geometry of the chamber and the tailrace tunnels and on the flow-level relationship at the outlet of the galleries established during design. The software is used in such a way that the results have an acceptable margin of safety, especially with respect to the maximum transient level (e.g., resumption of flow at an inopportune time), in order to take into account the turbulent and three-dimensional aspects of the actual flow in the chamber. Note that the transient levels depend on the water levels in the river and in the surge chamber at steady state. These data are kept in the HQP CRP database and updated from time to time.
The maximum transient levels in the surge chambers of the RB-East and RB-West powerhouses were revised based on the latest update (set 4) of the in-river rating curves and the steady-state surge chamber water levels. The results of the revision were also used to update the technical advice on the operating conditions for access to the aforementioned surge chambers, taking the revised calculated water levels into account.
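The kind of one-dimensional mass-oscillation computation performed by the 'Chamber' software can be illustrated with a generic load-rejection sketch. Except for the tunnel length quoted in the abstract, every numerical value below is an assumed placeholder, not RB (LG-2) data, and the simple upstream-tank configuration is chosen for clarity rather than to match the actual tailrace layout.

```python
# Illustrative parameters: only the tunnel length comes from the abstract;
# everything else is an assumed placeholder, not actual RB (LG-2) data.
g   = 9.81       # gravity, m/s^2
L   = 1178.0     # tunnel length, m
A_t = 120.0      # tunnel cross-section, m^2
A_s = 700.0      # surge chamber plan area, m^2
c   = 5e-5       # head-loss coefficient, s^2/m^5
H0  = 30.0       # reservoir level, m
Q0  = 250.0      # pre-rejection turbine flow, m^3/s

# Steady state before full load rejection: the chamber sits below the
# reservoir level by the tunnel friction loss.
z, Q = H0 - c * Q0 * abs(Q0), Q0
z_steady = z

# Explicit Euler integration of the 1-D mass-oscillation equations after
# rejection: turbine discharge drops to zero, so the tunnel flow keeps
# filling the chamber until the level overshoots the reservoir level.
dt, t_end = 0.1, 600.0
z_max = z
for _ in range(int(t_end / dt)):
    dQ = (g * A_t / L) * (H0 - z - c * Q * abs(Q))  # tunnel momentum
    dz = Q / A_s                                    # chamber continuity
    Q += dQ * dt
    z += dz * dt
    z_max = max(z_max, z)

print(f"steady level {z_steady:.2f} m, max transient level {z_max:.2f} m")
```

The maximum transient level is the peak of the damped oscillation; production tools add the reservoir/river rating curves and a safety margin on top of such a calculation.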

Keywords: generating station, surge chamber, maximum transient level, hydroelectric power station, turbine-generator, reservoir

Procedia PDF Downloads 56
297 Application of Social Media for Promoting Library and Information Services: A Case Study of Library Science Professionals of India

Authors: Payel Saha

Abstract:

Social media plays an important role in the dissemination of information in society. In the 21st century, most people have a smartphone and use different social media tools such as Facebook, Twitter, Instagram, WhatsApp, and Skype in day-to-day life. These are rapidly growing web-based tools that allow everyone to share thoughts, ideas, and knowledge globally over the internet. This study highlights the current use of social media tools for promoting library and information services by library and information professionals working in libraries across India. The study was conducted in November 2017. A structured questionnaire was prepared using Google Docs and distributed via mailing lists, individual email IDs, and social media tools. Only 90 responses were received from the different states of India, and these were analysed in MS Excel. The responses came from 17 states and 3 union territories of India; most respondents came from Odisha (23), Himachal Pradesh (14), and Assam (10). The results revealed that of the 90 respondents, 37 were female and 53 were male, and the majority (71) came from academic libraries, followed by special libraries (15), public libraries (3), and corporate libraries (1). The study indicates that 53 respondents said their library has a social media account, while 39 respondents said their library does not. The study also found that Facebook, YouTube, Google+, LinkedIn, Twitter, and Instagram are used by the LIS professionals of India, with Facebook (86) the most popular among the social media tools. Furthermore, respondents reported using social media tools for sharing photos of library events and programs (72), followed by tips for using different services (64), posting of new arrivals (56), database tutorials (35), sending brief updates to patrons (32), and announcing library holidays (22).
Respondents also reported sharing information about scholarships and training programs and marketing library events. The study further identified lack of time as the major problem when using social media (53 respondents), followed by low internet speed (35) and too many social media tools to learn (17), while 3 respondents reported no problems when using social media tools. The results also revealed that the majority of respondents use social media tools on a daily basis (71), followed by weekly (16), monthly (1), and other (2). In summary, this study is expected to be useful in further promoting social media for the dissemination of library and information services to the general public.

Keywords: application of social media, India, promoting library services, library professionals

Procedia PDF Downloads 126
296 Research on Quality Assurance in African Higher Education: A Bibliometric Mapping from 1999 to 2019

Authors: Luís M. João, Patrício Langa

Abstract:

The article reviews the literature on quality assurance (QA) in African higher education studies (HES) conducted through a bibliometric mapping of published papers between 1999 and 2019. Specifically, the article highlights the nuances of knowledge production in four scientific databases: Scopus, Web of Science (WoS), African Journal Online (AJOL), and Google Scholar. The analysis included 531 papers, of which 127 are from Scopus, 30 are from Web of Science, 85 are from African Journal Online, and 259 are from Google Scholar. In essence, 284 authors wrote these papers from 231 institutions and 69 different countries (i.e., Africa=54 and outside Africa=15). Results indicate the existing knowledge. This analysis allows the readers to understand the growth and development of the field during the two-decade period, identify key contributors, and observe potential trends or gaps in the research. The paper employs bibliometric mapping as its primary analytical lens. By utilizing this method, the study quantitatively assesses the publications related to QA in African HES, helping to identify patterns, collaboration networks, and disparities in research output. The bibliometric approach allows for a systematic and objective analysis of large datasets, offering a comprehensive view of the knowledge production in the field. Furthermore, the study highlights the lack of shared resources available to enhance quality in higher education institutions (HEIs) in Africa. This finding underscores the importance of promoting collaborative research efforts, knowledge exchange, and capacity building within the region to improve the overall quality of higher education. The paper argues that despite the growing quantity of QA research in African higher education, there are challenges related to citation impact and access to high-impact publication avenues for African researchers. 
It emphasises the need to promote collaborative research and resource-sharing to enhance the quality of HEIs in Africa. The analytical lens of bibliometric mapping and the examination of the publication landscape contribute to a comprehensive understanding of the field and its implications for African higher education.
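The per-database tally described above can be reproduced with a few lines of code. This is a minimal sketch using the counts quoted in the abstract as illustrative figures; it is not the authors' dataset or analysis pipeline.

```python
# Per-database paper counts as reported in the abstract (illustrative only).
counts = {
    "Scopus": 127,
    "Web of Science": 30,
    "African Journal Online": 85,
    "Google Scholar": 259,
}

# Share of the retrieved papers contributed by each database,
# sorted from largest to smallest.
total = sum(counts.values())
for db, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{db}: {n} papers ({n / total:.1%})")
```

A tally like this makes the skew visible at a glance: Google Scholar contributes more papers than the other three databases combined, which is consistent with the abstract's point about uneven access to high-impact publication avenues.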

Keywords: Africa, bibliometric research, higher education studies, quality assurance, scientific database, systematic review

Procedia PDF Downloads 22
295 Using Life Cycle Assessment in Potable Water Treatment Plant: A Colombian Case Study

Authors: Oscar Orlando Ortiz Rodriguez, Raquel A. Villamizar-G, Alexander Araque

Abstract:

Of the 1,027 municipalities in Colombia, 70% have Potable Water Treatment Plants (PWTPs) in urban areas and 20% in rural areas. These PWTPs are typically supplied by surface waters (mainly rivers) and rely on gravity, pumping and/or mixed systems to convey the water from the catchment point, where the first stage of the potable water process takes place. Subsequently, a series of conventional methods are applied, consisting of a more or less standardized sequence of physicochemical and, sometimes, biological treatment processes that vary depending on the quality of the water entering the plant. These processes require energy and chemical supplies in order to guarantee a product adequate for human consumption. Therefore, in this paper, we applied the environmental methodology of Life Cycle Assessment (LCA) to evaluate the environmental loads of a potable water treatment plant (PWTP) located in northeastern Colombia, following the international guidelines of ISO 14040. The different stages of the potable water process, from the catchment point through pumping to the distribution network, were thoroughly assessed. The functional unit was defined as 1 m³ of treated water. The data were analyzed using the Ecoinvent v.3.01 database, and modeled and processed in the LCA-Data Manager software. The results showed that the largest impact at the plant was caused by Clarifloc (82%), followed by chlorine gas (13%) and power consumption (4%). In this context, the company responsible for the sustainability of the potable water service should ideally reduce these environmental loads during the potable water process. One strategy could be to reduce the use of Clarifloc by applying coadjuvants or other coagulant agents. The preservation of the hydric source that supplies the treatment plant also constitutes an important factor, since its deterioration confers unfavorable characteristics on the water to be treated.
In conclusion, treatment processes and techniques, bioclimatic conditions, and culturally driven consumption behavior vary from region to region. Furthermore, changes in treatment processes and techniques are likely to affect the environment during all stages of a plant's operation cycle.
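The contribution analysis described above follows the standard LCA pattern: multiply each input in the inventory (per functional unit, here 1 m³ of treated water) by a characterization factor and express each product as a share of the total. The sketch below illustrates the arithmetic only; the input amounts and factors are hypothetical placeholders, not Ecoinvent data or the study's results.

```python
# Inventory: amount of each input consumed per functional unit (1 m^3 of
# treated water). Values are hypothetical, for illustration only.
inventory = {
    "Clarifloc (kg)": 0.05,
    "Chlorine gas (kg)": 0.002,
    "Electricity (kWh)": 0.08,
}

# Characterization factors: impact per unit of input for one impact
# category (e.g., kg CO2-eq). Also hypothetical.
factors = {
    "Clarifloc (kg)": 2.1,
    "Chlorine gas (kg)": 1.3,
    "Electricity (kWh)": 0.4,
}

# Per-input impact and its share of the total for the functional unit.
impacts = {k: inventory[k] * factors[k] for k in inventory}
total = sum(impacts.values())
for k, v in impacts.items():
    print(f"{k}: {v:.4f} ({v / total:.0%} of total)")
```

With real Ecoinvent factors and the plant's measured inventory, the same calculation yields the kind of breakdown the study reports (Clarifloc dominating, followed by chlorine gas and electricity).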

Keywords: climate change, environmental impact, life cycle assessment, treated water

Procedia PDF Downloads 201
294 Identification and Molecular Profiling of A Family I Cystatin Homologue from Sebastes schlegeli Deciphering Its Putative Role in Host Immunity

Authors: Don Anushka Sandaruwan Elvitigala, P. D. S. U. Wickramasinghe, Jehee Lee

Abstract:

Cystatins are a large superfamily of proteins that act as reversible inhibitors of cysteine proteases. Papain proteases and cysteine cathepsins are the predominant targets of cystatins. The cystatin superfamily can be further clustered into three groups: stefins, cystatins, and kininogens. Among them, the stefins, also known as family 1 cystatins, harbor the cystatin Bs and cystatin As. In this study, a homologue of family 1 cystatins more closely related to the cystatin Bs was identified from Korean black rockfish (Sebastes schlegeli) using a previously constructed cDNA (complementary deoxyribonucleic acid) database and designated RfCyt1. The full-length cDNA of RfCyt1 consisted of 573 bp, with a coding region of 294 bp, a 5´-untranslated region (UTR) of 55 bp, and a 3´-UTR of 263 bp. The coding sequence encodes a polypeptide of 97 amino acids with a predicted molecular weight of 11 kDa and a theoretical isoelectric point of 6.3. RfCyt1 shared homology with its counterparts in other teleost and vertebrate species and contained the conserved features of the cystatin family signature, including a single cystatin-like domain, the cysteine protease inhibitory pentapeptide (QXVXG) consensus sequence, and two conserved neighboring N-terminal glycine residues (⁸GG⁹). As expected, a phylogenetic reconstruction built with the neighbor-joining method showed that RfCyt1 clusters with the cystatin family 1 members, most closely with its teleostan orthologues. A SYBR Green qPCR (quantitative polymerase chain reaction) assay was performed to quantify RfCyt1 transcripts in different tissues of healthy and immune-stimulated fish. RfCyt1 was ubiquitously expressed in all tissue types of healthy animals, with the highest levels in gill and spleen. Temporal expression of RfCyt1 displayed significant up-regulation upon infection with Aeromonas salmonicida. Recombinantly expressed RfCyt1 showed concentration-dependent papain inhibitory activity.
Collectively, these findings provide evidence for protease-inhibitory and immune-relevant roles of RfCyt1 in Sebastes schlegeli.
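The sequence features above are easy to sanity-check: a 294 bp coding region corresponds to 98 codons, i.e. 97 amino acids plus a stop codon, and the QXVXG consensus can be located with a simple pattern match. The peptide in the sketch below is a hypothetical example, not the actual RfCyt1 sequence; it only illustrates the calculation and the motif search.

```python
import re

# A 294 bp ORF = 98 codons; subtracting the stop codon gives the
# 97-residue polypeptide reported in the abstract.
coding_bp = 294
residues = coding_bp // 3 - 1
print(residues)  # -> 97

# Locating the cystatin QXVXG consensus (X = any residue) in a
# hypothetical example peptide, NOT the real RfCyt1 sequence.
example_peptide = (
    "MMCGGAPSATQPATAETQVVAGTNYFIKVHVGDEDFVHLRVFQSLPHENKPLTLSNYQTNKTRHDELTYF"
)
match = re.search(r"Q.V.G", example_peptide)
print(match.group() if match else "motif not found")  # -> QVVAG
```

The same two checks (ORF length versus residue count, presence of the inhibitory pentapeptide) are routine first steps when annotating a candidate cystatin from a cDNA database.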

Keywords: Sebastes schlegeli, family 1 cystatin, immune stimulation, expressional modulation

Procedia PDF Downloads 112
293 Demand-Side Financing for Thai Higher Education: A Reform Towards Sustainable Development

Authors: Daral Maesincee, Jompol Thongpaen

Abstract:

Thus far, most of the decisions made within the walls of Thai higher education (HE) institutions have been primarily supply-oriented. With the current supply-driven, itemized HE financing system, the nation is struggling to systematically produce high-quality manpower that serves the market's needs, often resulting in education mismatches and unemployment, particularly in science, technology, and innovation (STI)-related fields. With the COVID-19 pandemic challenges widening the education inequality gap (in both accessibility and quality), HE becomes even more unobtainable for underprivileged students, permanently leaving some out of the system. Therefore, Thai HE needs a new financing system that produces the "right people" for the "right occupations" through the "right ways," regardless of their socioeconomic backgrounds, and encourages the creation of non-degree courses to tackle these ongoing challenges. The "Demand-Side Financing for Thai Higher Education" policy aims to do so by offering a new paradigm of HE resource allocation via two main mechanisms: i) standardized formula-based unit-cost subsidization specific to each study field and ii) student loan programs that respond to the "demand signals" from the labor market and the students, in line with the country's priorities. Through in-depth reviews, extensive studies, and consultations with various experts, education committees, and related agencies, i) the method of demand-signal analysis is identified, ii) the unit cost of each student in the sample study fields is approximated, iii) the method of budget analysis is formulated, iv) the interagency workflows are established, and v) a supporting information database is created to suggest the number of graduates each HE institution can potentially produce, the study fields and skill sets needed by the labor market, the employers' satisfaction with the graduates, and each study field's employment rates.
By responding to the needs of all stakeholders, this policy is expected to steer Thai HE toward producing more STI-related manpower in order to uplift Thai people’s quality of life and enhance the nation’s global competitiveness. This policy is currently in the process of being considered by the National Education Transformation Committee and the Higher Education Commission.
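The formula-based unit-cost mechanism described above amounts to multiplying a field-specific unit cost by the number of seats the demand signals justify, then summing across fields. The sketch below illustrates this arithmetic only; the field names, unit costs, and seat counts are hypothetical placeholders, not figures from the policy.

```python
# Hypothetical per-student unit costs by study field (THB per year).
unit_cost = {
    "Engineering": 120_000,
    "Nursing": 90_000,
    "Data Science": 150_000,
}

# Hypothetical funded seats per field, as suggested by labor-market
# demand signals.
funded_seats = {
    "Engineering": 400,
    "Nursing": 250,
    "Data Science": 120,
}

# Field-level subsidy and the total budget envelope.
budget = {f: unit_cost[f] * funded_seats[f] for f in unit_cost}
total = sum(budget.values())
for field, amount in budget.items():
    print(f"{field}: {amount:,} THB")
print(f"Total: {total:,} THB")
```

Under a demand-side scheme, the seat counts (not the unit costs) are the lever: they are revised as the supporting database reports new employment rates and employer satisfaction per field.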

Keywords: demand-side financing, higher education resource, human capital, higher education

Procedia PDF Downloads 177
292 Influence of Nanomaterials on the Properties of Shape Memory Polymeric Materials

Authors: Katielly Vianna Polkowski, Rodrigo Denizarte de Oliveira Polkowski, Cristiano Grings Herbert

Abstract:

The use of nanomaterials in the formulation of polymeric materials modifies their molecular structure, offering a vast range of possibilities for the development of smart products, which is of great importance for science and contemporary industry. Shape memory polymers are generally lightweight, have high shape recovery capabilities, are easy to process, and have properties that can be tailored for a variety of applications. Shape memory materials are active materials that have attracted attention due to their superior damping properties when compared to conventional structural materials. The development of methodologies capable of preparing new materials that use graphene in their structure represents a technological innovation that transforms low-cost products into advanced materials with high added value. To improve the shape memory effect (SME) of polymeric materials, graphene can be incorporated at low mass concentrations in the form of graphene nanoplatelets (GNP), graphene oxide (GO), or other functionalized graphene, via different mixing processes. As a result, the SME improved, particularly in terms of higher maximum strain values. In addition, the use of graphene contributes to nanocomposites with superior electrical properties and greater crystallinity, as well as improved resistance to material degradation. The methodology used in this research is a systematic review: a scientific investigation gathering relevant studies on the influence of nanomaterials on the properties of shape memory polymers, using literature databases as the source. In the present study, a systematic review was performed of all papers published from 2014 to 2022 regarding graphene and shape memory polymers, through a search of three databases.
This study allows for easy identification of the most relevant fields of study with respect to graphene and shape memory polymers, as well as the main gaps to be explored in the literature. The addition of graphene showed improvements in achieving higher values of maximum deformation of the material, attributed to possible slippage between stacked or agglomerated nanostructures, as well as an increase in stiffness due to a higher degree of phase separation, which results in a greater number of physical cross-links related to the formation of short-range rigid domains.

Keywords: graphene, shape memory, smart materials, polymers, nanomaterials

Procedia PDF Downloads 51