Search results for: static torque transmission capability
269 Exploring a Cross-Sectional Analysis Defining Social Work Leadership Competencies in Social Work Education and Practice
Authors: Trevor Stephen, Joshua D. Aceves, David Guyer, Jona Jacobson
Abstract:
As a profession, social work has much to offer individuals, groups, and organizations. A multidisciplinary approach to understanding and solving complex challenges, together with a commitment to developing and training ethical practitioners, marks a profession embedded with leadership skills. This presentation provides an overview of the historical context of social work leadership; examines social work as a unique leadership model composed of the qualities and theories that inform effective leadership capability as it relates to the profession's code of ethics; reflects critically on leadership theories and their foundational comparison; and, finally, offers recommendations for implementation in social work education and practice. As with leadership generally, there is no universally accepted definition of social work leadership; however, some distinct traits and characteristics are essential. Recent studies help set the stage for this research proposal because they measure views on effective social work leadership among social work and non-social work leaders and followers. This research, however, works backward from that approach and examines social workers' perceptions of leadership preparedness based solely on social work training, competencies, values, and ethics. Social workers understand how to change complex structures and challenge resistance to change to improve the well-being of organizations and those they serve. Furthermore, previous studies align with the idea of practitioners assessing their skill and capacity to engage in leadership but not to lead. In addition, this research is significant because it explores aspiring social work leaders' competence to translate social work practice into direct leadership skills. The research question asks whether social work training and competencies are sufficient for social workers to believe they possess the capacity and skill to engage in leadership practice.
Aim 1: Assess whether social workers have the capacity and skills to assume leadership roles. Aim 2: Evaluate whether the development of social workers is sufficient for defining leadership. This research intends to reframe the misconception that social workers do not possess the capacity and skills to be effective leaders. On the contrary, social work encompasses a framework dedicated to lifelong development and growth. Social workers must be skilled, competent, ethical, supportive, and empathic. These are all qualities and traits of effective leadership, in which leaders stand in relation with others and embody partnership and collaboration with followers and stakeholders. The proposed study is a cross-sectional quasi-experimental survey design that will include the distribution of a multi-level social work leadership model and assessment tool. The assessment tool aims to help define leadership in social work using a Likert scale model. A cross-sectional research design is appropriate for answering the research questions because the measurement survey will help gather data using a structured tool. Other than the proposed social work leadership measurement tool, there is no other mechanism based on social work theory and designed to measure the capacity and skill of social work leadership. Keywords: leadership competencies, leadership education, multi-level social work leadership model, social work core values, social work leadership, social work leadership education, social work leadership measurement tool
Procedia PDF Downloads 172
268 Barriers to Business Model Innovation in the Agri-Food Industry
Authors: Pia Ulvenblad, Henrik Barth, Jennie Cederholm BjöRklund, Maya Hoveskog, Per-Ola Ulvenblad
Abstract:
The importance of business model innovation (BMI) is widely recognized. This also holds for firms in the agri-food industry, which is closely connected to global challenges: worldwide food production will have to increase by 70% by 2050, and the United Nations’ Sustainable Development Goals prioritize research and innovation on food security and sustainable agriculture. Firms in the agri-food industry have opportunities to increase their competitive advantage through BMI. However, the process of BMI is complex, and the implementation of new business models is associated with a high degree of risk and failure. Thus, managers from all industries, as well as scholars, need to better understand how to address this complexity. Therefore, the research presented in this paper (i) explores different categories of barriers in the research literature on business models in the agri-food industry, and (ii) illustrates these categories of barriers with empirical cases. This study addresses the rather limited understanding of barriers to BMI in the agri-food industry through a systematic literature review (SLR) of 570 peer-reviewed journal articles that contained a combination of ‘BM’ or ‘BMI’ with agriculture-related and food-related terms (e.g. ‘agri-food sector’) published in the period 1990-2014. The study classifies the barriers into several categories and illustrates the identified barriers with ten empirical cases. Findings from the literature review show that barriers are mainly identified as outcomes. It can be assumed that a perceived barrier to growth is often initially exaggerated or underestimated before being challenged by appropriate measures or courses of action. What the public mind considers a barrier could in reality be very different from an actual barrier that needs to be challenged. One way of addressing barriers to growth is to define barriers according to their origin (internal/external) and nature (tangible/intangible).
The framework encompasses barriers related to the firm (internal, addressing in-house conditions) or to the industrial or national levels (external, addressing environmental conditions). Tangible barriers can include asset shortages in the area of equipment or facilities, while human resources deficiencies or a negative attitude towards growth are examples of intangible barriers. Our findings are consistent with previous research on barriers to BMI, which has identified human factors barriers (individuals’ attitudes, histories, etc.); contextual barriers related to company and industry settings; and more abstract barriers (government regulations, value chain position, and weather). However, human factors barriers, and opportunities, related to family-owned businesses with idealistic values and attitudes that own the real estate where the business is situated are more frequent in the agri-food industry than in other industries. This paper contributes by generating a classification of the barriers to BMI and by illustrating them with empirical cases. We argue that internal barriers such as human factors, values, and attitudes are crucial to overcome in order to develop BMI; however, they can be as hard to overcome as, for example, institutional barriers such as government regulations. The implications for research and practice are to focus on cognitive barriers and to develop the BMI capability of the owners and managers of agri-industry firms. Keywords: agri-food, barriers, business model, innovation
Procedia PDF Downloads 232
267 Risk Factors for Determining Anti-HBcore to Hepatitis B Virus Among Blood Donors
Authors: Tatyana Savchuk, Yelena Grinvald, Mohamed Ali, Ramune Sepetiene, Dinara Sadvakassova, Saniya Saussakova, Kuralay Zhangazieva, Dulat Imashpayev
Abstract:
Introduction. Viral hepatitis B (HBV) occupies a central place in global health. The existing risk of HBV transmission through blood transfusion is associated with blood taken from infected individuals during the “serological window” period or from patients with latent HBV infection, the marker of which is anti-HBcore. In the absence of information about other hepatitis B markers, the presence of anti-HBcore suggests that a person may be actively infected or has had hepatitis B in the past and acquired immunity. Aim. To study the risk factors influencing positive anti-HBcore results among the donor population. Materials and Methods. The study was conducted in 2021 in the Scientific and Production Center of Transfusiology of the Ministry of Healthcare in Kazakhstan. Samples taken from blood donors were tested for anti-HBcore by CLIA on the Architect i2000SR (Abbott). A special questionnaire was developed to capture the blood donors’ socio-demographic characteristics. Statistical analysis was conducted with the R software (version 4.1.1, USA, 2021). Results. 5709 people aged 18 to 66 years were included in the study; the proportions of men and women were 68.17% and 31.83%, respectively. The average age of the participants was 35.7 years. A weighted multivariable mixed-effects logistic regression analysis showed that age (p<0.001), ethnicity (p<0.05), and marital status (p<0.05) were statistically associated with anti-HBcore positivity. In particular, in an analysis adjusting for gender, nationality, education, marital status, family history of hepatitis, blood transfusion, injections, and surgical interventions, each one-year increase in age (adjOR=1.06, 95%CI: 1.05-1.07) was associated with a 6% increase in the odds of an anti-HBcore-positive result.
Participants of Russian ethnicity (adjOR=0.65, 95%CI: 0.46-0.93) and representatives of other ethnic groups (adjOR=0.56, 95%CI: 0.37-0.85) had lower odds of being anti-HBcore positive than Kazakhs when controlling for the other covariates. Among singles, the odds of a positive anti-HBcore result were 29% lower (adjOR=0.71, 95%CI: 0.57-0.89) than among married participants when adjusting for the other variables. Conclusions. Kazakhstan is one of the countries with medium endemicity of HBV prevalence (2%-7%). The results of the study demonstrate that a profile of risk factors (age, ethnicity, marital status) can be formed. In light of these data, it is recommended to pay closer attention to donor questionnaires by adding targeted screening questions and to improve preventive measures against HBV. Funding. This research was supported by a grant from Abbott Laboratories. Keywords: anti-HBcore, blood donor, donation, hepatitis B virus, occult hepatitis
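For readers unfamiliar with how adjusted odds ratios such as adjOR=1.06 relate to logistic regression output, the following minimal Python sketch shows the standard conversion: OR = exp(beta), with a 95% CI of exp(beta ± 1.96·SE). The coefficient and standard error below are hypothetical values chosen only to mirror the reported age effect, not the study's actual model output.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard error
    into an odds ratio with a 95% confidence interval."""
    or_point = math.exp(beta)
    lower = math.exp(beta - z * se)
    upper = math.exp(beta + z * se)
    return or_point, lower, upper

# Hypothetical age coefficient chosen so exp(beta) is about 1.06,
# mirroring the reported adjOR for a one-year increase in age.
beta_age, se_age = 0.0583, 0.005
or_age, lo, hi = odds_ratio_ci(beta_age, se_age)
print(f"adjOR={or_age:.2f}, 95%CI: {lo:.2f}-{hi:.2f}")  # adjOR=1.06, 95%CI: 1.05-1.07
```

An OR above 1 per one-year increase compounds with age, which is why older donors show markedly higher anti-HBcore positivity.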
Procedia PDF Downloads 107
266 Classification of Coughing and Breathing Activities Using Wearable and a Light-Weight DL Model
Authors: Subham Ghosh, Arnab Nandi
Abstract:
Background: The proliferation of Wireless Body Area Network (WBAN) and Internet of Things (IoT) applications demonstrates the potential for continuous monitoring of physical changes in the body. These technologies are vital for health monitoring tasks, such as identifying coughing and breathing activities, which are necessary for disease diagnosis and management. Monitoring activities such as coughing and deep breathing can provide valuable insights into a variety of medical issues. Wearable radio-based antenna sensors, which are lightweight and easy to incorporate into clothing or portable goods, allow continuous monitoring. This mobility gives them a substantial advantage over stationary environmental sensors such as cameras and radar, which are constrained to fixed locations. Furthermore, compressive techniques provide benefits such as reduced data transmission rates and memory requirements. These wearable sensors thus offer more advanced and diverse health monitoring capabilities. Methodology: This study analyzes the feasibility of using a semi-flexible antenna operating at 2.4 GHz (ISM band) and positioned around the neck and near the mouth to identify three activities: coughing, deep breathing, and idleness. A vector network analyzer (VNA) is used to collect time-varying complex reflection coefficient data from the perturbed antenna nearfield. The reflection coefficient (S11) conveys nuanced information caused by simultaneous variations in the nearfield radiation of the three activities across time. The signatures are sparsely represented with Gaussian-windowed Gabor spectrograms. The Gabor spectrogram is used as a sparse representation approach that reassigns the ridges of the spectrogram images to improve their resolution and focus on essential components. The antenna is biocompatible in terms of specific absorption rate (SAR).
The sparsely represented Gabor spectrogram images are fed into a lightweight deep learning (DL) model for feature extraction and classification. Two antenna locations are investigated in order to determine the most effective placement for the three different activities. Findings: Cross-validation techniques were used on data from both locations. Because the recorded S11 is complex-valued, separate analyses and assessments were performed on the magnitude, the phase, and their combination; the combination of magnitude and phase fared better than the separate analyses. Various sliding window sizes, ranging from 1 to 5 seconds, were tested to find the best window for activity classification. The neck-mounted design was found to be effective at detecting the three distinct activities. Keywords: activity recognition, antenna, deep-learning, time-frequency
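As a rough illustration of the Gaussian-windowed Gabor spectrogram named above (a pure-Python sketch of the generic transform, not the authors' processing pipeline; all window parameters are assumptions), one slides a Gaussian window along the signal and takes a discrete Fourier transform of each frame:

```python
import cmath
import math

def gabor_spectrogram(signal, win_len=64, hop=16, sigma=0.15):
    """Gaussian-windowed short-time Fourier transform (a Gabor transform):
    slide a Gaussian window along the signal and take a DFT of each frame.
    Returns a list of magnitude spectra (one per frame, positive bins only)."""
    n = win_len
    # Gaussian window centered in the frame; sigma is relative to frame length.
    window = [math.exp(-0.5 * ((i - n / 2) / (sigma * n)) ** 2) for i in range(n)]
    frames = []
    for start in range(0, len(signal) - n + 1, hop):
        frame = [signal[start + i] * window[i] for i in range(n)]
        spectrum = []
        for k in range(n // 2):  # keep positive-frequency bins only
            s = sum(frame[i] * cmath.exp(-2j * math.pi * k * i / n) for i in range(n))
            spectrum.append(abs(s))
        frames.append(spectrum)
    return frames

# Toy test signal: a pure tone sitting exactly on bin 8 of a 64-point frame.
sig = [math.sin(2 * math.pi * 8 * t / 64) for t in range(256)]
spec = gabor_spectrogram(sig)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak_bin)  # 8
```

In practice the time-varying S11 magnitude or phase would play the role of `sig`, and the resulting spectrogram images would be the DL model's input.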
Procedia PDF Downloads 8
265 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture
Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán
Abstract:
Time-sensitive services are the base of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling; however, reactive auto-scaling has received few in-depth studies. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks some parameters related to the transition between models; our model uses queuing theory parameters to relate the transitions between models. It associates the MAPE-K-related times, the sampling frequency, the cooldown period, the number of requests that an instance can handle per unit of time, the number of incoming requests at a given instant, and a function that describes the acceleration in the service's ability to handle more requests. This model is later used as a solution to horizontally auto-scale time-sensitive services composed of microservices, reevaluating the model’s parameters periodically to allocate resources. The solution requires limiting the acceleration of the growth in the number of incoming requests to keep response time constrained; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The exposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request relation ratio. A common request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds.
Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests that cannot finish in time, to prevent resource saturation. When load decreases, instances with lower load are kept in a backlog where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state; if it finishes the computation of all its requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenarios for reactive systems; the following scenarios test response times, resource consumption, and business costs. The first scenario is a burst-load scenario: all methodologies will discard requests if the burst is sharp enough, so this scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add a different number of instances can handle the load with less business cost. The exposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics. Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing
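A minimal sketch of the reactive core described above (generic illustration with hypothetical parameter names and values, not the authors' methodology): compute the required instance count from the incoming request rate and the per-instance capacity, scaling out immediately but honoring a cooldown period before scaling in.

```python
import math

def required_instances(incoming_rate, per_instance_rate, headroom=0.8):
    """Instances needed so that no instance runs above `headroom`
    (a saturation limit) of its per-unit-time request capacity."""
    return max(1, math.ceil(incoming_rate / (per_instance_rate * headroom)))

class ReactiveScaler:
    """Toy MAPE-K-style loop: Monitor the load, Analyze the required count,
    Plan with a cooldown on scale-in, Execute by returning the new count."""
    def __init__(self, per_instance_rate, cooldown_ticks=3):
        self.per_instance_rate = per_instance_rate
        self.cooldown_ticks = cooldown_ticks
        self.instances = 1
        self.ticks_since_scale_up = 0

    def step(self, incoming_rate):
        target = required_instances(incoming_rate, self.per_instance_rate)
        if target > self.instances:            # scale out immediately
            self.instances = target
            self.ticks_since_scale_up = 0
        else:                                  # scale in only after cooldown
            self.ticks_since_scale_up += 1
            if self.ticks_since_scale_up >= self.cooldown_ticks:
                self.instances = target
        return self.instances

scaler = ReactiveScaler(per_instance_rate=10)
trace = [scaler.step(r) for r in [5, 50, 50, 12, 12, 12, 12]]
print(trace)  # [1, 7, 7, 7, 2, 2, 2]
```

The cooldown is what keeps instances in the backlog through a temporary load drop, so a burst arriving shortly after the drop does not pay the instance start-up cost again.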
Procedia PDF Downloads 93
264 Avoidance of Brittle Fracture in Bridge Bearings: Brittle Fracture Tests and Initial Crack Size
Authors: Natalie Hoyer
Abstract:
Bridges in both roadway and railway systems depend on bearings to ensure extended service life and functionality. These bearings enable proper load distribution from the superstructure to the substructure while permitting controlled movement of the superstructure. The design of bridge bearings according to Eurocode DIN EN 1337 and the relevant sections of DIN EN 1993 increasingly requires the use of thick plates, especially for long-span bridges. However, these plate thicknesses exceed the limits specified in the national annex of DIN EN 1993-2. Furthermore, compliance with the DIN EN 1993-1-10 regulations regarding material toughness and through-thickness properties necessitates further modifications. Consequently, these standards cannot be directly applied to the selection of bearing materials without supplementary guidance and design rules. In this context, a recommendation was developed in 2011 to regulate the selection of appropriate steel grades for bearing components. Prior to the initiation of the research project underlying this contribution, this recommendation had only been available as a technical bulletin; since July 2023, it has been integrated into guideline 804 of the German railway. However, recent findings indicate that certain bridge-bearing components are exposed to high fatigue loads, which must be considered in structural design, material selection, and calculations. Therefore, the German Centre for Rail Traffic Research commissioned a research project with the objective of proposing an extension of the current standards that enables an adequate choice of steel material for bridge bearings to avoid brittle fracture, even for thick plates and components subjected to specific fatigue loads. The results obtained from theoretical considerations, such as finite element simulations and analytical calculations, are validated through large-scale component tests.
Additionally, experimental observations are used to calibrate the calculation models and modify the input parameters of the design concept. Within the large-scale component tests, a brittle failure is artificially induced in a bearing component. For this purpose, an artificially generated initial defect is introduced into the specimen at the previously defined hotspot using spark erosion. A dynamic load is then applied until crack initiation occurs, achieving realistic conditions in the form of a sharp notch similar to a fatigue crack. This initiation process continues until the crack reaches a predetermined length. Afterward, the actual test begins: the specimen is cooled with liquid nitrogen until it reaches a temperature at which brittle fracture failure is expected. In the next step, the component is subjected to a quasi-static tensile test until failure occurs in the form of a brittle fracture. The proposed paper will present the latest research findings, including the results of the conducted component tests and the derived definition of the initial crack size in bridge bearings. Keywords: bridge bearings, brittle fracture, fatigue, initial crack size, large-scale tests
Procedia PDF Downloads 44
263 Design, Fabrication and Analysis of Molded and Direct 3D-Printed Soft Pneumatic Actuators
Authors: N. Naz, A. D. Domenico, M. N. Huda
Abstract:
Soft robotics is a rapidly growing multidisciplinary field in which robots are fabricated using highly deformable materials, motivated by bioinspired designs. Their high dexterity and adaptability to the external environment during contact make soft robots ideal for applications such as gripping delicate objects, locomotion, and biomedical devices. The actuation systems of soft robots mainly include fluidic, tendon-driven, and smart-material actuation. Among them, the soft pneumatic actuator, also known as the SPA, remains the most popular choice due to its flexibility, safety, easy implementation, and cost-effectiveness. However, at present, most SPA fabrication is still based on traditional molding and casting techniques, in which a mold is 3D printed and silicone rubber is cast into it and consolidated. This conventional method is time-consuming and involves intensive manual labour, with limited repeatability and accuracy in design. Recent advancements in the direct 3D printing of different soft materials can significantly reduce this repetitive manual work, with the ability to fabricate complex geometries and multicomponent designs in a single manufacturing step. The aim of this research work is to design and analyse soft pneumatic actuators (SPAs) fabricated with both conventional casting and modern direct 3D printing technologies. The mold of the SPA for traditional casting is 3D printed using fused deposition modeling (FDM) with polylactic acid (PLA) thermoplastic filament. Hyperelastic soft materials such as Ecoflex-0030/0050 are cast into the mold and consolidated in a lab oven. The bending behaviour is observed experimentally at different air-compressor pressures to ensure uniform bending without any failure. For the direct 3D printing of the SPA, fused deposition modeling (FDM) with thermoplastic polyurethane (TPU) and stereolithography (SLA) with an elastic resin are used.
The actuator is modeled using the finite element method (FEM) to analyse the nonlinear bending behaviour, stress concentration, and strain distribution of the different hyperelastic materials after pressurization. The FEM analysis is carried out using Ansys Workbench software with a Yeoh 2nd-order hyperelastic material model. The FEM includes large deformations, contact between surfaces, and gravity influences. For mesh generation, quadratic tetrahedron, hybrid, and constant-pressure meshes are used. The SPA is connected to a baseplate that is in connection with the air compressor. A fixed boundary is applied on the baseplate, and a static pressure is applied orthogonally to all surfaces of the internal chambers and channels with a closed continuum model. The simulated results from the FEM are compared with the experimental results. The experiments are performed in a laboratory set-up where the developed SPA is connected to a compressed-air source with a pressure gauge. A comparative performance analysis is carried out between the FDM- and SLA-printed SPAs and their molded counterparts. Furthermore, the molded and 3D-printed SPAs have been used to develop a three-finger soft pneumatic gripper that has been tested for handling delicate objects. Keywords: finite element method, fused deposition modeling, hyperelastic, soft pneumatic actuator
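To illustrate the Yeoh 2nd-order hyperelastic model named above (the generic textbook form with made-up material constants, not the paper's calibrated values), the strain energy density is W = C10(I1 - 3) + C20(I1 - 3)², where for an incompressible material under uniaxial stretch λ the first invariant is I1 = λ² + 2/λ:

```python
def yeoh2_strain_energy(stretch, c10, c20):
    """Yeoh 2nd-order strain energy density for an incompressible
    material under uniaxial stretch:
        I1 = lam^2 + 2/lam
        W  = C10*(I1 - 3) + C20*(I1 - 3)^2
    """
    i1 = stretch**2 + 2.0 / stretch
    return c10 * (i1 - 3.0) + c20 * (i1 - 3.0) ** 2

# Made-up constants of the order reported for soft silicones (MPa).
c10, c20 = 0.11, 0.02
print(yeoh2_strain_energy(1.0, c10, c20))  # undeformed state: 0.0
print(yeoh2_strain_energy(1.5, c10, c20) > 0)  # True
```

The quadratic (I1 - 3)² term is what lets the Yeoh form capture the stiffening of silicone elastomers at larger stretches, which a single-term neo-Hookean model misses.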
Procedia PDF Downloads 90
262 Towards a Strategic Framework for State-Level Epistemological Functions
Authors: Mark Darius Juszczak
Abstract:
While epistemology, as a sub-field of philosophy, is generally concerned with theoretical questions about the nature of knowledge, the explosion in digital media technologies has resulted in an exponential increase in the storage and transmission of human information. That increase has produced a particular non-linear dynamic: digital epistemological functions are radically altering how and what we know. Neither the rate of that change nor its consequences have been well studied or taken into account in developing state-level strategies for epistemological functions. At the current time, US federal policy, like that of virtually all other countries, maintains, at the national state level, clearly defined boundaries between various epistemological agencies, that is, agencies that, in one way or another, mediate the functional use of knowledge. These agencies can take the form of patent and trademark offices, national library and archive systems, departments of education, departments such as the FTC, university systems and regulations, military research systems such as DARPA, federal scientific research agencies, medical and pharmaceutical accreditation agencies, federal funding for scientific research, and legislative committees and subcommittees that attempt to alter the laws that govern epistemological functions. All of these agencies are constantly creating, analyzing, and regulating knowledge. Those processes are, at the most general level, epistemological functions: they act upon and define what knowledge is. At the same time, however, there are no high-level strategic epistemological directives or frameworks that define those functions. The only time in US history when a proxy state-level epistemological strategy existed was between 1961 and 1969, when the Kennedy Administration committed the United States to the Apollo program.
While that program had a singular technical objective as its outcome, that objective was so technologically advanced for its day, and so complex, that it required a massive redirection of state-level epistemological functions: in essence, a broad and diverse set of state-level agencies suddenly found themselves working together towards a common epistemological goal. This paper does not call for a repeat of the Apollo program. Rather, its purpose is to investigate the minimum structural requirements for a national state-level epistemological strategy in the United States. In addition, this paper seeks to analyze how the epistemological work of the multitude of national agencies within the United States would be affected by such a high-level framework. This paper is an exploratory study of this type of framework. The primary hypothesis of the author is that such a function is possible but would require extensive re-framing and reclassification of traditional epistemological functions at the respective agency level. In much the same way that, for example, the DHS (Department of Homeland Security) evolved to respond to a new type of security threat to the United States, it is theorized that a lack of coordination and alignment in epistemological functions will equally result in a strategic threat to the United States. Keywords: strategic security, epistemological functions, epistemological agencies, Apollo program
Procedia PDF Downloads 77
261 Managing the Blue Economy and Responding to the Environmental Dimensions of a Transnational Governance Challenge
Authors: Ivy Chen XQ
Abstract:
This research places a much-needed focus on the conservation of the Blue Economy (BE) by concentrating on the design and development of monitoring systems to track critical indicators of the status of the BE. In this process, local experiences provide insight into important community issues, as well as into the necessity to cooperate and collaborate in order to achieve sustainable options. Research worldwide and industry initiatives over the last decade show that the exploitation of marine resources has resulted in a significant decrease in the share of the total allowable catch (TAC). The response has been strengthened law enforcement, yet the results show that the problems are related to poor policies, a lack of understanding of over-exploitation, biological uncertainty, and political pressures. This reality, and other statistics that show a significant negative impact on the attainment of the Sustainable Development Goals (SDGs), warrants an emphasis on the development of national M&E systems in order to provide evidence-based information on the nature and scale of, especially, transnational fisheries crime and under-sea marine resources in the BE. In particular, a need exists to establish a compendium of relevant BE indicators to assess such impact against the SDGs by using selected SDG indicators for this purpose. The research methodology consists of a qualitative approach using ATLAS.ti, and a case study will be developed of illegal, unregulated, and unreported (IUU) poaching and the illegal wildlife trade (IWT) as components of the BE, as they relate to the case of abalone in southern Africa and the Far East. This research project will make an original contribution through the analysis and comparative assessment of available indicators, in the design process of M&E systems, and in developing indicators and monitoring frameworks to track critical trends and tendencies in the status of the BE, ensuring that specific objectives are aligned with the indicators of the SDG framework.
The research will provide a set of recommendations to governments and stakeholders involved in such projects on lessons learned, as well as priorities for future research. The research findings will enable scholars, civil society institutions, donors, and public servants to understand the capability of M&E systems and the importance of multi-level governance in the coordination of information management, together with knowledge management (KM) and M&E at the international, regional, national, and local levels. This coordination should follow a sustainable development management approach based on addressing the socio-economic challenges to the potential and sustainability of the BE, with an emphasis on ecosystem resilience, social equity, and resource efficiency. This research focus is timely, as the opportunities of post-COVID-19 crisis recovery packages can be grasped to set the economy on a path to sustainable development in line with the UN 2030 Agenda. The pandemic has also raised awareness of the need to eliminate IUU poaching and the illegal wildlife trade (IWT). Keywords: Blue Economy (BE), transnational governance, Monitoring and Evaluation (M&E), Sustainable Development Goals (SDGs)
Procedia PDF Downloads 172
260 Neural Synchronization - The Brain’s Transfer of Sensory Data
Authors: David Edgar
Abstract:
To understand how the brain’s subconscious and conscious processes function, we must conquer the physics of unity, which leads to duality’s algorithm, where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence. We use terms like ‘time is relative,’ but do we really understand the meaning? In the brain, there are different processes and, therefore, different observers, and these different processes experience time at different rates. A sensory system such as the eyes cycles its measurements at around 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds. Three different observers experience time differently. To bridge the observers, the thalamus, the fastest of the processes, maintains a synchronous state and entangles the different components of the brain’s physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain’s linear subconscious process. Sharing state also allows the brain to economize on the amount of sensory data that must be exchanged between components: only unpredictable motion is transferred through the synchronous state, because predictable motion already exists in the shared framework. The brain’s synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds the eyes dump their sensory data into the thalamus, and the thalamus performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick: the thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms).
This creates a data payload of synchronous motion that preserves the original sensory observation: basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel, where observation time is tunneled through the synchronous process and reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It all occurs in the time available, because the other observation times are slower than the thalamic measurement time. For life to exist in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What is interesting is that time dilation is not the problem; it is the solution. Einstein said there was no universal time.
Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)
Procedia PDF Downloads 126
259 Poly(propylene fumarate) Copolymers with Phosphonic Acid-based Monomers Designed as Bone Tissue Engineering Scaffolds
Authors: Görkem Cemali, Avram Aruh, Gamze Torun Köse, Erde Can Şafak
Abstract:
In order to heal bone disorders, the conventional methods, which involve the use of autologous and allogeneic bone grafts or permanent implants, have certain disadvantages such as limited supply, disease transmission, or adverse immune response. One desirable solution is a biodegradable material that acts as structural support to the damaged bone area and serves as a scaffold that enhances bone regeneration and guides bone formation. Poly(propylene fumarate) (PPF), an unsaturated polyester that can be copolymerized with appropriate vinyl monomers to give biodegradable network structures, is a promising candidate polymer for preparing bone tissue engineering scaffolds. In this study, hydroxyl-terminated PPF was synthesized and thermally cured with vinyl phosphonic acid (VPA) and diethyl vinyl phosphonate (VPES) in the presence of the radical initiator benzoyl peroxide (BP), with varying co-monomer weight ratios (10-40 wt%). In addition, the synthesized PPF was cured with the VPES comonomer at body temperature (37 °C) in the presence of BP initiator, N,N-dimethyl-para-toluidine catalyst, and varying amounts of beta-tricalcium phosphate (0-20 wt% β-TCP) as filler via radical polymerization to prepare composite materials that can be used in injectable forms. The thermomechanical properties, compressive properties, hydrophilicity, and biodegradability of the PPF/VPA and PPF/VPES copolymers were determined and analyzed with respect to copolymer composition. Biocompatibility of the resulting polymers and their composites was determined by the MTS assay, osteoblast activity was explored with von Kossa, alkaline phosphatase, and osteocalcin activity analyses, and the effects of VPA and VPES comonomer composition on these properties were investigated. Thermally cured PPF/VPA and PPF/VPES copolymers with different compositions exhibited compressive modulus and strength values in the wide ranges of 10-836 MPa and 14-119 MPa, respectively.
MTS assay studies showed that the majority of the tested compositions were biocompatible, and the overall results indicated that PPF/VPA and PPF/VPES network polymers show significant potential for applications as bone tissue engineering scaffolds, where varying the PPF and co-monomer ratio provides adjustable and controllable properties of the end product. The body-temperature-cured PPF/VPES/β-TCP composites exhibited significantly lower compressive modulus and strength values than the thermally cured PPF/VPES copolymers and were therefore found to be useful as scaffolds for cartilage tissue engineering applications.
Keywords: biodegradable, bone tissue, copolymer, poly(propylene fumarate), scaffold
Procedia PDF Downloads 166
258 Horizontal Stress Magnitudes Using Poroelastic Model in Upper Assam Basin, India
Authors: Jenifer Alam, Rima Chatterjee
Abstract:
The Upper Assam sedimentary basin is one of the oldest commercially producing basins of India. Being in a tectonically active zone, estimation of tectonic strain and stress magnitudes has vast application in hydrocarbon exploration and exploitation. This east-northeast to west-southwest trending shelf-slope basin encompasses the Brahmaputra valley, extending from the Mikir Hills in the southwest to the Naga foothills in the northeast. The Assam Shelf, lying between the Main Boundary Thrust (MBT) and the Naga Thrust area, is comparatively free from thrust tectonics and exhibits a normal faulting mechanism. The study area is bounded by the MBT and the Main Central Thrust in the northwest. The Belt of Schuppen in the southeast, bordered by the Naga and Disang thrusts, marks the lower limit of the study area. The entire Assam basin shows low-level seismicity compared to other regions of northeast India. Pore pressure (PP), vertical stress magnitude (SV), and horizontal stress magnitudes have been estimated from two wells, N1 and T1, located in Upper Assam. N1 is located in the Assam gap below the Brahmaputra river, while T1 lies in the Belt of Schuppen. N1 penetrates geological formations from the top Alluvial through Dhekiajuli, Girujan, Tipam, Barail, Kopili, Sylhet, and Langpur to the granitic basement, while T1, in the thrusted zone, crosses through the Girujan Suprathrust, Tipam Suprathrust, and Barail Suprathrust to reach the Naga Thrust. A normal compaction trend is drawn through shale points in both wells for estimation of PP using the conventional Eaton sonic equation with an exponent of 1.0, which is validated against Modular Dynamic Tester data and mud weight. The observed pore pressure gradient ranges from 10.3 MPa/km to 11.1 MPa/km. SV has a gradient from 22.20 to 23.80 MPa/km. The minimum and maximum horizontal principal stress (Sh and SH) magnitudes under isotropic conditions are determined using a poroelastic model.
This approach determines biaxial tectonic strain utilizing the static Young’s modulus, Poisson’s ratio, SV, PP, leak-off test (LOT) data, and SH derived from breakouts using prior information on unconfined compressive strength. Breakout-derived SH information is used for obtaining tectonic strain due to the lack of measured SH data from minifrac or hydrofracturing tests. Tectonic strain varies from 0.00055 to 0.00096 along the x direction and from -0.0010 to 0.00042 along the y direction. After obtaining tectonic strains at each well, the principal horizontal stress magnitudes are calculated from the linear poroelastic model. The magnitudes of the Sh and SH gradients in the normal faulting region are 12.5 and 16.0 MPa/km, while in the thrust-faulted region the gradients are 17.4 and 20.2 MPa/km, respectively. The model-predicted Sh and SH match well with the LOT data and breakout-derived SH data in both wells. It is observed from this study that the stress regime SV>SH>Sh prevails in the shelf region, corresponding to a normal faulting regime, while near the Naga foothills the regime changes to SH≈SV>Sh. Hence this model is a reliable tool for predicting stress magnitudes from well logs under an active tectonic regime in the Upper Assam Basin.
Keywords: Eaton, strain, stress, poroelastic model
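The poroelastic calculation described above can be sketched numerically. The block below is a minimal illustration assuming the standard linear poroelastic equations for biaxial tectonic strain (with Biot coefficient α); the function name and all input values are illustrative assumptions, not the authors' calibrated workflow or well data.

```python
# Hedged sketch of the linear poroelastic horizontal stress model.
# Symbols: Sv = vertical stress, Pp = pore pressure (MPa), E = static Young's
# modulus (MPa), nu = Poisson's ratio, alpha = Biot coefficient,
# eps_x / eps_y = biaxial tectonic strains. All numbers are illustrative.

def poroelastic_horizontal_stresses(Sv, Pp, E, nu, eps_x, eps_y, alpha=1.0):
    """Return (Sh, SH) from the standard linear poroelastic equations."""
    base = (nu / (1.0 - nu)) * (Sv - alpha * Pp) + alpha * Pp
    plane_strain_modulus = E / (1.0 - nu ** 2)
    Sh = base + plane_strain_modulus * (eps_x + nu * eps_y)
    SH = base + plane_strain_modulus * (eps_y + nu * eps_x)
    return Sh, SH

# Illustrative inputs (assumed, not from the study): stresses at ~1 km depth.
Sh, SH = poroelastic_horizontal_stresses(Sv=23.0, Pp=10.7, E=15000.0,
                                         nu=0.25, eps_x=0.0002, eps_y=0.0007)
print(round(Sh, 1), round(SH, 1))  # 20.8 26.8
```

With the larger tectonic strain assigned to the y direction, SH exceeds Sh, reproducing the SV>SH>Sh ordering discussed for the shelf region.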
Procedia PDF Downloads 214
257 Development and Characterization of Topical 5-Fluorouracil Solid Lipid Nanoparticles for the Effective Treatment of Non-Melanoma Skin Cancer
Authors: Sudhir Kumar, V. R. Sinha
Abstract:
Background: The topical and systemic toxicity associated with present nonmelanoma skin cancer (NMSC) treatment therapy using 5-Fluorouracil (5-FU) makes it necessary to develop a novel delivery system having lesser toxicity and better control over drug release. Solid lipid nanoparticles offer many advantages, such as controlled and localized release of entrapped actives, nontoxicity, and better tolerance. Aim: To investigate the safety and efficacy of 5-FU loaded solid lipid nanoparticles as a topical delivery system for the treatment of nonmelanoma skin cancer. Method: Topical solid lipid nanoparticles of 5-FU were prepared using Compritol 888 ATO (glyceryl behenate) as the lipid component and Pluronic F68 (Poloxamer 188), Tween 80 (Polysorbate 80), and Tyloxapol (4-(1,1,3,3-tetramethylbutyl)phenol polymer with formaldehyde and oxirane) as surfactants. The SLNs were prepared by the emulsification method. The effects of different formulation parameters, viz. type and ratio of surfactant, ratio of lipid, and surfactant:lipid ratio, on particle size and drug entrapment efficiency were investigated. Results: The SLNs were characterized by Transmission Electron Microscopy (TEM), Differential Scanning Calorimetry (DSC), Fourier transform infrared spectroscopy (FTIR), particle size determination, polydispersity index, entrapment efficiency, drug loading, ex vivo skin permeation and skin retention studies, and skin irritation and histopathology studies. TEM results showed that the shape of the SLNs was spherical, with a size range of 200-500 nm. Higher encapsulation efficiency was obtained for batches having higher concentrations of surfactant and lipid; it reached a maximum of 64.3% for the SLN-6 batch, with a size of 400.1±9.22 nm and a PDI of 0.221±0.031. The optimized SLN batches and a marketed 5-FU cream were compared for flux across rat skin and skin drug retention.
Lower flux and higher skin retention were obtained for the SLN formulation in comparison to the topical 5-FU cream, which ensures less systemic toxicity and better control of drug release across the skin. Chronic skin irritation studies showed no serious erythema or inflammation, and histopathology studies showed no significant change in the physiology of the epidermal layers of rat skin. These studies suggest that the optimized SLN formulation is more efficient than the marketed cream and safer for long-term NMSC treatment regimens. Conclusion: The topical and systemic toxicity associated with long-term use of 5-FU in the treatment of NMSC can be minimized through its controlled release, with significant drug retention and minimal flux across the skin. The study may provide a better alternative for effective NMSC treatment.
Keywords: 5-FU, topical formulation, solid lipid nanoparticles, non melanoma skin cancer
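For context, entrapment efficiency figures such as the 64.3% reported above are conventionally computed from the total and free (unentrapped) drug. The abstract does not spell out the assay, so the formula and the numbers below are a hedged illustration of the standard indirect method, not the authors' procedure.

```python
def entrapment_efficiency(total_drug_mg, free_drug_mg):
    """Percent of drug entrapped in the SLNs (conventional indirect method):
    EE% = 100 * (total drug - free drug) / total drug."""
    return 100.0 * (total_drug_mg - free_drug_mg) / total_drug_mg

# Illustrative (assumed) amounts chosen to reproduce the reported 64.3% maximum:
print(round(entrapment_efficiency(10.0, 3.57), 1))  # 64.3
```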
Procedia PDF Downloads 516
256 The Prevalence and Profile of Extended Spectrum B-Lactamase (ESBL) Producing Enterobacteriaceae Species in the Intensive Care Unit (ICU) Setting of a Tertiary Care Hospital of North India
Authors: Harmeet Pal Singh Dhooria, Deepinder Chinna, UPS Sidhu, Alok Jain
Abstract:
Serious infections caused by gram-negative bacteria are a significant cause of mortality and morbidity in the hospital setting. In acute care facilities such as intensive care units (ICUs), the intensity of antimicrobial use, together with a population highly susceptible to infection, creates an environment which facilitates both the emergence and transmission of Extended-Spectrum β-Lactamase (ESBL) producing Enterobacteriaceae species. The study was conducted in the Medical Intensive Care Unit (MICU) and the Pulmonary Critical Care Unit (PCCU) of the Department of Medicine, Dayanand Medical College and Hospital, Ludhiana, Punjab, India. Out of a total of 1108 samples of urine, blood, and respiratory tract secretions received for culture and sensitivity analysis from the MICU and PCCU, a total of 170 isolates of Enterobacteriaceae species were obtained and included in our study. Out of these 170 isolates, confirmed ESBL production was seen in 116 (68.24%) cases. E. coli was the most common species isolated (56.47%), followed by Klebsiella (32.94%), Enterobacter (5.88%), Citrobacter (3.53%), Enterobacter (0.59%), and Morganella (0.59%) among the total isolates. The rate of ESBL production was higher in Klebsiella (78.57%) than in E. coli (60.42%). ESBL producers were found to be significantly more common in patients with a prior history of hospitalization, antibiotic use, and prolonged ICU stay. A significantly increased prevalence of ESBL-related infections was also observed in patients with a history of catheterization or central line insertion, but not in patients with a history of intubation. Patients who had an underlying malignancy had a significantly higher prevalence of ESBL-related infections as compared to patients with other co-morbid illnesses. A slightly significant difference in the rate of mortality/LAMA was observed between the ESBL-producer and non-ESBL-producer groups.
The rate of mortality/LAMA was significantly higher in ESBL-related UTI but not in ESBL-related respiratory tract and bloodstream infections. ESBL-producing isolates had significantly higher rates of resistance to Cefepime and Piperacillin/Tazobactam, and to non-β-lactam antibiotics like Amikacin and Ciprofloxacin. The level of resistance to Imipenem was lower as compared to other antibiotics; however, it was noted that ESBL-producing isolates had higher levels of resistance to Imipenem as compared to non-ESBL-producing isolates. Conclusion: The prevalence of ESBL-producing organisms was found to be very high (68.24%) among Enterobacteriaceae isolates in our ICU setting, as in other ICU care settings around the world.
Keywords: enterobacteriaceae, extended spectrum B-lactamase (ESBL), ICU, antibiotic resistance
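As a quick arithmetic check on the headline prevalence (116 ESBL producers out of 170 isolates), the sketch below recomputes it and adds a normal-approximation 95% confidence interval; the confidence interval is our illustrative addition, not a figure reported in the abstract.

```python
from math import sqrt

def prevalence_with_ci(positives, total, z=1.96):
    """Point prevalence and a normal-approximation (Wald) confidence interval."""
    p = positives / total
    half_width = z * sqrt(p * (1.0 - p) / total)
    return p, p - half_width, p + half_width

p, lo, hi = prevalence_with_ci(116, 170)
print(round(100 * p, 2))  # 68.24
```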
Procedia PDF Downloads 276
255 Cultural Cognition and Voting: Understanding Values and Perceived Risks in the Colombian Population
Authors: Andrea N. Alarcon, Julian D. Castro, Gloria C. Rojas, Paola A. Vaca, Santiago Ortiz, Gustavo Martinez, Pablo D. Lemoine
Abstract:
Recently, electoral results across many countries have proven inconsistent with rational decision theory, which states that individuals make decisions by maximizing benefits and reducing risks. An alternative explanation has emerged: fear- and rage-driven voting has proven to be highly effective for political persuasion and mobilization. This phenomenon was evident in the 2016 elections in the United States, the 2006 elections in Mexico, the 1998 elections in Venezuela, and the 2004 elections in Bolivia. In Colombia, it occurred recently in the 2016 plebiscite for peace and the 2018 presidential elections. The aim of this study is to explain this phenomenon using cultural cognition theory, which refers to the psychological predisposition individuals have to believe that their own and their peers' behavior is correct and, therefore, beneficial to the entire society. Cultural cognition refers to the tendency of individuals to fit perceived risks and factual beliefs into group-shared values; the Cultural Cognition Worldview Scales (CCWS) measure cultural perceptions along two dimensions: individualism-communitarianism and hierarchy-egalitarianism. The former refers to attitudes towards a social ordering in which individuals are expected to guarantee their own wellbeing without society's or the government's intervention, while the latter refers to attitudes towards social dominance based on conspicuous and static characteristics (sex, ethnicity, or social class). A probabilistic national sample was obtained from different polls by the consulting and public opinion company Centro Nacional de Consultoría. Sociodemographic data were obtained along with CCWS scores; a subjective measure of left-right ideological placement and vote intention for the 2019 mayoral elections were also included in the questionnaires.
Finally, the question “In your opinion, what is the greatest risk Colombia is facing right now?” was included to identify perceived risks in the population. Preliminary results show that Colombians are distributed mainly among hierarchical communitarians and egalitarian individualists (30.9% and 31.7%, respectively), and to a lesser extent among hierarchical individualists and egalitarian communitarians (19% and 18.4%, respectively). Males tended to be more hierarchical (p < .001) and communitarian (p = .009) than females. ANOVAs revealed statistically significant differences between groups (quadrants) for level of schooling, left-right ideological orientation, and stratum (p < .001 for all), and proportion tests revealed statistically significant differences between age groups (p < .001). Differences and distributions for vote intention and perceived risks are still being processed, and those results are yet to be analyzed. The results show that Colombians are differentially distributed among quadrants with regard to sociodemographic data and left-right ideological orientation. These preliminary findings indicate that this study may shed some light on why Colombians vote the way they do, and future qualitative data will show the fears emerging from the identified values in the CCWS and their relation to vote intention.
Keywords: communitarianism, cultural cognition, egalitarianism, hierarchy, individualism, perceived risks
Procedia PDF Downloads 148
254 Global Experiences in Dealing with Biological Epidemics with an Emphasis on COVID-19 Disease: Approaches and Strategies
Authors: Marziye Hadian, Alireza Jabbari
Abstract:
Background: The World Health Organization has identified COVID-19 as a public health emergency and is urging governments to stop the transmission of the virus by adopting appropriate policies. In this regard, authorities have taken different approaches to cutting the chain of transmission or controlling the spread of the disease. The questions we now face include: What are these approaches? What tools should be used to implement each preventive protocol? And what is the impact of each approach? Objective: The aim of this study was to determine the approaches to biological epidemics and the related prevention tools, with an emphasis on the COVID-19 disease. Data sources: Databases including ISI Web of Science, PubMed, Scopus, Science Direct, Ovid, and ProQuest were employed for data extraction. Furthermore, authentic sources such as the WHO website, the published reports of relevant countries, and the Worldometer website were evaluated for grey literature. The time frame of the study was from 1 December 2019 to 30 May 2020. Methods: The present study was a systematic review of publications related to prevention strategies for the COVID-19 disease. The study was carried out based on the PRISMA guidelines, using CASP for articles and AACODS for grey literature. Results: The study findings showed that, in order to confront the COVID-19 epidemic, there are in general three approaches, "mitigation", "active control", and "suppression", and four strategies, "quarantine", "isolation", "social distancing", and "lockdown", in both individual and social dimensions for dealing with epidemics. The selection and implementation of each approach requires specific strategies and has different effects when it comes to controlling and inhibiting the disease. Key finding: One possible approach to controlling the disease is to change individual behavior and lifestyle.
In addition to these prevention strategies, the use of masks, the observance of personal hygiene principles such as regular hand washing and keeping contaminated hands away from the face, and the observance of public health principles such as sneezing and coughing etiquette and the safe disposal of personal protective equipment must be strictly observed. Although these measures have not been included in the category of prevention tools, they have a great impact on controlling an epidemic, especially the new coronavirus epidemic. Conclusion: Although the use of different approaches to control and inhibit biological epidemics depends on numerous variables, global experience suggests that some of these approaches are ineffective. Drawing on previous experiences around the world, along with the current experiences of countries, can be very helpful in choosing the most suitable approach for each country in accordance with its characteristics, and can lead to the reduction of possible costs at the national and international levels.
Keywords: novel coronavirus, COVID-19, approaches, prevention tools, prevention strategies
Procedia PDF Downloads 126
253 Study the Effect of Liquefaction on Buried Pipelines during Earthquakes
Authors: Mohsen Hababalahi, Morteza Bastami
Abstract:
Buried pipeline damage correlations are a critical part of loss estimation procedures applied to lifelines for future earthquakes. The vulnerability of buried pipelines to earthquakes and liquefaction has been observed during several previous earthquakes, and there are many comprehensive reports about such events. One of the main reasons for the impairment of buried pipelines during an earthquake is liquefaction. The necessary conditions for this phenomenon are loose sandy soil, saturation of the soil layer, and sufficient earthquake intensity. Because pipeline structures are very different from other structures (being long and having light mass), comparing the results of previous earthquakes with those for other structures shows that the danger of liquefaction for buried pipelines is not high unless governing parameters such as earthquake intensity and soil looseness are severe. Recent liquefaction research on buried pipelines includes experimental and theoretical studies as well as damage investigations during actual earthquakes. The damage investigations have revealed that the damage ratio of pipelines (number/km) has much larger values in liquefied ground than in shaken ground without liquefaction, according to damage statistics from past severe earthquakes, and that damage to joints and to pipelines connected to manholes was remarkable. The purpose of this research is a numerical study of buried pipelines under the effect of liquefaction, using the 2013 Dashti (Iran) earthquake as a case study. The water supply and electrical distribution systems of this township were interrupted during the earthquake, and water transmission pipelines were damaged severely due to the occurrence of liquefaction. The model consists of a polyethylene pipeline, 100 meters in length and 0.8 meters in diameter, which is covered by light sandy soil at a burial depth of 2.5 meters from the surface.
Since the finite element method has been used relatively successfully to solve geotechnical problems, we used this method for the numerical analysis. Evaluating this case requires information such as geotechnical data, a classification of earthquake levels, determination of the parameters governing the probability of liquefaction, and three-dimensional numerical finite element modeling of the interaction between soil and pipelines. The results of this study indicate that the effect of liquefaction on buried pipelines is a function of pipe diameter, soil type, and peak ground acceleration. There is a clear increase in the percentage of damage with increasing liquefaction severity. The results also indicate that, although in this form of analysis the damage is always associated with a certain pipe material, the nominally defined “failures” are dominated by failures of particular components (joints, connections, fire hydrant details, crossovers, laterals) rather than material failures. Finally, some retrofit suggestions are given in order to decrease the risk of liquefaction damage to buried pipelines.
Keywords: liquefaction, buried pipelines, lifelines, earthquake, finite element method
Procedia PDF Downloads 513
252 Cost Efficient Receiver Tube Technology for Eco-Friendly Concentrated Solar Thermal Applications
Authors: M. Shiva Prasad, S. R. Atchuta, T. Vijayaraghavan, S. Sakthivel
Abstract:
The world is in need of efficient energy conversion technologies which are affordable, accessible, sustainable, and eco-friendly. Solar energy is one of the cornerstones of the world’s economic growth because of its abundance and zero carbon pollution. Among the various solar energy conversion technologies, solar thermal technology has attracted substantial renewed interest due to its diversity and compatibility with various applications. Solar thermal systems employ concentrators, tracking systems, and heat engines for electricity generation, which leads to high cost and complexity in comparison with photovoltaics; however, the technology offers distinct thermal energy storage capability and dispatchable electricity, which creates tremendous attraction. Beyond that, employing a cost-effective solar selective receiver tube in a concentrating solar thermal (CST) system improves the energy conversion efficiency and directly reduces the cost of the technology. In addition, the development of solar receiver tubes by low-cost methods which can offer high optical performance and corrosion resistance in an open-air atmosphere would be beneficial for low- and medium-temperature applications. In this regard, our work opens up an approach which has the potential to achieve cost-effective energy conversion. We have developed a highly selective tandem absorber coating through a facile wet chemical route combining chemical oxidation, sol-gel, and nanoparticle coating methods. The developed tandem absorber coating has a gradient refractive index on stainless steel (SS 304) and exhibited high optical performance (α ≥ 0.95 and ε ≤ 0.14). The first absorber layer (Cr-Mn-Fe oxides) was developed by controlled oxidation of SS 304 in a chemical bath reactor. A second composite layer of ZrO2-SiO2 was applied on the chemically oxidized substrate by the sol-gel dip coating method to serve as an optical enhancing and corrosion resistant layer.
Finally, an antireflective layer (MgF2) was deposited on the second layer to achieve > 95% absorption. The developed tandem layer exhibited good thermal stability up to 250 °C under open-air atmospheric conditions and superior corrosion resistance (withstanding > 200 h in the salt spray test (ASTM B117)). After the successful development of a coating with the targeted properties at laboratory scale, a prototype 1 m tube was demonstrated with excellent uniformity and reproducibility. Moreover, it has been validated under standard laboratory test conditions as well as in field conditions in comparison with a commercial receiver tube. The presented strategy can be widely adapted to develop highly selective coatings for a variety of CST applications, ranging from hot water and solar desalination to industrial process heat and power generation. The high-performance, cost-effective medium-temperature receiver tube technology has attracted many industries, and recently the technology has been transferred to Indian industry.
Keywords: concentrated solar thermal system, solar selective coating, tandem absorber, ultralow refractive index
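A common way to summarize the reported optical values for a selective absorber is the spectral selectivity ratio α/ε. This metric is our illustrative addition, assuming the limiting values stated above; the abstract itself reports only α and ε.

```python
# Reported optical properties of the tandem absorber coating (limiting values):
alpha = 0.95    # solar absorptance
epsilon = 0.14  # thermal emittance

# Spectral selectivity, a standard screening metric for selective absorbers
# (assumed metric, not used in the abstract):
selectivity = alpha / epsilon
print(round(selectivity, 2))  # 6.79
```

A higher ratio indicates that the coating absorbs sunlight strongly while re-radiating little thermal energy.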
Procedia PDF Downloads 89
251 „Real and Symbolic in Poetics of Multiplied Screens and Images“
Authors: Kristina Horvat Blazinovic
Abstract:
In the context of a work of art, one can talk about the idea-concept-term-intention expressed by the artist through various forms of repetition (external, material, visible repetition). Such repetitions of elements (images in space, or moving visual and sound images in time) suggest a "covert", "latent" ("dressed") repetition, i.e., a "hidden", "latent" term-intention-idea. Repeating in this way reveals a "deeper truth" that the viewer needs to decode and which is hidden "under" the technical manifestation of the multiplied images. It is not only images, sounds, and screens that are repeated; something else is repeated through them as well, even if, in some cases, it is the very idea of repetition that is repeated. This paper examines serial images and single-channel or multi-channel artworks in the field of video/film art and video installations which in some way imply the concepts of repetition and multiplication. Moving or static images and screens (as multi-screens) are repeated in time and space. The categories of the real and the symbolic partly refer to the Lacanian registers, i.e., the Imaginary-Symbolic-Real trinity that represents the orders within which human subjectivity is established. Authors such as Bruce Nauman, VALIE EXPORT, Ragnar Kjartansson, Wolf Vostell, Shirin Neshat, Paul Sharits, Harun Farocki, Dalibor Martinis, Andy Warhol, Douglas Gordon, Bill Viola, Frank Gillette, Ira Schneider, and Marina Abramović problematize, in different ways, the concept and procedures of multiplication and repetition, not in the sense of "copying" or "repetition" of reality or of an original, but as repeated repetitions of the simulacrum. The referenced works of art are often connected by the theme of the traumatic. Repetitions of images and situations are a response to the traumatic (experience); repetition itself is a symptom of trauma. On the other hand, repeating and multiplying traumatic images either produces a new traumatic effect or cancels the original one.
Reflections on repetition as a temporal and spatial phenomenon are in line with the chapters that link philosophical considerations of space, time, and experienced temporality with their manifestation in works of art. The observations about time and the relation of perception and memory follow Henri Bergson and his conception of duration (durée) as a "quality of quantity." Video works intended to be displayed as a video loop express the idea of infinite duration ("pure time," according to Bergson). The loop wants to be always present, to fixate itself in time. Wholeness is unrecognizable because the intention is to make the effect infinitely cyclic. Reflections on time and space end with considerations of the occurrence and effects of temporal and spatial intervals as places and moments "between", points of connection and separation, of continuity and stopping, with reference to the "interval theory" of the Soviet filmmaker Dziga Vertov. The range of possibilities that can be explored in the interval mode is wide. Intervals represent the perception of time and space in the form of pauses, interruptions, and breaks (e.g., emotional, dramatic, or rhythmic); they denote emptiness or silence, distance, proximity, interstitial space, or a gap between various states.
Keywords: video installation, performance, repetition, multi-screen, real and symbolic, loop, video art, interval, video time
Procedia PDF Downloads 173
250 PolyScan: Comprehending Human Polymicrobial Infections for Vector-Borne Disease Diagnostic Purposes
Authors: Kunal Garg, Louise Theusen Hermansan, Kanoktip Puttaraska, Oliver Hendricks, Heidi Pirttinen, Leona Gilbert
Abstract:
The Germ Theory (one infectious determinant is equal to one disease) has unarguably advanced our capability to diagnose and treat infectious diseases over the years. Nevertheless, the advent of technology, climate change, and volatile human behavior have brought about drastic changes in our environment, leading us to question the relevance of the Germ Theory in our day: will vector-borne disease (VBD) sufferers produce multiple immune responses when tested for multiple microbes? Vector-diseased patients producing multiple immune responses to different microbes would evidently suggest human polymicrobial infections (HPI). Current diagnostic tools are largely unequipped with the research findings that would aid in diagnosing patients for polymicrobial infections. This shortcoming has caused misdiagnosis at very high rates, consequently diminishing patients' quality of life due to inadequate treatment. Equipped with state-of-the-art scientific knowledge, PolyScan intends to address the pitfalls in current VBD diagnostics. PolyScan is a multiplex and multifunctional enzyme-linked immunosorbent assay (ELISA) platform that can test for numerous VBD microbes and allows simultaneous screening for multiple types of antibodies. To validate PolyScan, Lyme borreliosis (LB) and spondyloarthritis (SpA) patient groups (n = 54 each) were tested for Borrelia burgdorferi, Borrelia burgdorferi round body (RB), Borrelia afzelii, Borrelia garinii, and Ehrlichia chaffeensis against IgM and IgG antibodies. LB serum samples were obtained from Germany and SpA serum samples were obtained from Denmark under the relevant ethical approvals. The SpA group represented the chronic LB stage because reactive arthritis (an SpA subtype) in the form of Lyme arthritis is linked to LB. It was hypothesized that patients from both groups would produce multiple immune responses that, as a consequence, would evidently suggest HPI.
It was also hypothesized that the proportion of multiple immune responses in the SpA patient group would be significantly larger than in the LB patient group across both antibodies. It was observed that 26% of LB patients and 57% of SpA patients produced multiple immune responses, in contrast to 33% of LB patients and 30% of SpA patients who produced solitary immune responses, when tested against IgM. Similarly, 52% of LB patients and an astounding 73% of SpA patients produced multiple immune responses, in contrast to 30% of LB patients and 8% of SpA patients who produced solitary immune responses, when tested against IgG. Interestingly, IgM immune dysfunction was also recorded in both patient groups. Atypically, 6% of the 18% of LB patients unresponsive with the IgG antibody produced multiple immune responses with the IgM antibody. Similarly, 12% of the 19% of SpA patients unresponsive with the IgG antibody produced multiple immune responses with the IgM antibody. Thus, the results not only supported the hypotheses but also suggested that IgM may atypically prevail longer than IgG. The PolyScan concept will aid clinicians in screening patients for early, persistent, late, polymicrobial, and immune-dysfunction conditions linked to different VBDs. PolyScan provides a paradigm shift for the VBD diagnostic industry that will drastically shorten patients' time to receive adequate treatment.
Keywords: diagnostics, immune dysfunction, polymicrobial, TICK-TAG
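As a rough illustration of the second hypothesis, the reported IgG proportions (52% of 54 LB vs. 73% of 54 SpA patients) can be compared with a two-proportion z-test. The choice of test and the counts reconstructed by rounding the percentages are assumptions for this sketch, not statistics stated in the abstract:

```python
from math import sqrt, erfc

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns z and the two-sided p-value."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                   # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p2 - p1) / se
    p_value = erfc(abs(z) / sqrt(2))            # two-sided normal tail
    return z, p_value

# IgG multiple immune responses: ~52% of 54 LB vs. ~73% of 54 SpA patients
z, p = two_proportion_z(round(0.52 * 54), 54, round(0.73 * 54), 54)
print(f"z = {z:.2f}, p = {p:.3f}")
```

For these reconstructed counts the test gives z of roughly 2.2, i.e. a difference significant at the 5% level, consistent with the abstract's claim.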
Procedia PDF Downloads 327
249 Phytochemical and Antimicrobial Properties of Zinc Oxide Nanocomposites on Multidrug-Resistant E. coli Enzyme: In-vitro and in-silico Studies
Authors: Callistus I. Iheme, Kenneth E. Asika, Emmanuel I. Ugwor, Chukwuka U. Ogbonna, Ugonna H. Uzoka, Nneamaka A. Chiegboka, Chinwe S. Alisi, Obinna S. Nwabueze, Amanda U. Ezirim, Judeanthony N. Ogbulie
Abstract:
Antimicrobial resistance (AMR) is a major threat to the global health sector. Zinc oxide nanocomposites (ZnONCs), composed of zinc oxide nanoparticles and phytochemicals from Azadirachta indica aqueous leaf extract, were assessed for their physicochemical properties and their in silico and in vitro antimicrobial activity against multidrug-resistant Escherichia coli enzymes. Gas chromatography coupled with mass spectrometry (GC-MS) analysis of the ZnONCs revealed the presence of twenty volatile phytochemical compounds, among which is scoparone. Characterization of the ZnONCs was done using ultraviolet-visible spectroscopy (UV-Vis), energy-dispersive X-ray spectroscopy (EDX), transmission electron microscopy (TEM), scanning electron microscopy (SEM), and X-ray diffraction (XRD). Dehydrogenase converts colorless 2,3,5-triphenyltetrazolium chloride to red triphenyl formazan (TPF), and the rate of formazan formation in the presence of ZnONCs is proportional to enzyme activity; the colored product is extracted, its absorbance determined at 500 nm, and the percentage of enzyme activity calculated. To determine the bioactive components of the ZnONCs, characterize their binding to the enzymes, and evaluate the stability of the enzyme-ligand complexes, density functional theory (DFT) analysis, docking, and molecular dynamics simulations, respectively, were employed. The results showed arrays of ZnONCs nanorods with maximal absorption wavelengths of 320 nm and 350 nm, thermally stable over the temperature range of 423.77 to 889.69 ℃. The in vitro study assessed the dehydrogenase-inhibitory properties of the ZnONCs, a conjugate of ZnONCs and ampicillin (ZnONCs-amp), the aqueous leaf extract of A. indica, and ampicillin (standard drug). The findings revealed that at a concentration of 500 μm/mL, 57.89% of enzyme activity was inhibited by ZnONCs, compared to 33.33% by the standard drug (ampicillin) and 21.05% by the aqueous leaf extract of A. indica, respectively.
The inhibition of enzyme activity by the ZnONCs at 500 μm/mL was further enhanced to 89.74% by conjugation with ampicillin. The in silico study on the ZnONCs revealed scoparone as the most viable competitor of nicotinamide adenine dinucleotide (NAD⁺) for the coenzyme binding pocket on E. coli malate and histidinol dehydrogenase. From the findings, it can be concluded that the scoparone component of the nanocomposites, in synergy with the zinc oxide nanoparticles, inhibited E. coli malate and histidinol dehydrogenase by competitively binding to the NAD⁺ pocket, and that conjugation of the ZnONCs with ampicillin further enhanced the antimicrobial efficiency of the nanocomposite against multidrug-resistant E. coli.
Keywords: antimicrobial resistance, dehydrogenase activities, E. coli, zinc oxide nanocomposites
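The percentage-activity calculation described in the assay can be sketched as follows; the absorbance values are hypothetical, chosen only to reproduce the reported 57.89% inhibition:

```python
def percent_inhibition(a_control, a_sample):
    """Dehydrogenase inhibition (%) from formazan absorbance read at 500 nm."""
    return (a_control - a_sample) / a_control * 100.0

# hypothetical absorbances: untreated control vs. ZnONCs-treated suspension,
# chosen so the sketch reproduces the reported 57.89% inhibition
a_control = 0.950
a_znoncs = a_control * (1 - 0.5789)
print(f"{percent_inhibition(a_control, a_znoncs):.2f}% inhibition")
```

The same relation, applied to the conjugate's absorbance, would yield the reported 89.74% for ZnONCs-amp.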
Procedia PDF Downloads 49
248 Sol-Gel Derived Yttria-Stabilized Zirconia Nanoparticles for Dental Applications: Synthesis and Characterization
Authors: Anastasia Beketova, Emmanouil-George C. Tzanakakis, Ioannis G. Tzoutzas, Eleana Kontonasaki
Abstract:
In restorative dentistry, yttria-stabilized zirconia (YSZ) nanoparticles can be applied as fillers to improve the mechanical properties of various resin-based materials. Using sol-gel synthesis as a simple and cost-effective method, nano-sized YSZ particles of high purity can be produced. The aim of this study was to synthesize YSZ nanoparticles by the Pechini sol-gel method at different temperatures and to investigate their composition, structure, and morphology. YSZ nanopowders were synthesized by the sol-gel method using zirconium oxychloride octahydrate (ZrOCl₂·8H₂O) and yttrium nitrate hexahydrate (Y(NO₃)₃·6H₂O) as precursors, with the addition of acid chelating agents to control the hydrolysis and gelation reactions. The obtained powders underwent TG-DTA analysis and were sintered at three different temperatures, 800, 1000, and 1200 °C, for 2 hours. Their composition and morphology were investigated by Fourier Transform Infrared Spectroscopy (FTIR), X-Ray Diffraction Analysis (XRD), Scanning Electron Microscopy with an associated Energy-Dispersive X-ray analyzer (SEM-EDX), Transmission Electron Microscopy (TEM), and Dynamic Light Scattering (DLS). FTIR and XRD analysis showed the presence of a pure tetragonal phase in the nanopowders. By increasing the calcination temperature, the crystallinity of the materials increased, with crystallite size reaching 47.2 nm for the YSZ1200 specimens. SEM analysis at high magnifications and DLS analysis showed submicron-sized particles with good dispersion and low agglomeration, which increased in size as the sintering temperature was elevated. TEM images of the YSZ1000 specimen show that the zirconia nanoparticles are uniform in size and shape, with an average particle size of about 50 nm. The electron diffraction patterns clearly revealed ring patterns of the polycrystalline tetragonal zirconia phase. Pure YSZ nanopowders were thus successfully synthesized by the sol-gel method at different temperatures.
Their size is small and uniform, allowing their incorporation into dental luting resin cements to improve the cements' mechanical properties and possibly enhance the bond strength of demanding dental ceramics, such as zirconia, to the tooth structure. This research is co-financed by Greece and the European Union (European Social Fund, ESF) through the Operational Programme 'Human Resources Development, Education and Lifelong Learning 2014-2020' in the context of the project 'Development of zirconia adhesion cements with stabilized zirconia nanoparticles: physicochemical properties and bond strength under aging conditions' (MIS 5047876).
Keywords: dental cements, nanoparticles, sol-gel, yttria-stabilized zirconia, YSZ
Procedia PDF Downloads 147
247 Force Sensor for Robotic Graspers in Minimally Invasive Surgery
Authors: Naghmeh M. Bandari, Javad Dargahi, Muthukumaran Packirisamy
Abstract:
Robot-assisted minimally invasive surgery (RMIS) has been widely performed around the world over the last two decades. RMIS demonstrates significant advantages over conventional surgery, e.g., improving the accuracy and dexterity of the surgeon, providing 3D vision, motion scaling, and hand-eye coordination, decreasing tremor, and reducing X-ray exposure for surgeons. Despite these benefits, surgeons cannot touch the surgical site and perceive tactile information, because the robots are operated remotely. A literature survey identified the lack of force feedback as the riskiest limitation of the existing technology. Without perception of the tool-tissue contact force, the surgeon might apply an excessive force, causing tissue laceration, or an insufficient force, causing tissue slippage. The primary use of force sensors has been to measure the tool-tissue interaction force in real time, in situ. The design of a tactile sensor is subject to a set of requirements, e.g., biocompatibility, electrical passivity, MRI compatibility, miniaturization, and the ability to measure static and dynamic force. In this study, a planar optical fiber-based sensor to be mounted on the surgical grasper was proposed. It was developed based on the light intensity modulation principle. The deflectable part of the sensor was a beam on rigid substrates, modeled as a cantilever Euler-Bernoulli beam. A semi-cylindrical indenter was attached to the bottom surface of the beam at mid-span. An optical fiber was secured at both ends on the same rigid substrates, with the indenter in contact with the fiber. An external force on the sensor deflected the beam and the optical fiber simultaneously; the resulting micro-bending of the optical fiber caused a loss of light power. The sensor was simulated and studied using finite element methods. A laser beam with 800 nm wavelength and 5 mW power was used as the input to the optical fiber, and the output power was measured using a photodetector.
The voltage from the photodetector was calibrated against the external force for a chirp input (0.1-5 Hz). The range, resolution, and hysteresis of the sensor were studied under monotonic and harmonic external forces of 0-2.0 N at 0 and 5 Hz, respectively. The results confirmed the validity of the proposed sensing principle, and the sensor demonstrated acceptable linearity (R² > 0.9). A minimum external force was observed below which no power loss was detectable; it is postulated that this phenomenon is attributable to the critical angle of the optical fiber for total internal reflection. The experimental results showed negligible hysteresis (R² > 0.9) and were in fair agreement with the simulations. In conclusion, the suggested planar sensor is assessed to be a cost-effective, feasible, and easy-to-use solution, as it can be miniaturized and integrated at the tip of robotic graspers. Geometrical and optical factors affecting the minimum sensible force and the working range of the sensor should be studied and optimized. The design is intrinsically scalable and meets all the design requirements; therefore, it has significant potential for industrialization and mass production.
Keywords: force sensor, minimally invasive surgery, optical sensor, robotic surgery, tactile sensor
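The linearity assessment (R² > 0.9) amounts to a least-squares calibration of photodetector voltage against applied force. A minimal sketch, with hypothetical calibration points rather than the study's measured data:

```python
def linear_fit_r2(x, y):
    """Ordinary least-squares line y = a*x + b and coefficient of determination R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx                 # slope = sensor sensitivity (V/N)
    b = my - a * mx               # intercept (offset voltage)
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

# hypothetical calibration points over the 0-2.0 N working range
force = [0.0, 0.5, 1.0, 1.5, 2.0]      # applied force, N
volts = [0.02, 0.24, 0.49, 0.71, 0.97]  # photodetector output, V
a, b, r2 = linear_fit_r2(force, volts)
print(f"sensitivity = {a:.3f} V/N, R^2 = {r2:.4f}")
```

A fit of this kind passing the R² > 0.9 criterion is what the abstract reports for both the monotonic and harmonic loading cases.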
Procedia PDF Downloads 230
246 Analysis of Resistance and Virulence Genes of Gram-Positive Bacteria Detected in Calf Colostrums
Authors: C. Miranda, S. Cunha, R. Soares, M. Maia, G. Igrejas, F. Silva, P. Poeta
Abstract:
The worldwide inappropriate use of antibiotics has increased the emergence of antimicrobial-resistant microorganisms isolated from animals, humans, food, and the environment. To combat this complex and multifaceted problem, it is essential to know the prevalence of such organisms in livestock animals and the possible routes of transmission among animals and between animals and humans. Enterococci, in particular E. faecalis and E. faecium, are among the most common nosocomial bacteria, causing infections in animals and humans. Thus, the aim of this study was to characterize resistance and virulence-factor genes in two enterococci species isolated from calf colostrum on Portuguese dairy farms. The 55 enterococci isolates (44 E. faecalis and 11 E. faecium) were tested for the presence of resistance genes for the following antibiotics: erythromycin (ermA, ermB, and ermC), tetracycline (tetL, tetM, tetK, and tetO), quinupristin/dalfopristin (vatD and vatE), and vancomycin (vanB). Of these, 25 isolates (15 E. faecalis and 10 E. faecium) have so far been tested for 8 virulence-factor genes (esp, ace, gelE, agg, cpd, cylA, cylB, and cylLL). The resistance and virulence genes were detected by PCR, using specific primers and conditions; negative and positive controls were used in all PCR assays. All enterococci isolates showed resistance to erythromycin and tetracycline through the presence of the genes ermB (n=29, 53%), ermC (n=10, 18%), tetL (n=49, 89%), tetM (n=39, 71%), and tetK (n=33, 60%). Only two (4%) E. faecalis isolates showed the presence of the tetO gene. No vancomycin resistance genes were found. The virulence genes detected in both species were cpd (n=17, 68%), agg (n=16, 64%), ace (n=15, 60%), esp (n=13, 52%), gelE (n=13, 52%), and cylLL (n=8, 32%). In general, each isolate showed at least three virulence genes. No virulence genes were found in three E. faecalis isolates, and only E. faecalis isolates carried the virulence genes cylA (n=4, 16%) and cylB (n=6, 24%).
In conclusion, the colostrum consumed by these calves contained antibiotic-resistant enterococci harboring virulence genes. This genotypic characterization is crucial for controlling antibiotic-resistant bacteria through the implementation of restrictive measures safeguarding public health. Acknowledgements: This work was funded by the R&D Project CAREBIO2 (Comparative assessment of antimicrobial resistance in environmental biofilms through proteomics - towards innovative theragnostic biomarkers), with reference NORTE-01-0145-FEDER-030101 and PTDC/SAU-INF/30101/2017, financed by the European Regional Development Fund (ERDF) through the Northern Regional Operational Program (NORTE 2020) and the Foundation for Science and Technology (FCT). This work was also supported by the Associate Laboratory for Green Chemistry (LAQV), which is financed by national funds from FCT/MCTES (UIDB/50006/2020 and UIDP/50006/2020).
Keywords: antimicrobial resistance, calf, colostrums, enterococci
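The reported prevalences are simple proportions of the 55 isolates tested; a small sketch verifying the arithmetic for the resistance genes, using the counts from the abstract:

```python
def prevalence(positives, n_isolates):
    """Gene prevalence as a percentage of the isolates tested."""
    return positives / n_isolates * 100.0

# positive-isolate counts reported for the 55 enterococci isolates
genes = {"ermB": 29, "ermC": 10, "tetL": 49, "tetM": 39, "tetK": 33}
for gene, n in genes.items():
    print(f"{gene}: {prevalence(n, 55):.0f}%")
```

Rounding these to whole percentages reproduces the figures quoted in the abstract (53%, 18%, 89%, 71%, and 60%).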
Procedia PDF Downloads 197
245 Ectopic Osteoinduction of Porous Composite Scaffolds Reinforced with Graphene Oxide and Hydroxyapatite Gradient Density
Authors: G. M. Vlasceanu, H. Iovu, E. Vasile, M. Ionita
Abstract:
Herein, the synthesis and characterization of a highly porous chitosan-gelatin scaffold reinforced with graphene oxide (GO) and hydroxyapatite (HAp) and crosslinked with genipin were targeted. In tissue engineering, chitosan and gelatin are two of the most robust biopolymers, with wide applicability due to their intrinsic biocompatibility, biodegradability, low antigenicity, affordability, and ease of processing. HAp, with its exceptional activity in tuning cell-matrix interactions, is acknowledged for its capability to sustain cellular proliferation by promoting a bone-like native micro-environment for cell adjustment. Genipin is regarded as a top-class cross-linker, while GO is viewed as one of the most performant and versatile fillers. The composites, with the natural-bone HAp/biopolymer ratio, were obtained by cascading sonochemical treatments, followed by uncomplicated casting methods and freeze-drying. Their structure was characterized by Fourier Transform Infrared Spectroscopy and X-ray Diffraction, while the overall morphology was investigated by Scanning Electron Microscopy (SEM) and micro-Computed Tomography (µ-CT). Subsequently, in vitro enzyme degradation was performed to identify the most promising compositions for the development of in vivo assays. Suitable GO dispersion within the biopolymer mix was ascertained, as nanolayer-specific signals are absent from both the FTIR and XRD spectra, and the specific spectral features of the polymers persisted as the GO load was enhanced. Overall, the GO-induced structuration of the material, crystallinity variations, and chemical interactions of the compounds can be correlated with the physical features and bioactivity of each composite formulation. Moreover, the HAp distribution follows an auspicious density gradient tuned for hybrid osseous/cartilage architectures, which was mirrored in the mouse model tests.
Hence, this synthesis route for a natural polymer blend/hydroxyapatite-graphene oxide composite material is anticipated to emerge as an influential formulation in bone tissue engineering. Acknowledgement: This work was supported by the project 'Work-based learning systems using entrepreneurship grants for doctoral and post-doctoral students' (Sisteme de invatare bazate pe munca prin burse antreprenor pentru doctoranzi si postdoctoranzi) - SIMBA, SMIS code 124705, and by a grant of the National Authority for Scientific Research and Innovation, Operational Program Competitiveness Axis 1 - Section E, Program co-financed from the European Regional Development Fund 'Investments for your future', under project number 154/25.11.2016, P_37_221/2015. The nano-CT experiments were possible due to the European Regional Development Fund through the Competitiveness Operational Program 2014-2020, Priority axis 1, ID P_36_611, MySMIS code 107066, INOVABIOMED.
Keywords: biopolymer blend, ectopic osteoinduction, graphene oxide composite, hydroxyapatite
Procedia PDF Downloads 104
244 Fabrication of Highly Conductive Graphene/ITO Transparent Bi-Film through Chemical Vapor Deposition (CVD) and Organic Additives-Free Sol-Gel Techniques
Authors: Bastian Waduge Naveen Harindu Hemasiri, Jae-Kwan Kim, Ji-Myon Lee
Abstract:
Indium tin oxide (ITO) remains the industry-standard transparent conducting oxide, with superior performance. Recently, graphene has emerged as a strong candidate, with unique properties, to replace ITO; graphene/ITO hybrid composites, however, are a newly born field in the electronics world. In this study, a graphene/ITO composite bi-film was synthesized by a two-step process. 10 wt.% tin-doped ITO thin films were produced by an environmentally friendly aqueous sol-gel spin-coating technique with economical salts of In(NO₃)₃·H₂O and SnCl₄, without using organic additives. Oxygen-plasma-treated glass substrates with enhanced wettability and surface free energy (97.6986 mJ/m²) were used to form a void-free, continuous ITO film. The spin-coated samples were annealed at 600 °C for 1 hour under low vacuum to obtain a crystallized ITO film. The crystal structure and crystalline phases of the ITO thin films were analyzed by the X-ray diffraction (XRD) technique, and the Scherrer equation was used to determine the crystallite size. Detailed information about the chemical and elemental composition of the ITO film was obtained by X-ray photoelectron spectroscopy (XPS) and by energy-dispersive X-ray spectroscopy (EDX) coupled with FE-SEM, respectively. Graphene was synthesized by chemical vapor deposition (CVD) on Cu foil at 1000 °C for 1 min. The quality of the synthesized graphene was characterized by Raman spectroscopy (532 nm excitation laser), with data collected at room temperature and normal atmosphere. Surface and cross-sectional observations were made using FE-SEM. The optical transmission and sheet resistance were measured by UV-Vis spectroscopy and a four-point probe head at room temperature, respectively. Electrical properties were also measured using V-I characteristics.
XRD patterns reveal that the films contain the In₂O₃ phase only and exhibit the polycrystalline nature of the cubic structure, with the main peak from the (222) plane. The peak positions of In3d₅/₂ (444.28 eV) and Sn3d₅/₂ (486.7 eV) in the XPS results indicate that indium and tin are present in oxide form only. The UV-visible transmittance is 91.35% at 550 nm, with a specific resistance of 5.88 × 10⁻³ Ω·cm. The G and 2D bands in the Raman spectrum of the synthesized CVD graphene on SiO₂/Si appear at 1582.52 cm⁻¹ and 2690.54 cm⁻¹, respectively. The intensity ratios of 2D to G (I2D/IG) and D to G (ID/IG) were 1.531 and 0.108, respectively. However, when the CVD graphene was transferred onto the ITO-coated glass, the G and 2D peaks appeared at 1573.57 cm⁻¹ and 2668.14 cm⁻¹, respectively; i.e., the G and 2D peak positions were red-shifted by 8.948 cm⁻¹ and 22.396 cm⁻¹. This graphene/ITO bi-film shows modified electrical properties compared with the sol-gel-derived ITO film: the sheet resistance of the bi-film was 12.03% lower than that of the ITO film. Further, the fabricated graphene/ITO bi-film shows 88.66% transmittance at 550 nm.
Keywords: chemical vapor deposition, graphene, ITO, Raman spectroscopy, sol-gel
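The Scherrer crystallite-size estimate used for the ITO films can be sketched as follows; the peak width and Bragg angle below are hypothetical values for an In₂O₃ (222) reflection under Cu Kα radiation, not figures reported in the study:

```python
from math import cos, radians

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Scherrer equation D = K*lambda / (beta * cos(theta)), with beta the
    peak FWHM in radians and theta the Bragg angle (half of 2-theta)."""
    beta = radians(fwhm_deg)
    theta = radians(two_theta_deg / 2)
    return k * wavelength_nm / (beta * cos(theta))

# hypothetical inputs: Cu K-alpha wavelength, 0.25 deg FWHM, (222) peak near 30.6 deg
d = scherrer_size(wavelength_nm=0.15406, fwhm_deg=0.25, two_theta_deg=30.6)
print(f"crystallite size ≈ {d:.1f} nm")
```

In practice the measured FWHM would first be corrected for instrumental broadening before applying the equation.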
Procedia PDF Downloads 260
243 Rapid, Automated Characterization of Microplastics Using Laser Direct Infrared Imaging and Spectroscopy
Authors: Andreas Kerstan, Darren Robey, Wesam Alvan, David Troiani
Abstract:
Over the last 3.5 years, quantum cascade laser (QCL) technology has become increasingly important in infrared (IR) microscopy. Its advantages over Fourier transform infrared (FTIR) microscopy are that large areas of a few square centimeters can be measured in minutes, and that the light-intensive QCL makes it possible to obtain spectra with excellent S/N, even with just one scan. A firmly established application of the laser direct infrared imaging (LDIR) 8700 is the analysis of microplastics. The presence of microplastics in the environment, drinking water, and food chains is gaining significant public interest, and studying their presence requires rapid and reliable characterization of microplastic particles. Significant technical hurdles in microplastic analysis stem from the sheer number of particles to be analyzed in each sample: total particle counts of several thousand are common in environmental samples, while well-treated bottled drinking water may contain relatively few. While visual microscopy has been used extensively, it is prone to operator error and bias and is limited to particles larger than 300 µm. As a result, vibrational spectroscopic techniques such as Raman and FTIR microscopy have become more popular; however, they are time-consuming. There is a demand for rapid and highly automated techniques that measure particle count and size and provide high-quality polymer identification. Analysis directly on the filter that often forms the last stage of sample preparation is highly desirable, as removing a sample-preparation step can both improve laboratory efficiency and decrease opportunities for error. Recent advances in infrared micro-spectroscopy combining a QCL with scanning optics have created a new paradigm, LDIR. It offers improved speed of analysis as well as high levels of automation. Its mode of operation, however, requires an IR-reflective background, and this has, to date, limited the ability to perform direct 'on-filter' analysis.
This study explores the potential to combine the filter membrane with an infrared-reflective surface. By combining an IR-reflective material or coating on a filter membrane with advanced image-analysis and detection algorithms, it is demonstrated that such filters can indeed be used in this way. Vibrational spectroscopic techniques play a vital role in the investigation and understanding of microplastics in the environment and food chain. While vibrational spectroscopy is widely deployed, improvements and novel innovations that increase the speed of analysis and ease of use can provide pathways to higher testing rates and, hence, an improved understanding of the impacts of microplastics in the environment. Due to its capability to measure large areas in minutes, its speed, degree of automation, and excellent S/N, the LDIR could also be employed for various other samples, such as food adulteration, coatings, laminates, fabrics, textiles, and tissues. This presentation will highlight a few of these and focus on the benefits of the LDIR versus classical techniques.
Keywords: QCL, automation, microplastics, tissues, infrared, speed
Procedia PDF Downloads 66
242 Polymer Nanocomposite Containing Silver Nanoparticles for Wound Healing
Authors: Patrícia Severino, Luciana Nalone, Daniele Martins, Marco Chaud, Classius Ferreira, Cristiane Bani, Ricardo Albuquerque
Abstract:
Hydrogels produced with polymers have been used in the development of dressings for wound treatment and tissue revitalization. Our study of polymer nanocomposites containing silver nanoparticles shows antimicrobial activity and applications in wound healing. The effects are linked with slow oxidation and Ag⁺ liberation into the biological environment. Furthermore, bacterial cell-membrane penetration and metabolic disruption through cell-cycle disarrangement also contribute to microbial cell death. The antimicrobial activity of silver has been known for many years, and previous reports show that low silver concentrations are safe for human use. This work aims to develop a hydrogel using natural polymers (sodium alginate and gelatin) combined with silver nanoparticles for wound healing, with antimicrobial properties in cutaneous lesions. The hydrogel development utilized different sodium alginate and gelatin proportions (20:80, 50:50, and 80:20). The incorporation of silver nanoparticles was evaluated at concentrations of 1.0, 2.0, and 4.0 mM. The physico-chemical properties of the formulation were evaluated using ultraviolet-visible (UV-Vis) absorption spectroscopy, Fourier transform infrared (FTIR) spectroscopy, differential scanning calorimetry (DSC), and thermogravimetric (TG) analysis. The morphological characterization was performed using transmission electron microscopy (TEM). A human fibroblast (L929) viability assay was performed, with a minimum inhibitory concentration (MIC) assessment as well as an in vivo cicatrizant test. The results suggested that sodium alginate and gelatin in the (80:20) proportion with 4 mM AgNO₃ yielded the best hydrogel formulation in the UV-Vis analysis. The nanoparticle absorption spectra showed a maximum band around 430-450 nm, which suggests a spheroidal form. The TG curve exhibited two weight-loss events, and DSC indicated one endothermic peak at 230-250 °C, due to sample fusion.
The polymers acted as stabilizers of the nanoparticles, defining their size and shape. The L929 human fibroblast viability assay gave 105% cell viability for the negative control, while gelatin presented 96% viability, alginate:gelatin (80:20) 96.66%, and alginate 100.33% viability. The sodium alginate:gelatin (80:20) formulation exhibited significant antimicrobial activity, with minimal bacterial growth at 1.06 mg·mL⁻¹ for Pseudomonas aeruginosa and 0.53 mg·mL⁻¹ for Staphylococcus aureus. The in vivo results showed a significant reduction in wound surface area: on the seventh day, the hydrogel-nanoparticle formulation had reduced the total area of injury by 81.14%, while the control reached a 45.66% reduction. The results suggest that the silver-hydrogel nanoformulation has potential for wound-dressing therapeutics.
Keywords: nanocomposite, wound healing, hydrogel, silver nanoparticle
Procedia PDF Downloads 101
241 Techno Economic Analysis for Solar PV and Hydro Power for Kafue Gorge Power Station
Authors: Elvis Nyirenda
Abstract:
This research was done to evaluate and propose an optimum measure to enhance the uptake of clean energy technologies such as solar photovoltaics; the study also aims at shifting the country's energy mix away from its overdependence on hydropower, which is susceptible to droughts and climate-change challenges. In the years 2015-2016 and 2018-2019, the country received below-average rainfall due to climate change and a shift in the weather pattern; this resulted in prolonged power outages and load shedding for more than 10 hours per day. ZESCO Limited, the state-owned utility company that owns the infrastructure for the generation, transmission, and distribution of electricity, is seeking alternative sources of energy in order to reduce the overdependence on hydropower stations. One of these alternative sources is solar energy. However, solar power is intermittent in nature, and to smoothen the load curve, investment in robust energy-storage facilities is of great importance to enhance the security and reliability of electricity supply in the country. The methodology of the study examined the historical performance of the Kafue Gorge Upper power station and utilised the hourly generation figures as input data for generation modelling in the Homer software. The average yearly demand was derived from the available data in the system SCADA. The two dams were modelled as a natural battery, with the absolute state of charging and discharging determined by the available water resource and the peak electricity demand. The Homer Energy System software was used to simulate the scheme, incorporating a pumped-storage facility and solar photovoltaic systems. The pumped-hydro scheme works like a natural battery for the conservation of water, with the only losses being evaporation and water leakages from the dams and the turbines.
To address the intermittency of the solar resource and the non-availability of water for hydropower generation, the study concluded that utilising the existing hydropower stations, Kafue Gorge Upper and Kafue Gorge Lower, in conjunction with solar energy will reduce power deficits and increase the country's security of supply. An optimum capacity of 350 MW of solar PV can be integrated while operating the Kafue Gorge power station in both generating and pumping modes, enabling efficient utilisation of water at the Kafue Gorge Upper and Kafue Gorge Lower dams.
Keywords: hydropower, solar power systems, energy storage, photovoltaics, solar irradiation, pumped hydro storage system, supervisory control and data acquisition, Homer energy
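The "natural battery" behaviour of the dams can be illustrated with the standard pumped-storage energy relation E = ρ·g·h·V·η. The reservoir volume, head, and round-trip efficiency below are hypothetical illustration values, not figures from the study:

```python
RHO = 1000.0  # water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def pumped_storage_energy_mwh(volume_m3, head_m, efficiency=0.8):
    """Recoverable energy E = rho*g*h*V*eta, converted from joules to MWh."""
    return RHO * G * head_m * volume_m3 * efficiency / 3.6e9

# hypothetical figures: 1 million m^3 of usable storage at a 400 m gross head
e = pumped_storage_energy_mwh(volume_m3=1e6, head_m=400.0)
print(f"≈ {e:.0f} MWh recoverable")
```

A dispatch model such as the one built in Homer effectively draws on this stored energy when solar output drops and "recharges" it by pumping when solar output exceeds demand.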
Procedia PDF Downloads 117
240 The Role of University in High-Level Human Capital Cultivation in China’s West Greater Bay Area
Authors: Rochelle Yun Ge
Abstract:
Universities have played an active role in China's national development. There has been increasing research interest in the development of higher-education cooperation, talent cultivation and attraction, and innovation in regional development. The Triple Helix model, which indicates that regional innovation and development can be engendered by collaboration among university, industry, and government, is often adopted as the research framework. Research using the Triple Helix model emphasizes the active and often leading role of the university in the knowledge-based economy. Within this framework, universities are conceptualized as key institutions of knowledge production, transmission, and transfer, potentially making critical contributions to regional development. Recent research is almost uniformly consistent in indicating high-level research labour (i.e., doctoral and post-doctoral researchers and academics) as important actors in the innovation ecosystem, given their cross-geographical human capital and resources. In 2019, the development of the Guangdong-Hong Kong-Macao Greater Bay Area (GBA) was officially launched as an important strategy by the Chinese government to boost the regional development of the Pearl River Delta and to support the realization of the 'One Belt One Road' strategy. Human capital formation is at the center of this plan. One of the strategic goals of the GBA development is to evolve into an international educational hub and innovation center with high-level talents. A number of policies have been issued to attract and cultivate human resources in different GBA cities, in particular high-level R&D (research and development) talents such as doctoral and post-doctoral researchers. To better understand the development of this high-level talent hub in the GBA, more empirical consideration should be given to the approaches to talent cultivation and attraction in the GBA.
What remains to be explored are the ways to better attract, train, support, and retain these talents in a cross-systems context. This paper aims to investigate the role of the university in human capital development under China's national agenda of GBA integration through the lens of universities and their actors. Two flagship comprehensive universities are selected as the cases, and 30 interviews with university officials, research leaders, post-doctoral researchers, and doctoral candidates are used for the analysis. In particular, we ask: in what ways have universities aligned their strategies and practices with the Chinese government's GBA development strategy? What strategies and practices have universities developed for the cultivation and attraction of high-level research labour? And what impacts have the universities made on regional development? The main arguments of this research highlight the specific ways in which universities in smaller sub-regions can collaborate in high-level human capital formation and the role policy can play in facilitating such collaborations.
Keywords: university, human capital, regional development, triple-helix model
Procedia PDF Downloads 112