Search results for: target specificity
462 School and Family Impairment Associated with Childhood Anxiety Disorders: Examining Differences in Parent and Child Report
Authors: Melissa K. Hord, Stephen P. Whiteside
Abstract:
Impairment in functioning is a requirement for diagnosing psychopathology, identifying individuals in need of treatment, and documenting improvement with treatment. Further, identifying different types of functional impairment can guide educators and treatment providers. However, most assessment tools focus on symptom severity, and few measures assess the impairment associated with childhood anxiety disorders. The child- and parent-report versions of the Child Sheehan Disability Scale (CSDS) may provide useful information regarding impairment. The purpose of the present study was to examine whether children diagnosed with different anxiety disorders have greater impairment in school or home functioning based on self- or parent report. The sample consisted of 844 children aged 5 to 19 years (mean age 13.43; 61% female; 90.9% Caucasian), including 281 children diagnosed with obsessive-compulsive disorder (OCD), 200 with generalized anxiety disorder (GAD), 176 with social phobia, 83 with separation anxiety, 61 with anxiety not otherwise specified (NOS), 30 with panic disorder, and 13 with panic disorder with agoraphobia. To assess whether children and parents reported greater impairment in school or home functioning, a multivariate analysis of variance was conducted. (The assumptions of independence and homogeneity of variance were checked and met.) A significant difference was found, Pillai's trace = .143, F (4, 28) = 4.19, p < .001, partial eta squared = .04. Post hoc comparisons using the Tukey HSD test indicated that children report significantly greater school impairment with panic disorder (M=5.18, SD=3.28), social phobia (M=4.95, SD=3.20), and OCD (M=4.62, SD=3.32) compared to other diagnoses, whereas parents endorse significantly greater school impairment when their child has a social phobia diagnosis (M=5.70, SD=3.39). 
Interestingly, both children and parents reported greater impairment in family functioning for an OCD diagnosis (child report M=5.37, SD=3.20; parent report M=5.59, SD=3.38) compared to other anxiety diagnoses. (Additional findings for the anxiety disorders associated with less impairment will also be presented.) The results of the current study have important implications for educators and treatment providers who work with anxious children. First, understanding that children and parents differ in how they view impairment related to childhood anxiety can help those working with these families be more sensitive during interactions. Second, the evidence suggests that difficulties in one environment do not necessarily translate to another environment, so caregivers may benefit from a careful explanation of the observations made by educators. Third, the results support the use of the CSDS by treatment providers to identify impairment across environments in order to target interventions more effectively.
Keywords: anxiety, childhood, impairment, school functioning
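As a minimal illustration of the group-comparison logic in this abstract, the sketch below runs a one-way ANOVA followed by Tukey HSD post-hoc comparisons on synthetic impairment scores. It is a simplified univariate stand-in for the study's MANOVA; the group names and all data are illustrative assumptions, not the CSDS dataset.

```python
# Hypothetical sketch: omnibus test plus Tukey HSD post-hoc comparisons,
# analogous to the reported comparisons of school-impairment scores.
# All scores below are synthetic; only the design mirrors the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic school-impairment scores (0-10 scale) for three diagnosis groups
groups = {
    "panic":         rng.normal(5.2, 3.3, 30).clip(0, 10),
    "social_phobia": rng.normal(5.0, 3.2, 176).clip(0, 10),
    "gad":           rng.normal(3.5, 3.0, 200).clip(0, 10),
}

# Omnibus test: do mean impairment scores differ across diagnoses?
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey HSD post-hoc: which pairs of diagnoses differ?
res = stats.tukey_hsd(*groups.values())
print(res)
```

The post-hoc step is what licenses statements like "panic disorder shows greater school impairment than GAD" after a significant omnibus result.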
Procedia PDF Downloads 278
461 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
Keywords: cost prediction, machine learning, project management, random forest, neural networks
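The Random Forest approach described above can be sketched on synthetic data: fit a regressor to activity-level features and read off feature importances to identify cost drivers. The feature names and the data-generating process are assumptions for illustration, not the study's actual dataset.

```python
# Illustrative sketch (synthetic data): a Random Forest predicting
# activity-level cost overruns and reporting feature importances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 500

# Hypothetical activity-level features
scope_changes  = rng.poisson(2, n)        # number of scope-change orders
material_delay = rng.exponential(5, n)    # days of material delivery delay
planned_cost   = rng.uniform(10, 500, n)  # planned activity cost (k$)

# Synthetic target: overruns driven mainly by scope changes and delays,
# mimicking the risk factors the study reports as important
overrun = (0.08 * planned_cost * scope_changes
           + 1.5 * material_delay
           + rng.normal(0, 5, n))

X = np.column_stack([scope_changes, material_delay, planned_cost])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, overrun)

for name, imp in zip(["scope_changes", "material_delay", "planned_cost"],
                     model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

The importances are what turn a black-box predictor into actionable risk information for project managers.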
Procedia PDF Downloads 60
460 A Machine Learning Approach for Efficient Resource Management in Construction Projects
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management
Procedia PDF Downloads 40
459 The Compliance of Safe-Work Behaviors among Undergraduate Nursing Students with Different Clinical Experiences
Authors: K. C. Wong, K. L. Siu, S. N. Ng, K. N. Yip, Y. Y. Yuen, K. W. Lee, K. W. Wong, C. C. Li, H. P. Lam
Abstract:
Background: Previous studies have linked occupational injuries in the nursing profession to repeated bedside nursing care, such as transferring, lifting and manually handling patients. Likewise, undergraduate nursing students are exposed to potential safety hazards because their work is similar in nature to that of registered nurses. In particular, students who work as temporary undergraduate nursing students (TUNS), a part-time clinical job in Hong Kong hospitals that mainly involves providing bedside care, appear to be at high risk of work-related injuries. Several studies have suggested that the level of compliance with safe work behaviors is highly associated with work-related injuries, yet this has rarely been studied among nursing students. This study was conducted to assess and compare the compliance with safe work behaviors and the levels of awareness of different workplace safety issues between undergraduate nursing students with and without TUNS experience. Methods: This was a quantitative descriptive study using convenience sampling. 362 undergraduate nursing students in Hong Kong were recruited. The Safe Work Behavior relating to Patient Handling (SWB-PH) scale was used to assess their compliance with safe work behaviors and their level of awareness of different workplace safety issues. Results: Most of the participants (n=250, 69.1%) worked as TUNS. However, students who worked as TUNS had significantly lower safe-work behavior compliance (mean SWB-PH score = 3.64±0.54) than those who did not (SWB-PH score = 4.21±0.54) (p<0.001). In particular, these students had higher awareness of seeking help and using assistive devices but lower awareness of workplace safety issues and proper work posture than students without TUNS experience. 
The higher engagement in help-seeking behaviors among students with TUNS experience might be explained by their richer clinical experience, which served as a facilitator for seeking help from clinical staff whenever necessary. However, these experienced students were also more likely to bear the risk of occupational injuries and to work alone when no aid was available, which might be related to the busy working environment, heightened work pressures and high expectations placed on TUNS. Eventually, students who work as TUNS might focus on completing their assigned tasks and gradually neglect occupational safety. Conclusion: The findings contribute to an understanding of the level of compliance with safe work behaviors among nursing students with different clinical experiences. The results might guide the modification of current safety protocols and the introduction of multiple clinical training courses to improve nursing students' engagement in safe work behaviors.
Keywords: occupational safety, safety compliance, safe-work behavior, nursing students
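The reported between-group difference in mean SWB-PH scores can be reproduced in spirit with a two-sample test. The sketch below simulates scores from the abstract's reported means and standard deviations (the raw data are not available, so the individual scores are synthetic) and applies Welch's t-test, a standard choice when group sizes differ.

```python
# Hedged sketch: comparing mean SWB-PH compliance between students with and
# without TUNS experience. Group means/SDs come from the abstract; the
# individual scores are simulated, and Welch's t-test is an assumed choice.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
tuns    = rng.normal(3.64, 0.54, 250)   # students with TUNS experience
no_tuns = rng.normal(4.21, 0.54, 112)   # students without TUNS experience

t_stat, p_value = stats.ttest_ind(tuns, no_tuns, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.3g}")
```

With these means, SDs and group sizes, the difference is large relative to the standard error, consistent with the reported p<0.001.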
Procedia PDF Downloads 144
458 Efficient Estimation of Maximum Theoretical Productivity from Batch Cultures via Dynamic Optimization of Flux Balance Models
Authors: Peter C. St. John, Michael F. Crowley, Yannick J. Bomble
Abstract:
Production of chemicals from engineered organisms in a batch culture typically involves a trade-off between productivity, yield, and titer. However, strategies for strain design typically involve designing mutations to achieve the highest yield possible while maintaining growth viability. Such approaches tend to follow the principle of designing static networks with minimum metabolic functionality to achieve desired yields. While these methods are computationally tractable, optimum productivity is likely achieved by a dynamic strategy, in which intracellular fluxes change their distribution over time. One can use multi-stage fermentations to increase either productivity or yield. Such strategies range from simple manipulations (an aerobic growth phase followed by an anaerobic production phase) to more complex genetic toggle switches. Additionally, computational methods can be developed to aid in optimizing two-stage fermentation systems. One can assume an initial control strategy (i.e., a single reaction target) in maximizing productivity, but it is unclear how close this productivity would come to a global optimum. The calculation of maximum theoretical yield in metabolic engineering can help guide strain and pathway selection for static strain design efforts. Here, we present a method for calculating the maximum theoretical productivity of a batch culture system. This method follows the traditional assumptions of dynamic flux balance analysis: internal metabolite fluxes are governed by a pseudo-steady state, while external metabolite fluxes are represented by a dynamic system including Michaelis-Menten or Hill-type regulation. The productivity optimization is achieved via dynamic programming, and accounts explicitly for an arbitrary number of fermentation stages and flux variable changes. We have applied our method to succinate production in two common microbial hosts: E. coli and A. succinogenes. 
The method can be further extended to calculate the complete productivity versus yield Pareto surface. Our results demonstrate that nearly optimal yields and productivities can indeed be achieved with only two discrete flux stages.
Keywords: A. succinogenes, E. coli, metabolic engineering, metabolite fluxes, multi-stage fermentations, succinate
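The two-stage idea can be made concrete with a toy batch-culture model: substrate uptake follows Michaelis-Menten kinetics, flux is routed to biomass before a switch time and to product after it, and a scan over the switch time stands in for the dynamic optimization. All kinetic parameters below are illustrative assumptions, not fitted values from the study.

```python
# Minimal two-stage batch fermentation sketch under Michaelis-Menten uptake.
# Stage 1 routes flux to growth, stage 2 to product; a scan over the switch
# time is a crude stand-in for the paper's dynamic-programming optimization.
import numpy as np

def simulate(switch_t, t_end=40.0, dt=0.01):
    X, S, P = 0.05, 20.0, 0.0           # biomass, substrate, product (g/L)
    vmax, Km, Yxs, Yps = 1.0, 0.5, 0.4, 0.9  # assumed kinetic parameters
    for step in range(int(t_end / dt)):
        t = step * dt
        if S <= 0:
            break
        v = vmax * S / (Km + S) * X     # Michaelis-Menten uptake (g/L/h)
        if t < switch_t:                # stage 1: flux routed to growth
            X += Yxs * v * dt
        else:                           # stage 2: flux routed to product
            P += Yps * v * dt
        S -= v * dt
    productivity = P / t_end                         # g/L/h over the batch
    yield_ps = P / (20.0 - S) if S < 20.0 else 0.0   # product per substrate
    return productivity, yield_ps

# Scan the stage-switch time and keep the most productive schedule
best = max(simulate(ts) + (ts,) for ts in np.arange(1, 20, 1.0))
print(f"best productivity = {best[0]:.3f} g/L/h at switch t = {best[2]:.0f} h")
```

Switching too early starves the production stage of biomass; switching too late leaves too little time to accumulate product, which is exactly the productivity/yield tension the abstract describes.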
Procedia PDF Downloads 217
457 Feminising Football and Its Fandom: The Ideological Construction of Women's Super League
Authors: Donna Woodhouse, Beth Fielding-Lloyd, Ruth Sequerra
Abstract:
This paper explores the structure and culture of the English Football Association (FA), the governing body of soccer in England, in relation to the development of the FA Women's Super League (WSL). In doing so, it examines the organisation's journey from banning the sport in 1921 to establishing the country's first semi-professional female soccer league in 2011. As the FA has a virtual monopoly on defining the structures of the elite game, we attempted to understand its behaviour in the context of broader issues of power, control and resistance by giving voice to the experiences of those affected by its decisions. Observations were carried out at 39 matches over three years. Semi-structured interviews with 17 people involved in the women's game, identified via snowball sampling, were also carried out. Transcripts accompanied detailed field notes and were inductively coded to identify themes. What emerged was the governing body's desire to create a new product, jettisoning the long history of the women's game in order to shape and control the sport in a way that is no longer possible with the elite male club game. The league created was also shaped by traditional conceptualisations of gender, in terms of the portrayal of its style of play and target audience, setting increased participation and spectatorship targets as measures of 'success'. The national governing body has demonstrated pseudo-inclusion and a lack of enthusiasm for the implementation of equity reforms, driven by a belief that the organisation is already representative, fair and accessible. Despite consistent external pressure, the Football Association is still dominated at its most senior levels by men. By claiming to hold a monopoly on expertise around the sport, maintaining complex committee structures and procedures, and keeping membership rules rooted in the amateur game, it remains a deeply gendered organisation, resistant to structural and cultural change. 
In the WSL, the FA's structure and culture have created a franchise over which it retains almost complete control, dictating the terms and conditions of entry and marginalising alternative voices. The organisation presents a feminised version of both play and spectatorship, portraying the sport as a distinct, and lesser, version of soccer.
Keywords: Football Association, organisational culture, soccer, Women's Super League
Procedia PDF Downloads 352
456 Enhancing Disaster Resilience: Advanced Natural Hazard Assessment and Monitoring
Authors: Mariza Kaskara, Stella Girtsou, Maria Prodromou, Alexia Tsouni, Christodoulos Mettas, Stavroula Alatza, Kyriaki Fotiou, Marios Tzouvaras, Charalampos Kontoes, Diofantos Hadjimitsis
Abstract:
Natural hazard assessment and monitoring are crucial in managing the risks associated with fires, floods, and geohazards, particularly in regions prone to these natural disasters, such as Greece and Cyprus. Recent advancements in technology, developed by the BEYOND Center of Excellence of the National Observatory of Athens, have been successfully applied in Greece and are now set to be transferred to Cyprus. The implementation of these advanced technologies in Greece has significantly improved the country's ability to respond to these natural hazards. For wildfire risk assessment, a scalar wildfire occurrence risk index is created based on the predictions of machine learning models. Predicting fire danger is crucial for the sustainable management of forest fires, as it provides essential information for designing effective prevention measures and facilitating response planning for potential fire incidents. A reliable forecast of fire danger is a key component of integrated forest fire management and is heavily influenced by various factors that affect fire ignition and spread. The fire risk model is validated using sensitivity and specificity metrics. For flood risk assessment, a multi-faceted approach is employed, including the application of remote sensing techniques, the collection and processing of data from the most recent population and building census, technical studies and field visits, as well as hydrological and hydraulic simulations. All input data are used to create precise flood hazard maps according to various flooding scenarios, along with detailed flood vulnerability and flood exposure maps, which together produce the flood risk map. Critical points are identified and mitigation measures are proposed for the worst-case scenario: refuge areas are defined, and escape routes are designed. Flood risk maps can assist in raising awareness and saving lives. 
Validation is carried out through historical flood events using remote sensing data and records from the civil protection authorities. For geohazard monitoring (e.g., landslides, subsidence), Synthetic Aperture Radar (SAR) and optical satellite imagery are combined with geomorphological and meteorological data and other landslide/ground-deformation contributing factors. To monitor critical infrastructures, including dams, advanced InSAR methodologies are used to identify surface movements over time. Monitoring these hazards provides valuable information for understanding the underlying processes and could lead to early warning systems that protect people and infrastructure. Validation is carried out through both geotechnical expert evaluations and visual inspections. The success of these systems in Greece has paved the way for their transfer to Cyprus to enhance Cyprus's capabilities in natural hazard assessment and monitoring. This transfer is being made through capacity-building activities, fostering continuous collaboration between Greek and Cypriot experts. Apart from the knowledge transfer, small demonstration actions are implemented to showcase the effectiveness of these technologies in real-world scenarios. In conclusion, the transfer of advanced natural hazard assessment technologies from Greece to Cyprus represents a significant step forward in enhancing the region's resilience to disasters. The EXCELSIOR project funds knowledge exchange, demonstration actions and capacity-building activities and is committed to empowering Cyprus with the tools and expertise to effectively manage and mitigate the risks associated with these natural hazards. Acknowledgement: The authors acknowledge the 'EXCELSIOR' (ERATOSTHENES: Excellence Research Centre for Earth Surveillance and Space-Based Monitoring of the Environment) H2020 Widespread Teaming project.
Keywords: earth observation, monitoring, natural hazards, remote sensing
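The sensitivity and specificity metrics used to validate the fire risk model reduce to simple confusion-matrix counts. The sketch below computes both from predicted versus observed fire/no-fire labels; the labels are illustrative only.

```python
# Sketch of the wildfire model's validation metrics: sensitivity
# (true-positive rate) and specificity (true-negative rate) from
# observed vs. predicted fire/no-fire labels (illustrative data).
def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity

# Example: 1 = fire occurred / fire predicted, 0 = no fire
observed  = [1, 1, 1, 0, 0, 0, 0, 1]
predicted = [1, 1, 0, 0, 0, 1, 0, 1]
sens, spec = sensitivity_specificity(observed, predicted)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```

For fire danger, sensitivity (not missing real fires) and specificity (not over-alerting) pull in opposite directions, which is why both are reported.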
Procedia PDF Downloads 41
455 The Evolution of the Israel Defence Forces' Information Operations: A Case Study of the Israel Defence Forces' Activities in the Information Domain 2006–2014
Authors: Teemu Saressalo
Abstract:
This article examines the evolution of the Israel Defence Forces' information operations over an eight-year timespan, from the 2006 war with Hezbollah to more recent operations such as Pillar of Defence and Protective Edge. To this end, the case study will show a change in the Israel Defence Forces' activities in the information domain. In the 2006 war with Hezbollah in Lebanon, Israel inflicted enormous damage on the Lebanese infrastructure, leaving more than 1,200 people dead and 4,400 injured. Casualties among Hezbollah, Israel's main adversary, were estimated to range from 250 to 700 fighters. Damage to the Lebanese infrastructure was estimated at over USD 2.5bn, with almost 2,000 houses and buildings damaged and destroyed. Even this amount of destruction did not force Hezbollah to yield, and while both sides claimed victory in the war, Israel paid a heavier price in political backlash and loss of reputation, mainly due to failures in the media and the way in which the war was portrayed and perceived in Israel and abroad. Much of this can be credited to Hezbollah's efficient use of the media, and Israel's failure to do so. Israel managed the next conflict it was engaged in completely differently: it had learnt its lessons and built up new ways to counter its adversary's propaganda and media operations. In Operation Cast Lead at the turn of 2009, Hamas, Israel's adversary and Gaza's dominating faction, was not able to utilize the media in the way that Hezbollah had. By creating a virtual and physical barrier around the Gaza Strip, Israel almost totally denied its adversary access to the worldwide media, and by restricting the movement of journalists in the area, Israel could let its voice be heard above all others. Operation Cast Lead began with a deception operation, which caught Hamas totally off guard. 
The 21-day campaign left the Gaza Strip devastated, but did not cause as much protest in Israel during the operation as the 2006 war did, mainly due to almost total Israeli dominance in the information dimension. The most important outcome from the Israeli perspective was that Operation Cast Lead was assessed to be a success, and the operation enjoyed domestic support along with support from many western nations that had condemned Israeli actions in the 2006 war. Later conflicts have shown the same tendency towards virtually total dominance in the information domain, which has had an impact on target audiences across the world. Thus, it is clear that well-planned and well-conducted information operations are able to shape public opinion and influence decision-makers, although Israel might since have been outpaced by its rivals.
Keywords: Hamas, Hezbollah, information operations, Israel Defence Forces
Procedia PDF Downloads 240
454 Interventional Radiology Perception among Medical Students
Authors: Shujon Mohammed Alazzam, Sarah Saad Alamer, Omar Hassan Kasule, Lama Suliman Aleid, Mohammad Abdulaziz Alakeel, Boshra Mosleh Alanazi, Abdullah Abdulelah Altowairqi, Yahya Ali Al-Asiri
Abstract:
Background: Interventional radiology (IR) is a specialized field within radiology that diagnoses and treats several conditions through minimally invasive procedures guided by various radiological techniques. In recent years, the role of IR has expanded to include a variety of organ systems, which has led to an increase in demand for this specialty. The level of knowledge regarding IR is generally low. In this study, we aimed to investigate the perceptions of interventional radiology as a specialty among medical students and medical interns in Riyadh, Saudi Arabia. Methodology: This was a cross-sectional study. The target population was medical students in Riyadh city, KSA, in January 2023. We used a questionnaire in face-to-face interviews with voluntary participants to assess their knowledge of interventional radiology. Permission was obtained from participants to use their information, with the assurance that the data would be used only for scientific purposes. Results: According to the inclusion criteria, a total of 314 students participated in the study; 49% of the participants were in the preclinical years and 51% were in the clinical years. The findings indicate that more than half of the students (58%) thought they had good information about IR, while 42% reported that they had poor information and knowledge about IR. Only 28% of students were planning to take an elective radiology rotation, and 27% said they would consider a career in IR. Among the 73% of participants who would not consider a career in IR, the most common reasons were "I do not find it interesting" (45%), followed by "radiation exposure" (14%). Around half (48%) thought that an interventional radiologist must complete residency training in both radiology and surgery, and just 36% of the students believed that an interventional radiologist must finish training in radiology. 
Regarding procedures performed by interventional radiologists, 66% of students identified lower limb angioplasty and stenting, and 58% identified cardiac angioplasty or stenting. 68% of the students were familiar with angioplasty. When asked about their source of exposure to angioplasty, the largest share (46%) cited a cardiologist, and 16% cited an interventional radiologist. Regarding IR career prospects, 78% of the students believed that interventional radiologists have good career prospects. In conclusion, our findings reveal that perception of and exposure to IR among medical students and interns are generally poor. This has a direct influence on students' decisions regarding IR as a career path. To attract medical students and promote IR as a career, knowledge among medical students and future physicians should be increased through early exposure to IR, which will promote the specialty's growth; the involvement of the Saudi Interventional Radiology Society and the Radiological Society of Saudi Arabia is also essential.
Keywords: knowledge, medical students, perceptions, radiology, interventional radiology, Saudi Arabia
Procedia PDF Downloads 91
453 Characterization of a Lipolytic Enzyme of Pseudomonas nitroreducens Isolated from Mealworm's Gut
Authors: Jung-En Kuan, Whei-Fen Wu
Abstract:
In this study, a symbiotic bacterium from the yellow mealworm's (Tenebrio molitor) midgut was isolated based on its ability to grow on minimal-tributyrin medium. After PCR amplification of its 16S rDNA, the resultant nucleotide sequences were analyzed by phylogenetic tree construction, and the isolate was designated Pseudomonas nitroreducens D-01. Next, by searching the lipolytic enzymes in its protein data bank, one potential lipolytic α/β hydrolase was identified, again using PCR amplification and nucleotide sequencing. To express this lipolytic gene from a plasmid, target-gene primers were designed carrying C-terminal his-tag sequences. Using the vector pET21a, a recombinant lipolytic hydrolase D gene with his-tag nucleotides was successfully cloned under the control of the T7 promoter. After transformation of the resultant plasmids into Escherichia coli BL21 (DE3), IPTG was used to induce expression of the recombinant protein. The protein products were purified on a metal-ion affinity column, and the purified proteins were capable of forming a clear zone on tributyrin agar plates. Enzyme activities were determined by the degradation of p-nitrophenyl esters, with the yellow end-product, p-nitrophenol, measured at OD405 nm. Specifically, this lipolytic enzyme efficiently targets p-nitrophenyl butyrate. It is most active at 40°C and pH 8 in potassium phosphate buffer. In thermal stability assays, the activity of this enzyme drops dramatically when the temperature rises above 50°C. In metal ion assays, MgCl₂ and NH₄Cl enhance the enzyme activity, while MnSO₄, NiSO₄, CaCl₂, ZnSO₄, CoCl₂, CuSO₄, FeSO₄, and FeCl₃ reduce it. NaCl has no effect on the enzyme activity. 
Most organic solvents, such as hexane, methanol, ethanol, acetone, isopropanol, chloroform, and ethyl acetate, decrease the activity of this enzyme; however, its activity increases in the presence of DMSO. All the surfactants tested (Triton X-100, Tween 80, Tween 20, and Brij 35) decrease its lipolytic activity. Using the Lineweaver-Burk double-reciprocal method, the enzyme kinetic constants were determined as Km = 0.488 mM, Vmax = 0.0644 mM/min, and kcat = 3.01x10³ s⁻¹, giving an overall catalytic efficiency kcat/Km of 6.17x10³ mM⁻¹s⁻¹. Based on phylogenetic analyses, this lipolytic protein is classified as a type IV lipase by its homologous conserved region in this lipase family.
Keywords: enzyme, esterase, lipolytic hydrolase, type IV
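The Lineweaver-Burk analysis named above can be sketched numerically: simulate Michaelis-Menten rates with the reported constants (Km = 0.488 mM, Vmax = 0.0644 mM/min) and recover them from a linear fit of 1/v against 1/[S]. The substrate concentrations are illustrative assumptions.

```python
# Hedged sketch of the Lineweaver-Burk double-reciprocal method:
# 1/v = (Km/Vmax) * (1/[S]) + 1/Vmax, so Km and Vmax fall out of a line fit.
import numpy as np

Km_true, Vmax_true = 0.488, 0.0644                 # reported constants
S = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0])       # substrate (mM), assumed
v = Vmax_true * S / (Km_true + S)                  # Michaelis-Menten rates

# Linear fit in double-reciprocal coordinates
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
Vmax_fit = 1.0 / intercept                         # y-intercept = 1/Vmax
Km_fit = slope * Vmax_fit                          # slope = Km/Vmax
print(f"Km = {Km_fit:.3f} mM, Vmax = {Vmax_fit:.4f} mM/min")
```

With noiseless data the fit recovers the constants exactly; with real assay data, weighting or a direct nonlinear fit is usually preferred because the reciprocal transform amplifies error at low [S].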
Procedia PDF Downloads 133
452 Technical and Economic Potential of Partial Electrification of Railway Lines
Authors: Rafael Martins Manzano Silva, Jean-Francois Tremong
Abstract:
The electrification of railway lines increases the speed, power, capacity and energy efficiency of rolling stock. However, this process of electrification is complex and costly. An electrification project is not just about the design of the catenary: it also includes the installation of the structures around electrification, such as substations, electrical isolation, signalling, telecommunication and civil engineering structures. France has more than 30,000 km of railways, of which only 53% are electrified. The other 47% of railways use diesel locomotives and carry only 10% of the traffic (tonne-km). For this reason, a new type of electrification, less expensive than the usual one, is needed to enable the modernization of these railways. One solution could be the use of hybrid trains. This technology opens up new opportunities for less expensive infrastructure development, such as the partial electrification of railway lines. On a partially electrified railway, the power supply of these hybrid trains could come either from the catenary or from an on-board energy storage system (ESS). The on-board ESS would cover the energy needs of the train along the non-electrified zones, while in electrified zones the catenary would feed the train and recharge the on-board ESS. The objective of this paper is to identify the technical and economic potential of the partial electrification of railway lines. The study provides different scenarios of electrification in which the most expensive places to electrify are instead covered by the on-board ESS. The target is to reduce the cost of new electrification projects, i.e. to reduce the cost of the electrification infrastructure without increasing the cost of the rolling stock. In this study, scenarios are constructed as a function of the electrification cost of each structure. 
The cost of electrification varies considerably because the installation of catenary supports in tunnels and on bridges and viaducts is much more expensive than in other zones of the railway. These scenarios are used to describe the power supply system and to choose between the catenary and the on-board energy storage depending on the position of the train on the railway. To identify the influence of each partial electrification scenario on the sizing of the on-board ESS, a model of the railway line and of the rolling stock is developed for a real case: a railway line located in the south of France. The energy consumption and the power demanded at each point of the line for each power supply (catenary or on-board ESS) are provided at the end of the simulation. Finally, the cost of a partial electrification is obtained by adding the civil engineering costs of the zones to be electrified to the cost of the on-board ESS. The study of the technical and economic potential ends with the identification of the most economically interesting electrification scenario.
Keywords: electrification, hybrid, railway, storage
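The zone-by-zone energy accounting described above can be sketched as a simple state-of-charge walk along the line: the ESS discharges in non-electrified zones and recharges under catenary, and the required ESS capacity is the deepest deficit reached. Zone lengths and the per-km consumption/recharge figures below are illustrative assumptions, not results from the study's simulation.

```python
# Toy ESS-sizing sketch for a partially electrified line. SOC is tracked
# relative to a full battery (0 = full); the capacity needed is the worst
# deficit along the route. All figures are illustrative assumptions.
def size_ess(zones, consumption_kwh_per_km=10.0, recharge_kwh_per_km=15.0):
    """Return the minimum ESS capacity (kWh) so the state of charge never
    runs out. zones = [(length_km, electrified_bool), ...]"""
    soc, min_soc = 0.0, 0.0
    for length, electrified in zones:
        if electrified:
            # catenary feeds the train and recharges the ESS (capped at full)
            soc = min(soc + recharge_kwh_per_km * length, 0.0)
        else:
            soc -= consumption_kwh_per_km * length
        min_soc = min(min_soc, soc)
    return -min_soc

# Example route: tunnels/viaducts left non-electrified, plain track electrified
line = [(20, True), (5, False), (15, True), (8, False), (10, True)]
print(f"required ESS capacity ~ {size_ess(line):.0f} kWh")
```

Because the ESS recharges between gaps, the capacity is set by the single worst non-electrified stretch rather than by the sum of all of them, which is what makes skipping only the expensive structures attractive.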
Procedia PDF Downloads 431
451 Gene Expression and Staining Agents: Exploring the Factors That Influence the Electrophoretic Properties of Fluorescent Proteins
Authors: Elif Tugce Aksun Tumerkan, Chris Lowe, Hannah Krupa
Abstract:
Fluorescent proteins are self-sufficient in forming chromophores, with a visible wavelength, from a three-amino-acid sequence within their own polypeptide structure. The chromophore is a molecule that absorbs a photon of light and exhibits an energy transition equal to the energy of the absorbed photon. Fluorescent proteins (FPs) consist of a chain of 238 amino acid residues composed of 11 beta strands forming a cylinder that surrounds an alpha-helix structure. With a better understanding of the chromophore system and the advances in protein engineering in recent years, the properties of FPs offer the potential for new applications. They have been used as sensors and probes in molecular biology and cell-based research, giving a chance to observe the localization, structural variation and movement of FP-tagged cells. For clarifying the functional uses of fluorescent proteins, their electrophoretic properties are among the most important parameters. Sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) analysis is commonly used for determining electrophoretic properties. While many techniques are used for determining the functionality of proteins, SDS-PAGE analysis can only provide a molecular-level assessment of the proteolytic fragments. Before SDS-PAGE analysis, fluorescent proteins need to be successfully purified. Because direct purification of target FPs from the animal is difficult, gene expression is commonly used, which must be done by transformation with a plasmid. Furthermore, the properties of the gel used in electrophoresis and of the staining agents play a key role. In this review, the different factors that influence the electrophoretic properties of fluorescent proteins are explored. Fluorescent protein separation and purification are essential steps before electrophoresis that should be carried out very carefully.
For protein purification, the gene expression process and the steps that follow it are significant. For successful gene expression, the properties of the bacteria selected for expression and of the plasmid used are essential. Each bacterium has its own characteristics to which gene expression is very sensitive, and the procedure used is likewise an important factor for fluorescent protein expression. Other important factors are the gel formula and the staining agents used. The gel formula affects the mobility of specific proteins, and staining with the correct agents is a key step for visualizing the electrophoretic protein bands. The visibility of proteins can change depending on the staining reagents. Overall, this review emphasizes that gene expression and purification have a stronger effect than the electrophoresis protocol and staining agents.
Keywords: cell biology, gene expression, staining agents, SDS-PAGE
Procedia PDF Downloads 194
450 Bidirectional Pendulum Vibration Absorbers with Homogeneous Variable Tangential Friction: Modelling and Design
Authors: Emiliano Matta
Abstract:
Passive resonant vibration absorbers are among the most widely used dynamic control systems in civil engineering. They typically consist of a single-degree-of-freedom mechanical appendage of the main structure, tuned to one structural target mode through frequency and damping optimization. One classical scheme is the pendulum absorber, whose mass is constrained to move along a curved trajectory and is damped by viscous dashpots. Even though the principle is well known, the search for improved arrangements is still under way. In recent years this investigation inspired a type of bidirectional pendulum absorber (BPA), consisting of a mass constrained to move along an optimal three-dimensional (3D) concave surface. For such a BPA, the surface principal curvatures are designed to ensure bidirectional tuning of the absorber to both principal modes of the main structure, while damping is produced either by horizontal viscous dashpots or by vertical friction dashpots connecting the BPA to the main structure. In this paper, a variant of the BPA is proposed, in which damping originates from the variable tangential friction force that develops between the pendulum mass and the 3D surface as a result of a spatially varying friction coefficient pattern. Namely, a friction coefficient is proposed that varies along the pendulum surface in proportion to the modulus of the 3D surface gradient. Under this assumption, the dissipative model of the absorber can be proven to be nonlinear homogeneous in the small-displacement domain. The resulting homogeneous BPA (HBPA) has a fundamental advantage over conventional friction-type absorbers, because its equivalent damping ratio is independent of the amplitude of oscillations, and therefore its optimal performance does not depend on the excitation level. On the other hand, the HBPA is more compact than viscously damped BPAs because it does not require the installation of dampers.
This paper presents the analytical model of the HBPA and an optimal methodology for its design. Numerical simulations of single- and multi-story building structures under wind and earthquake loads are presented to compare the HBPA with classical viscously damped BPAs. It is shown that the HBPA is a promising alternative to existing BPA types and that homogeneous tangential friction is an effective means of realizing systems with amplitude-independent damping.
Keywords: amplitude-independent damping, homogeneous friction, pendulum nonlinear dynamics, structural control, vibration resonant absorbers
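The friction law stated above can be written compactly. Writing h(x, y) for the height of the 3D concave surface and N for the normal force (symbols not used in the abstract itself), the proposed spatially varying friction coefficient and the resulting tangential friction force are:

```latex
\mu(x,y) \;=\; c\,\lVert \nabla h(x,y) \rVert ,
\qquad
F_t \;=\; \mu(x,y)\, N ,
```

with c a design constant. One way to see the homogeneity claim: for a shallow surface such as h = x^2/(2R), the gradient modulus |x|/R, and hence the friction force, grows linearly with the displacement from the vertex, so the energy dissipated per cycle scales with the amplitude squared, just as for a linear viscous damper, making the equivalent damping ratio amplitude-independent.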
Procedia PDF Downloads 149
449 Imaging of Underground Targets with an Improved Back-Projection Algorithm
Authors: Alireza Akbari, Gelareh Babaee Khou
Abstract:
Ground Penetrating Radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted much attention for the detection of shallow, small subsurface targets such as landmines and unexploded ordnance, and also for through-wall imaging in security applications. For the monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of the subsurface targets. With this hyperbolic curve, the resolution along the synthetic aperture direction shows undesired low-resolution features owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of the buried objects is essential in most GPR applications. Therefore, the hyperbolic signature in the space-time GPR image usually needs to be transformed into a focused pattern showing the object's true location and size together with its EM scattering. The common goal in a typical GPR image is to display the information of the spatial location and the reflectivity of an underground object. Therefore, the main challenge of a GPR imaging technique is to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, is first used for image reconstruction. The standard BP algorithm is limited in the presence of strong noise and numerous artifacts, which adversely affect subsequent tasks such as target detection. Thus, an improved BP algorithm based on cross-correlation between the received signals is proposed to reduce noise and suppress artifacts.
To improve the quality of the results of the proposed BP imaging algorithm, a weight factor was designed for each point in the imaging region. Compared to the standard BP algorithm, the improved algorithm produces images of higher quality and resolution. The proposed improved BP algorithm was applied to simulated and real GPR data, and the results showed that it achieves superior artifact suppression and produces images of high quality and resolution. In order to quantitatively describe the imaging results with respect to artifact suppression, a focusing parameter was evaluated.
Keywords: algorithm, back-projection, GPR, remote sensing
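The delay-and-sum back-projection step described above can be sketched as follows. The array shapes and the synthetic geometry are illustrative, and since the abstract does not specify the exact cross-correlation weight, the per-pixel coherence-style weight below is an assumption in the same spirit, not the authors' formula:

```python
import numpy as np

def back_project(bscan, xs, zs, antenna_xs, dt, v):
    """Standard monostatic back-projection: for every image pixel, sum each
    trace at the two-way travel time from antenna to pixel. bscan has shape
    (num antenna positions, num time samples)."""
    n_ant, n_t = bscan.shape
    image = np.zeros((len(zs), len(xs)))
    rows = np.arange(n_ant)
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            delays = 2.0 * np.hypot(antenna_xs - x, z) / v   # two-way travel time
            k = np.rint(delays / dt).astype(int)             # nearest time sample
            valid = k < n_t
            image[iz, ix] = bscan[rows[valid], k[valid]].sum()
    return image

def coherence_weight(contrib):
    """One plausible per-pixel weight in the spirit of the improved BP:
    coherent sum squared over N times the incoherent sum of squares
    (close to 1 when traces agree, small for incoherent clutter)."""
    denom = len(contrib) * float((contrib ** 2).sum())
    return float(contrib.sum()) ** 2 / denom if denom > 0 else 0.0
```

Run on a synthetic B-scan of a single point target, the image maximum lands at the target position, i.e., the hyperbola is collapsed into a focused spot.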
Procedia PDF Downloads 453
448 An Open Trial of Mobile-Assisted Cognitive Behavioral Therapy for Negative Symptoms in Schizophrenia: Pupillometry Predictors of Outcome
Authors: Eric Granholm, Christophe Delay, Jason Holden, Peter Link
Abstract:
Negative symptoms are an important unmet treatment need in schizophrenia. We conducted an open trial of a novel blended intervention called mobile-assisted cognitive behavior therapy for negative symptoms (mCBTn). mCBTn is a weekly group therapy intervention combining in-person and smartphone-based CBT (the CBT2go app) to improve experiential negative symptoms in people with schizophrenia. Both the therapy group and the CBT2go app included recovery goal setting, thought challenging, scheduling of pleasurable activities and social interactions, and pleasure-savoring interventions to modify defeatist attitudes, a target mechanism associated with negative symptoms, and to improve experiential negative symptoms. We tested whether participants with schizophrenia or schizoaffective disorder (N=31) who met prospective criteria for persistent negative symptoms showed improvement in experiential negative symptoms. Retention was excellent (87% at 18 weeks), and the severity of defeatist attitudes and of motivation and pleasure negative symptoms declined significantly in mCBTn with large effect sizes. We also tested whether pupillary responses, a measure of cognitive effort, predicted improvement in negative symptoms in mCBTn. Pupillary responses were recorded at baseline using a Tobii pupillometer during the digit span task with 3-, 6- and 9-digit spans. Mixed models showed that greater dilation during the task at baseline significantly predicted a greater reduction in experiential negative symptoms. Pupillary responses may provide a much-needed prognostic biomarker of which patients are most likely to benefit from CBT. Greater pupil dilation during a cognitive task predicted greater improvement in experiential negative symptoms. Pupil dilation has been linked to motivation and engagement of executive control, so these factors may contribute to benefits in interventions that train cognitive skills to manage negative thoughts and emotions.
The findings suggest mCBTn is a feasible and effective treatment for experiential negative symptoms and justify a larger randomized controlled clinical trial. The findings also provide support for the defeatist attitude model of experiential negative symptoms and suggest that mobile-assisted interventions like mCBTn can strengthen and shorten intensive psychosocial interventions for schizophrenia.
Keywords: cognitive-behavioral therapy, mobile interventions, negative symptoms, pupillometry, schizophrenia
Procedia PDF Downloads 181
447 Development of a Stable RNAi-Based Biological Control for Sheep Blowfly Using Bentonite Polymer Technology
Authors: Yunjia Yang, Peng Li, Gordon Xu, Timothy Mahony, Bing Zhang, Neena Mitter, Karishma Mody
Abstract:
Sheep flystrike is one of the most economically important diseases affecting the Australian sheep and wool industry (>356M annually). Currently, control of Lucilia cuprina relies almost exclusively on chemical controls, and the parasite has developed resistance to nearly all control chemicals used in the past. It is therefore critical to develop an alternative solution for the sustainable control and management of flystrike. RNA interference (RNAi) technologies have been successfully explored in multiple animal industries for developing parasite controls. This research project aims to develop an RNAi-based biological control for sheep blowfly. Double-stranded RNA (dsRNA) has already proven successful against viruses, fungi and insects. However, the environmental instability of dsRNA is a major bottleneck for successful RNAi. Bentonite polymer (BenPol) technology can overcome this problem, as it can be tuned for the controlled release of dsRNA in the challenging pH environment of the blowfly larval gut, prolonging its exposure time to, and uptake by, target cells. To investigate the potential of BenPol technology for dsRNA delivery, four different BenPol carriers were tested for their dsRNA loading capabilities, and three of them were found to be capable of affording dsRNA stability at multiple temperatures (4°C, 22°C, 40°C, 55°C) in sheep serum. Based on the stability results, dsRNA from potential target genes was loaded onto BenPol carriers and tested in larval feeding assays, with three genes resulting in knockdowns. Meanwhile, a primary blowfly embryo cell line (BFEC) derived from L. cuprina embryos was successfully established, intended as an effective insect cell model for preliminary assessment and screening of RNAi efficacy. The results of this study establish that dsRNA is stable when loaded on BenPol particles, unlike naked dsRNA, which is rapidly degraded in sheep serum.
The stable nanoparticle delivery system offered by BenPol technology can protect and increase the inherent stability of dsRNA molecules at higher temperatures in a complex biological fluid like serum, showing promise for future use in enhancing animal protection.
Keywords: flystrike, RNA interference, bentonite polymer technology, Lucilia cuprina
Procedia PDF Downloads 92
446 Clustering-Based Computational Workload Minimization in Ontology Matching
Authors: Mansir Abubakar, Hazlina Hamdan, Norwati Mustapha, Teh Noranis Mohd Aris
Abstract:
In order to build a matching pattern for each class correspondence of an ontology, it is required to specify a set of attribute correspondences across two corresponding classes by clustering. Clustering reduces the number of potential attribute correspondences considered in the matching activity, which significantly reduces the computational workload; otherwise, all attributes of a class would have to be compared with all attributes of the corresponding class. Most existing ontology matching approaches lack scalable attribute discovery methods, such as cluster-based attribute searching. This problem makes the ontology matching activity computationally expensive. It is therefore vital in ontology matching to design a scalable element or attribute correspondence discovery method that reduces the number of potential element correspondences during mapping, thereby reducing the computational workload of the matching process as a whole. The objectives of this work are 1) to design a clustering method for discovering similar attribute correspondences and relationships between ontologies, and 2) to discover element correspondences by classifying the elements of each class based on their value features using the K-medoids clustering technique. Discovering attribute correspondences is highly required for comparing instances when matching two ontologies. During the matching process, any two instances across two different data sets should be compared on their attribute values, so that they can be regarded as the same or not. Intuitively, any two instances that come from classes across which there is a class correspondence are likely to be identical to each other. Besides, any two instances that hold more similar attribute values are more likely to be matched than ones with less similar attribute values. Most of the time, similar attribute values exist in the two instances across which there is an attribute correspondence.
This work presents how to classify the attributes of each class with K-medoids clustering and then map the clustered groups by their statistical value features. We also show how to map the attributes of a clustered group to the attributes of the mapped clustered group, generating a set of potential attribute correspondences that is then used to generate a matching pattern. The K-medoids clustering phase largely reduces the number of non-corresponding attribute pairs considered when comparing instances, as only attribute pairs whose coverage probability reaches 100% and attributes above the specified threshold are considered as potential attributes for matching. Using clustering reduces the number of potential element correspondences to be considered during the mapping activity, which in turn reduces the computational workload significantly. Otherwise, every element of a class in the source ontology would have to be compared with every element of the corresponding class in the target ontology. K-medoids can ably cluster the attributes of each class, so that a proportion of non-corresponding attribute pairs is not considered when constructing the matching pattern.
Keywords: attribute correspondence, clustering, computational workload, k-medoids clustering, ontology matching
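The K-medoids step can be sketched in plain Python. The two-dimensional points below are invented stand-ins for the statistical value features of attributes (the concrete features used in the work are not given in the abstract):

```python
import math
import random

def assign(points, medoids, dist):
    """Assign every point index to its nearest medoid index."""
    clusters = {m: [] for m in medoids}
    for i, p in enumerate(points):
        nearest = min(medoids, key=lambda m: dist(p, points[m]))
        clusters[nearest].append(i)
    return clusters

def kmedoids(points, k, dist, n_iter=100, seed=0):
    """Basic K-medoids: alternate nearest-medoid assignment with choosing,
    inside each cluster, the member minimizing total in-cluster distance."""
    rng = random.Random(seed)
    medoids = rng.sample(range(len(points)), k)
    for _ in range(n_iter):
        clusters = assign(points, medoids, dist)
        new_medoids = [
            min(members, key=lambda c: sum(dist(points[c], points[j]) for j in members))
            for members in clusters.values()
        ]
        if set(new_medoids) == set(medoids):   # converged
            break
        medoids = new_medoids
    return medoids, assign(points, medoids, dist)

euclid = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
```

In the matching setting, each point would be the feature vector of one attribute; only attributes falling in corresponding clusters are then paired, which is what shrinks the set of candidate correspondences.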
Procedia PDF Downloads 250
445 Nanoimprinted-Block Copolymer-Based Porous Nanocone Substrate for SERS Enhancement
Authors: Yunha Ryu, Kyoungsik Kim
Abstract:
Raman spectroscopy is one of the most powerful techniques for chemical detection, but its low sensitivity, originating from the extremely small cross-section of Raman scattering, limits its practical use. To overcome this problem, Surface Enhanced Raman Scattering (SERS) has been intensively studied for several decades. Because the SERS effect is mainly induced by strong electromagnetic near-field enhancement resulting from the localized surface plasmon resonance of metallic nanostructures, it is important to design plasmonic structures with a high density of electromagnetic hot spots for a SERS substrate. One useful fabrication method is to use a porous nanomaterial as a template for the metallic structure. Internal pores on a scale of tens of nanometers can act as strong EM hot spots by confining the incident light. Also, porous structures can capture more target molecules than non-porous structures within the same detection spot thanks to their large surface area. Herein we report a facile method for fabricating a porous SERS substrate by integrating solvent-assisted nanoimprint lithography with selective etching of a block copolymer. We obtained nanostructures with high porosity via simple selective etching of one microdomain of the diblock copolymer. Furthermore, we imprinted nanocone patterns into the spin-coated flat block copolymer film to make a three-dimensional SERS substrate with a high density of SERS hot spots as well as a large surface area. We used solvent-assisted nanoimprint lithography (SAIL) to reduce the fabrication time and cost of patterning the BCP film by taking advantage of a solvent that dissolves both the polystyrene and poly(methyl methacrylate) domains of the block copolymer; thus the block copolymer film was molded at low temperature and atmospheric pressure in a short time. After Ag deposition, we measured the Raman intensity of dye molecules adsorbed on the fabricated structure.
Compared to the Raman signals of an Ag-coated solid nanocone, the porous nanocone showed 10 times higher Raman intensity at the 1510 cm⁻¹ band. In conclusion, we fabricated porous metallic nanocone arrays with a high density of electromagnetic hot spots by templating a nanoimprinted diblock copolymer with selective etching and demonstrated its capability as an effective SERS substrate.
Keywords: block copolymer, porous nanostructure, solvent-assisted nanoimprint, surface-enhanced Raman spectroscopy
Procedia PDF Downloads 626
444 Triazenes: Unearthing Their Hidden Arsenal Against Malaria and Microbial Menace
Authors: Frans J. Smit, Wisdom A. Munzeiwa, Hermanus C. M. Vosloo, Lyn-Marie Birkholtz, Richard K. Haynes
Abstract:
Malaria and antimicrobial infections remain significant global health concerns, necessitating the continuous search for novel therapeutic approaches. This abstract presents an overview of the potential use of triazenes as effective agents against malaria and various antimicrobial pathogens. Triazenes are a class of compounds characterized by a linear arrangement of three nitrogen atoms, rendering them structurally distinct from their cyclic counterparts. This study investigates the efficacy of triazenes against malaria and explores their antimicrobial activity. Preliminary results revealed significant antimalarial activity of the triazenes, as evidenced by in vitro screening against P. falciparum, the causative agent of malaria. Furthermore, the compounds exhibited broad-spectrum antimicrobial activity, indicating their potential as effective antimicrobial agents. These compounds have shown inhibitory effects on various essential enzymes and processes involved in parasite survival, replication, and transmission. The mechanism of action of triazenes against malaria involves interactions with critical molecular targets, such as enzymes involved in the parasite's metabolic pathways and proteins responsible for host cell invasion. The antimicrobial activity of the triazenes against bacteria and fungi was investigated through disc diffusion screening. The antimicrobial efficacy of triazenes has been observed against both Gram-positive and Gram-negative bacteria, as well as multidrug-resistant strains, making them potential candidates for combating drug-resistant infections. Furthermore, triazenes possess favourable physicochemical properties, such as good stability, solubility, and low toxicity, which are essential for drug development. The structural versatility of triazenes allows for the modification of their chemical composition to enhance their potency, selectivity, and pharmacokinetic properties. 
These modifications can be tailored to target specific pathogens, increasing the potential for personalized treatment strategies. In conclusion, this study highlights the potential of triazenes as promising candidates for the development of novel antimalarial and antimicrobial therapeutics. Further investigations are necessary to determine the structure-activity relationships and optimize the pharmacological properties of these compounds. The results warrant additional research, including MIC studies, to further explore the antimicrobial activity of the triazenes. Ultimately, these findings contribute to the development of more effective strategies for combating malaria and microbial infections.
Keywords: malaria, anti-microbials, triazene, resistance
Procedia PDF Downloads 104
443 A User-Directed Approach to Optimization via Metaprogramming
Authors: Eashan Hatti
Abstract:
In software development, programmers often must make a choice between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance: high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as input and produces low-level, performant code as output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language’s library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed. The former presents too significant an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer, and, in turn, program performance, falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot’s main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This allows the optimization workload to be taken off the compiler developers’ hands and given to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations.
The language is split into two fragments or “levels”: one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot’s key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.
Keywords: optimization, metaprogramming, logic programming, abstraction
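The idea of optimizations as user-written program transformations can be illustrated with a deliberately small analogue in Python (this is not Peridot syntax; the tuple encoding and the two rules are invented for illustration). Each rule is a plain function from expression to expression-or-None, and a driver applies user-supplied rules to a fixed point:

```python
# Expressions as nested tuples: ("var", name), ("const", n),
# ("+", a, b), ("*", a, b). An "optimization" is just a rule function.

def simplify(expr, rules):
    """Rewrite bottom-up with the given rules until no rule fires."""
    if expr[0] in ("+", "*"):
        expr = (expr[0], simplify(expr[1], rules), simplify(expr[2], rules))
    for rule in rules:
        new = rule(expr)
        if new is not None and new != expr:
            return simplify(new, rules)   # a rule fired: renormalize the result
    return expr

def mul_one(e):
    """x * 1 -> x (and 1 * x -> x)."""
    if e[0] == "*":
        if e[2] == ("const", 1): return e[1]
        if e[1] == ("const", 1): return e[2]

def add_zero(e):
    """x + 0 -> x (and 0 + x -> x)."""
    if e[0] == "+":
        if e[2] == ("const", 0): return e[1]
        if e[1] == ("const", 0): return e[2]
```

Because rules are first-class values, a library author can ship domain-specific rules alongside the library itself, which is exactly the workload shift the abstract argues for; Peridot's logic-programming level additionally provides unification and non-determinism for such rules.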
Procedia PDF Downloads 88
442 Evaluation of the Photo Neutron Contamination inside and outside of Treatment Room for High Energy Elekta Synergy® Linear Accelerator
Authors: Sharib Ahmed, Mansoor Rafi, Kamran Ali Awan, Faraz Khaskhali, Amir Maqbool, Altaf Hashmi
Abstract:
Medical linear accelerators (LINACs) used in radiotherapy produce undesired neutrons when operated at energies above 8 MeV, in both electron and photon configurations. Neutrons are produced by high-energy photons and electrons through electronuclear (e, n) and photonuclear giant dipole resonance (GDR) reactions. These reactions occur when incoming photons or electrons pass through the various materials of the target, flattening filter, collimators, and other shielding components of the LINAC structure. These neutrons may reach the patient directly, or they may interact with the surrounding materials until they become thermalized. A study was set up to examine the effect of different parameters on neutron production around the room by photonuclear reactions induced by photons above ~8 MeV. A commercially available neutron detector (Ludlum Model 42-31H) was used for the detection of thermal and fast neutrons (0.025 eV to approximately 12 MeV) inside and outside the treatment room. Measurements were performed for different field sizes at 100 cm source-to-surface distance (SSD) of the detector, at different distances from the isocenter, and at the primary and secondary walls. Other measurements were performed at the door and at the treatment console to address the radiation safety concerns of the therapists who must walk in and out of the room between treatments. Exposures were delivered by Elekta Synergy® linear accelerators at two different energies (10 MV and 18 MV) for 200 MU at a dose rate of 600 MU per minute.
Results indicate that neutron doses at 100 cm SSD depend on accelerator characteristics, i.e., jaw settings: the jaws are made of high-atomic-number material and therefore provide significant photon interactions that produce neutrons. Doses at larger distances from the isocenter, by contrast, are strongly influenced by the treatment room geometry, and backscattering from the walls causes greater doses than at 100 cm from the isocenter. In the treatment room, the ambient dose equivalent due to photons produced during the decay of activation nuclei varies from 4.22 mSv·h⁻¹ to 13.2 mSv·h⁻¹ (at the isocenter), 6.21 mSv·h⁻¹ to 29.2 mSv·h⁻¹ (primary wall) and 8.73 mSv·h⁻¹ to 37.2 mSv·h⁻¹ (secondary wall) for 10 and 18 MV, respectively. The ambient dose equivalent for neutrons at the door is 5 μSv·h⁻¹ to 2 μSv·h⁻¹, while at the treatment console it is 2 μSv·h⁻¹ to 0 μSv·h⁻¹ for 10 and 18 MV, respectively, which shows that a 2 m thick, 5 m long concrete maze provides sufficient shielding for neutrons at the door as well as at the treatment console for 10 and 18 MV photons.
Keywords: equivalent doses, neutron contamination, neutron detector, photon energy
Procedia PDF Downloads 449
441 Metal Binding Phage Clones in a Quest for Heavy Metal Recovery from Water
Authors: Tomasz Łęga, Marta Sosnowska, Mirosława Panasiuk, Lilit Hovhannisyan, Beata Gromadzka, Marcin Olszewski, Sabina Zoledowska, Dawid Nidzworski
Abstract:
Toxic heavy metal ion contamination of industrial wastewater has recently become a significant environmental concern in many regions of the world. Although the majority of heavy metals are naturally occurring elements found on the earth's surface, anthropogenic activities such as mining and smelting, industrial production, and agricultural use of metals and metal-containing compounds are responsible for the majority of environmental contamination and human exposure. The permissible limits (ppm) for heavy metals in food, water and soil are frequently exceeded, which is considered hazardous to humans, other organisms, and the environment as a whole. Human exposure to highly nickel-polluted environments causes a variety of pathologic effects. In 2008, nickel received the shameful name of “Allergen of the Year” (GILLETTE 2008). According to dermatologists, the frequency of nickel allergy is still growing, and it cannot be explained only by fashionable piercings and the nickel-containing devices used in medicine (such as coronary stents and endoprostheses). Effective remediation methods for removing heavy metal ions from soil and water are becoming increasingly important. Among others, methods such as chemical precipitation, micro- and nanofiltration, membrane separation, conventional coagulation, electrodialysis, ion exchange, reverse and forward osmosis, photocatalysis and polymer or carbon nanocomposite absorbents have all been investigated so far. The importance of environmentally sustainable industrial production processes and the conservation of dwindling natural resources has highlighted the need for affordable, innovative biosorptive materials capable of recovering specific chemical elements from dilute aqueous solutions.
The use of combinatorial phage display techniques for selecting and recognizing material-binding peptides with a selective affinity for any target, particularly inorganic materials, has gained considerable interest in the development of advanced bio- and nano-materials. However, due to the limitations of phage display libraries and the biopanning process, the accuracy of molecular recognition for inorganic materials remains a challenge. This study presents the isolation, identification and characterisation of metal-binding phage clones that preferentially recover nickel.
Keywords: heavy metal recovery, cleaning water, phage display, nickel
Procedia PDF Downloads 99
440 Cardiac Pacemaker in a Patient Undergoing Breast Radiotherapy-Multidisciplinary Approach
Authors: B. Petrović, M. Petrović, L. Rutonjski, I. Djan, V. Ivanović
Abstract:
Objective: Cardiac pacemakers are very sensitive to radiotherapy treatment through two mechanisms: electromagnetic interference from the medical linear accelerator producing the ionizing radiation, which affects the electronics within the pacemaker, and the dose absorbed by the device. On the other hand, patients with a cardiac pacemaker at the site of a tumor are rather rare, and a single clinic hardly accumulates experience with the management of such patients. The widely accepted international guidelines for the management of radiation oncology patients recommend that these patients be closely monitored and examined before, during and after radiotherapy treatment by a cardiologist, and that their device and condition be followed up. The number of patients having both cancer and a pacemaker is growing every year, as the incidences of both cancer and cardiac disease are inevitably growing. Materials and methods: A female patient, age 69, was diagnosed with valvular cardiomyopathy; she had a pacemaker implanted in 2005 and a prosthetic mitral valve implanted in 1993 (the cancer was diagnosed in 2012). She was cardiologically stable and presented to the radiation therapy department with a diagnosis of right breast cancer, with the tumor in the upper lateral quadrant of the right breast. Since all of her lymph nodes were positive (28 in total), the supraclavicular region had to be irradiated, as well as the breast with the tumor bed. She had previously received chemotherapy, approved by the cardiologist. The patient was estimated to be at high risk, as the device was within the field of irradiation and she was highly dependent on her pacemaker. The radiation therapy plan was conducted as 3D conformal therapy. The delineated target was the breast with the supraclavicular region, where the pacemaker was actually placed; the pacemaker was added as an organ at risk in order to estimate the dose to the device and its components, as recommended.
Both targets received 50 Gy in 25 fractions (20% of the pacemaker received 50 Gy, and 60% of the device received 40 Gy). The electrode to the heart received between 1 Gy and 50 Gy. Verification of the planned and delivered dose was performed. Results: The patient's status was evaluated according to the guidelines, with particular attention to all risks associated with the treatment. The patient was irradiated with the prescribed dose and followed up for a whole year, with no symptoms of pacemaker failure during treatment or in the follow-up period. The functionality of the device was judged unchanged according to its parameters (electrode impedance and battery energy). Conclusion: The patient was closely monitored according to the published guidelines during irradiation and afterwards. The pacemaker, irradiated with the full dose, did not show any signs of failure, contrary to what the guideline recommendations would suggest but consistent with other published data. Keywords: cardiac pacemaker, breast cancer, radiotherapy treatment planning, complications of treatment
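The dose-volume figures quoted for the device (e.g., 20% of the pacemaker at 50 Gy, 60% at 40 Gy) are cumulative dose-volume statistics; a minimal sketch of how such a metric is computed from per-voxel doses, using hypothetical values rather than the case's actual dose grid:

```python
def volume_at_dose(voxel_doses_gy, threshold_gy):
    """Cumulative DVH point: fraction of device voxels receiving
    at least threshold_gy."""
    return sum(1 for d in voxel_doses_gy if d >= threshold_gy) / len(voxel_doses_gy)

# Hypothetical per-voxel doses sampled over the pacemaker volume (Gy)
device_doses = [50, 50, 45, 42, 40, 38, 30, 20, 10, 5]
v50 = volume_at_dose(device_doses, 50)  # fraction of the device at >= 50 Gy
v40 = volume_at_dose(device_doses, 40)  # fraction of the device at >= 40 Gy
```

In a treatment planning system the same quantity is read off the cumulative dose-volume histogram of the structure delineated as an organ at risk.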
Procedia PDF Downloads 440
439 An Analytical Systematic Design Approach to Evaluate Ballistic Performance of Armour Grade AA7075 Aluminium Alloy Using Friction Stir Processing
Authors: Lahari Ramya P., Sudhakar I., Madhu V., Madhusudhan Reddy G., Srinivasa Rao E.
Abstract:
Selection of suitable armour materials for defence applications is crucial for increasing the mobility of systems while maintaining safety. Therefore, armour design studies require determining the material with the lowest possible areal density that successfully resists the predefined threat. A number of light metals and alloys have come to the forefront, especially as substitutes for armour grade steels. AA5083 aluminium alloy, which meets the military standards imposed by the US Army, is the foremost nonferrous alloy considered as a possible replacement for steel to increase the mobility of armoured vehicles and enhance fuel economy. The growing need for AA5083 aluminium alloy paves the way for developing supplementary aluminium alloys that maintain the military standards. It has been observed that AA2xxx, AA6xxx, and AA7xxx aluminium alloys are potential materials to supplement AA5083 aluminium alloy. Among these series, AA7xxx aluminium alloy (heat treatable) possesses high strength and can compete with armour grade steels. Earlier investigations revealed that layering of AA7xxx aluminium alloy can prevent spalling of the rear portion of the armour during ballistic impacts. Hence, the present investigation deals with the fabrication of a hard layer (made of boron carbide) on AA7075 aluminium alloy using friction stir processing, with the intention of blunting the projectile on initial impact while the tough backing portion (AA7xxx aluminium alloy) dissipates the residual kinetic energy. An analytical approach has been adopted to unfold the ballistic performance against the projectile. Penetration of the projectile into the armour has been resolved using strain energy model analysis. The perforation shear area, i.e., the interface of projectile and armour, is taken into account for evaluating penetration into the armour.
Fabricated surface composites (targets) were tested per the military standard (JIS.0108.01) in a ballistic testing tunnel at the Defence Metallurgical Research Laboratory (DMRL), Hyderabad, under standardized testing conditions. Analytical results were well validated against the experimentally obtained ones. Keywords: AA7075 aluminium alloy, friction stir processing, boron carbide, ballistic performance, target
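The strain-energy approach described above can be illustrated with a simple plugging/shear energy balance, in which the projectile's kinetic energy is absorbed by shearing the lateral surface of a plug. This is a generic textbook-style sketch, not the authors' exact model, and the projectile mass, velocity, and shear strength below are hypothetical:

```python
import math

def penetration_depth_m(mass_kg, velocity_ms, shear_strength_pa, diameter_m):
    """Equate projectile kinetic energy to plugging/shear work.
    The sheared lateral surface grows as pi*d*x, so the resisting force
    is tau*pi*d*x and the absorbed work is tau*pi*d*x**2/2."""
    kinetic_energy_j = 0.5 * mass_kg * velocity_ms ** 2
    return math.sqrt(2.0 * kinetic_energy_j / (shear_strength_pa * math.pi * diameter_m))

# Hypothetical 7.62 mm, 10 g projectile at 830 m/s against a target
# with an assumed dynamic shear strength of 300 MPa
depth = penetration_depth_m(0.010, 830.0, 300e6, 7.62e-3)  # roughly 31 mm
```

A layered target (hard boron carbide face plus tough aluminium backing) would be handled by splitting the energy budget between the layers, each with its own shear strength.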
Procedia PDF Downloads 331
438 A Design Research Methodology for Light and Stretchable Electrical Thermal Warm-Up Sportswear to Enhance the Performance of Athletes against Harsh Environment
Authors: Chenxiao Yang, Li Li
Abstract:
In this decade, the sportswear market has rapidly expanded while numerous sports brands compete fiercely to hold their market shares and to lead professional competitive sports by setting the trends. Thus, various advanced sports equipment is being explored in depth to improve athletes' performance in fierce competitions. Although there is plenty of protective equipment on the market, such as cuffs and running leggings, there is still a blank in the field of sportswear for the important time gap around prerace warm-up, especially for competitions hosted in cold environments. There are always time gaps between warm-up and race due to event logistics or unexpected weather factors, so athletes are exposed to chilly conditions for unpredictably long periods of time. As a consequence, the effects of warm-up are negated and competition performance degrades. However, reviewing the current market, there is no effective sports equipment to help athletes against this harsh environment, and the rare existing products are so bulky or heavy that they restrict movement. Ideal thermal-protective sportswear should be light, flexible, comfortable, and aesthetic at the same time. Therefore, this design research adopted a textile circular knitting methodology to integrate soft silver-coated conductive yarns (abbreviated SCCYs), elastic nylon yarn, and polyester yarn to develop the proposed electrical thermal sportswear with the strengths mentioned above. Meanwhile, the relationships between heating performance, stretch load, and energy consumption were investigated. Further, a simulation model was established to ensure sufficient warmth and flexibility at a lower energy cost, with optimized production parameters determined.
The proposed circular knitting technology and simulation model can be directly applied to guide prototype development, catering to different target consumers' needs and ensuring the prototypes' safety; in turn, high R&D investment and time consumption can be saved. Further, two prototypes, a kneecap and an elbow guard, were developed to facilitate the transformation of the research technology into an industrial application and to hint at the future blueprint. Keywords: cold environment, silver-coated conductive yarn, electrical thermal textile, stretchable
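The relationship between heating performance, stretch load, and energy consumption can be sketched with a simple Joule-heating model in which yarn resistance rises with strain; the linear gauge-factor assumption and all numeric values below are illustrative placeholders, not measured SCCY properties:

```python
def heating_power(voltage_v, base_resistance_ohm, strain, gauge_factor=2.0):
    """Joule heating power of a stretchable conductive-yarn circuit.
    Resistance is assumed to rise linearly with strain,
    R = R0 * (1 + GF * strain); the gauge factor is hypothetical."""
    resistance = base_resistance_ohm * (1.0 + gauge_factor * strain)
    return voltage_v ** 2 / resistance

# Unstretched vs. 20% stretch at a 5 V supply, R0 = 10 ohm
p_rest = heating_power(5.0, 10.0, 0.0)      # 2.5 W
p_stretched = heating_power(5.0, 10.0, 0.2)  # lower: resistance has risen
```

A fuller simulation would add a thermal balance (convective losses to the cold air) to translate this power into skin-side temperature, but the trade-off captured here is the core of the stretch/heating/energy relationship.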
Procedia PDF Downloads 269
437 Anti-Arthritic Effect of a Herbal Diet Formula Comprising Fruits of Rosa Multiflora and Flowers of Lonicera Japonica
Authors: Brian Chi Yan Cheng, Hui Guo, Tao Su, Xiu‐qiong Fu, Ting Li, Zhi‐ling Yu
Abstract:
Rheumatoid arthritis (RA) affects around 1% of the global population, yet there is still no cure for RA. Toll-like receptor 4 (TLR4) signalling has been found to be involved in the pathogenesis of RA, making it a potential therapeutic target for RA treatment. A herbal formula (RL) consisting of fruits of Rosa multiflora (Eijitsu rose) and flowers of Lonicera japonica (Japanese honeysuckle) has been used in treating various inflammatory disorders for more than a thousand years. Both are rich sources of nutrients and bioactive phytochemicals, which can be used in producing different food products and supplements. In this study, we evaluated the anti-arthritic effect of RL on collagen-induced arthritis (CIA) in rats and investigated the involvement of TLR4 signalling in the mode of action of RL. Anti-arthritic efficacy was evaluated using CIA rats induced by bovine type II collagen. The treatment groups were treated with RL (82.5, 165, and 330 mg/kg bw per day, p.o.) or the positive control indomethacin (0.25 mg/kg bw per day, p.o.) for 35 days. Clinical signs (hind paw volume and arthritis severity scores), changes in serum inflammatory mediators, pro-/antioxidant status, and histological and radiographic changes of the joints were investigated. Spleens and peritoneal macrophages were used to determine the effects of RL on innate and adaptive immune responses in CIA rats. The involvement of TLR4 signalling pathways in the anti-arthritic effect of RL was examined in cartilage tissue of CIA rats, murine RAW264.7 macrophages, and human THP-1 monocytic cells. The severity of arthritis in the CIA rats was significantly attenuated by RL. Antioxidant status, histological score, and radiographic score were efficiently improved by RL. RL also dose-dependently inhibited pro-inflammatory cytokines in the serum of CIA rats.
RL significantly inhibited the production of various pro-inflammatory mediators and the expression and/or activity of components of the TLR4 signalling pathways in animal tissue and cell lines. RL possesses an anti-arthritic effect on collagen-induced arthritis in rats. The therapeutic effect of RL may be related to its inhibition of pro-inflammatory cytokines in serum. Inhibition of the TAK1/NF-κB and TAK1/MAPK pathways participates in the anti-arthritic effects of RL. This provides a pharmacological justification for the dietary use of RL in the control of various arthritic diseases. Further investigation should be done to develop RL into anti-arthritic food products and/or supplements. Keywords: Japanese honeysuckle, rheumatoid arthritis, Rosa multiflora, rosehip
Procedia PDF Downloads 433
436 Optimization Based Design of Decelerating Duct for Pumpjets
Authors: Mustafa Sengul, Enes Sahin, Sertac Arslan
Abstract:
Pumpjets are one of the marine propulsion systems frequently used in underwater vehicles nowadays. Pumpjets are used so often because they offer higher relative efficiency at high speeds and better cavitation and acoustic performance than their rivals. A pumpjet is composed of a rotor, a stator, and a duct, and there are two different pumpjet configurations depending on the desired hydrodynamic characteristic: with an accelerating or a decelerating duct. A pumpjet with an accelerating duct is used on cargo ships, where it works at low speeds and high loading conditions. The working principle of this type of pumpjet is to maximize thrust by reducing the pressure of the fluid through the duct and ejecting the fluid from the duct with high momentum. For decelerating ducted pumpjets, on the other hand, the main consideration is to prevent the occurrence of cavitation by increasing the pressure of the fluid around the rotor region. By postponing cavitation, acoustic noise naturally falls, so decelerating ducted systems are used in noise-sensitive vehicle systems where acoustic performance is vital. Duct design therefore becomes a crucial step during pumpjet design. This study aims to optimize the duct geometry of a decelerating ducted pumpjet for a high-speed underwater vehicle using proper optimization tools. The target output of this optimization process is a duct design that maximizes fluid pressure around the rotor region, to prevent cavitation, while minimizing drag force. There are two main optimization techniques that could be utilized for this process: parameter-based optimization and gradient-based optimization. While a parameter-based algorithm offers larger changes to the geometry of interest, which helps the user approach the desired geometry, a gradient-based algorithm deals with minor local changes in geometry.
In parameter-based optimization, the geometry is parameterized first. Then, by defining upper and lower limits for these parameters, the design space is created. Finally, with proper optimization code and analysis, the optimum geometry is obtained from this design space. For this duct optimization study, a commercial parameter-based optimization code is used. To parameterize the geometry, the duct is represented with b-spline curves and control points. These control points have limits on their x and y coordinates; within these limits, the design space is generated. Keywords: pumpjet, decelerating duct design, optimization, underwater vehicles, cavitation, drag minimization
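The b-spline representation of the duct profile can be sketched with a plain Cox-de Boor evaluation; the control-point coordinates below are hypothetical, not the study's actual duct geometry, and in the optimization loop these coordinates would be the design variables moved within their limits:

```python
def bspline_point(t, degree, knots, ctrl):
    """Evaluate one point on a B-spline curve via Cox-de Boor recursion."""
    def basis(i, k, t):
        if k == 0:
            return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
        left = 0.0
        if knots[i + k] != knots[i]:
            left = (t - knots[i]) / (knots[i + k] - knots[i]) * basis(i, k - 1, t)
        right = 0.0
        if knots[i + k + 1] != knots[i + 1]:
            right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                     * basis(i + 1, k - 1, t))
        return left + right
    x = sum(basis(i, degree, t) * ctrl[i][0] for i in range(len(ctrl)))
    y = sum(basis(i, degree, t) * ctrl[i][1] for i in range(len(ctrl)))
    return x, y

# Hypothetical duct inner-profile control points (x, y) in mm;
# clamped knot vector for a cubic curve with 4 control points
ctrl = [(0.0, 50.0), (30.0, 55.0), (60.0, 48.0), (90.0, 45.0)]
knots = [0, 0, 0, 0, 1, 1, 1, 1]
mid_point = bspline_point(0.5, 3, knots, ctrl)
```

The optimizer then samples control-point coordinates from their bounded ranges, regenerates the curve, and evaluates each candidate duct's pressure and drag.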
Procedia PDF Downloads 209
435 Oral Microbiota as a Novel Predictive Biomarker of Response To Immune Checkpoint Inhibitors in Advanced Non-small Cell Lung Cancer Patients
Authors: Francesco Pantano, Marta Fogolari, Michele Iuliani, Sonia Simonetti, Silvia Cavaliere, Marco Russano, Fabrizio Citarella, Bruno Vincenzi, Silvia Angeletti, Giuseppe Tonini
Abstract:
Background: Although immune checkpoint inhibitors (ICIs) have changed the treatment paradigm of non-small cell lung cancer (NSCLC), these drugs fail to elicit durable responses in the majority of NSCLC patients. The gut microbiota, able to regulate immune responsiveness, is emerging as a promising, modifiable target to improve ICI response rates. Since the oral microbiome has been demonstrated to be the primary source of the bacterial microbiota in the lungs, we investigated its composition as a potential predictive biomarker to identify and select patients who could benefit from immunotherapy. Methods: Thirty-five patients with stage IV squamous and non-squamous cell NSCLC eligible for an anti-PD-1/PD-L1 as monotherapy were enrolled. Saliva samples were collected from patients prior to the start of treatment, bacterial DNA was extracted using the QIAamp® DNA Microbiome Kit (QIAGEN), and the 16S rRNA gene was sequenced on a MiSeq sequencing instrument (Illumina). Results: NSCLC patients were dichotomized as "Responders" (partial or complete response) and "Non-Responders" (progressive disease) after 12 weeks of treatment, based on RECIST criteria. A prevalence of the phylum Candidatus Saccharibacteria was found in the 10 responders compared to non-responders (abundance 5% vs. 1%, respectively; p-value = 1.46 × 10⁻⁷; False Discovery Rate (FDR) = 1.02 × 10⁻⁶). Moreover, a higher prevalence of the Saccharibacteria Genera Incertae Sedis genus (belonging to the Candidatus Saccharibacteria phylum) was observed in responders (p-value = 6.01 × 10⁻⁷; FDR = 2.46 × 10⁻⁵). Finally, the patients who benefited from immunotherapy showed a significant abundance of the TM7 Phylum Sp Oral Clone FR058 strain, a member of the Saccharibacteria Genera Incertae Sedis genus (p-value = 6.13 × 10⁻⁷; FDR = 7.66 × 10⁻⁵). Conclusions: These preliminary results showed a significant association between oral microbiota and ICI response in NSCLC patients.
In particular, the higher prevalence of the Candidatus Saccharibacteria phylum and the TM7 Phylum Sp Oral Clone FR058 strain in responders suggests their potential immunomodulatory role. The study is still ongoing, and updated data will be presented at the congress. Keywords: oral microbiota, immune checkpoint inhibitors, non-small cell lung cancer, predictive biomarker
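The FDR values reported alongside each p-value reflect multiple-testing correction across the many taxa tested; a minimal sketch of the standard Benjamini-Hochberg adjustment (the p-values here are illustrative, not the study's):

```python
def benjamini_hochberg(pvalues):
    """Benjamini-Hochberg adjusted p-values (FDR): sort the raw
    p-values, scale each by m/rank, then enforce monotonicity
    from the largest rank down."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank_from_end, i in enumerate(reversed(order)):
        rank = m - rank_from_end
        running_min = min(running_min, pvalues[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# Illustrative raw p-values for four taxa
adj = benjamini_hochberg([0.01, 0.04, 0.03, 0.005])
```

Very small raw p-values such as those in the abstract can still carry noticeably larger FDR values when hundreds of taxa are tested simultaneously, which is exactly the pattern the reported numbers show.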
Procedia PDF Downloads 100
434 Structure and Tribological Properties of Moisture-Insensitive Si-Containing Diamond-Like Carbon Film
Authors: Mingjiang Dai, Qian Shi, Fang Hu, Songsheng Lin, Huijun Hou, Chunbei Wei
Abstract:
Diamond-like carbon (DLC) is considered a promising protective film owing to its high hardness and excellent tribological properties. However, DLC films are very sensitive to environmental conditions: their friction coefficient can change dramatically in high humidity, which limits their further application in aerospace, the watch industry, and micro/nano-electromechanical systems. Consequently, most studies have focused on achieving a low friction coefficient of DLC films in highly humid environments. However, this is not sufficient for practical applications. An important point has been overlooked: DLC-coated components are usually used in diverse environments, which means their friction coefficient may change markedly under different humidity conditions. As a result, failure of DLC-coated components, or sometimes even disaster, can occur. For example, DLC-coated miniature gears are used in the watch industry, and the wearer may frequently move between locations with different weather and humidity, even within one day. If the friction coefficient is not stable in both dry and high-moisture conditions, the watch will be inaccurate. Thus, it is necessary to investigate stable tribological behavior of DLC films in various environments. In this study, a-C:H:Si films were deposited by a multi-function magnetron sputtering system containing one ion source device, a pair of SiC dual mid-frequency targets, and two direct current Ti/C targets. Hydrogenated carbon layers were produced by sputtering the graphite target in argon and methane gases. Silicon was doped into the DLC coatings by sputtering the silicon carbide targets, and the doping content was adjusted via the mid-frequency sputtering current. The microstructure of the films was characterized by Raman spectrometry, X-ray photoelectron spectroscopy, and transmission electron microscopy, while their friction behavior under different humidity conditions was studied using a ball-on-disc tribometer.
a-C:H films with Si contents from 0 to 17 at.% were obtained, and the influence of Si content on the structure and tribological properties at relative humidities (RH) of 50% and 85% was investigated. Results show that the a-C:H:Si films have typical diamond-like characteristics, in which Si mainly exists in the form of Si, SiC, and SiO2. As expected, the friction coefficient of the a-C:H films changed effectively after Si doping, from 0.302 to 0.176 at RH 50%. Further testing shows that the friction coefficient of the a-C:H:Si films at RH 85% first increases and then decreases as a function of Si content. We found that a-C:H:Si films with a Si content of 3.75 at.% show a stable friction coefficient of 0.13 in different humidity environments. It is suggested that the sp3/sp2 ratio of the a-C:H film with 3.75 at.% Si was higher than that of the others, which tends to form silica-gel-like sacrificial layers during friction tests. Therefore, these films deliver a stable low friction coefficient at controlled RH values of 50% and 85%. Keywords: diamond-like carbon, Si doping, moisture environment, stable low friction coefficient
Procedia PDF Downloads 366
433 Linkage Disequilibrium and Haplotype Blocks Study from Two High-Density Panels and a Combined Panel in Nelore Beef Cattle
Authors: Priscila A. Bernardes, Marcos E. Buzanskas, Luciana C. A. Regitano, Ricardo V. Ventura, Danisio P. Munari
Abstract:
Genotype imputation has been used to reduce genomic selection costs. In order to increase haplotype detection accuracy in methods that consider linkage disequilibrium, another approach could be used, such as combining genotype data from different panels. Therefore, this study aimed to evaluate the linkage disequilibrium and haplotype blocks in two high-density panels before and after imputation to a combined panel in Nelore beef cattle. A total of 814 animals were genotyped with the Illumina BovineHD BeadChip (IHD), wherein 93 animals (23 bulls and 70 progenies) were also genotyped with the Affymetrix Axiom Genome-Wide BOS 1 Array Plate (AHD). After quality control, 809 IHD animals (509,107 SNPs) and 93 AHD animals (427,875 SNPs) remained for analyses. The combined genotype panel (CP) was constructed by merging both panels after quality control, resulting in 880,336 SNPs. Imputation analysis was conducted using the software FImpute v.2.2b. The reference (CP) and target (IHD) populations consisted of 23 bulls and 786 animals, respectively. The linkage disequilibrium and haplotype block studies were carried out for IHD, AHD, and the imputed CP. Two linkage disequilibrium measures were considered: the correlation coefficient between alleles at two loci (r²) and |D'|. Both measures were calculated using the software PLINK. The haplotype blocks were estimated using the software Haploview. The r² measure presented a different decay when compared to |D'|, wherein AHD and IHD had almost the same decay. For r², even with possible overestimation due to the small sample size for AHD (93 animals), the IHD presented higher values than AHD at shorter distances, but with increasing distance both panels presented similar values. The r² measure is influenced by the minor allele frequency of the pair of SNPs, which can cause the observed difference between the r² decay and the |D'| decay.
As a sum of the combinations between the Illumina and Affymetrix panels, the CP presented a decay equivalent to the mean of these combinations. The haplotype blocks detected for IHD, AHD, and CP numbered 84,529, 63,967, and 140,336, respectively. The IHD blocks had a mean length of 137.70 ± 219.05 kb, the AHD blocks 102.10 ± 155.47 kb, and the CP blocks 107.10 ± 169.14 kb. The majority of the haplotype blocks in these three panels were composed of fewer than 10 SNPs, with only 3,882 (IHD), 193 (AHD), and 8,462 (CP) haplotype blocks composed of 10 SNPs or more. There was an increase in the number of chromosomes covered with long haplotypes when CP was used, as well as an increase in haplotype coverage for the short chromosomes (23-29), which can contribute to studies that explore haplotype blocks. In general, using the CP could be an alternative to increase density and the number of haplotype blocks, increasing the probability of obtaining a marker close to a quantitative trait locus of interest. Keywords: Bos taurus indicus, decay, genotype imputation, single nucleotide polymorphism
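The two LD measures compared above, r² and |D'|, can be computed directly from allele and haplotype frequencies; a minimal sketch for a pair of biallelic loci, with illustrative frequencies rather than values from the Nelore panels:

```python
def ld_measures(p_a, p_b, p_ab):
    """Compute |D'| and r^2 for two biallelic loci.
    p_a, p_b: frequencies of allele A at locus 1 and allele B at locus 2;
    p_ab: frequency of the AB haplotype."""
    d = p_ab - p_a * p_b  # raw disequilibrium coefficient D
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = abs(d) / d_max
    r2 = d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d_prime, r2

# Illustrative frequencies: p(A) = 0.6, p(B) = 0.7, p(AB) = 0.5
d_prime, r2 = ld_measures(0.6, 0.7, 0.5)
```

The normalization by allele frequencies in the r² denominator is what makes r² sensitive to the minor allele frequency of the SNP pair, whereas |D'| is normalized only by the maximum attainable D; this is the source of the different decay behavior noted in the abstract.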
Procedia PDF Downloads 281