Search results for: MATRIX method
13077 Development of a Regression Based Model to Predict Subjective Perception of Squeak and Rattle Noise
Authors: Ramkumar R., Gaurav Shinde, Pratik Shroff, Sachin Kumar Jain, Nagesh Walke
Abstract:
Advancements in electric vehicles have significantly reduced the noise from the powertrain and moving components of vehicles. As a result, in-cab noises have become more noticeable to passengers inside the car. To ensure a comfortable ride for drivers and other passengers, it has become crucial to eliminate undesirable component noises during the development phase. Standard practices are followed to identify the severity of noises based on subjective ratings, but it can be a tedious process to identify the severity of each development sample and make changes to reduce it. Additionally, the severity rating can vary from jury to jury, making it challenging to arrive at a definitive conclusion. To address this, an automotive component was identified to evaluate squeak and rattle noise issues. Physical tests were carried out for random and sine excitation profiles. The aim was to subjectively assess the noise using a jury rating method and to objectively evaluate the same noise by measurement. A suitable jury evaluation method was selected for the activity, and the recorded sounds were replayed for jury rating. For the objective data, sound quality metrics, viz. loudness, sharpness, roughness, fluctuation strength, and overall Sound Pressure Level (SPL), were measured. Based on this, correlation coefficients were established to identify the sound quality metrics most relevant to the identified noise issue. Regression analysis was then performed to establish the correlation between the subjective and objective data. A mathematical model was prepared using artificial intelligence and machine learning algorithms. The developed model was able to predict the subjective rating with good accuracy.
Keywords: BSR, noise, correlation, regression
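As an illustration of the correlation-and-regression pipeline the abstract describes, here is a minimal Python sketch; the metric values, jury ratings, and column names are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical objective metrics per recorded sample: loudness, sharpness,
# roughness, fluctuation strength, overall SPL (one row per sample).
X = np.array([
    [12.1, 1.8, 0.9, 0.05, 68.2], [15.4, 2.1, 1.3, 0.08, 71.5],
    [ 9.7, 1.5, 0.7, 0.03, 64.9], [18.2, 2.6, 1.6, 0.11, 74.8],
    [11.3, 1.7, 1.0, 0.06, 67.0], [16.8, 2.4, 1.5, 0.10, 73.1],
    [10.5, 1.6, 0.8, 0.04, 66.1], [14.0, 2.0, 1.2, 0.07, 70.2],
])
y = np.array([7.5, 6.0, 8.5, 4.5, 7.0, 5.0, 8.0, 6.5])  # jury ratings

# Correlation coefficient of each metric with the subjective rating,
# used to shortlist the most relevant predictors.
names = ["loudness", "sharpness", "roughness", "fluctuation", "SPL"]
for name, col in zip(names, X.T):
    print(f"{name:12s} r = {np.corrcoef(col, y)[0, 1]:+.2f}")

# Ordinary least squares: rating ~ intercept + weighted metrics.
A = np.hstack([np.ones((len(y), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("predicted ratings:", np.round(A @ coef, 2))
```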
Procedia PDF Downloads 79
13076 Differentiated Surgical Treatment of Patients With Nontraumatic Intracerebral Hematomas
Authors: Mansur Agzamov, Valery Bersnev, Natalia Ivanova, Istam Agzamov, Timur Khayrullaev, Yulduz Agzamova
Abstract:
Objectives: Treatment of hypertensive intracerebral hematoma (ICH) is controversial. The advantage of one surgical method over another has not been established. Recent reports suggest a favorable effect of minimally invasive surgery. We conducted a small comparative study of different surgical methods. Methods: We analyzed the results of surgical treatment of 176 patients with intracerebral hematomas, aged from 41 to 78 years. Men were 113 (64.2%), women 63 (35.8%). Level of consciousness: conscious - 18, lethargy - 63, stupor - 55, moderate coma - 40. All patients underwent computed tomography (CT) of the brain on admission and in the dynamics. The ICH was located in the putamen in 87 cases, the thalamus in 19, the mixed area in 50, and the lobar area in 20. Ninety-seven of the patients had an intraventricular hemorrhage component. The baseline volume of the ICH was measured according to a bedside method of measuring the volume of intracerebral hematomas on CT. Depending on the intervention, the patients were divided into three groups. Group 1, 90 patients, underwent open craniotomy. Level of consciousness: conscious - 11, lethargy - 33, stupor - 18, moderate coma - 18. The hemorrhage was located in the putamen in 51, the thalamus in 3, the mixed area in 25, and the lobar area in 11. Group 2, 22 patients, underwent a smaller craniotomy with endoscopic-assisted evacuation. Level of consciousness: conscious - 4, lethargy - 9, stupor - 5, moderate coma - 4. The hemorrhage was located in the putamen in 5, the thalamus in 15, and the mixed area in 2. Group 3, 64 patients, underwent minimally invasive removal of intracerebral hematomas using an original device (patent of the Russian Federation № 65382). The device - a funnel cannula - is introduced into the hematoma cavity after special marking. Level of consciousness: conscious - 3, lethargy - 21, stupor - 22, moderate coma - 18. The hemorrhage was located in the putamen in 31, the mixed area in 23, the thalamus in 1, and the lobar area in 9. Results of treatment were evaluated by the Glasgow Outcome Scale. Results: The study showed that the results of surgical treatment in the three groups depended on the degree of consciousness and the volume and localization of the hematoma. In group 1, good recovery was observed in 8 cases (8.9%), moderate disability in 22 (24.4%), severe disability in 17 (18.9%), and death in 43 (47.8%). In group 2, good recovery was observed in 7 cases (31.8%), moderate disability in 7 (31.8%), severe disability in 5 (29.7%), and death in 7 (31.8%). In group 3, good recovery was observed in 9 cases (14.1%), moderate disability in 17 (26.5%), severe disability in 19 (29.7%), and death in 19 (29.7%). Conclusions: The cannula method allowed open craniotomy to be avoided in the majority of patients with putaminal hematomas. The minimally invasive technique reduced postoperative mortality and improved treatment outcomes in these patients.
Keywords: nontraumatic intracerebral hematoma, minimally invasive surgical technique, funnel cannula, differentiated surgical treatment
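The abstract cites a bedside method for CT hematoma volume without naming it; a widely used bedside rule is ABC/2, sketched below under that assumption with purely illustrative measurements:

```python
def ich_volume_abc2(a_cm: float, b_cm: float, c_cm: float) -> float:
    """Bedside ABC/2 estimate of hematoma volume in mL, where A is the
    largest diameter on CT, B the diameter perpendicular to A on the same
    slice, and C the vertical extent (slice count x slice thickness)."""
    return (a_cm * b_cm * c_cm) / 2.0

# Example: a 5 x 4 cm lesion spanning 3 cm of slices -> about 30 mL.
print(ich_volume_abc2(5.0, 4.0, 3.0))
```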
Procedia PDF Downloads 84
13075 Building Transparent Supply Chains through Digital Tracing
Authors: Penina Orenstein
Abstract:
In today’s world, particularly with COVID-19 as a constant worldwide threat, organizations need greater visibility over their supply chains more than ever before, in order to find areas for improvement and greater efficiency, reduce the chances of disruption, and stay competitive. The concept of supply chain mapping is one where every process and route between each vendor and supplier is mapped in detail. The simplest method of mapping involves sourcing publicly available data, including news and financial information concerning relationships between suppliers. An additional layer of information would be disclosed by large, direct suppliers about their production and logistics sites. While this method has the advantage of not requiring any input from suppliers, it also does not allow for much transparency beyond the first supplier tier and may generate irrelevant data (noise) that must be filtered out to find the actionable data. The primary goal of this research is to build data maps of supply chains by focusing on a layered approach. Using these maps, the secondary goal is to address the question as to whether the supply chain can be re-engineered to make improvements, for example, to lower the carbon footprint. Using a drill-down approach, the end result is a comprehensive map detailing the linkages between tier-one, tier-two, and tier-three suppliers, superimposed on a geographical map. The driving force behind this idea is to be able to trace individual parts to the exact site where they are manufactured. In this way, companies can ensure sustainability practices from the production of raw materials through to the finished goods. The approach allows companies to identify and anticipate vulnerabilities in their supply chain. It unlocks predictive analytics capabilities and enables them to act proactively. The research is particularly compelling because it unites network science theory with empirical data and presents the results in a visual, intuitive manner.
Keywords: data mining, supply chain, empirical research, data mapping
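A minimal sketch of the layered, drill-down tier mapping described above, modeled as a directed graph; the company and supplier names are hypothetical:

```python
import networkx as nx

# Hypothetical supplier relationships: company -> its direct suppliers.
g = nx.DiGraph()
g.add_edges_from([
    ("OEM", "Tier1-A"), ("OEM", "Tier1-B"),
    ("Tier1-A", "Tier2-A"), ("Tier1-A", "Tier2-B"),
    ("Tier1-B", "Tier2-C"), ("Tier2-A", "Tier3-A"),
])

# Drill down one tier at a time from the focal company.
frontier, tier = {"OEM"}, 0
while frontier:
    print(f"tier {tier}: {sorted(frontier)}")
    frontier = {s for n in frontier for s in g.successors(n)}
    tier += 1
```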
Procedia PDF Downloads 175
13074 Slope Stability and Landslides Hazard Analysis, Limitations of Existing Approaches, and a New Direction
Authors: Alisawi Alaa T., Collins P. E. F.
Abstract:
The analysis and evaluation of slope stability and landslide hazards are critically important in civil engineering projects and broader considerations of safety. The level of slope stability risk should be identified due to its significant and direct financial and safety effects. Slope stability hazard analysis is performed considering static and/or dynamic loading circumstances. To reduce and/or prevent the failure hazard caused by landslides, a sophisticated and practical hazard analysis method using advanced constitutive modeling should be developed and linked to an effective solution that corresponds to the specific type of slope stability and landslide failure risk. Previous studies on slope stability analysis methods identify the failure mechanism and its corresponding solution. The commonly used approaches include limit equilibrium methods, empirical approaches for rock slopes (e.g., slope mass rating and Q-slope), finite element or finite difference methods, and distinct element codes. This study presents an overview and evaluation of these analysis techniques. Contemporary source materials are used to examine these various methods on the basis of hypotheses, the factor of safety estimation, soil types, load conditions, and analysis conditions and limitations. Limit equilibrium methods play a key role in assessing the level of slope stability hazard. The slope stability safety level can be defined by identifying the equilibrium of the shear stress and shear strength. The slope is considered stable when the movement resistance forces are greater than those that drive the movement, with a factor of safety (the ratio of the resisting forces to the driving forces) that is greater than 1.00. However, popular and practical methods, including limit equilibrium approaches, are not effective when the slope experiences complex failure mechanisms, such as progressive failure, liquefaction, internal deformation, or creep. The present study represents the first episode of an ongoing project that involves the identification of the types of landslide hazards, assessment of the level of slope stability hazard, development of a sophisticated and practical hazard analysis method, linkage of the failure type of specific landslide conditions to the appropriate solution, and application of an advanced computational method for mapping slope stability properties in the United Kingdom and elsewhere through geographical information systems (GIS) and the inverse distance weighted (IDW) spatial interpolation technique. This study investigates and assesses the different analysis and solution techniques to enhance knowledge of the mechanisms of slope stability and landslide hazard analysis and to determine the available solutions for each potential landslide failure risk.
Keywords: slope stability, finite element analysis, hazard analysis, landslides hazard
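For the IDW mapping step mentioned above, a minimal sketch of inverse distance weighting over hypothetical factor-of-safety samples; the estimate is a distance-weighted average with weights w_i = 1/d_i^p:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighted interpolation:
    estimate = sum(w_i * v_i) / sum(w_i), with w_i = 1 / d_i**power."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d < 1e-12):          # query coincides with a sample point
        return float(values[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

# Hypothetical factor-of-safety samples at (x, y) site coordinates.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
fos = np.array([1.35, 0.95, 1.10, 1.60])
print(idw(pts, fos, np.array([0.4, 0.6])))   # interpolated FoS
```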
Procedia PDF Downloads 100
13073 Professional Learning, Professional Development and Academic Identity of Sessional Teachers: Underpinning Theoretical Frameworks
Authors: Aparna Datey
Abstract:
This paper explores the theoretical frameworks underpinning professional learning, professional development, and academic identity. The focus is on sessional teachers (also called tutors or adjuncts) in architectural design studios, who may be practitioners, masters or doctoral students, and academics hired ‘as needed’. Drawing from Schön’s work on reflective practice, the learning and developmental theories of Vygotsky (social constructionism and zones of proximal development), and research on informal and workplace learning, this research proposes that sessional teachers not only develop their teaching skills but also shape their identities through their 'everyday' work. Continuing academic staff develop their teaching through a combination of active teaching and self-reflection on teaching, as well as by learning to teach from others via formalised programs and informally in the workplace. They are provided with professional development and recognised for their teaching efforts through promotion, student citations, and awards for teaching excellence. The teaching experiences of sessional staff, by comparison, may be discontinuous, and they generally have fewer opportunities and incentives for teaching development. In the absence of access to formalised programs, sessional teachers develop their teaching informally in workplace settings that may be supportive or unhelpful. Their learning as teachers is embedded in everyday practice, applying problem-solving skills in ambiguous and uncertain settings. Depending on their level of expertise, they understand how to teach a subject such that students are stimulated to learn. Adult learning theories posit that adults have different motivations for learning and fall into a matrix of readiness; that an adult’s ability to make sense of their learning is shaped by their values, expectations, beliefs, feelings, attitudes, and judgements; and that they are self-directed. The level of expertise of sessional teachers depends on their individual attributes and motivations, as well as on their work environment, the good practices they acquire and enhance through their practice, career training and development, the clarity of their role in the delivery of teaching, and other factors. The architectural design studio is ideal for study due to the historical persistence of the vocational learning or apprenticeship model (learning under the guidance of experts) and a pedagogical format using two key approaches: project-based problem solving and collaborative learning. Hence, investigating the theoretical frameworks underlying academic roles and informal professional learning in the workplace would deepen understanding of sessional teachers' professional development and how they shape their academic identities. This qualitative research is ongoing at a major university in Australia, but the growing trend towards hiring sessional staff to teach core courses in many disciplines is a global one. This research will contribute to including transient sessional teachers in the discourse on institutional quality, effectiveness, and student learning.
Keywords: academic identity, architectural design learning, pedagogy, teaching and learning, sessional teachers
Procedia PDF Downloads 124
13072 Investigations of Bergy Bits and Ship Interactions in Extreme Waves Using Smoothed Particle Hydrodynamics
Authors: Mohammed Islam, Jungyong Wang, Dong Cheol Seo
Abstract:
The Smoothed Particle Hydrodynamics (SPH) method is a novel, meshless, Lagrangian numerical technique that has shown promise in accurately predicting the hydrodynamics of water and structure interactions in violent flow conditions. The main goal of this study is to build confidence in the versatility of an SPH-based tool, to use it as a complement to physical model testing capabilities and to support the research need for performance evaluation of ships and offshore platforms exposed to extreme and harsh environments. In the current endeavor, an open-sourced SPH-based tool was used and validated for modeling and prediction of the hydrodynamic interactions of a 6-DOF ship and bergy bits. The study involved the modeling of a modern generic drillship and simplified bergy bits in floating and towing scenarios and in regular and irregular wave conditions. The predictions were validated using model-scale measurements on a moored ship towed at multiple oblique angles approaching a floating bergy bit in waves. Overall, this study results in a thorough comparison between the model-scale measurements and the prediction outcomes from the SPH tool in terms of performance and accuracy. The SPH-predicted ship motions and forces were primarily within ±5% of the measurements. The velocity and pressure distributions and the wave characteristics over the free surface depict realistic interactions of the wave, ship, and bergy bit. This work identifies and presents several challenges in preparing the input file, particularly while defining the mass properties of complex geometry, the computational requirements, and the post-processing of the outcomes.
Keywords: SPH, ship and bergy bit, hydrodynamic interactions, model validation, physical model testing
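The abstract does not disclose the solver's internals; for background, here is the standard 3-D cubic spline smoothing kernel common to many SPH codes (not necessarily the one in the open-sourced tool used here):

```python
import numpy as np

def cubic_spline_kernel(r: float, h: float) -> float:
    """Standard 3-D cubic spline smoothing kernel W(r, h); particle
    contributions vanish beyond a radius of 2h."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)   # 3-D normalization constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

# A field quantity is reconstructed as a kernel-weighted neighbour sum:
# A(x) ~ sum_j (m_j / rho_j) * A_j * W(|x - x_j|, h)
print(cubic_spline_kernel(0.5, 1.0))
```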
Procedia PDF Downloads 133
13071 Development of Vacuum Planar Membrane Dehumidifier for Air-Conditioning
Authors: Chun-Han Li, Tien-Fu Yang, Chen-Yu Chen, Wei-Mon Yan
Abstract:
The conventional dehumidification method in air-conditioning systems mostly utilizes a cooling coil to remove the moisture in the air by cooling the supply air down below its dew point temperature. During the process, the supply air must then be reheated to meet the set indoor condition, which consumes a considerable amount of energy and affects the coefficient of performance of the system. If the processes of dehumidification and cooling are separated and operated independently, the indoor conditions can be controlled more efficiently. Therefore, decoupling the dehumidification and cooling processes in heating, ventilation and air conditioning systems, as membrane dehumidification does, is one of the key technologies for the next generation. The membrane dehumidification method has the advantages of low cost, low energy consumption, etc. It utilizes the pore size and hydrophilicity of the membrane to transfer water vapor by a mass transfer effect. The moisture in the supply air is removed by the potential energy and driving force across the membrane. The process can save the latent load used to condense water, which makes energy use more efficient because no heat transfer effect is involved. In this work, performance measurements, including the permeability and selectivity of water vapor and air, were conducted on composite and commercial membranes. According to the measured data, we can choose a suitable dehumidification membrane for designing the flow channel length and components of the planar dehumidifier. A vacuum membrane dehumidification system was set up to examine the effects of temperature, humidity, vacuum pressure, flow rate, the coefficient of performance, and other parameters on the dehumidification efficiency. The results showed that the commercial Nafion membrane has better water vapor permeability and selectivity and is suitable for separating water vapor from air. The Nafion membrane thus has promising potential in the dehumidification process.
Keywords: vacuum membrane dehumidification, planar membrane dehumidifier, water vapour and air permeability, air conditioning
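A minimal sketch of the permeance and selectivity figures of merit behind the permeability measurements described above; the flux, area, and pressure values are hypothetical lab numbers:

```python
def permeance(flux_mol_s: float, area_m2: float, dp_pa: float) -> float:
    """Permeance = molar flux / (membrane area x partial-pressure
    difference), a standard figure of merit for membranes."""
    return flux_mol_s / (area_m2 * dp_pa)

# Hypothetical values for a water-vapour / air measurement pair.
p_h2o = permeance(2.0e-4, 0.01, 2000.0)   # water vapour
p_air = permeance(1.0e-7, 0.01, 2000.0)   # air
print(f"selectivity H2O/air = {p_h2o / p_air:.0f}")
```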
Procedia PDF Downloads 147
13070 Potentials of Henna Leaves as Dye and Its Fastness Properties on Fabric
Authors: Nkem Angela Udeani
Abstract:
Despite the widespread use of synthetic dyes, natural dyes are still exploited for their inherent aesthetic qualities as a major material for the beautification of the body. For centuries before the discovery of synthetic dyes, natural dyes were the only source of dye open to mankind. Dyes are extracted from plants (leaves, roots, and barks), insect secretions, and minerals. However, research findings have made it clear that, of all of these, plant leaves, roots, barks, or flowers are the most explored and exploited. Henna (Lawsonia inermis) is one of those plants. Experiments have also shown that henna is used in body painting in conjunction with an alkaline (ammonium sulphate) as a fixing agent. This, of course, gives a clue that if the colour derived from henna is properly investigated, it may not only be used as body decoration but may also have affinity for fibre substrates. This paper investigates the dyeing potential, that is, the dyeing ability and fastness qualities, of henna dye extract on cotton and linen fibres using mordants like ammonium sulphate and other alkalis (hydrosulphate, caustic soda, potash, common salt, and alum). Hot water, cold water, and ethanol were used as extraction solvents to investigate the most effective method of extraction and the dyeing ability and fastness qualities of these extracts at room temperature. The results of the experiment show that cotton has a higher rate of dye uptake than linen fibre. Similarly, the colours obtained depend mostly on the solvent and/or the mordant used. In conclusion, hot water extraction appears more effective. The colours obtained from ethanol and from both the cold and hot methods of extraction range from light to dark yellow and light green to army green, with some shades of brown hues.
Keywords: dye, fabrics, henna leaves, potential
Procedia PDF Downloads 472
13069 Sensitivity Analysis and Solitary Wave Solutions to the (2+1)-Dimensional Boussinesq Equation in Dispersive Media
Authors: Naila Nasreen, Dianchen Lu
Abstract:
This paper explores the dynamical behavior of the (2+1)-dimensional Boussinesq equation, which is a nonlinear water wave equation used to model wave packets in dispersive media with weak nonlinearity. This equation depicts how long waves made in shallow water propagate under the influence of gravity. The (2+1)-dimensional Boussinesq equation combines the two-way propagation of the classical Boussinesq equation with dependence on a second spatial variable, as occurs in the two-dimensional Kadomtsev-Petviashvili equation. This equation provides a description of head-on collisions of oblique waves, and it possesses some interesting properties. The governing model is discussed with the assistance of the Riccati equation mapping method, a relatively simple integration tool. The solutions have been extracted in different forms: solitary wave solutions as well as hyperbolic and periodic solutions. Moreover, a sensitivity analysis is demonstrated for the wave profiles of the designed dynamical structural system, where the soliton wave velocity and wave number parameters regulate the water wave singularity. In addition to being helpful for elucidating nonlinear partial differential equations, the method in use recovers previously extracted solutions and extracts fresh exact solutions. Assuming the right values for the parameters, various graphs of different shapes are sketched to provide information about the visual format of the obtained results. This paper's findings support the efficacy of the approach taken in elucidating nonlinear dynamical behavior. We believe this research will be of interest to a wide variety of engineers who work with engineering models. The findings show the effectiveness, simplicity, and generalizability of the chosen computational approach, even when applied to complicated systems in a variety of fields, especially in ocean engineering.
Keywords: (2+1)-dimensional Boussinesq equation, solitary wave solutions, Riccati equation mapping approach, nonlinear phenomena
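For reference, one commonly studied form of the governing model is given below; the coefficients vary between papers, so this is an assumed representative form rather than the exact equation used in this study:

```latex
% A representative (2+1)-dimensional Boussinesq equation for u(x, y, t):
u_{tt} - u_{xx} - u_{yy} - \alpha \left( u^{2} \right)_{xx} - \beta\, u_{xxxx} = 0
```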
Procedia PDF Downloads 101
13068 Comparison of Different Hydrograph Routing Techniques in XPSTORM Modelling Software: A Case Study
Authors: Fatema Akram, Mohammad Golam Rasul, Mohammad Masud Kamal Khan, Md. Sharif Imam Ibne Amir
Abstract:
A variety of routing techniques are available to develop surface runoff hydrographs from rainfall. The selection of the runoff routing method is vital, as it is directly related to the type of watershed and the required degree of accuracy. Different modelling software packages are available to explore the rainfall-runoff process in urban areas. XPSTORM, a link-node-based, integrated storm-water modelling package, has been used in this study to develop the surface runoff hydrograph for a golf course area located in Rockhampton in Central Queensland, Australia. Four commonly used methods, namely SWMM Runoff, Kinematic Wave, Laurenson, and Time-Area, are employed to generate the runoff hydrograph for the design storm of this study area. In the runoff mode of XPSTORM, the rainfall, infiltration, evaporation, and depression storage for the sub-catchments were simulated, and the runoff from the sub-catchment to the collection node was calculated. The simulation results are presented, discussed, and compared. The total surface runoff generated by the SWMM Runoff, Kinematic Wave, and Time-Area methods is found to be reasonably close, which indicates that any of these methods can be used for developing the runoff hydrograph of the study area. The Laurenson method produces a comparatively smaller amount of surface runoff; however, it creates the highest peak of surface runoff among all the methods, which may be suitable for hilly regions. Although the Laurenson hydrograph technique is a widely accepted surface runoff routing technique in Queensland (Australia), extensive investigation with detailed topographic and hydrologic data is recommended in order to assess its suitability for use in the case study area.
Keywords: ARI, design storm, IFD, rainfall temporal pattern, routing techniques, surface runoff, XPSTORM
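A minimal sketch of the Time-Area idea, one of the four routing methods compared: the runoff hydrograph is the convolution of the rainfall-excess hyetograph with the catchment's time-area histogram (all values hypothetical):

```python
import numpy as np

excess = np.array([2.0, 5.0, 3.0, 1.0])      # rainfall excess, mm per step
area_frac = np.array([0.1, 0.3, 0.4, 0.2])   # fraction of area contributing
area_km2 = 1.2                               # catchment area
dt = 600.0                                   # time step, seconds

# Convolve excess with the time-area histogram, then convert units:
# 1 mm of excess over 1 km^2 equals 1000 m^3 of runoff volume.
q = np.convolve(excess, area_frac) * area_km2 * 1000.0 / dt
print(np.round(q, 2))   # runoff hydrograph ordinates, m^3/s
```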
Procedia PDF Downloads 453
13067 Proactive Change or Adaptive Response: A Study on the Impact of Digital Transformation Strategy Modes on Enterprise Profitability From a Configuration Perspective
Authors: Jing-Ma
Abstract:
Digital transformation (DT) is an important way for manufacturing enterprises to shape new competitive advantages, and how to choose an effective DT strategy is crucial for enterprise growth and sustainable development. Rooted in strategic change theory, this paper incorporates the dimensions of managers' digital cognition, organizational conditions, and the external environment into the same strategic analysis framework and integrates the dynamic QCA method and the PSM method to study the antecedent configurations of the DT strategy modes of manufacturing enterprises and their impact on corporate profitability, based on data on listed manufacturing companies in China from 2015 to 2019. We find that the synergistic linkage of elements across these dimensions can form six equivalent paths to high-level DT. These can be summarized as proactive change modes dominated by resources and capabilities, and adaptive response modes such as industry-guided resource replenishment, capacity building under complex environments, market-industry synergy-driven change, and forced adaptation under peer pressure; the managers' digital cognition plays a non-essential but crucial role in this process. Except for individual differences in the market-industry collaborative driving mode, the modes are stable with respect to individual and temporal changes. However, it is worth noting that not all paths that result in high levels of DT contribute to enterprise profitability; only high-level DT that results from matching the optimization of internal conditions with the external environment, such as industry technology and macro policies, has a significant positive impact on corporate profitability.
Keywords: digital transformation, strategy mode, enterprise profitability, dynamic QCA, PSM approach
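A minimal sketch of the PSM step, assuming a logistic-regression propensity model with nearest-neighbour matching; the covariates and treatment indicator are simulated, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # firm covariates (hypothetical)
treated = (X[:, 0] + rng.normal(size=200) > 0).astype(int)  # high-level DT

# Step 1: estimate propensity scores P(treated | X).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: nearest-neighbour matching on the propensity score.
t_idx = np.flatnonzero(treated == 1)
c_idx = np.flatnonzero(treated == 0)
matches = {i: c_idx[np.argmin(np.abs(ps[c_idx] - ps[i]))] for i in t_idx}
print(f"matched {len(matches)} treated firms to controls")
```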
Procedia PDF Downloads 24
13066 In Vitro Antibacterial Activity of Selected Tanzania Medicinal Plants
Authors: Mhuji Kilonzo, Patrick Ndakidemi, Musa Chacha
Abstract:
Objective: To evaluate the antibacterial activity of four selected medicinal plants, namely Mystroxylon aethiopicum, Lonchocarpus capassa, Albizia anthelmentica, and Myrica salicifolia, used for the management of bacterial infections in Tanzania. Methods: The Minimum Inhibitory Concentration (MIC) of the plant extracts against the tested bacterial species was determined using the 96-well microdilution method. In this method, 50 μL of nutrient broth were loaded into each well, followed by 50 μL of extract (100 mg/mL) to make a final volume of 100 μL. Subsequently, 50 μL were transferred from the first row of wells to the second row, and the process was repeated down the columns to the last wells, from which 50 μL were discarded. Thereafter, 50 μL of the selected bacterial suspension were added to each well, making a final volume of 100 μL. The lowest concentration which showed no bacterial growth was considered the MIC. Results: The L. capassa leaf ethyl acetate extract exhibited antibacterial activity against Salmonella kisarawe and Salmonella typhi, with MIC values of 0.39 and 0.781 mg/mL, respectively. Likewise, the L. capassa root bark ethyl acetate extract inhibited the growth of S. typhi and E. coli, with MIC values of 0.39 and 0.781 mg/mL, respectively. The M. aethiopicum leaf and root bark chloroform extracts displayed antibacterial activity against S. kisarawe and S. typhi, respectively, with a MIC value of 0.781 mg/mL. The M. salicifolia stem bark ethyl acetate extract exhibited antibacterial activity against P. aeruginosa with a MIC value of 0.39 mg/mL, whereas the methanolic stem and root bark extracts of the same plant inhibited the growth of Proteus mirabilis and Klebsiella pneumoniae with a MIC value of 0.781 mg/mL. Conclusion: It was concluded that M. aethiopicum, L. capassa, A. anthelmentica, and M. salicifolia are potential sources of antibacterial agents. Further studies to establish the structures of the antibacterial compounds and to evaluate the active ingredients are recommended.
Keywords: Albizia anthelmentica, Lonchocarpus capassa, Mystroxylon aethiopicum, Myrica salicifolia
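A minimal sketch of reading the MIC off the two-fold microdilution series described above (each 50 μL transfer into 50 μL of broth halves the concentration); the growth readings are hypothetical:

```python
# Two-fold serial dilution down the plate: the first well holds the
# 100 mg/mL extract diluted 1:1 in broth, and each transfer halves it.
start = 100.0
concentrations = [start / 2**i for i in range(1, 9)]  # 50, 25, ... 0.39 mg/mL

# Hypothetical growth readings for one extract (True = visible growth).
growth = [False, False, False, True, True, True, True, True]

# MIC = lowest concentration with no visible growth.
mic = min(c for c, g in zip(concentrations, growth) if not g)
print(f"MIC = {mic:.3f} mg/mL")
```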
Procedia PDF Downloads 219
13065 From Primer Generation to Chromosome Identification: A Primer Generation Genotyping Method for Bacterial Identification and Typing
Authors: Wisam H. Benamer, Ehab A. Elfallah, Mohamed A. Elshaari, Farag A. Elshaari
Abstract:
A challenge for laboratories is to provide bacterial identification and antibiotic sensitivity results within a short time. Hence, advancement in the required technology is desirable to improve timing, accuracy, and quality. Even with the current advances in methods used for both phenotypic and genotypic identification of bacteria, there is a need to develop method(s) that enhance the outcome of bacteriology laboratories in accuracy and time. The hypothesis introduced here is based on the assumption that the chromosome of any bacterium contains unique sequences that can be used for its identification and typing. The outcome of a pilot study designed to test this hypothesis is reported in this manuscript. Methods: The complete chromosome sequences of several bacterial species were downloaded for use as search targets for unique sequences. Visual Basic and SQL Server (2014) were used to generate a complete set of 18-base-long primers, a process that started with the reverse translation of six randomly chosen amino acids to limit the number of generated primers. In addition, software was designed to scan the downloaded chromosomes for similarities using the generated primers, and the resulting hits were classified according to the number of similar chromosomal sequences, i.e., unique or otherwise. Results: All primers that had identical/similar sequences in the selected genome sequence(s) were classified according to the number of hits in the chromosome search. Those that were identical to a single site on a single bacterial chromosome were referred to as unique. On the other hand, most generated primer sequences were identical to multiple sites on a single chromosome or on multiple chromosomes. Following scanning, the generated primers were classified based on their ability to differentiate between medically important bacteria, and the initial results look promising. Conclusion: A simple strategy that starts by generating primers was introduced; the primers were used to screen bacterial genomes for matches. Primer(s) that were uniquely identical to a specific DNA sequence on a specific bacterial chromosome were selected. The identified unique sequences can be used in different molecular diagnostic techniques, possibly to identify bacteria. In addition, a single primer that can identify multiple sites in a single chromosome can be exploited for region or genome identification. Although draft genome sequences of organism isolates enable high-throughput primer design using an alignment strategy, which enhances diagnostic performance in comparison to traditional molecular assays, in this method the generated primers can be used to identify an organism before the draft sequence is completed. In addition, the generated primers can be used to build a bank for easy access to primers that can be used to identify bacteria.
Keywords: bacterial chromosome, bacterial identification, sequence, primer generation
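A minimal sketch of the primer-generation idea, assuming a toy codon table and chromosome; the original workflow used Visual Basic and SQL Server, so Python here is purely illustrative:

```python
from itertools import product

# Fragment of the standard codon table; a full table would cover all
# 20 amino acids.
codons = {"M": ["ATG"], "W": ["TGG"], "F": ["TTT", "TTC"],
          "K": ["AAA", "AAG"], "E": ["GAA", "GAG"], "D": ["GAT", "GAC"]}

def reverse_translate(peptide: str):
    """All 18-base primers encoding a 6-residue peptide."""
    return ["".join(p) for p in product(*(codons[aa] for aa in peptide))]

genome = "ATGTGGTTTAAAGAAGATCCGATGTGG"   # toy chromosome sequence

# Keep primers that hit exactly one site (non-overlapping count);
# primers with zero hits simply do not match this chromosome.
for primer in reverse_translate("MWFKED"):
    if genome.count(primer) == 1:
        print("unique primer:", primer)
```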
Procedia PDF Downloads 193
13064 Production of Nanocomposite Electrical Contact Materials Ag-SnO2, W-Cu and Cu-C in Thermal Plasma
Authors: A. V. Samokhin, A. A. Fadeev, M. A. Sinaiskii, N. V. Alekseev, A. V. Kolesnikov
Abstract:
Composite materials in which a metal matrix is reinforced by ceramic or metal particles are of great interest for use in the manufacturing of electrical contacts. Significant improvement of the composite's physical and mechanical properties, as well as an increase in the performance parameters of composite-based products, can be achieved if a nanoscale structure in the composite materials is obtained by using nanosized powders as starting components. The results of the synthesis of nanosized composite powders (Ag-SnO2, W-Cu, and Cu-C) in DC thermal plasma flows are presented in this paper. The investigations included the following processes: recondensation of a micron powder mixture Ag + SnO2 in a nitrogen plasma; reduction of an oxide powder mixture (WO3 + CuO) in a hydrogen-nitrogen plasma; and decomposition of copper formate and copper acetate powders in a nitrogen plasma. Calculations of the equilibrium compositions of the multicomponent systems Ag-Sn-O-N, W-Cu-O-H-N, and Cu-O-C-H-N in the temperature range of 400-5000 K were carried out to estimate the basic process characteristics. Experimental studies of the processes were performed using a plasma reactor with a confined jet flow. The plasma jet net power was in the range of 2-13 kW, and the feedstock flow rate was up to 0.35 kg/h. The obtained powders were characterized by TEM, HR-TEM, SEM, EDS, ED-XRF, XRD, BET, and QEA methods. Nanocomposite Ag-SnO2 (12 wt. %): processing of the initial powder mixture (Ag + SnO2) in a nitrogen thermal plasma stream allowed the production of nanopowders with a specific surface area of up to 24 m2/g, consisting predominantly of particles with sizes less than 100 nm. According to the XRD results, tin was present in the obtained products as the SnO2 phase and also as intermetallic AgxSn phases. Nanocomposite W-Cu (20 wt. %): reduction of the (WO3 + CuO) mixture in the hydrogen-nitrogen plasma provides W-Cu nanopowder with particle sizes in the range of 10-150 nm. The particles have a mainly spherical shape and a tungsten core - copper shell structure. The thickness of the shell is about several nanometers; the shell is composed of copper and its oxides (Cu2O, CuO). The nanopowders had a 1.5 wt. % oxygen impurity. Heat treatment in a hydrogen atmosphere allows the oxygen content to be reduced to less than 0.1 wt. %. Nanocomposite Cu-C: copper nanopowders were found as products of the decomposition of the starting copper compounds. The nanopowders primarily had a spherical shape with a particle size of less than 100 nm. The main phase was copper, with small amounts of Cu2O and CuO oxides. The copper formate decomposition products had a specific surface area of 2.5-7 m2/g and contained 0.15-4 wt. % carbon, while the copper acetate decomposition products had a specific surface area of 5-35 m2/g and a carbon content of 0.3-5 wt. %. Compacting of the nanocomposites (sintering in hydrogen for Ag-SnO2 and spark plasma sintering (SPS) for W-Cu) showed that samples having a relative density of 97-98% with a submicron structure can be obtained. The studies indicate the possibility of using high-intensity plasma processes to create new technologies to produce nanocomposite materials for electrical contacts.
Keywords: electrical contact, material, nanocomposite, plasma, synthesis
Procedia PDF Downloads 235
13063 Mining Scientific Literature to Discover Potential Research Data Sources: An Exploratory Study in the Field of Haemato-Oncology
Authors: A. Anastasiou, K. S. Tingay
Abstract:
Background: Discovering suitable datasets is an important part of health research, particularly for projects working with clinical data from patients organized in cohorts (cohort data), but with the proliferation of so many national and international initiatives, it is becoming increasingly difficult for research teams to locate real-world datasets that are most relevant to their project objectives. We present a method for identifying healthcare institutes in the European Union (EU) which may hold haemato-oncology (HO) data. A key enabler of this research was the bibInsight platform, a scientometric data management and analysis system developed by the authors at Swansea University. Method: A PubMed search was conducted using HO clinical terms taken from previous work. The resulting XML file was processed using the bibInsight platform, linking affiliations to the Global Research Identifier Database (GRID). GRID is an international, standardized list of institutions, including the city and country in which each institution exists, as well as a category for the main business type, e.g., Academic, Healthcare, Government, Company. Countries were limited to the 28 current EU members, and institute type to 'Healthcare'. An article was considered valid if at least one author was affiliated with an EU-based healthcare institute. Results: The PubMed search produced 21,310 articles, consisting of 9,885 distinct affiliations with correspondence in GRID. Of these, 760 affiliations were from EU countries, and 390 of those were healthcare institutes. One affiliation was excluded as being a veterinary hospital. Two EU countries did not have any publications in our analysis dataset. The results were analysed by country and by individual healthcare institute. Networks both within the EU and internationally show institutional collaborations, which may suggest a willingness to share data for research purposes. Geographical mapping can ensure that data have broad population coverage. Collaborations with industry or government may exclude healthcare institutes that have embargos or additional costs associated with data access. Conclusions: Data reuse is becoming increasingly important both for ensuring the validity of results and for economy of available resources. The ability to identify potential, specific data sources from over twenty thousand articles in less than an hour could assist in improving knowledge of, and access to, data sources. As our method has not yet determined whether these healthcare institutes are holding data, or merely publishing on that topic, future work will involve text mining of data-specific concordant terms to identify numbers of participants, demographics, study methodologies, and sub-topics of interest.
Keywords: data reuse, data discovery, data linkage, journal articles, text mining
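A minimal sketch of the affiliation-extraction step using only the standard library; the XML fragment, truncated country list, and 'Hospital' keyword test are simplified assumptions, not the bibInsight implementation:

```python
import xml.etree.ElementTree as ET

# Toy PubMed-style XML; real exports nest authors under each article.
xml = """<PubmedArticleSet><PubmedArticle><AuthorList>
  <Author><AffiliationInfo><Affiliation>
    Dept of Haematology, Example University Hospital, Germany
  </Affiliation></AffiliationInfo></Author>
</AuthorList></PubmedArticle></PubmedArticleSet>"""

EU = {"Germany", "France", "Latvia"}   # truncated member list for the sketch
root = ET.fromstring(xml)
for aff in root.iter("Affiliation"):
    text = " ".join(aff.text.split())          # normalize whitespace
    country = text.rsplit(",", 1)[-1].strip()  # last comma-separated field
    if country in EU and "Hospital" in text:
        print("candidate healthcare institute:", text)
```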
Procedia PDF Downloads 115
13062 Predictive Factors of Prognosis in Acute Stroke Patients Receiving Traditional Chinese Medicine Therapy: A Retrospective Study
Authors: Shaoyi Lu
Abstract:
Background: Traditional Chinese medicine has been used to treat stroke, which is a major cause of morbidity and mortality. There is, however, no clear agreement about the optimal timing, population, efficacy, and predictive prognosis factors of traditional Chinese medicine supplemental therapy. Method: In this study, we performed a retrospective analysis with data collected from stroke patients in the Stroke Registry In Chang Gung Healthcare System (SRICHS). Stroke patients who received traditional Chinese medicine consultation in the neurology ward of Keelung Chang Gung Memorial Hospital from Jan 2010 to Dec 2014 were enrolled. Clinical profiles including the neurologic deficit, activities of daily living, and other basic characteristics were analyzed. Through propensity score matching, we compared the NIHSS and Barthel index before and after hospitalization, applied subgroup analysis, and adjusted by the multivariate regression method. Results: In total, 115 stroke patients were enrolled, with 23 in the experimental group and 92 in the control group. The most important factors for prognosis prediction were the scores on the National Institutes of Health Stroke Scale and the Barthel index right before hospitalization. Traditional Chinese medicine intervention had no statistically significant influence on the neurological deficit of acute stroke patients, and a mild negative influence on the daily activity performance of acute hemorrhagic stroke patients. Conclusion: The efficacy of traditional Chinese medicine as a supplemental therapy for acute stroke patients was controversial. The reason for this phenomenon might be complex and requires more research to comprehend.
Keywords: traditional Chinese medicine, complementary and alternative medicine, stroke, acupuncture
Procedia PDF Downloads 360
13061 Qualitative Study of Organizational Variables Affecting Nurses’ Resilience in Pandemic Condition
Authors: Zahra Soltani Shal
Abstract:
Introduction: The COVID-19 pandemic marks an extraordinary global public health crisis unseen in the last century, with its rapid spread worldwide and associated mortality burden. Healthcare resilience during a pandemic is crucial not only for continuous and safe patient care but also for control of any outbreak. Aim: The present study was conducted to discover the organizational variables effective in increasing resilience and the continued work of nurses in critical and stressful pandemic conditions. Method: The study population was nurses working in hospitals for patients with coronavirus. Sampling was done purposefully, and information was collected from 15 nurses through in-depth semi-structured interviews. The data were analyzed using the framework analysis method, which consists of five steps, and the results are classified in the table. Results: According to the findings from the semi-structured interviews, among organizational variables, organizational commitment (affective commitment, continuous commitment, normative commitment) played a prominent role in nurses' resilience. Discussion: Despite the nurses not withdrawing and remaining resilient, the negative quality of their working life affected their level of performance and ability and led to fatigue and physical and mental exhaustion. Implications for practice: By equipping hospitals and improving the facilities available to nurses, their organizational commitment can be increased, leading to their resilience in critical situations. Supervisors and senior officials at hospitals should be responsible for nurses' health and safety. A clear and codified program for critical situations and comprehensive management is effective in improving the quality of the work life of nurses. Creating an empathetic and interactive environment can help promote nurses' mental health.
Keywords: organizational commitment, quality of work life, nurses resilience, pandemic, coronavirus
Procedia PDF Downloads 162
13060 Vulnerability Assessment of Vertically Irregular Structures during Earthquake
Authors: Pranab Kumar Das
Abstract:
Vulnerability assessment of buildings with irregularity in the vertical direction has been carried out in this study. The construction of vertically irregular buildings is increasing in the context of fast urbanization in developing countries, including India. During two reconnaissance-based surveys performed after the Nepal earthquake of 2015 and the Imphal (India) earthquake of 2016, it was observed that many structures were damaged due to their vertically irregular configuration. These irregular buildings must nevertheless perform safely during seismic excitation. There is therefore an urgent need to determine the actual vulnerability of irregular structures, so that remedial measures can be taken to protect them during natural hazards such as earthquakes. This assessment will be very helpful for India as well as for other developing countries. A substantial body of research addresses the vulnerability of plan-asymmetric buildings, but in the field of vertically irregular buildings much less effort has been directed at finding out their vulnerability during an earthquake. Irregularity in the vertical direction may be caused by an irregular distribution of mass or stiffness or by a geometrically irregular configuration. Detailed analysis of such structures, particularly nonlinear/pushover analysis for performance-based design, seems to be a challenging task. The present paper considers a number of models of irregular structures. Building models made of both reinforced concrete and brick masonry are considered for the sake of generality. The analyses are performed with the help of both the finite element method and computational methods. The study, as a whole, may help to arrive at a reasonably good estimate of, and insight into, the fundamental and other natural periods of such vertically irregular structures. The ductility demand, storey drift, and seismic response study help to identify the location of critical stress concentration. In summary, this paper is a humble step towards understanding the vulnerability of, and framing guidelines for, vertically irregular structures.
Keywords: ductility, stress concentration, vertically irregular structure, vulnerability
Procedia PDF Downloads 229
13059 The Use of Brachytherapy in the Treatment of Liver Metastases: A Systematic Review
Authors: Mateusz Bilski, Jakub Klas, Emilia Kowalczyk, Sylwia Koziej, Katarzyna Kulszo, Ludmiła Grzybowska- Szatkowska
Abstract:
Background: Liver metastases are a common complication of primary solid tumors and significantly reduce patient survival. In the era of increasing diagnosis of oligometastatic disease and oligoprogression, methods of local treatment of metastases, i.e., MDT, are becoming more important, and their implementation can be considered for liver metastases. To date, the mainstay of treatment for oligometastatic disease has been surgical resection, but not all patients qualify for the procedure. As an alternative to surgical resection, radiotherapy techniques have become available, including stereotactic body radiation therapy (SBRT) and high-dose interstitial brachytherapy (iBT). iBT is an invasive method that emits very high doses of radiation from the inside of the tumor to the outside. This technique provides better tumor coverage than SBRT while having little impact on surrounding healthy tissue, and it eliminates some concerns involving respiratory motion. Methods: We conducted a systematic review of the scientific literature on the use of brachytherapy in the treatment of liver metastases from 2018 to 2023 using the PubMed and ResearchGate browsers according to PRISMA rules. Results: From 111 articles, 18 publications containing information on 729 patients with liver metastases were selected. iBT has been shown to provide high rates of tumor control. Among 14 patients with 54 unresectable RCC liver metastases, local tumor control (LTC) after iBT was 92.6% during a median follow-up of 10.2 months, and PFS was 3.4 months. In an analysis of 167 patients treated with brachytherapy at a single fraction dose of 15-25 Gy, at 6- and 12-month follow-up, LRFS rates of 88.4-88.7% and 70.7-71.5%, PFS of 78.1% and 53.8%, and OS of 92.3-96.7% and 76.3-79.6%, respectively, were achieved. No serious complications were observed in any of the patients. Distant intrahepatic progression occurred later in patients with unresectable liver metastases after brachytherapy (PFS: 19.80 months) than in HCC patients (PFS: 13.50 months). A significant difference in LRFS between CRC patients (84.1% vs. 50.6%) and other histologies (92.4% vs. 92.4%) was noted, suggesting that a higher treatment dose is necessary for CRC patients. The average target dose for metastatic colorectal cancer was 40-60 Gy (compared to 100-250 Gy for HCC). To better assess sensitivity to therapy and predict side effects, it has been suggested that humoral mediators be evaluated. It was also shown that baseline levels of TNF-α, MCP-1, and VEGF, as well as NGF and CX3CL, correlated with both tumor volume and radiation-induced liver damage, one of the most serious complications of iBT, indicating their potential role as biomarkers of therapy outcome. Conclusions: The use of brachytherapy in the treatment of liver metastases of various cancers appears to be an interesting and relatively safe therapeutic method and an alternative to surgery. An important challenge remains the selection of an appropriate brachytherapy method and radiation dose for the corresponding initial tumor type from which the metastasis originated.
Keywords: liver metastases, brachytherapy, CT-HDRBT, iBT
Procedia PDF Downloads 114
13058 Seismic Response of Reinforced Concrete Buildings: Field Challenges and Simplified Code Formulas
Authors: Michel Soto Chalhoub
Abstract:
Building code-related literature provides recommendations on normalizing approaches to the calculation of the dynamic properties of structures. Most building codes make a distinction among types of structural systems, construction materials, and configurations through a numerical coefficient in the expression for the fundamental period. The period is then used in normalized response spectra to compute base shear. The typical parameter used in simplified code formulas for the fundamental period is the overall building height raised to a power determined from analytical and experimental results. However, reinforced concrete buildings, which constitute the majority of built space in less developed countries, pose additional challenges compared to buildings made with a homogeneous material such as steel, or with concrete under stricter quality control. In the present paper, the particularities of reinforced concrete buildings are explored and related to current methods of equivalent static analysis. A comparative study is presented between the Uniform Building Code, commonly used for buildings within and outside the USA, and data from the Middle East used to model 151 reinforced concrete buildings of varying number of bays, number of floors, overall building height, and individual story height. The fundamental period was calculated using eigenvalue matrix computation. The results were also used in a separate regression analysis where the computed period serves as the dependent variable, while five building properties serve as independent variables. The statistical analysis shed light on important parameters that simplified code formulas need to account for, including individual story height, overall building height, floor plan, number of bays, and concrete properties. Such inclusions are important for reinforced concrete buildings in special conditions due to the level of concrete damage, aging, or materials quality control during construction. The overall results of the present analysis show that simplified code formulas for fundamental period and base shear may be applied, but they require revisions to account for multiple parameters. This conclusion is confirmed by the analytical model, where fundamental periods were computed using numerical techniques and eigenvalue solutions. This recommendation is particularly relevant to code upgrades in less developed countries, where it is customary to adopt, and mildly adapt, international codes. We also note the necessity of further research using empirical data from buildings in Lebanon that were subjected to severe damage due to impulse loading or accelerated aging. However, we excluded this study from the present paper and left it for future research, as it has its own peculiarities and requires a different type of analysis.
Keywords: seismic behaviour, reinforced concrete, simplified code formulas, equivalent static analysis, base shear, response spectra
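A minimal sketch of the eigenvalue computation of the fundamental period for an idealized N-storey shear building; the mass and stiffness values are hypothetical, not those of the surveyed buildings:

```python
import numpy as np

# Idealized N-storey shear building: lumped floor masses, inter-storey
# stiffnesses (hypothetical values).
n = 5
m = np.full(n, 2.0e5)      # kg per floor
k = np.full(n, 1.5e8)      # N/m per storey

M = np.diag(m)
K = np.zeros((n, n))
for i in range(n):         # assemble the tridiagonal shear stiffness matrix
    K[i, i] += k[i]
    if i + 1 < n:
        K[i, i] += k[i + 1]
        K[i, i + 1] = K[i + 1, i] = -k[i + 1]

# Generalized eigenproblem K.phi = omega^2 M.phi; M is diagonal here.
omega2 = np.sort(np.linalg.eigvals(np.linalg.inv(M) @ K).real)
periods = 2.0 * np.pi / np.sqrt(omega2)   # descending order: T1 first
print(f"fundamental period T1 = {periods[0]:.3f} s")
```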
Procedia PDF Downloads 232
13057 The Effect of Reaction Time on the Morphology and Phase of Quaternary Ferrite Nanoparticles (FeCoCrO₄) Synthesised from a Single Source Precursor
Authors: Khadijat Olabisi Abdulwahab, Mohammad Azad Malik, Paul O'Brien, Grigore Timco, Floriana Tuna
Abstract:
The synthesis of spinel ferrite nanoparticles with a narrow size distribution is crucial for their numerous applications, including information storage, hyperthermia treatment, drug delivery, contrast agents in magnetic resonance imaging, catalysis, sensors, and environmental remediation. Ferrites have the general formula MFe₂O₄ (M = Fe, Co, Mn, Ni, Zn, etc.) and possess remarkable electrical and magnetic properties which depend on the cations, the method of preparation, the size, and the site occupancies. To the best of our knowledge, there are no reports on the use of a single source precursor to synthesise quaternary ferrite nanoparticles. Herein, we demonstrate the use of the trimetallic iron pivalate cluster [CrCoFeO(O₂CᵗBu)₆(HO₂CᵗBu)₃] as a single source precursor to synthesise monodisperse cobalt chromium ferrite (FeCoCrO₄) nanoparticles by the hot injection thermolysis method. The precursor was thermolysed in oleylamine and oleic acid, with diphenyl ether as solvent, at 260 °C. The effect of reaction time on the stoichiometry, phases, and morphology of the nanoparticles was studied. The p-XRD patterns of the nanoparticles obtained after one hour showed a pure phase of cubic iron cobalt chromium ferrite (FeCoCrO₄). TEM showed that more monodispersed spherical ferrite nanoparticles were obtained after one hour. Magnetic measurements revealed that the ferrite particles are superparamagnetic at room temperature. The nanoparticles were characterised by Powder X-ray Diffraction (p-XRD), Transmission Electron Microscopy (TEM), Energy Dispersive Spectroscopy (EDS), ED-XRF, XRD, BET, and QEA methods.
Keywords: cobalt chromium ferrite, colloidal, hot injection thermolysis, monodisperse, reaction time, single source precursor, quaternary ferrite nanoparticles
Procedia PDF Downloads 315
13056 The Construction Technology of Dryer Silo Materials to Grains Made from Webbing Bamboo: A Drying Technology Solutions to Empowerment Farmers in Yogyakarta, Indonesia
Authors: Nursigit Bintoro, Abadi Barus, Catur Setyo Dedi Pamungkas
Abstract:
Indonesia is an agrarian country; much of its population works as farmers. Among the popular agricultural commodities in Indonesia are paddy and corn. Production of paddy and corn has increased, but this has not been balanced by the development of appropriate technology for farmers. The drying method applied by farmers still relies on sunshine. Drying by this method has some drawbacks, such as differences in the moisture content of the corn grains, a drying time of around 3 days, and the lower quality of the products obtained. Besides, drying by sunshine cannot be done when the rainy season arrives; in that season the product obtained has lower quality. One solution to the above problems is to create a dryer with simple technology: a silo dryer made from webbing bamboo and wood. This technology is applicable to farmers' groups, and its construction is quite cheap. The experimental material used in this research was corn grains. The equipment used included a woven bamboo silo with a height of 3 meters and a capacity of up to 900 kg, gas, a burner, a blower, bucket elevators, thermocouples, and an Arduino 2560 microcontroller. This setup automatically records all temperature and relative humidity data. During drying, 9 samples were taken every 30 minutes for measuring moisture content with a moisture meter. By using this technology, farmers can save time, energy, and cost in drying their agricultural products. In addition, the grains dried with this technology have good moisture content and a longer shelf life because the temperature during the heating process is controlled. Therefore, this technology is applicable to the public, because the materials used to make the dryer are easy to find and cheap, and the manufacture of the dryer is simple, with good quality.
Keywords: grains, dryer, moisture content, appropriate technology
Procedia PDF Downloads 358
13055 Dairy Wastewater Treatment by Electrochemical and Catalytic Method
Authors: Basanti Ekka, Talis Juhna
Abstract:
Dairy industrial effluents originating from typical processing activities contain various organic and inorganic constituents, including proteins, fats, inorganic salts, antibiotics, detergents, sanitizers, pathogenic viruses, bacteria, etc. These contaminants are harmful not only to human beings but also to aquatic flora and fauna. Because the effluent contains such a broad range of contaminants, the targeted removal methods available in the literature are not viable solutions at the industrial scale. Therefore, in this ongoing research, a series of coagulation, electrochemical, and catalytic methods will be employed. Bulk coagulation and electrochemical methods can remove most of the contaminants, but some harmful chemicals may slip through; therefore, specifically designed and synthesized catalysts will be employed for the removal of the targeted chemicals. In the context of the Latvian dairy industry, work is presently in progress on the characterization of dairy effluents by total organic carbon (TOC), Inductively Coupled Plasma Mass Spectrometry (ICP-MS)/Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES), High-Performance Liquid Chromatography (HPLC), Gas Chromatography-Mass Spectrometry (GC-MS), and Mass Spectrometry. After careful evaluation of the dairy effluents, a cost-effective natural coagulant will be employed prior to advanced electrochemical technologies such as electrocoagulation and electro-oxidation as a secondary treatment process. Finally, graphene oxide (GO) based hybrid materials will be used for post-treatment of the dairy wastewater, as graphene oxide has been widely applied in fields such as environmental remediation and energy production owing to its abundant oxygen-containing groups. Modified GO will be used as a catalyst for the removal of the contaminants remaining after the electrochemical process.Keywords: catalysis, dairy wastewater, electrochemical method, graphene oxide
Procedia PDF Downloads 144
13054 Data Mining Spatial: Unsupervised Classification of Geographic Data
Authors: Chahrazed Zouaoui
Abstract:
In recent years, the volume of geospatial information has been increasing with the evolution of information and communication technologies; this information is often presented through geographic information systems (GIS) and stored in spatial databases (SDB). Classical data mining has proved weak at extracting knowledge from such enormous amounts of data because of the particularity of spatial entities, which are characterized by interdependence (the first law of geography). This gave rise to spatial data mining: the process of analyzing geographic data to extract knowledge and spatial relationships from geospatial data. Among its methods, we distinguish the monothematic and the thematic. Geo-clustering, one of the main tasks of spatial data mining, belongs to the monothematic methods. It groups similar geospatial entities into the same class and assigns more dissimilar entities to different classes; in other words, it maximizes intra-class similarity and minimizes inter-class similarity, taking into account the particularity of geospatial data. Two approaches to geo-clustering exist: dynamic processing, which applies algorithms designed for the direct treatment of spatial data, and the pre-processing approach, which applies classic clustering algorithms to pre-processed data (with spatial relationships integrated). Since the pre-processing approach is quite complex in many cases, approximate solutions are sought through approximation algorithms; we are interested in dedicated approaches (partitioning-based and density-based clustering methods) and in the bees algorithm (a biomimetic approach). Our study proposes a significant design for this problem, using different algorithms to automatically detect geospatial neighbourhoods in order to implement geo-clustering by pre-processing, and applies the bees algorithm to this problem for the first time in the geospatial field.Keywords: mining, GIS, geo-clustering, neighborhood
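To make the density-based branch of geo-clustering concrete, the sketch below applies DBSCAN, a standard density-based method, to projected point coordinates with scikit-learn. This is an illustration only — the paper's own contribution (automatic neighbourhood detection plus the bees algorithm) is not reproduced here, and the eps/min_samples values are placeholders.

```python
# Minimal sketch: density-based geo-clustering on projected (x, y) coordinates.
# Assumptions: scikit-learn is available; coordinates are already projected to
# metres; eps and min_samples are placeholders to be tuned per dataset.
import numpy as np
from sklearn.cluster import DBSCAN

points = np.array([
    [0.0, 0.0], [10.0, 12.0], [11.0, 11.0], [12.0, 10.0],   # one dense neighbourhood
    [500.0, 500.0], [505.0, 498.0], [502.0, 503.0],          # another, far away
])

labels = DBSCAN(eps=25.0, min_samples=2).fit_predict(points)
print(labels)  # entities in the same dense neighbourhood share a label; -1 marks noise
```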
Procedia PDF Downloads 375
13053 Growth and Characterization of Cuprous Oxide (Cu2O) Nanorods by Reactive Ion Beam Sputter Deposition (IBSD) Method
Authors: Assamen Ayalew Ejigu, Liang-Chiun Chao
Abstract:
In recent semiconductor and nanotechnology research, quality material synthesis, proper characterization, and production are the big challenges. As cuprous oxide (Cu2O) is a promising semiconductor material for photovoltaic (PV) and other optoelectronic applications, this study aimed to grow and characterize high quality Cu2O nanorods to improve the efficiency of thin film solar cells and for other potential applications. In this study, well-structured cuprous oxide (Cu2O) nanorods were successfully fabricated using the IBSD method, in which the Cu2O samples were grown on silicon substrates at a substrate temperature of 400 °C in an IBSD chamber at a pressure of 4.5 × 10⁻⁵ torr, using copper as the target material. Argon and oxygen were used as the sputter and reactive gases, respectively. The Cu2O nanorods (NRs) were characterized in comparison with a Cu2O thin film (TF) deposited by the same method but with different Ar:O2 flow rates. With an Ar:O2 ratio of 9:1, single-phase, pure, polycrystalline Cu2O NRs with a diameter of ~500 nm and a length of ~4.5 µm were grown. On increasing the oxygen flow rate, a pure single-phase polycrystalline Cu2O thin film (TF) was obtained at an Ar:O2 ratio of 6:1. Field-emission scanning electron microscopy (FE-SEM) measurements showed that both samples have smooth morphologies. X-ray diffraction and Raman scattering measurements reveal the presence of single-phase Cu2O in both samples. The differences between the Raman scattering and photoluminescence (PL) bands of the two samples were also investigated; the results show differences in intensities, in the number of bands, and in band positions. Raman characterization shows that the Cu2O NRs sample has pronounced Raman band intensities and a higher number of Raman bands than the Cu2O TF, which shows only one second-overtone Raman signal at 217 cm⁻¹. Temperature-dependent photoluminescence (PL) measurements showed that the defect luminescence band centered at 720 nm (1.72 eV) is dominant for the Cu2O NRs, while the 640 nm (1.937 eV) band was the only PL band observed from the Cu2O TF. The difference in the optical and structural properties of the samples arises from the change in oxygen flow rate within the process window of sample deposition. This provides a roadmap for further investigation of the electrical and other optical properties towards the tunable fabrication of Cu2O nano/micro-structured samples, for improving the efficiency of thin film solar cells in addition to other potential applications. Finally, the novel morphology and the excellent structural and optical properties show that the grown Cu2O NRs sample is of sufficient quality for further research on nano/micro-structured semiconductor materials.Keywords: defect levels, nanorods, photoluminescence, Raman modes
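As a quick consistency check on the quoted PL band energies, the photon energy–wavelength relation E = hc/λ (with hc ≈ 1239.84 eV·nm) reproduces both values:

```latex
E = \frac{hc}{\lambda} \approx \frac{1239.84~\text{eV nm}}{\lambda},
\qquad
E_{720\,\text{nm}} = \frac{1239.84}{720} \approx 1.72~\text{eV},
\qquad
E_{640\,\text{nm}} = \frac{1239.84}{640} \approx 1.94~\text{eV}.
```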
Procedia PDF Downloads 241
13052 Navigating the Case-Based Learning Multimodal Learning Environment: A Qualitative Study Across the First-Year Medical Students
Authors: Bhavani Veasuvalingam
Abstract:
Case-based learning (CBL) is a popular instructional method aimed at bridging theory to clinical practice. This study explores how a CBL mixed-modality curriculum influences students' learning styles and the strategies that support learning. An explanatory sequential mixed-methods design was employed: in the initial phase, the 44-item Felder-Silverman Index of Learning Styles (ILS) questionnaire was administered to year one medical students (n=142), recruited by convenience sampling, to describe their preferred learning styles. The qualitative phase used three focus group discussions (FGDs) to explore in depth the multimodal learning styles exhibited by the students. From the ILS analysis, most students preferred a combination of learning styles, namely reflective, sensing, visual, and sequential (the RSVISeq style, 24.64%). The frequencies of learning preferences from processing to understanding were well balanced: sequential-global (66.2%), sensing-intuitive (59.86%), active-reflective (57%), and visual-verbal (51.41%). The qualitative data yielded three major themes, namely Theme 1: CBL mixed modalities navigate learners' learning styles; Theme 2: Multimodal learners' active learning strategies support learning; and Theme 3: CBL modalities facilitate turning theory into clinical knowledge. Both the quantitative and qualitative phases strongly indicate a multimodal learning style among year one medical students. Medical students use multimodal learning styles to attain clinical knowledge when learning with CBL mixed modalities. Educators' awareness of multimodal learning styles is crucial for delivering CBL mixed modalities effectively, with strategic pedagogical support that helps students engage with CBL and bridge theoretical knowledge into clinical practice.Keywords: case-based learning, learning style, medical students, learning
Procedia PDF Downloads 95
13051 A Framework for Teaching the Intracranial Pressure Measurement through an Experimental Model
Authors: Christina Klippel, Lucia Pezzi, Silvio Neto, Rafael Bertani, Priscila Mendes, Flavio Machado, Aline Szeliga, Maria Cosendey, Adilson Mariz, Raquel Santos, Lys Bendett, Pedro Velasco, Thalita Rolleigh, Bruna Bellote, Daria Coelho, Bruna Martins, Julia Almeida, Juliana Cerqueira
Abstract:
This project presents a framework for teaching intracranial pressure (ICP) monitoring concepts using a low-cost experimental model in a neurointensive care education program. Data from ICP monitoring contribute to the patient's clinical assessment, may dictate the course of action of a health team (nursing, medical staff), and influence decisions on the appropriate intervention. This study aims to present a safe method for teaching ICP monitoring to medical students in a simulation center. Methodology: Medical school teachers, along with students from the 4th year, built an experimental model for teaching ICP measurement. The model consists of a mannequin's head with a plastic bag inside simulating the cerebral ventricle and an inserted ventricular catheter connected to the ICP monitoring system. The bag simulating the ventricle can also be exchanged for others containing simulated bloody or infected cerebrospinal fluid. On the mannequin's ear, there is a blue point indicating the correct place to set the "zero point" for accurate pressure reading. The educational program includes four steps: 1st - Students receive a script on ICP measurement to read before training; 2nd - Students watch a video on the subject, created in the simulation center, demonstrating each step of ICP monitoring and the proper care, such as correct positioning of the patient, the anatomical structures used to establish the zero point for ICP measurement, and the safe range of ICP; 3rd - Students practice the procedure on the model, with teachers assisting during training; 4th - Students are assessed with a checklist form, with feedback and correction of wrong actions. Results: Students expressed interest in learning ICP monitoring. Tests concerning the hit rate are still being performed. The final results and the video will be shown at the event. Conclusion: The study of intracranial pressure measurement based on an experimental model is an effective and controlled method of learning and research, well suited to teaching neurointensive care practices. Assessment based on a checklist form helps teachers keep track of students' learning progress. This project offers medical students a safe method to develop intensive neurological monitoring skills for the clinical assessment of patients with neurological disorders.Keywords: neurology, intracranial pressure, medical education, simulation
Procedia PDF Downloads 172
13050 Acetalization of Carbonyl Compounds by Using Al2(HPO4)3 under Green Conditions
Authors: Fariba Jafari, Samaneh Heydarian
Abstract:
Al2(HPO4)3 was easily prepared and used as a solid acid catalyst in the acetalization of carbonyl compounds at room temperature under solvent-free conditions. Protection was achieved in short reaction times and in good to high isolated yields. The cheapness and availability of this reagent, together with the simple procedure and work-up, make this method attractive for organic synthesis.Keywords: acetalization, acid catalysis, carbonyl compounds, green conditions, protection
Procedia PDF Downloads 317
13049 Spirituality Enhanced with Cognitive-Behavioural Techniques: An Effective Method for Women with Extramarital Infidelity: A Literature Review
Authors: Setareh Yousife
Abstract:
Introduction: Studies suggest that variants of extramarital infidelity (EMI), such as sexual and emotional infidelity, are increasing in marriage relationships. To our knowledge, less is known about which therapies and mental-hygiene factors can most effectively prevent and address this behavior. Spiritual and cognitive-behavioural health have been shown to reduce marital conflict and increase marital satisfaction and commitment. Objective: This study aims to discuss the effectiveness of spiritual counseling combined with cognitive-behavioural techniques in addressing extramarital infidelity. Method: Descriptive, analytical, and interventional articles indexed in the SID, Noormags, Scopus, Iranmedex, Web of Science, and PubMed databases, as well as Google Scholar, were searched. We focused on studies of women in extramarital relationships, restricted to heterosexual married couples, in which spirituality/religion and CBT were used as coping techniques in EMI therapy. Finally, the full text of all eligible articles was retrieved and discussed in this review. Results: 25 publications were identified, and their textual analysis was facilitated through four thematic approaches: the nature of EMI in women; the meaning of spirituality in the context of mental health, human behavior, and psychotherapy; spirituality integrated into the cognitive-behavioural approach; and the role of spirituality as a deterrent to EMI. Conclusions: The integration of the findings discussed herein suggests that the application of cognitive and behavioral skills in addressing these kinds of destructive family-based relationships is inevitable. As treatments based on religion/spirituality or cognition/behavior alone do not seem adequately effective in dealing with EMI, combining these approaches may lead to higher efficacy in fewer sessions and a shorter time.Keywords: spirituality, religion, cognitive behavioral therapy, extramarital relation, infidelity
Procedia PDF Downloads 254
13048 Quantitative Evaluation of Mitral Regurgitation by Using Color Doppler Ultrasound
Authors: Shang-Yu Chiang, Yu-Shan Tsai, Shih-Hsien Sung, Chung-Ming Lo
Abstract:
Mitral regurgitation (MR) is a heart disorder in which the mitral valve does not close properly when the heart pumps out blood. MR is the most common form of valvular heart disease in the adult population. The echocardiographic diagnosis of MR is straightforward given the well-known clinical evidence. In determining MR severity, quantification of sonographic findings is useful for clinical decision making. Clinically, the vena contracta is a standard for MR evaluation. The vena contracta is the point in a blood stream where the diameter of the stream is smallest and the velocity is at its maximum. Quantifying the vena contracta, i.e. the vena contracta width (VCW) at the mitral valve, provides a numeric measurement for severity assessment. However, manually delineating the VCW may not be accurate enough; the result depends heavily on operator experience. Therefore, this study proposed an automatic method to quantify the VCW for evaluating MR severity. In color Doppler ultrasound, blood flowing toward the probe appears as a red or yellow area, with brightness representing the flow rate; the VCW can be observed in this area. In the experiment, colors were first transformed into HSV (hue, saturation, and value) to align closely with the way human vision perceives red and yellow. An ellipse was fitted to the high-flow-rate area in the left atrium, and the angle between the mitral valve and the ultrasound probe was calculated to obtain the shortest perpendicular diameter as the VCW. Taking the manual measurement as the standard, the method differed by only 0.02 cm (0.38 vs. 0.36) to 0.03 cm (0.42 vs. 0.45). The results showed that the proposed automatic VCW extraction can be efficient and accurate for clinical use. The process also has the potential to reduce intra- and inter-observer variability in measuring subtle distances.Keywords: mitral regurgitation, vena contracta, color Doppler, image processing
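The pipeline described above (HSV conversion, isolation of the red/yellow high-flow region, ellipse fitting) can be sketched in Python with OpenCV. This is an illustrative reconstruction, not the authors' code: the HSV thresholds, the pixel-to-centimetre scale, and the use of the fitted ellipse's minor axis as the VCW proxy are all placeholder assumptions, and the probe-angle correction is omitted.

```python
# Illustrative sketch of the described VCW pipeline (not the authors' implementation).
# Assumptions: OpenCV + NumPy available; threshold and scale values are placeholders.
import cv2
import numpy as np

def estimate_vcw(frame_bgr, mm_per_pixel=0.2):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Flow toward the probe renders red/yellow; red hue wraps around 0 in OpenCV.
    mask = cv2.inRange(hsv, (0, 80, 80), (35, 255, 255)) | \
           cv2.inRange(hsv, (170, 80, 80), (179, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    jet = max(contours, key=cv2.contourArea)  # largest high-flow region
    if len(jet) < 5:
        raise ValueError("too few contour points to fit an ellipse")
    (_, _), (axis1, axis2), _ = cv2.fitEllipse(jet)
    return min(axis1, axis2) * mm_per_pixel / 10.0  # minor axis as VCW, in cm

frame = cv2.imread("doppler_frame.png")  # hypothetical color Doppler frame
print(f"Estimated VCW: {estimate_vcw(frame):.2f} cm")
```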
Procedia PDF Downloads 370