Search results for: multiple conditions diagnosis
3440 Biocontrol of Fusarium Crown and Root Rot and Enhancement of Tomato Solanum lycopersicum L. Growth Using Solanum linnaeanum L. Extracts
Authors: Ahlem Nefzi, Rania Aydi Ben Abdallah, Hayfa Jabnoun-Khiareddine, Nawaim Ammar, Sined Medimagh-Saidana, Mejda Daami-Remadi
Abstract:
In the present study, leaf, stem, and fruit aqueous extracts of native wild Solanum linnaeanum L. were screened for their ability to suppress Fusarium Crown and Root Rot disease and to enhance tomato (Solanum lycopersicum L.) growth under greenhouse conditions. Leaf extract used at 30% w/v was the most effective, reducing the leaf and root damage index by 92.3% and the extent of vascular discoloration by 97.56% compared to the Fusarium oxysporum f. sp. radicis-lycopersici (FORL)-inoculated and untreated control. A significant promotion of growth parameters (root length, shoot height, root and shoot biomass, and stem diameter) was recorded on tomato cv. Rio Grande seedlings, by 40.3-94.1% as compared to the FORL-inoculated control and by 9.6-88.8% over the pathogen-free control. All S. linnaeanum aqueous extracts tested significantly stimulated germination, by 10.2 to 80.1% relative to the untreated control. FORL mycelial growth, assessed using the poisoned food technique, varied depending on the plant organs, extracts, and concentrations used. Butanolic extracts were the most active, leading to a 60.81% decrease in FORL mycelial growth. HPLC analysis of the butanolic extract revealed the presence of thirteen phenolic compounds. Thus, S. linnaeanum can be explored as a potential natural source of antifungal and biofertilizing compounds.
Keywords: antifungal activity, HPLC-MS analysis, Fusarium oxysporum f. sp. radicis-lycopersici, tomato growth
Procedia PDF Downloads 160
3439 Study of the Effect of the Contra-Rotating Component on the Performance of the Centrifugal Compressor
Authors: Van Thang Nguyen, Amelie Danlos, Richard Paridaens, Farid Bakir
Abstract:
This article presents a study of the effect of a contra-rotating component on the efficiency of centrifugal compressors. A contra-rotating centrifugal compressor (CRCC) is constructed using two independent rotors, rotating in opposite directions, that replace the single rotor of a conventional centrifugal compressor (REF). To respect the geometrical parameters of the REF, the two rotors of the CRCC are designed based on a single rotor geometry, using the hub and shroud length ratio parameter of the meridional contour. First, the first rotor is designed by choosing a value of the length ratio. Then, the second rotor is calculated to adapt to the fluid flow of the first rotor according to aerodynamic principles. In this study, four values of the length ratio, 0.3, 0.4, 0.5, and 0.6, are used to create four configurations, CF1, CF2, CF3, and CF4, respectively. For comparison purposes, the circumferential velocities at the outlet of the REF and the CRCC are preserved, which means that the single rotor of the REF and the second rotor of the CRCC rotate at the same speed of 16,000 rpm. The speed of the first rotor in this case is chosen to be equal to the speed of the second rotor. A CFD simulation is conducted to compare the performance of the CRCC and the REF under the same boundary conditions. The results show that the configuration with a higher length ratio gives a higher pressure rise; however, its efficiency is lower. An investigation over the entire operating range shows that CF1 is the best configuration in this case. In addition, the CRCC can improve the pressure rise as well as the efficiency by changing the speed of each rotor independently. The results of changing the first rotor speed show that, with a 130% speed increase, the pressure ratio rises by 8.7% while the efficiency remains stable at the flow rate of the design operating point.
Keywords: centrifugal compressor, contra-rotating, interaction rotor, vacuum
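The relationship the abstract describes between rotor speed, outlet circumferential velocity, and pressure rise follows from Euler's turbomachinery equation. A minimal sketch in Python; the rotor radius, slip factor, efficiency, and inlet conditions below are illustrative assumptions, not values from the paper:

```python
import math

def euler_pressure_ratio(u2, cu2, cp=1005.0, t01=288.15, gamma=1.4, eta=0.85):
    """Estimate the total pressure ratio of a centrifugal stage from the
    Euler work w = u2 * cu2 (axial inflow assumed, so cu1 = 0)."""
    w = u2 * cu2                              # specific work, J/kg
    t0_ratio = 1.0 + eta * w / (cp * t01)     # total temperature ratio
    return t0_ratio ** (gamma / (gamma - 1.0))

# Hypothetical rotor: 0.10 m tip radius at the 16,000 rpm quoted in the text
omega = 16000 * 2 * math.pi / 60              # shaft speed, rad/s
u2 = omega * 0.10                             # tip speed, m/s
pr = euler_pressure_ratio(u2, 0.9 * u2)       # 0.9 is an assumed slip factor
print(round(pr, 3))
```

The sketch illustrates why preserving the outlet circumferential velocity (cu2) across the REF and CRCC configurations fixes the work input and makes the pressure-rise comparison meaningful.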
Procedia PDF Downloads 134
3438 Fabrication of Drug-Loaded Halloysite Nanotubes Containing Sodium Alginate/Gelatin Composite Scaffolds
Authors: Masoumeh Haghbin Nazarpak, Hamidreza Tolabi, Aryan Ekhlasi
Abstract:
Bone defects are among the most challenging clinical conditions, affecting millions of people each year. These defects are usually caused by a fracture, osteoporosis, tumor, or infection. At present, autologous and allogeneic grafts are used to correct bone defects, but these grafts have some difficulties, such as limited access, infection, disease transmission, and immune rejection. Bone tissue engineering is considered a new strategy for repairing bone defects. However, problems in designing scaffolds with unique structures limit their clinical applications. In addition, numerous in-vitro studies have been performed on the behavior of bone cells in two-dimensional environments, yet under physiological conditions in the human body, cells grow in a three-dimensional environment. As a result, the controlled design of porous structures with high structural complexity, providing the flexibility necessary to meet specific needs in bone tissue repair, is beneficial. For this purpose, a three-dimensional composite scaffold based on gelatin and sodium alginate hydrogels is used in this research. In addition, antibacterial drug-loaded halloysite nanotubes were introduced into the hydrogel scaffold structure to provide a suitable substrate for controlled drug release. The presence of halloysite nanotubes improved the hydrogel's properties, while the drug eliminated infection and disease transmission. Finally, it can be acknowledged that the composite scaffold prepared in this study for bone tissue engineering seems promising.
Keywords: halloysite nanotubes, bone tissue engineering, composite scaffold, controlled drug release
Procedia PDF Downloads 74
3437 X-Ray Diffraction, Microstructure, and Mössbauer Studies of Nanostructured Materials Obtained by High-Energy Ball Milling
Authors: N. Boudinar, A. Djekoun, A. Otmani, B. Bouzabata, J. M. Greneche
Abstract:
High-energy ball milling is a solid-state powder processing technique that allows synthesizing a variety of equilibrium and non-equilibrium alloy phases starting from elemental powders. The advantage of this process technology is that the powder can be produced in large quantities and the processing parameters can be easily controlled, making it a suitable method for commercial applications. It can also be used to produce amorphous and nanocrystalline materials in commercially relevant amounts and is amenable to the production of a variety of alloy compositions. Mechanical alloying (high-energy ball milling) provides an inter-dispersion of elements through repeated cold welding and fracture of free powder particles; the grain size decreases to the nanometric scale and the elements mix together. Progressively, the concentration gradients disappear and eventually the elements are mixed at the atomic scale. The end products depend on many parameters, such as the milling conditions and the thermodynamic properties of the milled system. Here, the mechanical alloying technique has been used to prepare nanocrystalline Fe-50 wt.% Ni and Fe-64 wt.% Ni alloys from powder mixtures. Scanning electron microscopy (SEM) with energy-dispersive X-ray analysis and Mössbauer spectroscopy were used to study the mixing at the nanometric scale. Mössbauer spectroscopy confirmed the ferromagnetic ordering and was used to calculate the hyperfine field distribution. The Mössbauer spectrum of both alloys shows the existence of a ferromagnetic phase attributed to a γ-Fe-Ni solid solution.
Keywords: nanocrystalline, mechanical alloying, X-ray diffraction, Mössbauer spectroscopy, phase transformations
Procedia PDF Downloads 437
3436 A Review of How COVID-19 Has Created an Insider Fraud Pandemic and How to Stop It
Authors: Claire Norman-Maillet
Abstract:
Insider fraud, including its various synonyms such as occupational, employee or internal fraud, is a major financial crime threat whereby an employee defrauds (or attempts to defraud) their current, prospective, or past employer. ‘Employee’ covers anyone employed by the company, including contractors, directors, and part time staff; they may be a solo bad actor or working in collusion with others, whether internal or external. Insider fraud is even more of a concern given the impacts of the Coronavirus pandemic, which has generated multiple opportunities to commit insider fraud. Insider fraud is something that is not necessarily thought of as a significant financial crime threat; the focus of most academics and practitioners has historically been on that of ‘external fraud’ against businesses or entities where an individual or group has no professional ties. Without the face-to-face, ‘over the shoulder’ capabilities of staff being able to keep an eye on their employees, there is a heightened reliance on trust and transparency. With this, naturally, comes an increased risk of insider fraud perpetration. The objective of the research is to better understand how companies are impacted by insider fraud, and therefore how to stop it. This research will make both an original contribution and stimulate debate within the financial crime field. The financial crime landscape is never static – criminals are always creating new ways to perpetrate financial crime, and new legislation and regulations are implemented as attempts to strengthen controls, in addition to businesses doing what they can internally to detect and prevent it. By focusing on insider fraud specifically, the research will be more specific and will be of greater use to those in the field. 
To achieve the aims of the research, semi-structured interviews were conducted with 22 individuals who either work in financial services and deal with insider fraud or work within insider fraud perpetration in a recruitment or advisory capacity. This enabled the sourcing of information from a wide range of individuals in a setting where they were able to elaborate on their answers. The principal recruitment strategy was engaging with the researcher's network on LinkedIn. The interviews were then transcribed and analysed thematically. The main findings suggest that insider fraud has been ignored owing to denial of the possibility that colleagues would defraud their employer. Whilst Coronavirus has led to a significant rise in insider fraud, this type of crime has been a major risk to businesses since their inception; however, it has never been given the financial or strategic backing required to mitigate it until it is too late. Furthermore, Coronavirus should have led to companies tightening their access rights, controls, and policies to mitigate the insider fraud risk; however, in most cases this has not happened. The research concludes that insider fraud needs to be given a platform upon which to be recognised as a threat to any company and given the same level of weighting and attention by Executive Committees and Boards as other types of economic crime.
Keywords: fraud, insider fraud, economic crime, coronavirus, Covid-19
Procedia PDF Downloads 68
3435 Federated Knowledge Distillation with Collaborative Model Compression for Privacy-Preserving Distributed Learning
Authors: Shayan Mohajer Hamidi
Abstract:
Federated learning has emerged as a promising approach for distributed model training while preserving data privacy. However, the challenges of communication overhead, limited network resources, and slow convergence hinder its widespread adoption. On the other hand, knowledge distillation has shown great potential in compressing large models into smaller ones without significant loss in performance. In this paper, we propose an innovative framework that combines federated learning and knowledge distillation to address these challenges and enhance the efficiency of distributed learning. Our approach, called Federated Knowledge Distillation (FKD), enables multiple clients in a federated learning setting to collaboratively distill knowledge from a teacher model. By leveraging the collaborative nature of federated learning, FKD aims to improve model compression while maintaining privacy. The proposed framework utilizes a coded teacher model that acts as a reference for distilling knowledge to the client models. To demonstrate the effectiveness of FKD, we conduct extensive experiments on various datasets and models. We compare FKD with baseline federated learning methods and standalone knowledge distillation techniques. The results show that FKD achieves superior model compression, faster convergence, and improved performance compared to traditional federated learning approaches. Furthermore, FKD effectively preserves privacy by ensuring that sensitive data remains on the client devices and only distilled knowledge is shared during the training process. In our experiments, we explore different knowledge transfer methods within the FKD framework, including Fine-Tuning (FT), FitNet, Correlation Congruence (CC), Similarity-Preserving (SP), and Relational Knowledge Distillation (RKD). We analyze the impact of these methods on model compression and convergence speed, shedding light on the trade-offs between size reduction and performance. 
Moreover, we address the challenges of communication efficiency and network resource utilization in federated learning by leveraging the knowledge distillation process. FKD reduces the amount of data transmitted across the network, minimizing communication overhead and improving resource utilization. This makes FKD particularly suitable for resource-constrained environments such as edge computing and IoT devices. The proposed FKD framework opens up new avenues for collaborative and privacy-preserving distributed learning. By combining the strengths of federated learning and knowledge distillation, it offers an efficient solution for model compression and convergence speed enhancement. Future research can explore further extensions and optimizations of FKD, as well as its applications in domains such as healthcare, finance, and smart cities, where privacy and distributed learning are of paramount importance.
Keywords: federated learning, knowledge distillation, knowledge transfer, deep learning
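The teacher-to-student knowledge transfer that FKD builds on can be sketched with the classic temperature-softened KL-divergence objective. This is a generic Hinton-style distillation term written in plain Python, not the paper's exact FKD objective, and the logits below are made up for illustration:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=3.0):
    """KL(teacher || student) on temperature-softened distributions.
    The T^2 factor rescales gradients, as in standard distillation."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s))
    return (temperature ** 2) * kl

# One client-side step in a federated round: the client distills locally
# against the shared teacher; only the distilled update would be sent.
teacher = [2.0, 0.5, -1.0]   # hypothetical teacher logits
student = [1.5, 0.7, -0.8]   # hypothetical student logits
print(round(distillation_loss(student, teacher), 4))
```

Because only this distilled signal (rather than raw training data) leaves the client, the sketch also illustrates how the privacy property described above arises.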
Procedia PDF Downloads 75
3434 Aseismic Stiffening of Architectural Buildings as Preventive Restoration Using Unconventional Materials
Authors: Jefto Terzovic, Ana Kontic, Isidora Ilic
Abstract:
In the proposed design concept, laminated glass and laminated plexiglass, as "unconventional materials", are considered as a filling in a steel frame, which they overlap via an intermediate rubber layer, thereby forming a composite assembly. In this way, vertical stiffening elements are formed that are capable of receiving seismic forces and are integrated into the structural system of the building. The applicability of such a system was verified by experiments in laboratory conditions, where experimental models based on laminated glass and laminated plexiglass were exposed to cyclic loads simulating seismic forces. In this way, the load capacity of the composite assemblies was tested under dynamic loads parallel to the assembly plane. Thus, the stress intensity to which the composite systems might be exposed was determined, as well as the range of structural stiffening with respect to the observed deformation, along with the advantages of one type of filling compared to the other. Using specialized software based on the finite element method, a computer model of the structure was created and processed in the case study; the same computer model was used for analyzing the problem in the first phase of the design process. The stiffening system based on the composite assemblies tested in laboratories was implemented in the computer model. The results of the modal analysis and seismic calculation from the computer model with stiffeners applied showed the efficacy of such a solution, thus rounding out the design procedures for aseismic stiffening using unconventional materials.
Keywords: laminated glass, laminated plexiglass, aseismic stiffening, experiment, laboratory testing, computer model, finite element method
Procedia PDF Downloads 78
3433 Combustion Improvements by C4/C5 Bio-Alcohol Isomer Blended Fuels Combined with Supercharging and EGR in a Diesel Engine
Authors: Yasufumi Yoshimoto, Enkhjargal Tserenochir, Eiji Kinoshita, Takeshi Otaka
Abstract:
Next generation bio-alcohols produced from non-food based sources like cellulosic biomass are promising renewable energy sources. The present study investigates engine performance, combustion characteristics, and emissions of a small single cylinder direct injection diesel engine fueled by four kinds of next generation bio-alcohol isomer and diesel fuel blends with a constant blending ratio of 3:7 (mass). The tested bio-alcohol isomers here are n-butanol and iso-butanol (C4 alcohols), and n-pentanol and iso-pentanol (C5 alcohols). To obtain simultaneous reductions in NOx and smoke emissions, the experiments employed supercharging combined with EGR (Exhaust Gas Recirculation). The boost pressures were fixed at two conditions, 100 kPa (naturally aspirated operation) and 120 kPa (supercharged operation), provided with a roots blower type supercharger. The EGR rates were varied from 0 to 25% using a cooled EGR technique. The results showed that both with and without supercharging, all the bio-alcohol blended diesel fuels improved the trade-off relation between NOx and smoke emissions at all EGR rates while maintaining good engine performance, when compared with diesel fuel operation. It was also found that regardless of boost pressure and EGR rate, the ignition delays of the tested bio-alcohol isomer blends are in the order of iso-butanol > n-butanol > iso-pentanol > n-pentanol. Overall, it was concluded that, except for the changes in the ignition delays, the influence of bio-alcohol isomer blends on the engine performance, combustion characteristics, and emissions is relatively small.
Keywords: alternative fuel, butanol, diesel engine, EGR (Exhaust Gas Recirculation), next generation bio-alcohol isomer blended fuel, pentanol, supercharging
Procedia PDF Downloads 169
3432 Perceived Structural Empowerment and Work Commitment among Intensive Care Nurses in SMC
Authors: Ridha Abdulla Al Hammam
Abstract:
Purpose: to measure the extent of perceived structural empowerment and work commitment that intensive care nurses in SMC have in their workplace. Background: nurses' access to power structures (information, resources, opportunity, and support) directly influences their productivity, retention, and job satisfaction. Exploring nurses' level and sources of work commitment (affective, normative, and continuance) is essential to guide nursing leaders in making decisions to improve the work environment and facilitate effective nursing care. Neither concept (structural empowerment or work commitment) had previously been investigated in our critical care unit. Methods: a sample of 50 nurses was obtained from the Adult Intensive Care Unit. The Conditions for Workplace Effectiveness Questionnaire and the Three-Component Model Employee Commitment Survey were used to measure the two concepts, respectively. The study is quantitative, descriptive, and correlational in design. Results: the participants reported moderate structural empowerment in their workplace (M=15 out of 20). The sample perceived high access to opportunity, mainly through gaining more skills (M=4.45 out of 5), while the remaining power structures were perceived as moderately accessible. The participants' affective commitment (M=5.6 out of 7) to working in the ICU outweighed their normative and continuance commitment (M=5.1 and M=4.9 out of 7), implying a stronger emotional connection with their unit. Strong positive and significant correlations were observed between the participants' structural empowerment scores and all work commitment sources. Conclusion: these results provide insight into aspects of the work environment that need to be fostered and improved in our intensive care unit, which are directly linked to nurses' work commitment and, potentially, to the quality of care they provide.
Keywords: structural empowerment, commitment, intensive care, nurses
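The correlations reported above between empowerment and commitment scores are plain Pearson coefficients; a minimal sketch of that computation, where the five nurses' scores are hypothetical illustrations, not the study's raw data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists
    (e.g., structural empowerment totals vs. affective commitment)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for five nurses (illustrative only)
empowerment = [12, 14, 15, 17, 18]     # out of 20
affective   = [4.8, 5.2, 5.5, 6.1, 6.4]  # out of 7
print(round(pearson_r(empowerment, affective), 3))
```

In practice the significance test accompanying r would also be computed (e.g., with a t-statistic on n-2 degrees of freedom), which the sketch omits.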
Procedia PDF Downloads 287
3431 Numerical Simulation of Flow and Heat Transfer Characteristics with Various Working Conditions inside a Reactor of Wet Scrubber
Authors: Jonghyuk Yoon, Hyoungwoon Song, Youngbae Kim, Eunju Kim
Abstract:
Recently, with the rapid growth of the semiconductor industry, much interest has focused on after-treatment systems that remove the polluted gases produced by the semiconductor manufacturing process, and the wet scrubber is one of the most widely used systems. In terms of the removal mechanism, the polluted gas is first removed by chemical reaction in the reactor part. After that, the polluted gas stream is brought into contact with the scrubbing liquid by spraying. Effective design of the reactor part inside the wet scrubber is highly important, since the removal performance of the polluted gas in the reactor plays an important role in overall performance and stability. In the present study, a CFD (Computational Fluid Dynamics) analysis was performed to determine the thermal and flow characteristics inside a unit reactor of a wet scrubber. To validate the numerical results, the predicted temperature distribution at various monitoring points was compared to the experimental results. Average error rates of 12-15% were observed, and the numerical temperature distribution was in good agreement with the experimental data. Using the validated numerical method, the effect of the reactor geometry on the heat transfer rate was also investigated. The uniformity of the temperature distribution was improved by about 15%. Overall, the results of the present study provide useful information for identifying the fluid behavior and thermal performance of various scrubber systems. This project is supported by the ‘R&D Center for the reduction of Non-CO₂ Greenhouse gases (RE201706054)’ funded by the Korea Ministry of Environment (MOE) as the Global Top Environment R&D Program.
Keywords: semiconductor, polluted gas, CFD (Computational Fluid Dynamics), wet scrubber, reactor
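The 12-15% average error rate quoted above is a mean absolute percentage error over the monitoring points. A short sketch of that validation metric; the four temperature readings below are hypothetical, since the paper's raw values are not given in the abstract:

```python
def average_error_rate(simulated, measured):
    """Mean absolute percentage error between CFD predictions and
    experimental readings at matched monitoring points."""
    errors = [abs(s - m) / m for s, m in zip(simulated, measured)]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical temperatures (K) at four monitoring points
cfd  = [640.0, 515.0, 470.0, 420.0]   # simulation
exp_ = [580.0, 455.0, 410.0, 380.0]   # experiment
print(round(average_error_rate(cfd, exp_), 1))
```

Point-by-point comparison of this kind, rather than a single averaged temperature, is what supports the claim that the simulated distribution agrees with the measurements.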
Procedia PDF Downloads 144
3430 Evaluation of Two Functional Food Products: Tortillas and Yogurt Based on Spirulina platensis and Haematococcus pluvialis
Authors: Raul Alexis Sanchez Cornejo, Elena Ivonne Mancera Andrade, Gibran Sidney Aleman Nava, Angel Josue Arteaga Garces, Roberto Parra Saldivar
Abstract:
An unhealthy diet is one of the main factors behind a wide range of chronic diseases such as diabetes, obesity, cancer, and cardiovascular diseases, among others. Nowadays, there is a need to provide innovative healthy products to decrease the number of people with unhealthy diets. This study focuses on the production of two food products based on two microalgae strains: tortillas containing powdered Haematococcus pluvialis and Spirulina platensis biomass, and yogurt containing microencapsulated biomass of the same strains. S. platensis has been widely used as a food supplement in the form of powder and pills due to its high content of proteins and fatty acids. H. pluvialis has been recognized for its ability to produce high-added-value products, such as the antioxidant astaxanthin, under stressful conditions. Despite the benefits of these microalgae, few efforts have been made to use them in food products. The main objective of this work is to evaluate the nutritional properties, such as protein content, lipid fraction, carbohydrates, antioxidants, and vitamins, that these microalgae strains provide to the food products. Additionally, physicochemical and sensory evaluations were performed to assess the quality of the products. The results obtained will determine the feasibility of commercializing the products. These novel products have the potential to change nutritional intake and strengthen the health of consumers.
Keywords: functional food, Haematococcus pluvialis, microalgae, Spirulina platensis, tortilla, yogurt
Procedia PDF Downloads 313
3429 Freight Forwarders’ Liability: A Need for Revival of Unidroit Draft Convention after Six Decades
Authors: Mojtaba Eshraghi Arani
Abstract:
Freight forwarders, who are known as the Architects of Transportation, play a vital role in supply chain management. The package of various services they provide has made the legal nature of freight forwarders very controversial, so that they might be qualified on one occasion as principal or carrier and, on other occasions, as agent of the shipper, as the case may be. They could even be involved in the transportation process as the agent of the shipping line, which makes the situation much more complicated. Courts in all countries have long had trouble distinguishing the "forwarder as agent" from the "forwarder as principal" (as is outstanding in the prominent case of "Vastfame Camera Ltd v Birkart Globistics Ltd And Others", 2005, Hong Kong). It is not fully known, in the case of a claim against the forwarder, which particular parameter a judge would use among the multiple, and sometimes contradictory, tests for determining the scope of the forwarder's liability. In particular, every country has its own legal parameters for qualifying freight forwarders, completely different from those of others, as is the case in France in comparison with Germany and England. The unpredictability of the courts' decisions in this regard has provided freight forwarders with the opportunity to impose any limitation or exception of liability while pretending to play the role of a principal, causing cargo interests to incur ever-increasing damage. The transportation industry needs to remove such uncertainty by unifying the national laws governing freight forwarders' liability. A long time ago, in 1967, the International Institute for the Unification of Private Law (UNIDROIT) prepared a draft convention called the "Draft Convention on Contract of Agency for Forwarding Agents Relating to International Carriage of Goods" (hereinafter "UNIDROIT draft convention").
The UNIDROIT draft convention provided a clear and certain framework for the liability of the freight forwarder in each capacity, as agent or carrier, but it was never transformed into a convention and was eventually consigned to oblivion. Today, nearly six decades later, the necessity of such a convention is readily apparent. However, one might argue that the same grounds, in particular the resistance of the forwarders' association, FIATA, still exist, and thus it is not logical to revive a forgotten draft convention after such a long period of time. It is argued in this article that the main reason for resisting the UNIDROIT draft convention in the past was the pending effort to develop the 1980 United Nations Convention on International Multimodal Transport of Goods. However, the latter convention failed to enter into force in due time, with no new accession since 1996; as a result, the UNIDROIT draft convention should be strongly revived and immediately submitted to the relevant diplomatic conference. A qualitative method based on the interpretation of collected data has been used in this manuscript. The sources of the data are international conventions and case law.
Keywords: freight forwarder, revival, agent, principal, unidroit, draft convention
Procedia PDF Downloads 74
3428 Cloud Based Supply Chain Traceability
Authors: Kedar J. Mahadeshwar
Abstract:
Concept introduction: This paper discusses an innovative cloud-based, analytics-enabled solution that could address a major industry challenge approaching all of us globally faster than one would think. The world of the supply chain for drugs and devices is changing rapidly today. In the US, the Drug Supply Chain Security Act (DSCSA) is a new law for tracing, verification, and serialization, phasing in starting January 1, 2015, for manufacturers, repackagers, wholesalers, and pharmacies/clinics. Similarly, we are seeing pressure building in Europe, China, and many other countries for absolute end-to-end traceability of every drug and device. Companies (both manufacturers and distributors) can use this opportunity not only to be compliant but to differentiate themselves from the competition. Moreover, a country such as the UAE could lead in developing a global solution that brings innovation to this industry. Problem definition and timing: The counterfeit drug market, recognized by the FDA, causes billions of dollars in losses every year. Even in the UAE, the prevalence of counterfeit drugs, which enter through ports such as Dubai, remains a big concern, as per the UAE pharma and healthcare report, Q1 2015. The distribution of drugs and devices involves multiple processes and systems that do not talk to each other. Consumer confidence is at risk due to this lack of traceability, and any leading provider is at risk of losing its reputation. Globally, there is increasing pressure from governments and regulatory bodies to trace the serial numbers and lot numbers of every drug and medical device throughout the supply chain. Though many large corporations use some form of ERP (enterprise resource planning) software, it is far from capable of tracing lot and serial numbers beyond the enterprise and making this information easily available in real time.
Solution: The solution described here involves a service provider that allows all subscribers to take advantage of this service. It allows a service provider, regardless of its physical location, to host this cloud-based traceability and analytics solution over millions of distribution transactions that capture the lots of each drug and device. The platform will capture the movement of every medical device and drug end to end, from its manufacturer to a hospital or a doctor, through a series of distributor or retail networks. The platform also provides an advanced analytics solution for intelligent online reporting. Why Dubai? An opportunity exists, given the huge investment made in Dubai Healthcare City, together with the use of technology and infrastructure to attract more FDI to provide such a service. The UAE and similar countries will face this pressure from regulators globally in the near future. More interestingly, Dubai can attract such innovators and companies to run and host such a cloud-based solution and become a global hub for traceability.
Keywords: cloud, pharmaceutical, supply chain, tracking
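The end-to-end lot/serial tracking described above can be sketched as an append-only chain-of-custody ledger keyed by lot and serial number. This is an illustrative data-structure sketch only; the class name and the lot/serial identifiers are made up and do not represent any actual DSCSA-compliant implementation:

```python
from collections import defaultdict

class TraceLedger:
    """Minimal sketch of serial-level traceability: an append-ordered
    list of custody events per (lot, serial), queryable end to end."""
    def __init__(self):
        self._events = defaultdict(list)

    def record(self, lot, serial, holder):
        # Each hand-off in the distribution chain appends one event.
        self._events[(lot, serial)].append(holder)

    def chain_of_custody(self, lot, serial):
        # Full end-to-end history for one serialized unit.
        return list(self._events[(lot, serial)])

ledger = TraceLedger()
for holder in ["manufacturer", "wholesaler", "pharmacy"]:
    ledger.record("LOT-001", "SN-42", holder)
print(ledger.chain_of_custody("LOT-001", "SN-42"))
```

A production system would of course back this with a shared, auditable store and verification of each hand-off, which is exactly the hosted cloud service the paper proposes.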
Procedia PDF Downloads 527
3427 Evaluating the Effectiveness of Combined Psychiatric and Psychotherapeutic Care versus Psychotherapy Alone in the Treatment of Depression and Anxiety in Cancer Patients
Authors: Nathen A. Spitz, Dennis Martin Kivlighan III, Arwa Aburizik
Abstract:
Background and Purpose: Presently, there is a paucity of naturalistic studies that directly compare the effectiveness of psychotherapy versus concurrent psychotherapy and psychiatric care for the treatment of depression and anxiety in cancer patients. Informed by previous clinical trials examining the efficacy of concurrent approaches, this study sought to test the hypothesis that a combined approach would result in the greatest reduction of depression and anxiety symptoms. Methods: Data for this study consisted of 433 adult cancer patients, with 252 receiving only psychotherapy and 181 receiving concurrent psychotherapy and psychiatric care at the University of Iowa Hospitals and Clinics. Longitudinal PHQ9 and GAD7 data were analyzed between both groups using latent growth curve analyses. Results: After controlling for treatment length and provider effects, results indicated that concurrent care was more effective than psychotherapy alone for depressive symptoms (γ₁₂ = -0.12, p = .037). Specifically, the simple slope for concurrent care was -0.25 (p = .022), and the simple slope for psychotherapy alone was -0.13 (p = .006), suggesting that patients receiving concurrent care experienced a greater reduction in depressive symptoms compared to patients receiving psychotherapy alone. In contrast, there were no significant differences between psychotherapy alone and concurrent psychotherapy and psychiatric care in the reduction of anxious symptoms. Conclusions: Overall, as both psychotherapy and psychiatric care may address unique aspects of mental health conditions, in addition to potentially providing synergistic support to each other, a combinatorial approach to mental healthcare for cancer patients may improve outcomes.
Keywords: psychiatry, psychology, psycho-oncology, combined care, psychotherapy, behavioral psychology
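The simple slopes reported above follow directly from the interaction structure of the growth model. A sketch of that arithmetic, assuming 0/1 treatment coding (psychotherapy alone = 0, concurrent care = 1); the psychotherapy-alone slope of -0.13 and interaction coefficient of -0.12 are taken from the text, and their sum reproduces the reported concurrent-care slope of -0.25:

```python
def simple_slope(base_slope, interaction_coef, group_code):
    """Group-specific trajectory slope implied by a growth model with a
    slope-by-treatment interaction: slope_g = base + interaction * group."""
    return base_slope + interaction_coef * group_code

base = -0.13          # psychotherapy-alone slope reported in the text
interaction = -0.12   # gamma_12 reported in the text
print(simple_slope(base, interaction, 0))  # psychotherapy alone
print(simple_slope(base, interaction, 1))  # concurrent care
```

This decomposition is why a significant interaction term is interpreted as a between-group difference in the rate of symptom change rather than in symptom level.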
Procedia PDF Downloads 118
3426 Evaluating the Use of Manned and Unmanned Aerial Vehicles in Strategic Offensive Tasks
Authors: Yildiray Korkmaz, Mehmet Aksoy
Abstract:
In today's operations, countries want to reach their aims in the shortest way possible for economic, political, and humanitarian reasons. The most effective way of achieving this goal is the ability to penetrate strategic targets. Strategic targets are generally located deep inside a country and are defended by modern, efficient surface-to-air missile (SAM) platforms operated in integration with Intelligence, Surveillance, and Reconnaissance (ISR) systems. Moreover, these high-value targets are often buried deep underground and hardened with strong materials against attack. Penetrating such targets therefore requires very detailed intelligence, covering a wide range from weaponry to threat assessment, which in turn determines the framework of the attack package. This mission package has to execute missions in a high-threat environment. The way to minimize the risk of loss of life is to use packages formed by UAVs. However, some limitations arising from the characteristics of UAVs restrict the performance of a mission package consisting solely of UAVs. The mission package should therefore be formed with UAVs under the leadership of a fifth-generation manned aircraft. Thus, the limitations can be minimized, deep enemy territory can be penetrated with minimum risk, decisions can be made according to ever-changing conditions, and the strategic targets can finally be destroyed. In this article, the strengths and weaknesses of UAVs are examined by SWOT analysis. The article also reveals the features of a mission package and presents, as an example, what kind of mission package should be formed in order to obtain marginal benefit and penetrate strategic targets as autonomous mission execution capability develops in the near future. Keywords: UAV, autonomy, mission package, strategic attack, mission planning
Procedia PDF Downloads 550
3425 Combination Therapies Targeting Apoptosis Pathways in Pediatric Acute Myeloid Leukemia (AML)
Authors: Ahlam Ali, Katrina Lappin, Jaine Blayney, Ken Mills
Abstract:
Leukaemia is the most frequently occurring type of paediatric cancer (about 30% of cases). Of these, approximately 80% are acute lymphoblastic leukaemia (ALL), with acute myeloid leukaemia (AML) cases making up the remaining 20% alongside other leukaemias. Unfortunately, children with AML do not have a promising prognosis, with only 60% surviving 5 years or longer. The need for age-specific therapies for AML patients has recently been highlighted, as paediatric AML cases have a different mutational landscape compared with AML diagnosed in adult patients. Drug repurposing is a recognised strategy in drug discovery and development in which an already approved drug is used for diseases other than those originally indicated. We aim to identify novel combination therapies with the promise of providing alternative, more effective, and less toxic induction therapy options. Our in-silico analysis highlighted 'cell death and survival' as an aberrant, potentially targetable pathway in paediatric AML patients. On this basis, 83 apoptosis-inducing compounds were screened. A preliminary single-agent screen was first performed to eliminate potentially toxic compounds; the remaining drugs were then constructed into a pooled library with 10 drugs per well over 160 wells, giving 45 possible pairs and 120 triples in each well. Seven cell lines were used during this study to represent the clonality of AML in paediatric patients (Kasumi-1, CMK, CMS, MV4-11, PL21, THP-1, MOLM-13). Cytotoxicity was assessed up to 72 hours using CellTox™ Green reagent. Fluorescence readings were normalised to a DMSO control, and a Z-score was assigned to each well based on the mean and standard deviation of all the data. Combinations with a Z-score < 2 were eliminated, and the remaining wells were taken forward for further analysis. A well was considered 'successful' if each drug individually demonstrated a Z-score < 2 while the combination exhibited a Z-score > 2.
Each of the ten compounds in one well (well 155) had minimal or no effect on cell viability as a single agent; however, a combination of two or more of the compounds resulted in a substantial increase in cell death. The ten compounds were therefore de-convoluted to identify possible synergistic pair/triple combinations. The screen identified two possible 'novel' drug pairings, with the BCL-2 inhibitor ABT-737 combined with either the CDK inhibitor Purvalanol A or the AKT/PI3K inhibitor LY294002 (ABT-737 100 nM + Purvalanol A 1 µM; ABT-737 100 nM + LY294002 2 µM). Three possible triple combinations were identified (LY2409881 + Akti-1/2 + Purvalanol A, SU9516 + Akti-1/2 + Purvalanol A, and ABT-737 + LY2409881 + Purvalanol A), which will be taken forward for examination of their efficacy at varying concentrations and dosing schedules across multiple paediatric AML cell lines for optimisation of maximum synergy. We believe that our combination screening approach has potential for future use with a larger cohort of drugs, including FDA-approved compounds, and with patient material. Keywords: AML, drug repurposing, ABT-737, apoptosis
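The hit-calling rule described above — eliminate wells whose Z-score falls below the cutoff, then keep only wells where no single agent is active but the pooled combination is — can be sketched as follows (a minimal illustration under our own naming, not the authors' actual analysis pipeline):

```python
import numpy as np

def z_scores(signals):
    """Z-score each well's fluorescence against the plate mean and SD."""
    v = np.asarray(signals, dtype=float)
    return (v - v.mean()) / v.std()

def is_successful_well(single_agent_zs, combination_z, cutoff=2.0):
    """A well is 'successful' when every drug alone scores Z < cutoff
    but the pooled combination scores Z > cutoff."""
    return all(z < cutoff for z in single_agent_zs) and combination_z > cutoff
```

On such a plate, a well like well 155 in the study (inactive single agents, active pool) would pass this filter and be flagged for de-convolution.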
Procedia PDF Downloads 203
3424 Effect of 3-Dimensional Knitted Spacer Fabrics Characteristics on Its Thermal and Compression Properties
Authors: Veerakumar Arumugam, Rajesh Mishra, Jiri Militky, Jana Salacova
Abstract:
The thermo-physiological comfort and compression properties of knitted spacer fabrics were evaluated by varying different spacer fabric parameters. Air permeability and water vapour transmission of the fabrics were measured using the Textest FX-3300 air permeability tester and the PERMETEST instrument. The thermal behaviour of the fabrics was then obtained with a thermal conductivity analyser, and overall moisture management capacity was evaluated with a moisture management tester. The compression properties of the spacer fabrics were also tested using the Kawabata Evaluation System (KES-FB3). In the KES testing, the compression resilience, work of compression, linearity of compression, and other parameters were calculated from the pressure–thickness curves. Analysis of variance (ANOVA) was performed using the statistical software QC Expert (TriloByte) and Darwin in order to compare the influence of the different fabric parameters on the thermo-physiological and compression behaviour of the samples. This study established that the raw material, type of spacer yarn, density, thickness, and tightness of the surface layer have a significant influence on both thermal conductivity and work of compression in spacer fabrics. The parameter that mainly influences the water vapour permeability of these fabrics is the raw material, i.e., the wetting and wicking properties of the fibres. The Pearson correlation between the moisture capacity of the fabrics and water vapour permeability was computed with the same software. These findings are important requirements for the further design of clothing for extreme environmental conditions. Keywords: 3D spacer fabrics, thermal conductivity, moisture management, work of compression (WC), resilience of compression (RC)
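The Pearson correlation used above has a simple closed form; as a hedged sketch (not the QC Expert implementation), it can be computed directly:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation between two samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))
```

A value near +1 would indicate that fabrics with higher moisture capacity also transmit water vapour more readily.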
Procedia PDF Downloads 542
3423 Modeling and Simulating Productivity Loss Due to Project Changes
Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier
Abstract:
The context of large engineering projects is particularly favourable to the appearance of engineering changes and contractual modifications, which are potential causes for claims. In this paper, we investigate one of the critical components of the claim management process: the calculation of the impact of changes in terms of productivity losses due to the need to accelerate some project activities. When project changes are initiated, delays can arise, and project activities are often executed in fast-tracking mode in an attempt to respect the completion date. But the acceleration of project execution and the resulting rework can entail important costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculations related to those changes can be divided into two categories: direct costs and indirect costs. Direct costs are easily quantifiable, as opposed to indirect costs, which are rarely taken into account when calculating the cost of an engineering change or contract modification, although several research projects have addressed this subject. The proposed models, however, have not yet been accepted by companies, nor have they been accepted in court. Those models require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the loss of productivity present when a project change occurs.
Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run in order to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing the productivity of a project. When changes occur, a large number of activities leads to a much lower productivity loss than a small number of activities: productivity declines about 25 percent faster for 30-job projects than for 120-job projects. The moment of occurrence of a change also shows a significant impact on productivity; the sooner the change occurs, the lower the productivity of the labour force. The availability of resources likewise affects the productivity of a project when a change is implemented: there is a higher loss of productivity when the amount of resources is restricted. Keywords: engineering changes, indirect costs, overtime, productivity, scheduling, simulation
Procedia PDF Downloads 238
3422 The Effect of Group Counseling on the Victimhood Perceptions of Adolescent Who Are the Subject of Peer Victimization and on Their Coping Strategies
Authors: İsmail Seçer, Taştan Seçer
Abstract:
In this study, the effect of group counseling on the victimhood perceptions of primary school 7th and 8th grade students who were determined to be subject to peer victimization, and on their ways of coping with it, was analyzed. The research design is the Solomon four-group experimental model, in which four groups are determined by random sampling: two serve as experimental groups and the other two as control groups. The Solomon model is defined as a true experimental model; in true experimental models, there are multiple groups of subjects with similar characteristics, and the selection of subjects is done by random sampling. For this purpose, 230 students from Kültür Kurumu Primary School in Erzurum were asked to fill in the Adolescent Peer Victim Form. One hundred students whose victimization scores were high and who were determined to be subject to bullying were interviewed face to face, informed about the study, and asked whether they were willing to participate. As a result of these interviews, 60 students agreed to take part in the experimental study, and four groups of 15 students each were created by simple random sampling. After the groups had been formed, the experimental and control groups were determined by drawing lots. An 11-session group counseling program, prepared by the researcher in line with the literature, was then applied in the experimental groups. The purpose of the group counseling was to change the participants' ineffective ways of coping with bullying and their victimhood perceptions. Each session was planned to last 75 minutes and was applied as planned. In the control groups, the counseling activities in the primary school counseling curriculum were applied for 11 weeks.
As a result of the study, the physical, emotional, and verbal victimhood perceptions of the participants in the experimental groups decreased significantly compared with the pre-experimental situation and with those in the control groups. Moreover, this change in the victimhood perceptions of the experimental groups occurred independently of variables such as gender, age, and academic success. Regarding coping strategies, the first finding is that the scores of the participants in the experimental groups on ineffective strategies, such as despair and avoidance, decreased significantly compared with the pre-experimental situation and with those in the control groups. The second finding is that their scores on effective strategies, such as seeking help, consulting social support, resistance, and optimism, increased significantly compared with the pre-experimental situation and with those in the control groups. According to the evidence obtained through the study, group counseling is an effective approach for changing the victimhood perceptions of individuals who are subject to bullying and their strategies for coping with it. Keywords: bullying, perception of victimization, coping strategies, ANCOVA analysis
Procedia PDF Downloads 391
3421 Omni-Modeler: Dynamic Learning for Pedestrian Redetection
Authors: Michael Karnes, Alper Yilmaz
Abstract:
This paper presents the application of the Omni-Modeler to pedestrian redetection. The pedestrian redetection task creates several challenges for deep neural networks (DNNs) due to the variation in pedestrian appearance with camera position, the variety of environmental conditions, and the specificity required to distinguish one pedestrian from another. DNNs require significant training sets and are not easily adapted to changes in class appearance or to changes in the set of classes held in their knowledge domain. Pedestrian redetection requires an algorithm that can actively manage its knowledge domain as individuals move in and out of the scene, as well as learn individual appearances from a few frames of a video. The Omni-Modeler is a dynamically learning few-shot visual recognition algorithm developed for tasks with limited training data availability. It adapts the knowledge domain of pre-trained deep neural networks to novel concepts with a calculated localized language encoder. The Omni-Modeler knowledge domain is generated by creating a dynamic dictionary of concept definitions, which is directly updatable as new information becomes available. Query images are identified through nearest-neighbor comparison to the learned object definitions. The study presented in this paper evaluates the algorithm's performance in re-identifying individuals as they move through a scene in both single-camera and multi-camera tracking applications. The results demonstrate that the Omni-Modeler shows potential for cross-camera pedestrian redetection and is highly effective for single-camera redetection, with 93% accuracy across 30 individuals using 64 example images per individual. Keywords: dynamic learning, few-shot learning, pedestrian redetection, visual recognition
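The dynamic-dictionary idea — enroll a few feature vectors per identity, drop identities that leave the scene, and answer queries by nearest-neighbour comparison — can be sketched as below (a toy illustration with hypothetical names; in the actual Omni-Modeler the features come from a learned encoder over a pre-trained DNN):

```python
import numpy as np

class ConceptDictionary:
    """Toy knowledge domain: identity label -> list of feature vectors."""

    def __init__(self):
        self.definitions = {}

    def enroll(self, label, feature):
        """Add one example feature for an identity (few-shot update)."""
        self.definitions.setdefault(label, []).append(
            np.asarray(feature, dtype=float))

    def forget(self, label):
        """Remove an identity that has left the scene."""
        self.definitions.pop(label, None)

    def identify(self, query):
        """Nearest-neighbour match of a query feature by cosine similarity."""
        q = np.asarray(query, dtype=float)
        q = q / np.linalg.norm(q)
        best_label, best_sim = None, -1.0
        for label, feats in self.definitions.items():
            for f in feats:
                sim = float((f / np.linalg.norm(f)) @ q)
                if sim > best_sim:
                    best_label, best_sim = label, sim
        return best_label, best_sim
```

Because the dictionary is just per-label example lists, identities can be added or removed frame by frame without retraining anything.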
Procedia PDF Downloads 76
3420 A Novel Rapid Well Control Technique Modelled in Computational Fluid Dynamics Software
Authors: Michael Williams
Abstract:
The ability to control a flowing well is of the utmost importance. During the kill phase, heavyweight kill mud is circulated around the well. While this increases bottom-hole pressure, it also increases damage to the near-wellbore formation. The addition of high-density spherical objects has the potential to minimise this near-wellbore damage, increase bottom-hole pressure, and reduce the operational time needed to kill the well. The time saving comes from the rapid deployment of high-density spherical objects instead of building a high-density drilling fluid. The research aims to model the well kill process using computational fluid dynamics (CFD) software. A model has been created as a proof of concept to analyse the flow of micron-sized spherical objects in the drilling fluid. Initial results show that this new methodology of spherical objects in drilling fluid agrees with the traditional streamlines seen in particle-free flow. Additional models demonstrate that areas of higher flow rate around the bit can increase the probability of washing out formations but do not affect the flow of the micron-sized spherical objects. Interestingly, areas that experience dimensional changes, such as tool joints and various BHA components, do not at this initial stage appear to experience increased velocity or create areas of turbulent flow that could compromise borehole stability. In conclusion, the initial models of this novel well control methodology have not revealed any adverse flow patterns, suggesting that the model may be viable under field conditions. Keywords: well control, fluid mechanics, safety, environment
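For context, the conventional well-kill step that the rapid deployment aims to shortcut is sizing the kill mud from a simple hydrostatic balance (a sketch in standard US oilfield units for orientation only; it is not part of the authors' CFD model):

```python
def kill_mud_weight(sidpp_psi, tvd_ft, current_mw_ppg):
    """Kill mud weight (ppg) from shut-in drill pipe pressure (psi),
    true vertical depth (ft), and current mud weight (ppg):
    KMW = MW + SIDPP / (0.052 * TVD)."""
    return current_mw_ppg + sidpp_psi / (0.052 * tvd_ft)
```

For example, a 10 ppg mud with 520 psi SIDPP at 10,000 ft TVD calls for an 11 ppg kill mud; the proposed spherical objects aim to deliver an equivalent bottom-hole pressure increase without the time spent building such a fluid.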
Procedia PDF Downloads 171
3419 Community Level Vulnerabilities to Climate Change in Cox’s Bazar-Teknaf Coastal Area of Bangladesh
Authors: Pronob Kumar Mozumder, M. Abdur Rob Mollah
Abstract:
This research was conducted in two coastal locations of Bangladesh from February 2013 to January 2014. The objective of this research was to assess the potential vulnerabilities of local ecosystems and people to climate change and to identify and recommend local-level adaptation strategies. Focus group discussions, participatory rural appraisal, and interviews with local elderly people were conducted. Perceptions about climate change indicate that local people are experiencing its impacts: according to them, temperature, cyclones, rain, water-logging, siltation, salinity, erosion, and flash floods are all increasing. The vulnerability assessment revealed that local people are affected in various ways by abnormal climate-related disasters, which jeopardize their livelihoods and put their lives, health, and assets at risk. The prevailing climatic situation in the area is also affecting environmental conditions, biodiversity and natural resources, and economic activities. Existing adaptations include using traditional boats and mobile phones while fishing and building houses on high land with a lower height. Proposed adaptations for fishing boats include using boats more than 60 feet in length built with good timber, fitting at least three longitudinal bars along the upper side, and using enough vertical side bars. The homestead measures include cross-bracing of the wall frame, tying the roof to extra posts with ropes, and planting timber trees as protection against wind. Keywords: community level vulnerabilities, climate change, Cox’s Bazar-Teknaf Coastal Area, Bangladesh
Procedia PDF Downloads 537
3418 An Amended Method for Assessment of Hypertrophic Scars Viscoelastic Parameters
Authors: Iveta Bryjova
Abstract:
Recording viscoelastic strain-vs-time curves with the aid of the suction method, followed by an analysis that yields standard viscoelastic parameters, is a significant technique for non-invasive contact diagnostics of the mechanical properties of skin and assessment of its condition, particularly in acute burns, hypertrophic scarring (the most common complication of burn trauma), and reconstructive surgery. To eliminate the contribution of skin thickness, the usable viscoelastic parameters deduced from the strain-vs-time curves are restricted to relative ones (i.e., those expressed as a ratio of two dimensional parameters), such as gross elasticity, net elasticity, biological elasticity, or Qu's area parameters, conventionally referred to in the literature and in practice as R2, R5, R6, R7, Q1, Q2, and Q3. With the exception of parameters R2 and Q1, these depend substantially on the position of the inflection point separating the elastic linear and viscoelastic segments of the strain-vs-time curve. The standard algorithm implemented in commercially available devices relies heavily on the experimental observation that the inflection occurs about 0.1 s after the suction is switched on or off, which reduces the credibility of the parameters thus obtained. Although Qu's US 7,556,605 patent suggests a method for improving the precision of the inflection determination, there is still room for non-negligible improvement. In this contribution, a novel method of inflection point determination utilizing the advantageous properties of Savitzky–Golay filtering is presented. The method allows computation of the derivatives of the smoothed strain-vs-time curve, more exact location of the inflection, and consequently more reliable values of the aforementioned viscoelastic parameters.
An improved applicability of the five inflection-dependent relative viscoelastic parameters is demonstrated by recasting a former study under the new method and by comparing its results with those provided by the methods used so far. Keywords: Savitzky–Golay filter, scarring, skin, viscoelasticity
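The core of the approach can be sketched in a few lines (a minimal illustration assuming uniformly sampled data; SciPy's `savgol_filter` provides the same functionality, but the weights fall out directly from a local least-squares polynomial fit). The filter fits a polynomial over a sliding window, so smoothed derivatives come for free, and the inflection is located where the smoothed second derivative crosses zero:

```python
import math
import numpy as np

def savgol_coeffs(window, polyorder, deriv=0):
    """Savitzky-Golay convolution weights for the window's centre point.
    Fitting y over offsets -h..h with a degree-p polynomial, the deriv-th
    derivative at the centre equals deriv! times the deriv-th fit coefficient."""
    half = window // 2
    offsets = np.arange(-half, half + 1)
    A = np.vander(offsets, polyorder + 1, increasing=True)  # columns 1, x, x^2, ...
    pinv = np.linalg.pinv(A)                                # least-squares fit operator
    return math.factorial(deriv) * pinv[deriv]

def centre_derivative(y_window, polyorder=3, deriv=2):
    """Estimate the deriv-th derivative (per sample spacing) at the window centre."""
    y = np.asarray(y_window, dtype=float)
    return float(savgol_coeffs(y.size, polyorder, deriv) @ y)
```

Sliding `centre_derivative` along the strain-vs-time curve and locating the sign change of the second derivative gives the inflection point without assuming it sits 0.1 s after the suction switch.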
Procedia PDF Downloads 304
3417 On Lie-Central Derivations and Almost Inner Lie-Derivations of Leibniz Algebras
Authors: Natalia Pacheco Rego
Abstract:
The Liezation functor is a map from the category of Leibniz algebras to the category of Lie algebras, which assigns to a Leibniz algebra the Lie algebra given by its quotient by the ideal spanned by the squares of its elements. This functor is left adjoint to the inclusion functor that regards a Lie algebra as a Leibniz algebra. This environment fits into the framework of central extensions and commutators in semi-abelian categories with respect to a Birkhoff subcategory, where classical or absolute notions are relative to the abelianization functor. Classical properties of Leibniz algebras (properties relative to the abelianization functor) have been adapted to the relative setting (with respect to the Liezation functor); in general, absolute properties have corresponding relative ones, but not all absolute properties immediately hold in the relative case, so new requirements are needed. Following this line of research, an analysis of the central derivations of Leibniz algebras relative to the Liezation functor, called Lie-derivations, was conducted, and a characterization of Lie-stem Leibniz algebras by their Lie-central derivations was obtained. In this paper, we present an overview of these results, and we analyze some new properties concerning Lie-central derivations and almost inner Lie-derivations. Namely, a Leibniz algebra is a vector space equipped with a bilinear bracket operation satisfying the Leibniz identity. We define the Lie-bracket by [x, y]_lie = [x, y] + [y, x], for all x, y. The Lie-center of a Leibniz algebra is the two-sided ideal of elements that annihilate all the elements of the Leibniz algebra through the Lie-bracket. A Lie-derivation is a linear map which acts as a derivation with respect to the Lie-bracket. Obviously, usual derivations are Lie-derivations, but the converse is not true in general. A Lie-derivation is called a Lie-central derivation if its image is contained in the Lie-center.
A Lie-derivation is called an almost inner Lie-derivation if the image of each element x is contained in the Lie-commutator of x and the Leibniz algebra. The main results we present in this talk concern the conditions under which Lie-central derivations and almost inner Lie-derivations coincide. Keywords: almost inner Lie-derivation, Lie-center, Lie-central derivation, Lie-derivation
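The definitions above are easy to experiment with on a concrete example. The sketch below (an illustration under our own coordinate conventions, not taken from the paper) builds the standard 2-dimensional Leibniz algebra with basis e1, e2 and only non-zero bracket [e2, e2] = e1, verifies the left Leibniz identity, and checks that e1 is Lie-central while the algebra is not a Lie algebra:

```python
import itertools
import numpy as np

# Basis vectors of a 2-dimensional Leibniz algebra as coordinate arrays.
E1 = np.array([1.0, 0.0])
E2 = np.array([0.0, 1.0])

def bracket(x, y):
    """Bilinear bracket with [e2, e2] = e1 and all other basis brackets zero."""
    return x[1] * y[1] * E1

def lie_bracket(x, y):
    """The Lie-bracket [x, y]_lie = [x, y] + [y, x]."""
    return bracket(x, y) + bracket(y, x)

def left_leibniz_identity_holds():
    """Check [x, [y, z]] = [[x, y], z] - [[x, z], y] on all basis triples."""
    for x, y, z in itertools.product([E1, E2], repeat=3):
        lhs = bracket(x, bracket(y, z))
        rhs = bracket(bracket(x, y), z) - bracket(bracket(x, z), y)
        if not np.allclose(lhs, rhs):
            return False
    return True
```

Since [e2, e2] = e1 ≠ 0, the algebra is not Lie, yet e1 Lie-annihilates every element, so e1 spans the Lie-center, and any Lie-central derivation of this algebra must have its image inside that span.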
Procedia PDF Downloads 135
3416 Research Progress of the Relationship between Urban Rail Transit and Residents' Travel Behavior during 1999-2019: A Scientific Knowledge Mapping Based on Citespace and Vosviewer
Authors: Zheng Yi
Abstract:
Among the attempts made worldwide to foster urban and transport sustainability, transit-oriented development is certainly one of the most successful, and residents' travel behavior is a central concern in research on its impacts. This study takes 620 English-language journal papers from the Web of Science core collection database as its study objects and maps out the scientific knowledge of the field using co-citation analysis, co-word analysis, citation network analysis, and visualization techniques. It traces the research hotspots and the evolution of work on the relationship between urban rail transit and residents' travel behavior from 1999 to 2019. According to the time-zone view and burst-detection analyses, the paper also discusses the trend of the next stage of international study. The results show that over the past 20 years, research has focused on the keywords land use, behavior, model, built environment, impact, travel behavior, walking, physical activity, smart card, big data, simulation, and perception. According to research content, the key literature further divides into the topics of built environment attributes, land use, transportation networks, and transportation policies. The results of this paper can help readers understand the related research and achievements systematically and provide a reference for identifying the main challenges that future research needs to address. Keywords: urban rail transit, travel behavior, knowledge map, evolution of researches
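The co-word step in such a mapping study reduces to counting how often keyword pairs share a paper; a minimal sketch (standard library only, names hypothetical — CiteSpace and VOSviewer wrap this counting in clustering and layout):

```python
from collections import Counter
from itertools import combinations

def co_word_counts(papers):
    """papers: iterable of keyword lists, one per paper.
    Returns a Counter mapping each sorted keyword pair to the number
    of papers in which both keywords appear together."""
    pairs = Counter()
    for keywords in papers:
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs
```

The resulting pair counts serve as the edge weights of the co-word network whose hotspots ('built environment', 'travel behavior', and so on) the study reports.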
Procedia PDF Downloads 109
3415 Finite Element Analysis of the Lumbar Spine after Unilateral and Bilateral Laminotomies and Laminectomy
Authors: Chih-Hsien Chen, Yi-Hung Ho, Chih-Wei Wang, Chih-Wei Chang, Yen-Nien Chen, Chih-Han Chang, Chun-Ting Li
Abstract:
Laminotomy is a spinal decompression surgery compatible with a minimally invasive approach. However, unilateral laminotomy for bilateral decompression leads to more perioperative complications than bilateral laminotomy. Although unilateral laminotomy removes the least bone tissue among the spinal decompression surgeries, the difference in spinal stability between unilateral laminotomy, bilateral laminotomy, and laminectomy has rarely been investigated. This study aims to compare the biomechanical effects of unilateral laminotomy, bilateral laminotomy, and laminectomy on the lumbar spine by finite element (FE) simulation. A three-dimensional FE model of the lumbar spine (L1–L5), comprising the vertebral bodies, discs, and ligaments, as well as the sacrum, was constructed. Three different surgical methods, namely unilateral laminotomy, bilateral laminotomy, and laminectomy, at L3–L4 and L4–L5 were considered. Part of the pedicle and the entire ligamentum flavum were removed to simulate bilateral decompression in laminotomy. The entire lamina and the spinous processes from lower L3 to upper L5 were detached in the laminectomy model. Four kinds of loading, namely flexion, extension, lateral bending, and rotation, were then applied to the lumbar spine under the various decompression conditions. The results indicated that both bilateral and unilateral laminotomy increased the range of motion (ROM) compared with the intact lumbar spine, while laminectomy increased ROM more than either laminotomy. The difference in ROM between bilateral and unilateral laminotomy was very minor. Furthermore, bilateral laminotomy produced posterior element stresses similar to those of unilateral laminotomy. Unilateral and bilateral laminotomy are therefore equally suggested for bilateral decompression of the lumbar spine with a minimally invasive technique, because the additional bone removal in bilateral laminotomy had only a limited effect on lumbar stability.
Laminectomy, by contrast, should remain the last option for lumbar decompression. Keywords: minimally invasive technique, lumbar decompression, laminotomy, laminectomy, finite element method
Procedia PDF Downloads 186
3414 The Effect of Main Factors on Forces during FSJ Processing of AA2024 Aluminum
Authors: Dunwen Zuo, Yongfang Deng, Bo Song
Abstract:
An attempt is made here to measure the forces in three directions during the AA2024 aluminium FSJ (friction stir joining) process, under conditions of different feed speeds, different tool tilt angles, and with or without a pin on the tool, by using an octagonal-ring dynamometer, and to investigate how four main factors influence the forces in the FSJ process. It is found that high feed speed leads to a small feed force and a small lateral force at first, but to a large feed force in the stable joining stage of the process. As the rotational speed increases, the time required for the axial force to drop from its maximum to its minimum in the push-up process increases. In the stable joining stage, the rotational speed has little effect on the feed force, while a large rotational speed leads to a small lateral force and a small axial force. The maximum axial force increases as the tilt angle of the tool increases in the downward movement stage. At the moment feeding starts, the amplitude of the axial force increase becomes larger as the tilt angle of the tool increases. In the stable joining stage, as the tilt angle of the tool increases, the axial force increases, the lateral force decreases, and the feed force remains almost unchanged. A tool with a pin decreases the axial force in the downward movement stage; in the stable joining stage, compared with a tool without a pin, the feed force and lateral force increase while the axial force is reduced. Keywords: FSJ, force factor, AA2024 aluminum, friction stir joining
Procedia PDF Downloads 491
3413 DNA Damage and Apoptosis Induced in Drosophila melanogaster Exposed to Different Duration of 2400 MHz Radio Frequency-Electromagnetic Fields Radiation
Authors: Neha Singh, Anuj Ranjan, Tanu Jindal
Abstract:
Over the last decade, the exponential growth of mobile communication has been accompanied by a parallel increase in the density of electromagnetic fields (EMF). The continued expansion of mobile phone usage raises important questions, as EMF, especially radio frequency (RF), have long been suspected of having biological effects. In the present experiments, we studied the effects of RF-EMF on cell death (apoptosis) and DNA damage in a well-tested biological model, Drosophila melanogaster, exposed to a 2400 MHz frequency for different durations, i.e., 2 hrs, 4 hrs, 6 hrs, 8 hrs, 10 hrs, and 12 hrs each day for five continuous days, under ambient temperature and humidity conditions inside an exposure chamber. The flies were grouped into control, sham-exposed, and exposed groups, with 100 flies in each group. In this study, the well-known Comet assay and TUNEL (terminal deoxynucleotidyl transferase dUTP nick end labeling) assay were used to detect DNA damage and to study apoptosis, respectively. The experimental results showed DNA damage in the brain cells of Drosophila that increased with the duration of exposure when the control, sham-exposed, and exposed groups were compared, indicating that EMF radiation induced stress in the organism, leading to DNA damage and cell death. The processes of apoptosis and mutation follow similar pathways in all eukaryotic cells; therefore, studying apoptosis and genotoxicity in Drosophila is similarly relevant for human beings. Keywords: cell death, apoptosis, Comet assay, DNA damage, Drosophila, electromagnetic fields, EMF, radio frequency, RF, TUNEL assay
Procedia PDF Downloads 169
3412 Solid Particle Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU
Authors: Ali Abdul Kadhim, Fue Lien
Abstract:
The solid particle distribution on an impingement surface has been simulated utilizing a graphical processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) model. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow uses the D3Q19 lattice, while the particle model employs the D3Q27 lattice. The particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined according to the particle bulk density and velocity, taking all external forces into account. Previous models distribute particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes these deficiencies of earlier LBM-CA models and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model for simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re=200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature.
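The node-local probabilistic transport rule described above can be sketched in simplified form. The following Python snippet is a minimal 1D illustration on a periodic lattice, not the authors' D3Q27 GPU implementation; the function name and the hop probabilities are illustrative assumptions. It shows the key idea that distinguishes this class of model: each particle hops to a neighboring node with a probability derived from the local velocity, rather than being redistributed blindly.

```python
import random

def step_particles(counts, vel, dt=1.0, rng=None):
    """One probabilistic CA transport step on a 1D periodic lattice.

    counts[i] -- integer number of particles at node i
    vel[i]    -- local fluid velocity at node i (lattice units, |vel*dt| <= 1)

    Each particle at node i hops right with probability max(vel[i], 0)*dt,
    left with probability max(-vel[i], 0)*dt, and otherwise stays put, so
    the redistribution depends on the local flow state at each node.
    """
    rng = rng or random.Random()
    n = len(counts)
    new = [0] * n
    for i, c in enumerate(counts):
        p_right = max(vel[i], 0.0) * dt
        p_left = max(-vel[i], 0.0) * dt
        for _ in range(c):
            u = rng.random()
            if u < p_right:
                new[(i + 1) % n] += 1      # hop downstream
            elif u < p_right + p_left:
                new[(i - 1) % n] += 1      # hop upstream
            else:
                new[i] += 1                # stay at the current node
    return new
```

In a GPU version along the lines described in the abstract, the per-particle loop would be replaced by per-node random draws (e.g., from cuRAND), but the conservation property is the same: particles are only moved between nodes, never created or destroyed.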
The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re=10,000. The simulations were conducted for L/D = 2, 4, and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling the LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup ratio of about 350 over the serial code running on a single CPU.
Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model
Procedia PDF Downloads 207
3411 Organizational Ideologies and Their Embeddedness in Fashion Show Productions in Shanghai and London Fashion Week: International-Based-Chinese Independent Designers' Participatory Behaviors in Different Fashion Cities
Authors: Zhe Wang
Abstract:
The fashion week, as a critical international fashion event in shaping world fashion cities, is one of the most significant world events serving as the core medium for designers to stage new collections. However, its role in bringing about and shaping the design ideologies of major fashion cities has long been neglected from a fashion-ecosystem perspective. With the expanding cultural and commercial scale of international fashion weeks, their organizational structures are becoming more complex. Emerging fashion cities, typified by Shanghai, form a newly emerged 'hodgepodge' that is transforming the current global fashion ecosystem. A city's legitimate fashion institutions, typically the organizers of international fashion weeks, have cultivated various cultural characteristics via the rules and regulations pertaining to those fashion weeks. Under these circumstances, designers' participatory behaviors, specifically show design and production, are influenced by the cultural ideologies of the official organizers and institutions. This research compares international-based Chinese (IBC) independent designers' participatory behaviors in the London and Shanghai Fashion Weeks: specifically, the ways designers present their clothing and produce their shows, both of which are found to be profoundly influenced by the cultural and design ideologies of the fashion weeks and are, to a large degree, shaped by domestic institutions and organizers. Shanghai Fashion Week has given rise to a multifaceted, mass-oriented entertainment-carnival design and cultural ideology in Shanghai, thereby shaping the explicit cultural codes and intangible rules that IBC designers must adhere to when designing and producing fashion shows. Therefore, influenced by the distinct cultural characteristics of the two cities, IBC designers' show designs and productions, in turn, play an increasingly vital role in shaping the design characteristics of an international fashion week.
By researching the organizational systems and design preferences of the organizers of the London and Shanghai fashion weeks, this paper demonstrates the embeddedness of design systems in the formation of design ideologies under various cultural and institutional contexts. The core methodology utilized in this research is ethnography. As a crucial part of a Ph.D. project on innovations in fashion shows under a cross-cultural context, run by Edinburgh College of Art, School of Design, the fashion week's organizational culture in different cultural contexts was investigated in London and Shanghai for approximately six months each. Two IBC designers, Angel Chen and Xuzhi Chen, were followed during their participation in the London and Shanghai Fashion Weeks from September 2016 to June 2017, over which two consecutive seasons were researched in order to verify the consistency of the design ideologies' associations with organizational systems and culture.
Keywords: institutional ideologies, international fashion weeks, IBC independent designers, fashion shows
Procedia PDF Downloads 118