Search results for: capability curve
319 On the Optimality Assessment of Nano-Particle Size Spectrometry and Its Association to the Entropy Concept
Authors: A. Shaygani, R. Saifi, M. S. Saidi, M. Sani
Abstract:
Particle size distribution, the most important characteristic of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nano-particles under the influence of an electric field in an electrical mobility spectrometer (EMS) reveals the size distribution of these particles. The accuracy of this measurement is influenced by flow conditions, geometry, electric field, and the particle charging process, and therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed, and the combined field-diffusion charging process of injected poly-disperse aerosol particles was numerically simulated as a prerequisite for the study of a multi-channel EMS. The result, a cloud of particles with a non-uniform charge distribution, was introduced to the EMS. The flow pattern and electric field in the EMS were simulated using computational fluid dynamics (CFD) to obtain particle trajectories in the device and thereby calculate the signal reported by each electrometer. Based on the output signals (resulting from particles striking the detecting rings and transferring their charges as currents), we proposed a modification to the size of the detecting rings (which are connected to electrometers) in order to evaluate particle size distributions more accurately. Based on the capability of the system to transfer information about the size distribution of the injected particles, we proposed a benchmark for assessing the optimality of the design. This method applies the concept of von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Shannon entropy is the "average amount of information contained in an event, sample or character extracted from a data stream".
Evaluating the responses (signals) obtained with various configurations of detecting rings, the configuration that gave the best predictions of the size distributions of the injected particles was the modified configuration. It was also the one with the maximum entropy. A reasonable consistency was also observed between the accuracy of the predictions and the entropy content of each configuration. In this method, entropy is extracted from the transfer matrix of the instrument for each configuration. Finally, various clouds of particles were introduced to the simulations, and the predicted size distributions were compared to the exact size distributions.
Keywords: aerosol nano-particle, CFD, electrical mobility spectrometer, von Neumann entropy
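The entropy benchmark described above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: it treats each column of a small hypothetical transfer matrix as a probability distribution and scores a ring configuration by its average Shannon entropy (the study itself works with the von Neumann entropy of the transfer matrix).

```python
import math

def shannon_entropy(probabilities):
    """Shannon entropy H = -sum(p * log2 p) in bits; zero entries are skipped."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

def column_normalize(matrix):
    """Normalize each column so it sums to 1 (raw signal -> probability)."""
    return [[v / sum(col) for v in col] for col in zip(*matrix)]

# Hypothetical 3-ring transfer matrix: rows = particle size bins,
# columns = detecting rings (electrometer signals).
transfer = [[8.0, 1.0, 0.0],
            [2.0, 7.0, 2.0],
            [0.0, 2.0, 8.0]]

# Average per-ring entropy as a simplified optimality score: a higher score
# means the rings carry more information about the size distribution.
score = sum(shannon_entropy(c) for c in column_normalize(transfer)) / 3
```

A configuration whose score approaches the maximum (log2 of the number of size bins) spreads information most evenly, mirroring the observation that the best-predicting configuration also had the maximum entropy.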
Procedia PDF Downloads 343
318 Estimates of Freshwater Content from ICESat-2 Derived Dynamic Ocean Topography
Authors: Adan Valdez, Shawn Gallaher, James Morison, Jordan Aragon
Abstract:
Global climate change has impacted atmospheric temperatures, contributing to rising sea levels, decreasing sea ice, and increased freshening of high-latitude oceans. This freshening has increased stratification, inhibiting local mixing and nutrient transport and modifying regional circulations in polar oceans. In recent years, the Western Arctic has seen an increase in freshwater volume at an average rate of 397 ± 116 km³/year. The majority of the freshwater volume resides in the Beaufort Gyre surface lens, driven by anticyclonic wind forcing, sea ice melt, and Arctic river runoff. The total climatological freshwater content is typically defined as water fresher than a salinity of 34.8. The near-isothermal nature of Arctic seawater and non-linearities in the equation of state for near-freezing waters result in a salinity-driven pycnocline, as opposed to the temperature-driven density structure seen at lower latitudes. In this study, we investigate the relationship between freshwater content and remotely sensed dynamic ocean topography (DOT). In-situ measurements of freshwater content are useful in providing information on the freshening rate of the Beaufort Gyre; however, their collection is costly and time-consuming. NASA's Advanced Topographic Laser Altimeter System (ATLAS) derived dynamic ocean topography (DOT) and Airborne Expendable CTD (AXCTD) derived freshwater content are used to develop a linear regression model. In-situ data for the regression model are collected along the 150° West meridian, which typically defines the centerline of the Beaufort Gyre. Two freshwater content models are determined by integrating the freshwater volume between the surface and an isopycnal corresponding to reference salinities of 28.7 and 34.8. These salinities correspond to the winter pycnocline and the total climatological freshwater content, respectively.
Using each model, we determine the strength of the linear relationship between freshwater content and satellite-derived DOT. The results of this modeling study could provide a future predictive capability for freshwater volume changes in the Beaufort-Chukchi Sea without in-situ methods. Successful employment of ICESat-2's DOT approximation of freshwater content could potentially reduce reliance on field deployment platforms to characterize physical ocean properties.
Keywords: ICESat-2, dynamic ocean topography, freshwater content, Beaufort Gyre
317 Integrating Virtual Reality and Building Information Model-Based Quantity Takeoffs for Supporting Construction Management
Authors: Chin-Yu Lin, Kun-Chi Wang, Shih-Hsu Wang, Wei-Chih Wang
Abstract:
A construction superintendent needs to know not only the quantities of cost items or materials completed each day, in order to develop a daily report or calculate the daily progress (earned value), but also the quantities of materials (e.g., reinforcing steel and concrete) to be ordered (or moved onto the jobsite) for performing the in-progress or ready-to-start construction activities (e.g., erection of reinforcing steel and concrete pouring). These daily construction management tasks require great effort to extract accurate quantities in a short time (usually right before getting off work every day). As a result, most superintendents can only provide these quantity data based either on what they see on the site (low accuracy) or on the extraction of quantities from two-dimensional (2D) construction drawings (high time consumption). Hence, the current practice of reporting the quantities completed each day needs improvement in terms of both accuracy and efficiency. Recently, three-dimensional (3D) building information model (BIM) techniques have been widely applied to support the construction quantity takeoff (QTO) process. Virtual reality (VR) allows a building to be viewed from a first-person viewpoint. Thus, this study proposes an innovative system integrating VR (using 'Unity') and BIM (using 'Revit') to extract quantities to support the above daily construction management tasks. VR allows a system user to be present in a virtual building and thus assess the construction progress more objectively from the office. This VR- and BIM-based system is also supported by an integrated database (consisting of the information and data associated with the BIM model, QTO, and costs). Each day, a superintendent can walk through the BIM-based virtual building to quickly identify (via a developed VR shooting function) the building components (or objects) that are in progress or finished on the jobsite.
The superintendent then specifies a percentage (e.g., 20%, 50%, or 100%) of completion for each identified building object based on his observation of the jobsite. Next, the system generates the quantities completed that day by multiplying the specified percentage by the full quantities of the cost items (or materials) associated with the identified object. A building construction project located in northern Taiwan is used as a case study to test the benefits (i.e., accuracy and efficiency) of the proposed system in quantity extraction for supporting the development of daily reports and the ordering of construction materials.
Keywords: building information model, construction management, quantity takeoffs, virtual reality
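The daily-quantity computation the system performs at this step reduces to the user-specified percentage multiplied by the BIM takeoff quantities. A minimal sketch, with hypothetical item names and quantities:

```python
def completed_quantities(percent_complete, full_quantities):
    """Quantities completed today for one identified building object:
    the specified completion fraction times the full BIM takeoff
    quantity of each cost item / material."""
    return {item: percent_complete * qty
            for item, qty in full_quantities.items()}

# Hypothetical BIM takeoff for one column object.
full = {"rebar_kg": 850.0, "concrete_m3": 6.4}

# Superintendent marks the object 50% complete in the VR walkthrough.
today = completed_quantities(0.5, full)
```

Summing these per-object results over all objects marked that day yields the figures for the daily report and earned-value calculation.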
316 Detection of Aflatoxin B1 Producing Aspergillus flavus Genes from Maize Feed Using Loop-Mediated Isothermal Amplification (LAMP) Technique
Authors: Sontana Mimapan, Phattarawadee Wattanasuntorn, Phanom Saijit
Abstract:
Aflatoxin contamination in maize, one of several agricultural crops grown for livestock feeding, is still a problem throughout the world, mainly under hot and humid weather conditions like those of Thailand. In this study, Aspergillus flavus (A. flavus), the key fungus for aflatoxin production, especially aflatoxin B1 (AFB1), was isolated from naturally infected maize and identified and characterized according to colony morphology and PCR using the ITS, beta-tubulin, and calmodulin genes. The strains were analysed for the presence of four aflatoxigenic biosynthesis genes, Ver1, Omt1, Nor1, and aflR, in relation to their capability to produce AFB1. Aflatoxin production was then confirmed using an immunoaffinity column technique. Loop-mediated isothermal amplification (LAMP) was applied as an innovative technique for rapid detection of the target nucleic acid. The reaction conditions were optimized at 65 °C for 60 min, and calcein fluorescent reagent was added before amplification. The LAMP results showed clear differences between positive and negative reactions in end-point analysis, visible to the naked eye under daylight and UV light. In daylight, the samples with AFB1-producing A. flavus genes developed a yellow to green color, while those without the genes retained the orange color. When excited with UV light, the positive samples became visible by bright green fluorescence. LAMP reactions remained positive after addition of purified target DNA down to dilutions of 10⁻⁶. The reaction products were then confirmed and visualized with 1% agarose gel electrophoresis. In this regard, 50 maize samples were collected from dairy farms and tested for the presence of the four aflatoxigenic biosynthesis genes using the LAMP technique. The results were positive in 18 samples (36%) and negative in 32 samples (64%). All of the samples were rechecked by PCR, and the results were the same as LAMP, indicating 100% specificity.
Additionally, when compared with the immunoaffinity column-based aflatoxin analysis, there was a significant correlation between LAMP results and aflatoxin analysis (r = 0.83, P < 0.05), which suggested that positive maize samples were likely to be high-risk feed. In conclusion, the LAMP assay developed in this study provides a simple and rapid approach for detecting AFB1-producing A. flavus genes from maize and appears to be a promising tool for the prediction of potential aflatoxigenic risk in livestock feeding.
Keywords: aflatoxin B1, Aspergillus flavus genes, maize, loop-mediated isothermal amplification
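The LAMP-versus-PCR comparison reported above can be expressed as a small concordance calculation. The sample vectors below simply mirror the reported counts (18 positives, 32 negatives, with PCR agreeing on every call); they are not the raw data.

```python
def agreement_stats(lamp, pcr):
    """Sensitivity and specificity of LAMP calls (True = positive),
    taking the paired PCR results as the reference method."""
    tp = sum(1 for l, p in zip(lamp, pcr) if l and p)
    tn = sum(1 for l, p in zip(lamp, pcr) if not l and not p)
    fp = sum(1 for l, p in zip(lamp, pcr) if l and not p)
    fn = sum(1 for l, p in zip(lamp, pcr) if not l and p)
    sensitivity = tp / (tp + fn) if tp + fn else 1.0
    specificity = tn / (tn + fp) if tn + fp else 1.0
    return sensitivity, specificity

# Mirror the reported 50-sample study: full agreement between methods.
lamp = [True] * 18 + [False] * 32
pcr = list(lamp)
sens, spec = agreement_stats(lamp, pcr)
```

With identical call sets, both sensitivity and specificity come out to 1.0, matching the abstract's "100% specificity" claim.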
315 The Impact of HKUST-1 Metal-Organic Framework Pretreatment on Dynamic Acetaldehyde Adsorption
Authors: M. François, L. Sigot, C. Vallières
Abstract:
Volatile organic compounds (VOCs) are a real health issue, particularly in domestic indoor environments. Among these VOCs, acetaldehyde is frequently monitored in dwellings' air, especially due to smoking and spontaneous emissions from new wall and floor coverings. It is responsible for respiratory complaints and is classified as possibly carcinogenic to humans. Adsorption processes are commonly used to remove VOCs from air. Metal-organic frameworks (MOFs) are a promising type of material for high adsorption performance. These hybrid porous materials, composed of inorganic metal clusters and organic ligands, are interesting thanks to their high porosity and surface area. HKUST-1 (also referred to as MOF-199) is a copper-based MOF with the formula [Cu₃(BTC)₂(H₂O)₃]n (BTC = benzene-1,3,5-tricarboxylate) and exhibits unsaturated metal sites that can act as attractive sites for adsorption. The objective of this study is to investigate the impact of HKUST-1 pretreatment on acetaldehyde adsorption. Dynamic adsorption experiments were conducted in a 1 cm diameter glass column packed with a 2 cm MOF bed. The MOF was sieved to 630 µm - 1 mm. The feed gas (Co = 460 ppmv ± 5 ppmv) was obtained by diluting a 1000 ppmv acetaldehyde gas cylinder in air. The gas flow rate was set to 0.7 L/min (to guarantee a suitable linear velocity). The acetaldehyde concentration was monitored online by gas chromatography coupled with a flame ionization detector (GC-FID). The breakthrough curves should make it possible to understand the interactions between the MOF and the pollutant, as well as the role of HKUST-1 humidity in the adsorption process. Consequently, different MOF water contents were tested, from a dry material with 7% water content (dark blue color) to a water-saturated state with approximately 35% water content (turquoise color). The rough material (without any pretreatment, containing 30% water) serves as a reference.
First, conclusions can be drawn from comparing the evolution of the ratio of the column outlet concentration (C) to the inlet concentration (Co) as a function of time for the different HKUST-1 pretreatments. The shapes of the breakthrough curves are significantly different. The saturation of the rough material is slower (20 h to reach saturation) than that of the dried material (2 h). However, the breakthrough time, defined at C/Co = 10%, appears earlier for the rough material (0.75 h) than for the dried HKUST-1 (1.4 h). Another notable difference is the shape of the curve before the 10% breakthrough: an abrupt increase of the outlet concentration is observed for the material with the lower humidity, in comparison to a smooth increase for the rough material. Thus, the water content plays a significant role in the breakthrough kinetics. This study aims to understand what can explain the shape of the breakthrough curves associated with the pretreatments of HKUST-1 and which mechanisms take place in the adsorption process between the MOF, the pollutant, and the water.
Keywords: acetaldehyde, dynamic adsorption, HKUST-1, pretreatment influence
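The breakthrough time defined at C/Co = 10% can be read off a measured curve by linear interpolation between sampled points. A minimal sketch with a hypothetical curve (not the measured data):

```python
def breakthrough_time(times, ratios, threshold=0.10):
    """Interpolate the first time at which the outlet/inlet ratio C/Co
    crosses the threshold; returns None if it never does."""
    for i in range(1, len(times)):
        r0, r1 = ratios[i - 1], ratios[i]
        if r0 < threshold <= r1:
            t0, t1 = times[i - 1], times[i]
            return t0 + (threshold - r0) * (t1 - t0) / (r1 - r0)
    return None

# Hypothetical sampled breakthrough curve (hours, C/Co), shaped like the
# dried material's abrupt rise.
t = [0.0, 1.0, 1.5, 2.0]
r = [0.0, 0.05, 0.20, 1.00]
tb = breakthrough_time(t, r)  # crossing lies between 1.0 h and 1.5 h
```

Applying the same routine to each pretreatment's curve gives the 0.75 h versus 1.4 h comparison quoted above.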
314 Antioxidant, Hypoglycemic and Hypotensive Effects Affected by Various Molecular Weights of Cold Water Extract from Pleurotus citrinopileatus
Authors: Pao-Huei Chen, Shu-Mei Lin, Yih-Ming Weng, Zer-Ran Yu, Be-Jen Wang
Abstract:
Pancreatic α-amylase and intestinal α-glucosidase are the critical enzymes for the breakdown of complex carbohydrates into di- or mono-saccharides, and they play an important role in modulating postprandial blood sugar. Angiotensin-converting enzyme (ACE) converts inactive angiotensin I into active angiotensin II, which subsequently increases blood pressure by triggering vasoconstriction and aldosterone secretion. Thus, inhibition of carbohydrate-digesting enzymes and ACE helps the management of blood glucose and blood pressure, respectively. Studies have shown that Pleurotus citrinopileatus (PC), an edible mushroom commonly cultured in oriental countries, exerts anticancer, immune-improving, antioxidative, hypoglycemic, and hypolipidemic effects. Previous studies have also shown that fractions of various molecular weights (MW) from extracts may differ in biological activity due to varying contents of bioactive components. Thus, the objective of this study is to investigate the in vitro antioxidant, hypoglycemic, and hypotensive effects and the distribution of active compounds of various MW fractions of cold water extract from P. citrinopileatus (CWEPC). CWEPC was fractionated into four MW fractions, PC-I (<1 kDa), PC-II (1-3.5 kDa), PC-III (3.5-10 kDa), and PC-IV (>10 kDa), using an ultrafiltration system. The physiological activities, including antioxidant activities and the inhibition capabilities against pancreatic α-amylase, intestinal α-glucosidase, and hypertension-linked ACE, and the active components, including polysaccharide, protein, and phenolic contents, of CWEPC and the four fractions were determined. The results showed that fractions with lower MW exerted higher antioxidant activity (p<0.05), which was positively correlated with the levels of total phenols.
In contrast, the inhibitory effects of the PC-IV fraction on the activities of α-amylase, α-glucosidase, and ACE were significantly higher than those of CWEPC and the other three low-MW fractions (<10 kDa), and were more closely related to protein content. The inhibition capabilities of CWEPC and PC-IV against α-amylase activity were 1/13.4 and 1/2.7 of that of acarbose (positive control), respectively. However, the inhibitory ability of PC-IV against α-glucosidase (IC50 = 0.5 mg/mL) was significantly higher than that of acarbose (IC50 = 1.7 mg/mL). Kinetic data revealed that the PC-IV fraction follows non-competitive inhibition of α-glucosidase activity. In conclusion, the distribution of various bioactive components contributes to the functions of the different MW fractions in oxidative stress prevention and in blood pressure and glucose modulation.
Keywords: α-amylase, angiotensin converting enzyme, α-glucosidase, Pleurotus citrinopileatus
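Non-competitive inhibition, as identified for the PC-IV fraction, lowers the apparent Vmax without shifting Km. A sketch of the standard rate law, with illustrative parameter values rather than fitted ones:

```python
def noncompetitive_rate(s, i, vmax=1.0, km=0.5, ki=0.5):
    """Michaelis-Menten rate under non-competitive inhibition:
    v = Vmax*S / ((Km + S) * (1 + I/Ki)).
    The inhibitor scales down Vmax while Km is unchanged."""
    return vmax * s / ((km + s) * (1.0 + i / ki))

# At any substrate concentration, an inhibitor dose of I = Ki halves the
# rate, which is the defining signature of non-competitive kinetics.
uninhibited = noncompetitive_rate(0.5, 0.0)
half_dose = noncompetitive_rate(0.5, 0.5)
```

This is why Lineweaver-Burk plots for non-competitive inhibitors intersect on the 1/[S] axis: Km (the x-intercept position) is constant while the 1/Vmax intercept rises with inhibitor dose.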
313 Placement Characteristics of Major Stream Vehicular Traffic at Median Openings
Authors: Tathagatha Khan, Smruti Sourava Mohapatra
Abstract:
Median openings are provided in the raised medians of multilane roads to facilitate U-turn movement. The U-turn is a highly complex and risky maneuver, because the U-turning vehicle (minor stream) makes a 180° turn at the median opening and merges with the approaching through traffic (major stream). A U-turning vehicle requires a suitable gap in the major stream to merge, and during this process the possibility of merging conflict develops. Therefore, median openings are potential hot spots of conflict and pose a safety concern. Traffic at median openings can be managed efficiently, with enhanced safety, when the capacity of the facility has been estimated correctly. The capacity of U-turns at median openings is estimated by Harder's formula, which requires three basic parameters, namely the critical gap, the follow-up time, and the conflicting flow rate. The estimation of the conflicting flow rate under mixed traffic conditions is complicated by the absence of lane discipline and the discourteous behavior of drivers. Understanding the placement of major stream vehicles at a median opening is therefore important for estimating the conflicting traffic faced by the U-turning movement. Placement data of major stream vehicles were collected at different sections of 4-lane and 6-lane divided multilane roads. All the test sections were free from the effects of intersections, bus stops, parked vehicles, curvature, pedestrian movements, or any other side friction. For the purpose of analysis, all vehicles were divided into six categories: motorized two-wheelers, autorickshaws (3-W), small cars, big cars, light commercial vehicles, and heavy vehicles. For the collection of placement data of major stream vehicles, the entire road width was divided into sections of 25 cm each, numbered seriatim from the pavement edge (curbside) to the end of the road.
The placement of major stream vehicles crossing the reference line was recorded by videographic technique on various weekdays. The collected data for each category of vehicle at all the test sections were converted into a frequency table with a class interval of 25 cm and into placement frequency curves. Separate distribution fittings were tried for 4-lane and 6-lane divided roads. The effect of major stream traffic volume on the placement characteristics of major stream vehicles has also been explored. The findings of this study will be helpful in determining the conflict volume at median openings. The present work therefore holds significance for traffic planning, operation, and design to alleviate bottlenecks, the prospect of collision, and delay at median openings in general, and at median openings in developing countries in particular.
Keywords: median opening, U-turn, conflicting traffic, placement, mixed traffic
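The 25 cm class-interval frequency table described above can be sketched as follows; the placement values are hypothetical, and the bins are keyed by each interval's lower edge measured from the curbside pavement edge.

```python
from collections import Counter

def placement_frequency(positions_cm, bin_width=25):
    """Bin lateral placements (cm from the curbside pavement edge) into
    class intervals of bin_width; returns {lower_edge: count} sorted by
    position across the carriageway."""
    counts = Counter(int(p // bin_width) * bin_width for p in positions_cm)
    return dict(sorted(counts.items()))

# Hypothetical placements of one vehicle category (e.g., small cars)
# crossing the reference line.
obs = [30, 55, 60, 80, 110, 112, 140]
freq = placement_frequency(obs)
```

Plotting each category's table as a curve gives the placement frequency curves to which separate distributions were fitted for 4-lane and 6-lane roads.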
312 Component Test of Martensitic/Ferritic Steels and Nickel-Based Alloys and Their Welded Joints under Creep and Thermo-Mechanical Fatigue Loading
Authors: Daniel Osorio, Andreas Klenk, Stefan Weihe, Andreas Kopp, Frank Rödiger
Abstract:
Future power plants currently face high design requirements due to worsening climate change and environmental restrictions, which demand high operational flexibility, superior thermal performance, minimal emissions, and higher cyclic capability. The aim of this paper is therefore to investigate experimentally, at component scale and under near-to-service operating conditions, the creep and thermo-mechanical behavior of improved materials and their welded joints, which are promising for application in highly efficient and flexible future power plants. These materials promise an increase in flexibility and a reduction in manufacturing costs by providing enhanced creep strength and, therefore, the possibility of wall thickness reduction. In the temperature range between 550°C and 625°C, the investigation focuses on the in-phase thermo-mechanical fatigue behavior of dissimilar welded joints between conventional materials (the ferritic and martensitic steels T24 and T92) and nickel-based alloys (A617B and HR6W), by means of membrane test panels. The temperature and the external load are varied in phase during the test, while the internal pressure remains constant. In the temperature range between 650°C and 750°C, the investigation focuses on the creep behavior under multiaxial stress loading of similar and dissimilar welded joints of high-temperature-resistant nickel-based alloys (A740H, A617B, and HR6W), by means of a thick-walled component test. In this case, the temperature, the external axial load, and the internal pressure remain constant during testing. Numerical simulations are used to estimate the axial component load needed to induce a meaningful damage evolution without causing total component failure.
Metallographic investigations after testing will provide support for understanding the damage mechanisms and the influence of the thermo-mechanical load and multiaxiality on the microstructural changes and on the creep and TMF strength.
Keywords: creep, creep-fatigue, component behaviour, weld joints, high temperature material behaviour, nickel alloys, high temperature resistant steels
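The in-phase loading used for the membrane test panels (temperature and external load following the same cycle, internal pressure constant) can be illustrated with a simple waveform. The cycle period and load range below are assumptions for illustration only; the temperature range matches the 550-625°C window quoted above.

```python
import math

def in_phase_tmf_cycle(t, period=3600.0,
                       temp_range=(550.0, 625.0),
                       load_range=(0.0, 100.0)):
    """In-phase TMF loading: temperature (°C) and external load (kN,
    hypothetical units) follow the same sinusoidal phase, so peak load
    coincides with peak temperature. Internal pressure is constant and
    not modeled here."""
    phase = 0.5 * (1.0 - math.cos(2.0 * math.pi * t / period))  # 0..1
    temp = temp_range[0] + phase * (temp_range[1] - temp_range[0])
    load = load_range[0] + phase * (load_range[1] - load_range[0])
    return temp, load
```

An out-of-phase test would simply shift one of the two signals by half a period; the in-phase case is the more damaging one for these dissimilar joints because peak stress and peak temperature coincide.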
311 Structural Optimization, Design, and Fabrication of Dissolvable Microneedle Arrays
Authors: Choupani Andisheh, Temucin Elif Sevval, Bediz Bekir
Abstract:
Due to their various advantages over many other drug delivery systems, such as hypodermic injections and oral medications, microneedle arrays (MNAs) are a promising drug delivery system. To achieve enhanced MN performance, it is crucial to develop numerical models, optimization methods, and simulations. Accordingly, this work investigates the optimized design of dissolvable MNAs as well as their manufacturing. For this purpose, a mechanical model of a single MN with an obelisk geometry is developed using commercial finite element software. The model considers the condition in which the MN is under pressure at the tip caused by the reaction force when penetrating the skin. Then, a multi-objective optimization based on the non-dominated sorting genetic algorithm II (NSGA-II) is performed to obtain geometrical properties such as needle width, tip (apex) angle, and base fillet radius. The objective of the optimization study is to reach painless and effortless penetration into the skin while minimizing mechanical failure caused by the maximum stress occurring in the structure. Based on the obtained optimal design parameters, master (male) molds are then fabricated from PMMA using a mechanical micromachining process. This fabrication method is selected mainly for its geometric capability, production speed, production cost, and the variety of materials that can be used. Then, to remove any chip residue, the master molds are cleaned ultrasonically. The fabricated master molds can be used repeatedly to fabricate polydimethylsiloxane (PDMS) production (female) molds through a micro-molding approach. Finally, polyvinylpyrrolidone (PVP), a dissolvable polymer, is cast into the production molds under vacuum to produce the dissolvable MNAs. This fabrication methodology can also be used to fabricate MNAs that include bioactive cargo.
To characterize and demonstrate the performance of the fabricated needles, (i) scanning electron microscope images are taken to show the accuracy of the fabricated geometries, and (ii) in-vitro piercing tests are performed on artificial skin. It is shown that optimized MN geometries can be precisely fabricated using the presented fabrication methodology and that the fabricated MNAs effectively pierce the skin without failure.
Keywords: microneedle, microneedle array fabrication, micro-manufacturing, structural optimization, finite element analysis
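The ranking step at the heart of NSGA-II, used above for the MN geometry optimization, is non-dominated sorting. A minimal sketch of the first (Pareto) front for two minimized objectives, with hypothetical insertion-force and stress-ratio values standing in for the FE-evaluated designs:

```python
def dominates(a, b):
    """a dominates b (minimization) if it is no worse in every objective
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """First non-dominated front: the core ranking step of NSGA-II."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical objective pairs per candidate MN geometry:
# (insertion force in N, max stress / failure stress).
designs = [(0.8, 0.9), (0.5, 0.7), (0.6, 0.4), (0.9, 0.3), (0.7, 0.8)]
front = pareto_front(designs)
```

The full algorithm repeats this sorting over successive fronts and adds crowding-distance selection to keep the front well spread; the final design is then picked from the first front as a trade-off between easy insertion and low stress.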
310 Life Time Improvement of Clamp Structural by Using Fatigue Analysis
Authors: Pisut Boonkaew, Jatuporn Thongsri
Abstract:
In the hard disk drive manufacturing industry, removing unnecessary parts and qualifying part quality before assembly is important. Thus, a clamp was designed and fabricated as a fixture for holding parts in the testing process. Improvement by trial-and-error testing consumes a long time, so simulation was introduced to improve the part and reduce the time taken. The problem is that the present clamp has a low life expectancy because of the critical stress that occurs. Hence, simulation was used to study the behavior of stress and compressive force in order to improve the clamp life expectancy, covering 27 candidate designs in total, excluding repeated designs. The candidate set was constructed following the full factorial design rules of the Six Sigma methodology. Six Sigma is a well-structured method for improving quality by detecting and reducing process variability, so that defects decrease while process capability increases. This research focuses on reducing stress and fatigue while the compressive force remains within the acceptable range set by the company. In the simulation, ANSYS models the 3D CAD under the same conditions as the experiment, and the force at each distance from 0.01 to 0.1 mm is recorded. The ANSYS setting was verified by mesh convergence methodology, and the percentage error relative to the experimental result was confirmed not to exceed the acceptable range. The improved design therefore focuses on the degree, radius, and length that reduce stress while keeping the force within the acceptable range. Fatigue analysis is then performed through the ANSYS simulation program in order to guarantee that the lifetime is extended.
The simulated design is also compared with the actual clamp in order to observe the difference in fatigue between the two designs. This brings a lifetime improvement of up to 57% compared with the actual clamp in manufacturing. This study provides a setting precise and trustworthy enough to serve as a reference methodology for future designs. Through the combination and adaptation of the Six Sigma method, finite element analysis, fatigue analysis, and linear regression analysis, which lead to accurate calculation, this project is expected to save up to 60 million dollars annually.
Keywords: clamp, finite element analysis, structural, six sigma, linear regressive analysis, fatigue analysis, probability
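The verification step, comparing simulated and experimental forces by percentage error at each displacement, can be sketched as follows. The 5% tolerance and the force values are illustrative assumptions, since the company's acceptable range is not given in the abstract.

```python
def percent_error(simulated, measured):
    """Percentage error of a simulated force against the experiment."""
    return abs(simulated - measured) / abs(measured) * 100.0

def verified(sim_forces, exp_forces, tolerance_pct=5.0):
    """A mesh/model setting is accepted only if every displacement
    point stays within the tolerance band of the experimental force."""
    return all(percent_error(s, e) <= tolerance_pct
               for s, e in zip(sim_forces, exp_forces))

# Hypothetical forces (N) at displacements 0.01 mm .. 0.04 mm.
exp = [1.0, 2.1, 3.0, 4.2]
sim = [1.02, 2.05, 3.1, 4.1]
ok = verified(sim, exp)
```

Only after this check passes is the ANSYS setting trusted for the 27-design comparison and the subsequent fatigue analysis.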
309 Nanoliposomes in Photothermal Therapy: Advancements and Applications
Authors: Mehrnaz Mostafavi
Abstract:
Nanoliposomes, minute lipid-based vesicles at the nano-scale, show promise in the realm of photothermal therapy (PTT). This study presents an extensive overview of nanoliposomes in PTT, exploring their distinct attributes and the significant progress in this therapeutic methodology. The research delves into the fundamental traits of nanoliposomes, emphasizing their adaptability, compatibility with biological systems, and capacity to encapsulate diverse therapeutic substances. Specifically, it examines the integration of light-absorbing materials, like gold nanoparticles or organic dyes, into nanoliposomal formulations, enabling their efficacy as proficient agents for photothermal treatment. Additionally, this paper elucidates the mechanisms involved in nanoliposome-mediated PTT, highlighting their capability to convert light energy into localized heat, facilitating the precise targeting of diseased cells or tissues. This precise regulation of light absorption and heat generation by nanoliposomes presents a non-invasive and precisely focused therapeutic approach, particularly in conditions like cancer. The study explores advancements in nanoliposomal formulations aimed at optimizing PTT outcomes, including strategies for improved stability, enhanced drug loading, and the targeted delivery of therapeutic agents to specific cells or tissues. Furthermore, the paper discusses multifunctional nanoliposomal systems that integrate imaging components or targeting elements for real-time monitoring and improved accuracy in PTT. Moreover, the review highlights recent preclinical and clinical trials showcasing the effectiveness and safety of nanoliposome-based PTT across various disease models. It also addresses challenges in clinical implementation, such as scalability, regulatory considerations, and long-term safety assessments.
In conclusion, this paper underscores the substantial potential of nanoliposomes in advancing PTT as a promising therapeutic approach. Their distinctive characteristics, combined with their precise ability to convert light into heat, offer a tailored and efficient method for treating targeted diseases. The encouraging outcomes from preclinical studies pave the way for further exploration and potential clinical applications of nanoliposome-based PTT.
Keywords: nanoliposomes, photothermal therapy, light absorption, heat conversion, therapeutic agents, targeted delivery, cancer therapy
308 Establishing a Sustainable Construction Industry: Review of Barriers That Inhibit Adoption of Lean Construction in Lesotho
Authors: Tsepiso Mofolo, Luna Bergh
Abstract:
The Lesotho construction industry fails to embrace environmental practices, which has led to excessive consumption of resources, land degradation, air and water pollution, loss of habitats, and high energy usage. The industry is highly inefficient, and this undermines its capability to yield the optimum contribution to social, economic, and environmental development. Sustainable construction is therefore imperative to ensure that the benefits of all these intrinsic themes of sustainable development are cultivated. The development of a sustainable construction industry requires a holistic approach that takes into consideration the interaction between Lean Construction principles, socio-economic and environmental policies, technological advancement, and the principles of construction or project management. Sustainable construction is a cutting-edge phenomenon, forming a component of a subjectively defined concept called sustainable development. Sustainable development can be defined in terms of attitudes and judgments that assist in ensuring long-term environmental, social, and economic growth in society. A key concept of sustainable construction is Lean Construction. Lean Construction emanates from the principles of the Toyota Production System (TPS), namely the application and adaptation of fundamental concepts and principles focused on waste reduction, increased value to the customer, and continuous improvement. The focus is on the reduction of socio-economic waste and the prevention of environmental degradation by reducing the carbon dioxide emission footprint. Lean principles require a fundamental change in the behaviour and attitudes of the parties involved in order to overcome barriers to cooperation.
Prevalent barriers to the adoption of Lean Construction in Lesotho are mainly structural - such as unavailability of financing, corruption, operational inefficiency or wastage, lack of skills and training, inefficient construction legislation and political interference. The consequential effects of these problems trickle down to the quality, cost and time of the project, resulting in an escalation of operational costs due to the cost of rework or material wastage. Factor and correlation analysis of these barriers indicates that they are highly correlated, which poses a serious threat to the country's welfare, environment and construction safety. It is, therefore, critical for Lesotho's construction industry to develop robust governance through bureaucracy reforms and stringent law enforcement.
Keywords: construction industry, sustainable development, sustainable construction industry, lean construction, barriers to sustainable construction
Procedia PDF Downloads 294
307 Nanostructured Pt/MnO2 Catalysts and Their Performance for Oxygen Reduction Reaction in Air Cathode Microbial Fuel Cell
Authors: Maksudur Rahman Khan, Kar Min Chan, Huei Ruey Ong, Chin Kui Cheng, Wasikur Rahman
Abstract:
Microbial fuel cells (MFCs) represent a promising technology for simultaneous bioelectricity generation and wastewater treatment. Catalysts account for a significant portion of the cost of microbial fuel cell cathodes. Many materials have been tested as aqueous cathodes, but air cathodes are needed to avoid the energy demands of water aeration. The sluggish oxygen reduction reaction (ORR) rate at the air cathode necessitates an efficient electrocatalyst such as carbon-supported platinum (Pt/C), which is very costly. Manganese oxide (MnO2) is a representative metal oxide that has been studied as a promising alternative electrocatalyst for the ORR and has been tested in air-cathode MFCs. However, MnO2 alone has poor electric conductivity and low stability. In the present work, the MnO2 catalyst was modified by doping it with Pt nanoparticles. The goal of the work was to improve the performance of the MFC with minimum Pt loading. MnO2 and Pt nanoparticles were prepared by hydrothermal and sol-gel methods, respectively. The wet impregnation method was used to synthesize the Pt/MnO2 catalyst. The catalysts were further used as cathode catalysts in air-cathode cubic MFCs, in which anaerobic sludge was inoculated as the biocatalyst and palm oil mill effluent (POME) was used as the substrate in the anode chamber. The as-prepared Pt/MnO2 was characterized comprehensively through field emission scanning electron microscopy (FESEM), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), and cyclic voltammetry (CV), where its surface morphology, crystallinity, oxidation state and electrochemical activity were examined, respectively. XPS revealed the Mn(IV) oxidation state and metallic Pt(0) nanoparticles, indicating the presence of MnO2 and Pt. The morphology of Pt/MnO2 observed by FESEM shows that Pt doping did not change the needle-like shape of MnO2, which provides a large contact surface area.
The electrochemically active area of the Pt/MnO2 catalysts increased from 276 to 617 m2/g as the Pt loading increased from 0.2 to 0.8 wt%. The CV results in O2-saturated neutral Na2SO4 solution showed that the MnO2 and Pt/MnO2 catalysts could catalyze the ORR with different catalytic activities. The MFC with Pt/MnO2 (0.4 wt% Pt) as the air cathode catalyst generates a maximum power density of 165 mW/m3, which is higher than that of the MFC with the MnO2 catalyst (95 mW/m3). The open circuit voltage (OCV) of the MFC operated with the MnO2 cathode gradually decreased during 14 days of operation, whereas that of the MFC with the Pt/MnO2 cathode remained almost constant throughout the operation, suggesting the higher stability of the Pt/MnO2 catalyst. Therefore, Pt/MnO2 with 0.4 wt% Pt was successfully demonstrated as an efficient and low-cost electrocatalyst for the ORR in an air-cathode MFC, with higher electrochemical activity, stability and hence enhanced performance.
Keywords: microbial fuel cell, oxygen reduction reaction, Pt/MnO2, palm oil mill effluent, polarization curve
Procedia PDF Downloads 557
306 Experimental Study of Vibration Isolators Made of Expanded Cork Agglomerate
Authors: S. Dias, A. Tadeu, J. Antonio, F. Pedro, C. Serra
Abstract:
The goal of the present work is to experimentally evaluate the feasibility of using vibration isolators made of expanded cork agglomerate. Even though this material, also known as insulation cork board (ICB), has mainly been studied for thermal and acoustic insulation purposes, it has strong potential for use in vibration isolation. However, the adequate design of expanded cork block vibration isolators will depend on several factors, such as excitation frequency, static load conditions and the intrinsic dynamic behavior of the material. In this study, transmissibility tests for different static and dynamic loading conditions were performed in order to characterize the material. Since the material's physical properties (density and thickness) can influence the vibro-isolation performance of the blocks, this study covered four mass density ranges and four block thicknesses. A total of 72 expanded cork agglomerate specimens were tested. The test apparatus comprises a vibration exciter connected to an excitation mass that holds the test specimen. The test specimens under characterization were loaded successively with steel plates in order to obtain results for different masses. An accelerometer was placed at the top of these masses and at the base of the excitation mass. The test was performed over a defined frequency range, and the amplitude registered by the accelerometers was recorded in the time domain. For each of the signals (signal 1: vibration of the excitation mass; signal 2: vibration of the loading mass), a fast Fourier transform (FFT) was applied in order to obtain the frequency-domain response. For each frequency-domain signal, the maximum amplitude reached was registered. The ratio between the amplitude (acceleration) of signal 2 and the amplitude of signal 1 gives the transmissibility at each frequency. Repeating this procedure allowed us to plot a transmissibility curve over the frequency range of interest.
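As a hedged illustration of this amplitude-ratio procedure, the sketch below applies an FFT to two synthetic accelerometer signals and takes the ratio of their peak amplitudes. The 20 Hz excitation and half-amplitude response are invented for the example, not taken from the study:

```python
import numpy as np

def transmissibility(base, response, fs):
    """Ratio of peak FFT amplitudes: response (loading mass, signal 2)
    over excitation (base of the excitation mass, signal 1)."""
    # Frequency-domain amplitudes of both time-domain signals
    base_amp = np.abs(np.fft.rfft(base))
    resp_amp = np.abs(np.fft.rfft(response))
    freqs = np.fft.rfftfreq(len(base), d=1.0 / fs)
    # Skip the DC bin when locating the dominant spectral component
    k = np.argmax(base_amp[1:]) + 1
    return freqs[k], resp_amp[k] / base_amp[k]

# Synthetic check: a 20 Hz excitation transmitted at half amplitude
fs = 1000                          # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)   # 2 s record
base = np.sin(2 * np.pi * 20 * t)
resp = 0.5 * np.sin(2 * np.pi * 20 * t)
f_peak, T = transmissibility(base, resp, fs)
```

Sweeping the excitation frequency and repeating this calculation would trace out the transmissibility curve described above.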
A number of transmissibility experiments were performed to assess the influence of changing the mass density and thickness of the expanded cork blocks and the experimental conditions (static load and frequency of excitation). The experimental transmissibility tests performed in this study showed that expanded cork agglomerate blocks are a good option for mitigating vibrations. It was concluded that specimens with lower mass density and larger thickness lead to better performance, with higher vibration isolation and a larger range of isolated frequencies. In conclusion, the study of the performance of expanded cork agglomerate blocks presented herein will allow for a more efficient application of expanded cork vibration isolators. This is particularly relevant since this material is a more sustainable alternative to other commonly used non-environmentally friendly products, such as rubber.
Keywords: expanded cork agglomerate, insulation cork board, transmissibility tests, sustainable materials, vibration isolators
Procedia PDF Downloads 332
305 Predicting Mortality among Acute Burn Patients Using BOBI Score vs. FLAMES Score
Authors: S. Moustafa El Shanawany, I. Labib Salem, F. Mohamed Magdy Badr El Dine, H. Tag El Deen Abd Allah
Abstract:
Thermal injuries remain a global health problem and a common issue encountered in forensic pathology. They are a devastating cause of morbidity and mortality in children and adults, especially in developing countries, causing permanent disfigurement, scarring and grievous hurt. Burns have always been a matter of legal concern in cases of suicidal burns, self-inflicted burns for false accusation and homicidal attempts. Assessment of burn injuries, as well as rating permanent disabilities and disfigurement following thermal injuries for the benefit of compensation claims, represents a challenging problem. This necessitates the development of reliable scoring systems that yield an expected likelihood of permanent disability or fatal outcome following burn injuries. The study was designed to identify the risk factors for mortality in acute burn patients and to evaluate the applicability of the FLAMES (Fatality by Longevity, APACHE II score, Measured Extent of burn, and Sex) and BOBI (Belgian Outcome in Burn Injury) model scores in predicting the outcome. The study was conducted on 100 adult patients with acute burn injuries admitted to the Burn Unit of Alexandria Main University Hospital, Egypt, from October 2014 to October 2015. Patients were examined after obtaining informed consent, and the data were collected in specially designed sheets including demographic data, burn details and any associated inhalation injury. Each burn patient was assessed using both the BOBI and FLAMES scoring systems. The results of the study show that the mean age of patients was 35.54±12.32 years. Males outnumbered females (55% and 45%, respectively). Most patients were accidentally burnt (95%), whereas suicidal burns accounted for the remaining 5%. Flame burn was recorded in 82% of cases.
In addition, 8% of patients sustained burns over more than 60% of the total body surface area (TBSA), 19% of patients needed mechanical ventilation, and 19% of burnt patients died from wound sepsis, multi-organ failure or pulmonary embolism. The mean length of hospital stay was 24.91±25.08 days. The mean BOBI score was 1.07±1.27 and the mean FLAMES score was -4.76±2.92. The FLAMES score demonstrated an area under the receiver operating characteristic (ROC) curve of 0.95, which was significantly higher than that of the BOBI score (0.883). A statistically significant association was revealed between both predictive models and the outcome. The study concluded that both scoring systems were beneficial in predicting mortality in acutely burnt patients. However, the FLAMES score could be applied with a higher level of accuracy.
Keywords: BOBI, burns, FLAMES, scoring systems, outcome
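The ROC comparison reported above can be illustrated in outline. The Mann-Whitney formulation below computes an ROC AUC (the probability that a randomly chosen death scores higher than a randomly chosen survivor) from scores and binary outcomes; the six score-outcome pairs are purely hypothetical, chosen only to exercise the calculation:

```python
import numpy as np

def roc_auc(scores, outcomes):
    """AUC via the Mann-Whitney U statistic; ties count half."""
    scores = np.asarray(scores, dtype=float)
    outcomes = np.asarray(outcomes)
    pos = scores[outcomes == 1]   # scores of patients who died
    neg = scores[outcomes == 0]   # scores of survivors
    # Pairwise comparison of every (death, survivor) score pair
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical scores: higher value = predicted higher mortality risk
flames = [2.1, 0.4, -1.3, -3.0, -4.2, -5.1]
died = [1, 1, 0, 1, 0, 0]
auc = roc_auc(flames, died)
```

Running the same function on both the FLAMES and BOBI scores of a cohort would reproduce the kind of AUC comparison the abstract reports.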
Procedia PDF Downloads 335
304 Human Capital Divergence and Team Performance: A Study of Major League Baseball Teams
Authors: Yu-Chen Wei
Abstract:
The relationship between organizational human capital and organizational effectiveness has been a common topic of interest to organization researchers. Much of this research has concluded that higher human capital predicts greater organizational outcomes. Whereas human capital research has traditionally focused on organizations, the current study turns to team-level human capital. In addition, there are no known empirical studies assessing the effect of human capital divergence on team performance. Team human capital refers to the sum of knowledge, ability, and experience embedded in team members. Team human capital divergence is defined as the variation of human capital within a team. This study is among the first to assess the role of human capital divergence as a moderator of the effect of team human capital on team performance. From the traditional perspective, team human capital represents the collective ability of all team members to solve problems and reduce operational risk. Hence, the higher the team human capital, the higher the team performance. This study further employs social learning theory to explain the relationship between team human capital and team performance. According to this theory, individuals seek to progress by learning from their teammates. They expect to attain higher human capital and, in turn, achieve high productivity, obtain greater rewards, and eventually attain career success. Therefore, an individual has more opportunities to improve his or her capability by learning from peers when the team members have higher average human capital. As a consequence, all team members can develop a quick and effective learning path in their work environment, which in turn enhances their knowledge, skill, and experience and leads to higher team performance. This is the first argument of this study. Furthermore, the current study argues that human capital divergence is detrimental to team development.
Individuals with lower human capital in the team constantly feel pressure from their outstanding colleagues. Under this pressure, they cannot perform their own jobs to full capacity and progressively lose confidence. Meanwhile, members with higher human capital are reluctant to work alongside teammates who are less capable than they are. Besides, they may have lower motivation to move forward because they are already prominent compared with their teammates. Therefore, human capital divergence will moderate the relationship between team human capital and team performance. These two arguments were tested on 510 team-seasons drawn from Major League Baseball (1998–2014). Results demonstrate a positive relationship between team human capital and team performance, which is consistent with previous research. In addition, the variation of human capital within a team weakens this relationship. That is to say, individuals working with teammates of comparable human capital produce better performance than those working with people whose capability is far above or below their own.
Keywords: human capital divergence, team human capital, team performance, team level research
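The moderation argument above corresponds to a regression with an interaction term: performance regressed on team human capital, divergence, and their product, where a negative interaction coefficient means divergence weakens the main effect. The sketch below fits such a model to synthetic data; the 510 observations echo the study's team-season count for scale, but the numbers are simulated, not the MLB data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 510                        # matches the number of team-seasons
hc = rng.normal(0, 1, n)       # team human capital (standardized)
div = rng.normal(0, 1, n)      # within-team human capital divergence
# Simulated effect: positive main effect, negative interaction,
# i.e. the human-capital payoff shrinks as divergence grows
perf = 0.5 * hc - 0.3 * hc * div + rng.normal(0, 0.1, n)

# Ordinary least squares with intercept, main effects, interaction
X = np.column_stack([np.ones(n), hc, div, hc * div])
beta, *_ = np.linalg.lstsq(X, perf, rcond=None)
```

Here `beta[1]` recovers the positive main effect of team human capital and `beta[3]` the negative moderating effect of divergence, mirroring the pattern of results the abstract describes.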
Procedia PDF Downloads 240
303 Assessment of Designed Outdoor Playspaces as Learning Environments and Its Impact on Child’s Wellbeing: A Case of Bhopal, India
Authors: Richa Raje, Anumol Antony
Abstract:
Playing is the foremost stepping stone of childhood development. Play is an essential aspect of a child’s development and learning because it creates meaningful, enduring environmental connections and increases children’s performance. Children’s proficiencies are ever-changing over the course of their growth. Novelty in activities kindles the senses, spurs the love of exploration, helps overcome linguistic barriers, and supports physiological development, which in turn allows children to find their own caliber, spontaneity, curiosity, cognitive skills, and creativity while learning during play. This paper aims to comprehend the learning embedded in play, which is the most essential underpinning aspect of the outdoor play area. It also assesses the trend of playground design that merely crowds in equipment. It attempts to derive a relation between the natural environment, children’s activities, and the emotions and senses that can be evoked in the process. One of the major concerns with our outdoor play areas is that they are limited to a space with a similar kind of equipment, making play highly regimented and monotonous. This problem is often driven by the strict timetables of our education system, which hardly accommodate play. For these reasons, play areas remain neglected both in terms of design that allows learning and in terms of wellbeing. Poorly designed spaces fail to inspire the physical, emotional, social and psychological development of the young ones. Currently, the play space has been condensed to an enclosed playground, driveway or backyard, which confines children’s capability to leap beyond the boundaries set for them. The paper focuses on a study of children aged 5 to 11 years, whose behaviors during their interactions in a playground are mapped and analyzed. The theory of affordance is applied to various outdoor play areas in order to study and understand the children’s environment and how variedly they perceive and use it.
A higher degree of affordance forms the basis for designing the activities suitable for play spaces. It was observed during play that children choose certain spaces of interest, the majority of them natural, over artificial equipment. Activities like rolling on the ground, jumping from a height, molding earth, and hiding behind trees suggest that, despite the equipment provided, children have an affinity for nature. Therefore, we as designers need to take a cue from their behavior and practices to be able to design meaningful spaces for them, so that children get the freedom to test their limits.
Keywords: children, landscape design, learning environment, nature and play, outdoor play
Procedia PDF Downloads 124
302 A Clinical Cutoff to Identify Metabolically Unhealthy Obese and Normal-Weight Phenotype in Young Adults
Authors: Lívia Pinheiro Carvalho, Luciana Di Thommazo-Luporini, Rafael Luís Luporini, José Carlos Bonjorno Junior, Renata Pedrolongo Basso Vanelli, Manoel Carneiro de Oliveira Junior, Rodolfo de Paula Vieira, Renata Trimer, Renata G. Mendes, Mylène Aubertin-Leheudre, Audrey Borghi-Silva
Abstract:
Rationale: Cardiorespiratory fitness (CRF) and functional capacity in young obese and normal-weight people are associated with metabolic and cardiovascular diseases and mortality. However, it remains unclear whether the metabolically healthy (MH) or at-risk (AR) phenotype influences cardiorespiratory fitness, not only in vulnerable populations such as obese adults but also in normal-weight people. The HOMA insulin resistance index (HI) and the leptin-adiponectin ratio (LA) are strong markers for characterizing those phenotypes, which we hypothesized to be associated with physical fitness. We also hypothesized that an easy and feasible exercise test could identify a subpopulation at risk of developing metabolic and related disorders. Methods: Thirty-nine sedentary men and women (20-45y; 18.5
Keywords: aerobic capacity, exercise, fitness, metabolism, obesity, 6MST
Procedia PDF Downloads 354
301 Labile and Humified Carbon Storage in Natural and Anthropogenically Affected Luvisols
Authors: Kristina Amaleviciute, Ieva Jokubauskaite, Alvyra Slepetiene, Jonas Volungevicius, Inga Liaudanskiene
Abstract:
The main task of this research was to investigate the chemical composition of differently used soils across their profiles. To identify differences in the soils, soil organic carbon (SOC) and its fractional composition were investigated: dissolved organic carbon (DOC), mobile humic acids (MHA) and the C-to-N ratio of natural and anthropogenically affected Luvisols. Research object: natural and anthropogenically affected Luvisols, Akademija, Kedainiai distr., Lithuania. Chemical analyses were carried out at the Chemical Research Laboratory of the Institute of Agriculture, LAMMC. Soil samples for chemical analyses were taken from the genetic soil horizons. SOC was determined by the Tyurin method modified by Nikitin, measured with a Cary 50 spectrometer (VARIAN) at a 590 nm wavelength using glucose standards. For mobile humic acid (MHA) determination, the extraction procedure was carried out using 0.1 M NaOH solution. Dissolved organic carbon (DOC) was analyzed using a SKALAR ion chromatograph. pH was measured in 1M H2O. Total N was determined by the Kjeldahl method. Results: Based on the obtained results, it can be stated that the transformation of chemical composition proceeds through the genetic soil horizons. The morphology of the upper layers of the soil profile, formed under natural conditions, was changed by anthropogenic (agrogenic, urbogenic, technogenic and other) structures. Anthropogenic activities and mechanical and biochemical disturbances destroy the natural characteristics of soil formation and complicate the interpretation of soil development. Due to intensive cultivation, the pH curve evens out (the acidification characteristic of the E horizon disappears) relative to the natural Luvisol. Luvisols affected by agricultural activities were characterized by a decrease in the absolute amount of humic substances in separate horizons, but more sustainable, higher carbon sequestration and a thicker humic horizon were observed compared with the forest Luvisol.
However, the average content of humic substances in the soil profile was lower. The soil organic carbon content in anthropogenic Luvisols was lower compared with the natural forest soil, but it was more evenly distributed over a thicker accumulative horizon. These data suggest that the geo-ecological function of Luvisols declines while their agroecological function increases. Acknowledgement: This work was supported by the National Science Program ‘The effect of long-term, different-intensity management of resources on the soils of different genesis and on other components of the agro-ecosystems’ [grant number SIT-9/2015] funded by the Research Council of Lithuania.
Keywords: agrogenization, dissolved organic carbon, luvisol, mobile humic acids, soil organic carbon
Procedia PDF Downloads 236
300 Research on Innovation Service Based on Science and Technology Resources in Beijing-Tianjin-Hebei
Authors: Runlian Miao, Wei Xie, Hong Zhang
Abstract:
In China, Beijing-Tianjin-Hebei is regarded as a strategically important region because it enjoys the highest level of development in terms of economy, opening up, innovative capacity and population. Integrated development of the Beijing-Tianjin-Hebei region has been increasingly emphasized by the government in recent years. In 2014, it was elevated to one of the great national development strategies by the Chinese central government. In 2015, the Coordinated Development Planning Compendium for the Beijing-Tianjin-Hebei Region was approved. These decisions signify that the Beijing-Tianjin-Hebei region will lead innovation-driven economic development in China. As an essential factor in achieving national innovation-driven development and a significant part of the regional industry chain, the optimization of science and technology resource allocation will exert great influence on regional economic transformation and upgrading and on innovation-driven development. However, unbalanced distribution, poor sharing of resources and the existence of isolated information islands have led to differing internal innovation capability, vitality and efficiency, which have impeded innovation and growth of the whole region. Against this background, integrating and vitalizing regional science and technology resources and then establishing a high-end, fast-responding and precise innovation service system based on regional resources would be of great significance for the integrated development of the Beijing-Tianjin-Hebei region and even for addressing the problem of unbalanced and insufficient development in China. This research uses literature review and field investigation and applies related theories prevailing at home and abroad, centering on the service path of science and technology resources for innovation.
Based on the status quo and problems of regional development in Beijing-Tianjin-Hebei, the author proposes, theoretically, to combine regional economics and new economic geography to explore solutions to the problem of low resource allocation efficiency. Further, the author proposes applying digital maps to resource management and building a platform for information co-construction and sharing. Finally, the author presents a specific service mode of ‘science and technology plus digital map plus intelligence research plus platform service’ and suggestions on a co-building and sharing mechanism of 3 (Beijing, Tianjin and Hebei) plus 11 (important cities in Hebei Province).
Keywords: Beijing-Tianjin-Hebei, science and technology resources, innovation service, digital platform
Procedia PDF Downloads 161
299 Implementation of Correlation-Based Data Analysis as a Preliminary Stage for the Prediction of Geometric Dimensions Using Machine Learning in the Forming of Car Seat Rails
Authors: Housein Deli, Loui Al-Shrouf, Hammoud Al Joumaa, Mohieddine Jelali
Abstract:
When forming metallic materials, fluctuations in material properties, process conditions, and wear lead to deviations in the component geometry. Several hundred features sometimes need to be measured, especially in the case of functional and safety-relevant components. These can only be measured offline due to the large number of features and the accuracy requirements. The risk of producing components outside the tolerances is minimized but not eliminated by the statistical evaluation of process capability and control measurements. The inspection intervals are based on the acceptable risk and are at the expense of productivity but remain reactive and, in some cases, considerably delayed. Due to the considerable progress made in the field of condition monitoring and measurement technology, permanently installed sensor systems in combination with machine learning and artificial intelligence, in particular, offer the potential to independently derive forecasts for component geometry and thus eliminate the risk of defective products - actively and preventively. The reliability of forecasts depends on the quality, completeness, and timeliness of the data. Measuring all geometric characteristics is neither sensible nor technically possible. This paper, therefore, uses the example of car seat rail production to discuss the necessary first step of feature selection and reduction by correlation analysis, as otherwise, it would not be possible to forecast components in real-time and inline. Four different car seat rails with an average of 130 features were selected and measured using a coordinate measuring machine (CMM). The run of such measuring programs alone takes up to 20 minutes. In practice, this results in the risk of faulty production of at least 2000 components that have to be sorted or scrapped if the measurement results are negative. 
Over a period of 2 months, all measurement data (>200 measurements per variant) were collected and evaluated using correlation analysis. As part of this study, the number of characteristics to be measured for all 6 car seat rail variants was reduced by over 80%. Specifically, direct correlations were proven for almost 100 of an average of 125 characteristics across the 4 different products. A further 10 features correlate via indirect relationships, so that the number of features required for a prediction could be reduced to fewer than 20. A correlation factor of >0.8 was required for all correlations.
Keywords: long-term SHM, condition monitoring, machine learning, correlation analysis, component prediction, wear prediction, regression analysis
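A minimal sketch of the correlation-based pruning described above: a feature is dropped when its absolute correlation with an already retained feature exceeds the threshold, so only one representative of each correlated group needs to be measured. The toy data and the greedy retention rule are illustrative assumptions; the study's actual procedure may differ:

```python
import numpy as np

def reduce_features(X, threshold=0.8):
    """Greedy pruning: keep a feature only if its |correlation| with
    every already-kept feature stays below the threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in kept):
            kept.append(j)
    return kept

# Toy data: features 0 and 1 nearly collinear, feature 2 independent
rng = np.random.default_rng(0)
a = rng.normal(size=200)
X = np.column_stack([a,
                     a + 0.01 * rng.normal(size=200),
                     rng.normal(size=200)])
kept = reduce_features(X, threshold=0.8)
```

With the correlated pair collapsed to a single representative, only the kept columns would need to be measured inline; the pruned ones can be inferred from their correlated partners.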
Procedia PDF Downloads 48
298 Disparities in Language Competence and Conflict: The Moderating Role of Cultural Intelligence in Intercultural Interactions
Authors: Catherine Peyrols Wu
Abstract:
Intercultural interactions are becoming increasingly common in organizations and in life. These interactions are often the stage for miscommunication and conflict. In management research, these problems are commonly attributed to cultural differences in values and interactional norms. As a result, the notion that intercultural competence can minimize these challenges is widely accepted. Cultural differences, however, are not the only source of challenge during intercultural interactions. The need to rely on a lingua franca – a common language between people who have different mother tongues – is another important one. In theory, a lingua franca can improve communication and ease coordination. In practice, however, disparities in people’s ability and confidence to communicate in that language can exacerbate tensions and generate inefficiencies. In this study, we draw on power theory to develop a model of disparities in language competence and conflict in a multicultural work context. Specifically, we hypothesized that differences in language competence between interaction partners would be positively related to conflict, such that people would report greater conflict with partners who have more dissimilar levels of language competence and lesser conflict with partners who have more similar levels. Furthermore, we proposed that cultural intelligence (CQ), an intercultural competence that denotes an individual’s capability to be effective in intercultural situations, would weaken the relationship between disparities in language competence and conflict, such that people would report less conflict with partners who have more dissimilar levels of language competence when the partner has high CQ and more conflict when the partner has low CQ. We tested this model with a sample of 135 undergraduate students working in multicultural teams for 13 weeks. We used a round-robin design to examine conflict in 646 dyads nested within 21 teams.
Results of analyses using social relations modeling provided support for our hypotheses. Specifically, we found that in intercultural dyads with large disparities in language competence, partners with the lowest level of language competence reported higher levels of interpersonal conflict. However, this relationship disappeared when the partner with higher language competence was also high in CQ. These findings suggest that communication in a lingua franca can be a source of conflict in intercultural collaboration when partners differ in their level of language competence, and that CQ can alleviate these effects during collaboration with partners who have relatively lower levels of language competence. Theoretically, this study underscores the benefits of CQ as a complement to language competence for intercultural effectiveness. Practically, these results further attest to the benefits of investing resources to develop language competence and CQ in employees engaged in multicultural work.
Keywords: cultural intelligence, intercultural interactions, language competence, multicultural teamwork
Procedia PDF Downloads 165
297 The Changing Role of Technology-Enhanced University Library Reform in Improving College Student Learning Experience and Career Readiness – A Qualitative Comparative Analysis (QCA)
Authors: Xiaohong Li, Wenfan Yan
Abstract:
Background: While it is widely considered that the university library plays a critical role in fulfilling the institution's mission and providing students’ learning experience beyond the classroom, how technology-enhanced library reform has changed college students’ learning experience hasn’t been thoroughly investigated. The purpose of this study is to explore how technology-enhanced library reform affects students’ learning experience and career readiness, and further to identify the factors and effective conditions that enable quality learning outcomes for Chinese college students. Methodologies: This study selected the qualitative comparative analysis (QCA) method to explore the effects of technology-enhanced university library reform on college students’ learning experience and career readiness. QCA is unique in explaining the complex relationships between multiple factors from a holistic perspective. Compared with traditional quantitative and qualitative analysis, QCA not only adds quantitative logic but also inherits the qualitative focus on the heterogeneity and complexity of samples. Shenyang Normal University (SNU) was selected as a sample of a typical comprehensive university in China that focuses on students’ learning and application of professional knowledge and trains professionals to different levels of expertise. A total of 22 current university students and 30 graduates who joined the Library Readers Association of SNU between 2011 and 2019 were selected for semi-structured interviews. Based on the data collected from these participating students, qualitative comparative analysis, including univariate necessity analysis and multi-configuration analysis, was conducted. Findings and Discussion: The QCA results indicated that the influence of technology-enhanced university library restructuring and reorganization on student learning experience and career readiness is the result of multiple factors.
Technology-enhanced library equipment and other hardware were restructured to meet college students’ learning needs and have played an important role in improving the student learning experience and learning persistence. More importantly, the soft characteristics of technology-enhanced library reform, such as library service innovation space and culture space, have a positive impact on students’ career readiness and development. Technology-enhanced university library reform is a change not only in the building's appearance and facilities but also in library service quality and capability. The study also provides suggestions for policy, practice, and future research.
Keywords: career readiness, college student learning experience, qualitative comparative analysis (QCA), technology-enhanced library reform
Procedia PDF Downloads 79
296 Commercial Winding for Superconducting Cables and Magnets
Authors: Glenn Auld Knierim
Abstract:
Automated robotic winding of high-temperature superconductors (HTS) addresses the precision, efficiency, and reliability critical to commercializing these products. Today's HTS materials are mature and commercially promising but demand careful manufacturing. In particular, given the exaggerated rectangular cross-section (very thin by very wide), winding precision is critical to managing the stress that can crack the fragile ceramic superconductor (SC) layer and destroy its SC properties. The potential for damage is highest during peak operation, when winding stress magnifies operational stress. Another challenge is that operational parameters, such as magnetic field alignment, affect design performance. Winding process performance, including precision, capability for geometric complexity, and efficient repeatability, is required for commercial production of current HTS. Because of winding limitations, current HTS magnets are restricted to simple pancake configurations; HTS motors, generators, MRI/NMR, fusion, and other projects await robotically wound solenoid, planar, and spherical magnet configurations. As with conventional power cables, full transposition winding is required for long-length alternating current (AC) and pulsed-power cables. Robotic production is required for transposition, which periodically swaps cable conductors and places them into precise positions, achieving the minimized reactance that power utilities require. A fully transposed SC cable, in theory, has no transmission length limits for AC and variable transient operation, since it has no resistance (a problem with conventional cables), negligible reactance (a problem for helically wound HTS cables), and no long-length manufacturing issues (a problem with both stamped and twisted stacked HTS cables). The Infinity Physics team is solving these manufacturing problems by developing automated manufacturing to produce the first reliable, utility-grade commercial SC cables and magnets.
Robotic winding machines combine mechanical and process design, specialized sensing and observers, and state-of-the-art optimization and control sequencing to carefully manipulate individual fragile SCs, especially HTS, into previously unattainable, complex geometries that are electrically equivalent to commercially available conventional conductor devices.
Keywords: automated winding manufacturing, high temperature superconductor, magnet, power cable
Procedia PDF Downloads 140
295 Mapping and Mitigation Strategy for Flash Flood Hazards: A Case Study of Bishoftu City
Authors: Berhanu Keno Terfa
Abstract:
Flash floods are among the most dangerous natural disasters and pose a significant threat to human life. They occur frequently and can cause extensive damage to homes, infrastructure, and ecosystems while also claiming lives. Although flash floods can happen anywhere in the world, their impact is particularly severe in developing countries, owing to limited financial resources, inadequate drainage systems, substandard housing, a lack of early warning systems, and insufficient preparedness. To address these challenges, a comprehensive study was undertaken to analyze and map flood inundation using geographic information system (GIS) techniques, considering the various factors that contribute to flash flood resilience, and to develop effective mitigation strategies. Key factors in the analysis included slope, drainage density, elevation, curve number, rainfall patterns, land-use/cover classes, and soil data. These variables were computed on ArcGIS software platforms; Sentinel-2 satellite imagery (10 m resolution) was used for land-use/cover classification, while slope, elevation, and drainage density were derived from the 12.5 m resolution ALOS PALSAR DEM, and other relevant data were obtained from the Ethiopian Meteorological Institute. By integrating and standardizing the collected data in GIS and employing the analytic hierarchy process (AHP) technique, the study delineated flash flood hazard (FFH) zones and generated a land suitability map for urban agriculture. The FFH model identified four risk levels in Bishoftu City: very high (2106.4 ha), high (10464.4 ha), moderate (1444.44 ha), and low (0.52 ha), accounting for 15.02%, 74.7%, 10.1%, and 0.004% of the total area, respectively. The results underscore the vulnerability of many residential areas in Bishoftu City, particularly the previously developed central areas.
Accurate spatial representation of flood-prone areas and potential agricultural zones is crucial for designing effective flood mitigation and agricultural production plans. The findings emphasize the importance of flood risk mapping in raising public awareness, demonstrating vulnerability, strengthening financial resilience, protecting the environment, and informing policy decisions. Given the susceptibility of Bishoftu City to flash floods, it is recommended that the municipality prioritize urban agriculture adaptation, proper settlement planning, and drainage network design.
Keywords: remote sensing, flash flood hazards, Bishoftu, GIS
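The abstract does not publish its pairwise comparison judgments; as an illustration of the AHP step used to weight the flood factors, the following sketch derives priority weights from a pairwise comparison matrix via the geometric-mean approximation (the 3×3 matrix and the choice of factors are hypothetical):

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights with the geometric-mean
    (logarithmic least squares) method: take the geometric mean of
    each row, then normalize so the weights sum to 1."""
    n = len(pairwise)
    gmeans = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Hypothetical pairwise comparisons of three flood factors
# (slope vs drainage density vs rainfall, Saaty's 1-9 scale)
matrix = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
w = ahp_weights(matrix)
print([round(x, 3) for x in w])  # → [0.648, 0.23, 0.122]
```

In a real workflow these weights would then multiply the reclassified raster layers (slope, drainage density, rainfall, etc.) before the weighted overlay that produces the hazard zones.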
Procedia PDF Downloads 35
294 “Japan’s New Security Outlook: Implications for the US-Japan Alliance”
Authors: Agustin Maciel-Padilla
Abstract:
This paper explores the most significant change to Japan’s security strategy since the end of World War II: Prime Minister Fumio Kishida’s government publication, in late 2022, of three policy documents (the National Security Strategy [NSS], the National Defense Strategy, and the Defense Buildup Program) that propose to expand the country’s military capabilities and to increase military spending over a five-year period. These policies represent a remarkable transformation of the defense-oriented policy Japan has followed since 1946. The proposals have been under analysis and debate since they were announced, reflecting Japan’s historic ambition to strengthen its deterrence capabilities in an increasingly complex regional security environment. Even though this new defense posture has attracted significant international attention, it is far from a done deal: a wide variety of political and economic issues stand between the vision and its implementation. Japan currently faces its most dangerous security environment since the end of World War II, and this situation has led Japan to intensify its dialogue with the United States and to re-evaluate deterrence in the face of a rapidly worsening security environment, a changing balance of power in East Asia, and the arrival of a new era of “great power competition”. Japan’s new documents, for instance, identify China and North Korea as posing, respectively, a strategic challenge and an imminent threat. Japan has also noted that Russia’s invasion of Ukraine has eroded the foundation of the international order, judging that Russia’s aggression was possible because Ukraine’s defense capability was insufficient for effective deterrence.
Moreover, Japan’s call for “counterstrike capabilities” stems from the recognition that Chinese and North Korean ballistic and cruise missiles could overwhelm Japan’s air and missile defense systems, creating an urgent need to strengthen deterrence and resilience. In this context, this paper focuses on the impact of these changes on the US-Japan alliance. Adapting the alliance to Tokyo’s new ambitions and capabilities could be critical for updating the traditional protection/access-to-bases arrangement, interoperability, and joint command and control, as well as the security-economy nexus. While China is Japan’s largest trading partner, and trade between the two has been growing, the US-Japan economic relationship has developed more slowly, even as US-Japan security cooperation has strengthened significantly in recent years.
Keywords: us-japan alliance, japan security, great power competition, interoperability
Procedia PDF Downloads 65
293 Nano-Immunoassay for Diagnosis of Active Schistosomal Infection
Authors: Manal M. Kame, Hanan G. El-Baz, Zeinab A. Demerdash, Engy M. Abd El-Moneem, Mohamed A. Hendawy, Ibrahim R. Bayoumi
Abstract:
There is a constant need to improve the performance of current diagnostic assays for schistosomiasis and to develop innovative testing strategies that meet new testing challenges. This study aims to increase the diagnostic efficiency of monoclonal antibody (MAb)-based antigen detection assays by using gold nanoparticles conjugated with specific anti-Schistosoma mansoni monoclonal antibodies. Several hybridoma cell lines secreting MAbs against adult worm tegumental Schistosoma antigen (AWTA) were produced at the Immunology Department of Theodor Bilharz Research Institute and preserved in liquid nitrogen. One MAb (6D/6F) was chosen for this study because of its high reactivity to schistosome antigens, with the highest optical density (OD) values. Gold nanoparticles (AuNPs) were functionalized and conjugated with MAb (6D/6F). The study was conducted on serum samples from 116 subjects: 71 patients with S. mansoni eggs in their stool samples (gp1), 25 with other parasites (gp2), and 20 healthy negative controls (gp3). Patients in gp1 were further subdivided according to the egg count in their stool samples into light infection (≤ 50 eggs per gram (epg), n = 17), moderate infection (51-100 epg, n = 33), and severe infection (> 100 epg, n = 21). Sandwich ELISA with AuNPs-MAb was performed to detect circulating schistosomal antigen (CSA) levels in the serum samples of all groups, and the results were compared with those of the MAb/sandwich ELISA system. Results: The AuNPs-MAb/ELISA system reached a lower detection limit of 10 ng/ml, compared with 85 ng/ml for the MAb/ELISA, and the optimal concentration of AuNPs-MAb was 12-fold lower than that of the MAb/ELISA system for CSA detection. The sensitivity and specificity of the sandwich ELISA for CSA detection using AuNPs-MAb were 100% and 97.8%, respectively, compared with 87.3% and 93.38% for the MAb/ELISA system.
CSA was detected by the AuNPs-MAb/ELISA system, but not by the MAb/ELISA system, in 9 of the 71 S. mansoni-infected patients; all nine had egg counts below 50 epg of feces (light infections). ROC curve analyses revealed that the sandwich ELISA using gold-MAb was an excellent diagnostic tool that could differentiate Schistosoma patients from healthy controls, whereas the sandwich ELISA using MAb alone was not accurate enough, as it failed to recognize nine of the 71 patients with light infections. Conclusion: Our data demonstrate that loading gold nanoparticles with MAb (6D/6F) increases the sensitivity and specificity of the sandwich ELISA for CSA detection, so that active (early) and light infections can be detected easily. Moreover, this conjugation decreases the amount of MAb consumed in the assay and lowers its cost. The significant positive correlation detected between ova count (intensity of infection) and OD reading in the gold-MAb sandwich ELISA enables its use to assess the severity of infection and to follow up patients after treatment for monitoring of cure.
Keywords: schistosomiasis, nanoparticles, gold, monoclonal antibodies, ELISA
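The reported sensitivity and specificity follow the standard confusion-matrix definitions (true positives over all egg-positive patients, true negatives over all controls). As a sketch, with invented OD readings and a hypothetical cutoff rather than the study's data, they can be computed as:

```python
def sensitivity_specificity(scores, labels, threshold):
    """Call a sample positive when its assay reading exceeds the cutoff,
    then compare against the reference standard (label 1 = diseased)."""
    tp = sum(1 for s, y in zip(scores, labels) if s > threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s <= threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s <= threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s > threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical OD readings; label 1 = egg-positive patient, 0 = control
od = [0.9, 0.8, 0.4, 0.7, 0.2, 0.1, 0.3, 0.15]
truth = [1, 1, 1, 1, 0, 0, 0, 0]
sens, spec = sensitivity_specificity(od, truth, threshold=0.25)
print(sens, spec)  # → 1.0 0.75
```

A full ROC analysis, as in the study, sweeps the threshold over all observed readings and plots sensitivity against 1 - specificity at each cutoff.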
Procedia PDF Downloads 371
292 Inherent Difficulties in Countering Islamophobia
Authors: Imbesat Daudi
Abstract:
Islamophobia, which is a billion-dollar industry, is widespread, especially in the United States, Europe, India, Israel, and countries whose Muslim minorities are at odds with governmental policies. Hatred of Islam in the West did not evolve spontaneously; it was methodically created. Islamophobia's current format has been designed to spread on its own, find a space in the Western psyche, and resist eradication. The hatred has been sustained by neoconservative ideologues and their allies, supported by the mainstream media. Social scientists have evaluated how ideas spread, why an idea can go viral, and where new ideas find space in our brains, work made possible by advances in the computational power of software and computers. The spread of ideas, including Islamophobia, follows an S-shaped curve with three phases: an initial exploratory phase with a long lag period, an explosive phase if the ideas go viral, and a final phase in which the ideas find space in the human psyche. In the initial phase, ideas are quickly examined in a center in the prefrontal lobe. When an idea is deemed relevant, it is sent for evaluation to another center of the prefrontal lobe, where it is critically examined. Once it takes its final shape, the idea is sent as a final product to a center in the occipital lobe. This center cannot critically evaluate ideas; it can only defend them from critics. Counterarguments, no matter how scientific, are automatically rejected. Therefore, arguments that could be highly effective in the early phases become counterproductive once ideas are stored in the occipital lobe. Anti-Islamophobic intellectuals have done a very good job of countering Islamophobic arguments. However, they have not been as effective as the neoconservative ideologues who have promoted anti-Muslim rhetoric based on half-truths, misinformation, or outright lies.
The failure is partly due to the support pro-war activists receive from the mainstream media, state institutions, mega-corporations engaged in violent conflicts, and think tanks that supply Islamophobic arguments. However, there are also scientific reasons why anti-Islamophobic thinkers have been less effective: the dynamics of spreading ideas change once those ideas are stored in the occipital lobe. The human brain is incapable of further evaluating ideas it has accepted as its own; therefore, a different strategy is required to be effective. This paper examines 1) why anti-Islamophobic intellectuals have failed to change the minds of non-Muslims and 2) the steps for countering hatred. Simply put, a new strategy is needed that can effectively counteract hatred of Islam and Muslims. Islamophobia is a disease that requires strong measures. Fighting hatred is always a challenge, but by understanding why Islamophobia is taking root in the twenty-first century, one can succeed in challenging Islamophobic arguments. That will require a coordinated effort of intellectuals, writers, and the media.
Keywords: islamophobia, Islam and violence, anti-islamophobia, demonization of Islam
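The three-phase dynamic described in the abstract (long lag, explosive growth, saturation) is the classic S-shaped logistic adoption curve. A minimal sketch, with arbitrary rate and midpoint parameters chosen purely for illustration, shows the three phases numerically:

```python
import math

def adoption(t, k=1.0, t0=0.0):
    """Logistic (S-shaped) adoption curve: fraction of a population
    holding an idea at time t, with growth rate k and midpoint t0.
    Slow lag phase, explosive middle phase, then saturation."""
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

# Three phases: early lag, steep middle, late plateau
early, middle, late = adoption(-6), adoption(0), adoption(6)
print(round(early, 3), round(middle, 3), round(late, 3))  # → 0.002 0.5 0.998
```

The argument in the text is that counterarguments work in the lag phase but not on the plateau, once the idea is entrenched.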
Procedia PDF Downloads 48
291 Hydraulic Performance of Curtain Wall Breakwaters Based on Improved Moving Particle Semi-Implicit Method
Authors: Iddy Iddy, Qin Jiang, Changkuan Zhang
Abstract:
This paper addresses the hydraulic performance of curtain wall breakwaters as coastal protection structures, based on particle-method modelling. A curtain wall functions as a wave barrier: the vertical wall reflects a large part of the incident waves, a part is transmitted, and the remainder of the wave energy is dissipated through the eddy flows formed beneath the lower end of the plate. As a Lagrangian particle method with robust numerical representation capability, the moving particle semi-implicit (MPS) method has proven useful for the design of structures subject to free-surface hydrodynamic flows, such as wave breaking and overtopping. In this study, a vertical two-dimensional numerical model for simulating the violent flow associated with the interaction between curtain-wall breakwaters and progressive water waves is developed with the MPS method, in which a higher-precision pressure gradient model and a free-surface particle recognition model are proposed. The wave transmission, reflection, and energy dissipation of the vertical wall were examined experimentally and theoretically. With the particle-method numerical wave flume, very detailed velocity and pressure fields around the curtain walls under wave action can be computed at each calculation step, and the effect of different wave and structural parameters on the hydrodynamic characteristics was investigated. The simulated temporal profiles and distributions of velocity and pressure in the vicinity of the curtain-wall breakwaters are also compared with experimental data. The numerical investigation indicates that the incident wave is largely reflected from the structure, while large eddies or turbulent flows occurring beneath the curtain wall cause significant energy losses.
The improved MPS method shows good agreement between the numerical results and analytical/experimental data from related studies. It is thus verified that the improved pressure gradient model and free-surface particle recognition method enhance the stability and accuracy of the MPS model for water waves and marine structures. With further study, the MPS particle method can therefore reach a level of correctness appropriate for application in engineering practice.
Keywords: curtain wall breakwaters, free surface flow, hydraulic performance, improved MPS method
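The paper's improved pressure-gradient and surface-recognition models are not specified in the abstract, but the MPS framework they build on rests on a kernel weight function and the particle number density it defines (surface particles are typically flagged where the density falls below a fraction of its bulk value). A minimal sketch with the standard MPS kernel follows; the lattice spacing and effective radius are illustrative choices, not the paper's settings:

```python
def mps_weight(r, re):
    """Standard MPS kernel: w(r) = re/r - 1 for 0 < r < re, else 0."""
    return re / r - 1.0 if 0.0 < r < re else 0.0

def number_density(i, positions, re):
    """Particle number density n_i = sum over j != i of w(|r_j - r_i|)."""
    xi, yi = positions[i]
    n = 0.0
    for j, (xj, yj) in enumerate(positions):
        if j == i:
            continue
        r = ((xj - xi) ** 2 + (yj - yi) ** 2) ** 0.5
        n += mps_weight(r, re)
    return n

# Hypothetical uniform 2-D lattice, spacing l0, effective radius 2.1*l0
l0 = 0.01
pts = [(x * l0, y * l0) for x in range(5) for y in range(5)]
center = pts.index((2 * l0, 2 * l0))
print(round(number_density(center, pts, 2.1 * l0), 3))
```

For an interior particle of this lattice the density is the bulk value n0; a particle whose density drops well below n0 (commonly below about 0.97*n0) is recognized as lying on the free surface.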
Procedia PDF Downloads 149
290 Automated Transformation of 3D Point Cloud to BIM Model: Leveraging Algorithmic Modeling for Efficient Reconstruction
Authors: Radul Shishkov, Orlin Davchev
Abstract:
The digital era has revolutionized architectural practice, with building information modeling (BIM) emerging as a pivotal tool for architects, engineers, and construction professionals. However, the transition from traditional methods to BIM-centric approaches poses significant challenges, particularly for existing structures. This research introduces a technical approach to bridge this gap through the development of algorithms that automate the transformation of 3D point cloud data into detailed BIM models. The core of this research lies in applying algorithmic modeling and computational design methods to interpret and reconstruct point cloud data (a collection of data points in space, typically produced by 3D scanners) into comprehensive BIM models. This process involves complex stages of data cleaning, feature extraction, and geometric reconstruction, which are traditionally time-consuming and prone to human error. By automating these stages, our approach significantly enhances the efficiency and accuracy of creating BIM models of existing buildings. The proposed algorithms are designed to identify key architectural elements within point clouds, such as walls, windows, doors, and other structural components, and to translate these elements into their corresponding BIM representations. This includes integrating parametric modeling techniques to ensure that the generated BIM models are not only geometrically accurate but also embedded with essential architectural and structural information. Our methodology has been tested on several real-world case studies, demonstrating its capability to handle diverse architectural styles and complexities. The results show a substantial reduction in the time and resources required for BIM model generation while maintaining high levels of accuracy and detail.
This research contributes significantly to the field of architectural technology by providing a scalable and efficient solution for integrating existing structures into the BIM framework. It paves the way for more seamless, integrated workflows in renovation and heritage conservation projects, where accurately capturing existing conditions plays a critical role. The implications of this study extend beyond architectural practice, offering potential benefits in urban planning, facility management, and historic preservation.
Keywords: BIM, 3D point cloud, algorithmic modeling, computational design, architectural reconstruction
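The abstract does not disclose its segmentation algorithm, but a common building block for extracting planar elements such as walls and floors from point clouds is RANSAC plane fitting. The sketch below, run on a synthetic "wall plus noise" cloud with hypothetical parameters, is one way such an extraction step can look; it is not the authors' implementation:

```python
import random

def fit_plane(p1, p2, p3):
    """Plane through three points: returns unit normal (a, b, c) and
    offset d, with a*x + b*y + c*z + d = 0; None if points are collinear."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    a, b, c = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    norm = (a * a + b * b + c * c) ** 0.5
    if norm == 0:
        return None
    a, b, c = a / norm, b / norm, c / norm
    d = -(a * p1[0] + b * p1[1] + c * p1[2])
    return a, b, c, d

def ransac_plane(points, threshold=0.02, iterations=200, seed=0):
    """Return the plane with the most inliers and the inlier indices."""
    rng = random.Random(seed)
    best_plane, best_inliers = None, []
    for _ in range(iterations):
        plane = fit_plane(*rng.sample(points, 3))
        if plane is None:
            continue
        a, b, c, d = plane
        inliers = [i for i, (x, y, z) in enumerate(points)
                   if abs(a * x + b * y + c * z + d) < threshold]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = plane, inliers
    return best_plane, best_inliers

# Hypothetical scan: a wall at z = 0 plus a column of off-plane noise
wall = [(x * 0.1, y * 0.1, 0.0) for x in range(10) for y in range(10)]
noise = [(0.5, 0.5, 1.0 + k * 0.3) for k in range(10)]
plane, inliers = ransac_plane(wall + noise)
print(len(inliers))  # most of the 100 wall points should be inliers
```

In a full pipeline, detected planes would be classified (wall, floor, ceiling) by their normals and extents and then translated into parametric BIM elements.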
Procedia PDF Downloads 63