Search results for: thermal cycling machine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6336


1326 Artificial Intelligence in Melanoma Prognosis: A Narrative Review

Authors: Shohreh Ghasemi

Abstract:

Introduction: Melanoma is a complex disease with various clinical and histopathological features that impact prognosis and treatment decisions. Traditional methods of melanoma prognosis involve manual examination and interpretation of clinical and histopathological data by dermatologists and pathologists. However, the subjective nature of these assessments can lead to inter-observer variability and suboptimal prognostic accuracy. AI, with its ability to analyze vast amounts of data and identify patterns, has emerged as a promising tool for improving melanoma prognosis. Methods: A comprehensive literature search was conducted to identify studies that employed AI techniques for melanoma prognosis. The search included databases such as PubMed and Google Scholar, using keywords such as "artificial intelligence," "melanoma," and "prognosis." Studies published between 2010 and 2022 were considered. The selected articles were critically reviewed, and relevant information was extracted. Results: The review identified various AI methodologies utilized in melanoma prognosis, including machine learning algorithms, deep learning techniques, and computer vision. These techniques have been applied to diverse data sources, such as clinical images, dermoscopy images, histopathological slides, and genetic data. Studies have demonstrated the potential of AI in accurately predicting melanoma prognosis, including survival outcomes, recurrence risk, and response to therapy. AI-based prognostic models have shown comparable or even superior performance compared to traditional methods.

Keywords: artificial intelligence, melanoma, accuracy, prognosis prediction, image analysis, personalized medicine

Procedia PDF Downloads 65
1325 Modeling and Simulation of Ship Structures Using Finite Element Method

Authors: Javid Iqbal, Zhu Shifan

Abstract:

The development of unconventional ship structures and the adoption of lightweight materials have given a strong impulse to the finite element (FE) method, making it a general tool in ship design. This paper briefly presents modeling and analysis techniques for ship structures using the FE method under complex boundary conditions that are difficult to analyze with existing Ship Classification Society rules. During operation, all ships experience complex loading conditions. These loads are generally categorized into thermal, linear static, dynamic, and non-linear loads. The overall strength of the ship structure is analyzed using static FE analysis. The FE method is also suitable for considering the local loads generated by ballast tanks and cargo in addition to hydrostatic and hydrodynamic loads. Vibration analysis of a ship structure and its components can likewise be performed with the FE method, which helps in assessing the dynamic stability of the ship. The FE method has enabled better techniques for calculating the natural frequencies and mode shapes of a ship structure so that resonance can be avoided both globally and locally. Over the past few years, the ship industry has moved considerably toward ideal designs by solving complex engineering problems with the data stored in the FE model. This paper provides an overview of ship modeling methodology for FE analysis and its general applications. The historical background, the basic concept of FE, and the advantages and disadvantages of FE analysis are also reported, along with examples related to hull strength and structural components.
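For an undamped FE model, the natural-frequency calculation mentioned above reduces to the generalized eigenvalue problem K x = ω² M x. A minimal sketch, using a hypothetical two-degree-of-freedom idealization (the matrix values are illustrative, not taken from any ship model):

```python
import numpy as np

# Hypothetical 2-DOF mass-spring idealization; K and M would normally
# come from an assembled FE model of the hull structure.
K = np.array([[2000.0, -1000.0],
              [-1000.0, 1000.0]])   # stiffness matrix [N/m]
M = np.array([[10.0, 0.0],
              [0.0, 5.0]])          # lumped mass matrix [kg]

# Undamped free vibration: K x = omega^2 M x -> eigenvalues of M^-1 K.
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(M) @ K)
order = np.argsort(eigvals.real)
omegas = np.sqrt(eigvals[order].real)   # natural frequencies [rad/s]
freqs_hz = omegas / (2 * np.pi)         # natural frequencies [Hz]

for i, f in enumerate(freqs_hz, 1):
    print(f"mode {i}: {f:.2f} Hz")
```

The columns of `eigvecs` are the corresponding mode shapes; in a real model, resonance checks compare these frequencies against excitation orders from the propeller and main engine.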

Keywords: dynamic analysis, finite element methods, ship structure, vibration analysis

Procedia PDF Downloads 125
1324 Reimagine and Redesign: Augmented Reality Digital Technologies and 21st Century Education

Authors: Jasmin Cowin

Abstract:

Augmented reality digital technologies, big data, and the need for a teacher workforce able to meet the demands of a knowledge-based society are poised to lead to major changes in the field of education. This paper explores applications and educational use cases of augmented reality digital technologies for educational organizations during the Fourth Industrial Revolution. The Fourth Industrial Revolution requires vision, flexibility, and innovative educational conduits by governments and educational institutions to remain competitive in a global economy. Educational organizations will need to focus on teaching in and for a digital age to continue offering academic knowledge relevant to 21st-century markets and changing labor force needs. Implementation of contemporary disciplines will need to be embodied through learners’ active knowledge-making experiences while embracing ubiquitous accessibility. The power of distributed ledger technology promises major streamlining for educational record-keeping, degree conferrals, and authenticity guarantees. Augmented reality digital technologies hold the potential to restructure educational philosophies and their underpinning pedagogies thereby transforming modes of delivery. Structural changes in education and governmental planning are already increasing through intelligent systems and big data. Reimagining and redesigning education on a broad scale is required to plan and implement governmental and institutional changes to harness innovative technologies while moving away from the big schooling machine.

Keywords: fourth industrial revolution, artificial intelligence, big data, education, augmented reality digital technologies, distributed ledger technology

Procedia PDF Downloads 267
1323 Applicability of Overhangs for Energy Saving in Existing High-Rise Housing in Different Climates

Authors: Qiong He, S. Thomas Ng

Abstract:

Upgrading the thermal performance of the building envelope of existing residential buildings is an effective way to reduce heat gain or heat loss. The overhang is a common envelope-improvement device, as it cuts down solar heat gain and can thereby reduce the energy used for space cooling in summer. On the other hand, an overhang can increase the demand for indoor heating in winter, again because it lowers the solar heat gain. Overhangs therefore have different impacts on energy use in different climatic zones with different energy demands. To evaluate the impact of overhang devices on building energy performance under the different climates of China, an energy analysis model is built in the computer-based simulation program DesignBuilder, based on the data of a typical high-rise residential building. The energy simulation results show that, in regions which rely predominantly on space cooling, a single overhang can cut around 5% of the energy consumption of the case building in the stand-alone situation, or about 2% when the building is surrounded by other buildings, although it makes no contribution to energy reduction in the cold region. In regions with hot summers and cold winters, adding an overhang over the windows cuts around 4% and 1.8% of the energy use with and without adjoining buildings, respectively. The results indicate that the overhang may not be an effective shading device for reducing energy consumption in mixed-climate or cold regions.

Keywords: overhang, energy analysis, computer-based simulation, design builder, high-rise residential building, climate, BIM model

Procedia PDF Downloads 341
1322 Simulation of the Collimator Plug Design for Prompt-Gamma Activation Analysis in the IEA-R1 Nuclear Reactor

Authors: Carlos G. Santos, Frederico A. Genezini, A. P. Dos Santos, H. Yorivaz, P. T. D. Siqueira

Abstract:

Prompt-gamma activation analysis (PGAA) is a valuable technique for investigating the elemental composition of various samples. However, installing a PGAA system entails specific conditions, such as filtering the neutron beam according to the target and providing adequate shielding for both users and detectors. These requirements incur substantial costs, exceeding $100,000 including manpower. A cost-effective alternative is to leverage an existing neutron beam facility to create a hybrid system integrating PGAA and neutron tomography (NT). The IEA-R1 nuclear reactor at IPEN/USP has an NT facility with suitable conditions for adapting and implementing a PGAA device: it offers a slightly colder thermal neutron flux and already provides shielding for user protection. The key additional requirement is the design of detector shielding to mitigate the high gamma-ray background and safeguard the HPGe detector from neutron-induced damage. This study employs Monte Carlo simulations with the MCNP6 code to optimize the collimator plug for PGAA within the IEA-R1 NT facility. Three collimator models are proposed and simulated to assess their effectiveness in shielding the gamma and neutron radiation arising from fission. The aim is to obtain a focused prompt-gamma signal while shielding the ambient gamma radiation. The simulation results indicate that one of the proposed designs is particularly suitable for the PGAA-NT hybrid system.

Keywords: MCNP6.1, neutron, prompt-gamma ray, prompt-gamma activation analysis

Procedia PDF Downloads 53
1321 The Imminent Other in Anna Deavere Smith’s Performance

Authors: Joy Shihyi Huang

Abstract:

This paper discusses the concept of community in Anna Deavere Smith’s performance, one that challenges and explores existing notions of justice and the other. In contrast to unwavering assumptions of essentialism that have helped to propel a discourse on moral agency within the black community, Smith employs postmodern ideas in which the theatrical attributes of doubling and repetition are conceptualized as part of what Marvin Carlson coined as a ‘memory machine.’ Her dismissal of the need for linear time, such as that regulated by Aristotle’s The Poetics and its concomitant ethics, values, and emotions as a primary ontological and epistemological construct produced by the existing African American historiography, demonstrates an urgency to produce an alternative communal self to override metanarratives in which the African Americans’ lives are contained and sublated by specific historical confines. Drawing on Emmanuel Levinas’ theories in ethics, specifically his notion of ‘proximity’ and ‘the third,’ the paper argues that Smith enacts a new model of ethics by launching an acting method that eliminates the boundary of self and other. Defying psychological realism, Smith conceptualizes an approach to acting that surpasses the mere mimetic value of invoking a ‘likeness’ of an actor to a character, which as such, resembles the mere attribution of various racial or sexual attributes in identity politics. Such acting, she contends, reduces the other to a representation of, at best, an ultimate rendering of me/my experience. She instead appreciates ‘unlikeness,’ recognizes the unavoidable actor/character gap as a power that humbles the self, whose irreversible journey to the other carves out its own image.

Keywords: Anna Deavere Smith, Emmanuel Levinas, other, performance

Procedia PDF Downloads 143
1320 Improving Similarity Search Using Clustered Data

Authors: Deokho Kim, Wonwoo Lee, Jaewoong Lee, Teresa Ng, Gun-Ill Lee, Jiwon Jeong

Abstract:

This paper presents a method for improving object search accuracy using a deep learning model. A major obstacle to providing accurate similarity with deep learning is the huge amount of data required to train pairwise similarity scores (metrics), which is impractical to collect. Similarity scores are therefore usually trained on a relatively small dataset from a different domain, limiting the accuracy of the similarity measure. For this reason, this paper proposes a deep learning model that can be trained with a significantly smaller amount of data: clustered data, in which each cluster contains a set of visually similar images. To measure similarity distance with the proposed method, visual features of two images are extracted from intermediate layers of a convolutional neural network using various pooling methods, and the network is trained with pairwise similarity scores that are defined as zero for images in the same cluster. The proposed method outperforms state-of-the-art object similarity scoring techniques when evaluated on finding exact items, achieving an accuracy of 86.5% compared to 59.9% for the state-of-the-art technique. That is, an exact item can be found among four retrieved images with an accuracy of 86.5%, and the remaining retrieved images are likely to be similar products. The proposed method can therefore reduce the amount of training data by an order of magnitude while providing a reliable similarity metric.
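The feature-extraction and distance computation described above can be sketched as follows. The pooling over an intermediate activation map and the cosine distance are standard choices; the activation maps below are random stand-ins rather than outputs of a trained network:

```python
import numpy as np

def pooled_feature(activation_map, method="avg"):
    # Pool an intermediate CNN activation map of shape (H, W, C)
    # into a C-dimensional feature vector.
    if method == "avg":
        return activation_map.mean(axis=(0, 1))
    return activation_map.max(axis=(0, 1))  # max pooling

def similarity_distance(f1, f2):
    # Cosine distance: ~0 for near-identical features. Training drives
    # this score toward zero for image pairs from the same cluster.
    cos = f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2))
    return 1.0 - cos

rng = np.random.default_rng(0)
a = rng.random((7, 7, 64))               # stand-in activation map
b = a + 0.01 * rng.random((7, 7, 64))    # near-duplicate (same cluster)
c = rng.random((7, 7, 64))               # unrelated image

fa, fb, fc = (pooled_feature(x) for x in (a, b, c))
assert similarity_distance(fa, fb) < similarity_distance(fa, fc)
```

At retrieval time, the query feature is compared against the gallery features and the smallest distances are returned as the top matches.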

Keywords: visual search, deep learning, convolutional neural network, machine learning

Procedia PDF Downloads 204
1319 Predicting the Next Offensive Play Type to Maximize the Defense’s Chances of Success in the National Football League

Authors: Chris Schoborg, Morgan C. Wang

Abstract:

In the realm of the National Football League (NFL), both players and coaches invest substantial time and effort in meticulously analyzing the game footage of their opponents. The primary aim is to anticipate the actions of the opposing team. Defensive players and coaches are especially focused on deciphering their adversaries' intentions to effectively counter their strategies. Acquiring insights into the specific play type and its intended direction on the field would confer a significant competitive advantage. This study establishes pre-snap information as the cornerstone for predicting both the play type (e.g., deep pass, short pass, or run) and its spatial trajectory (right, left, or center). The dataset for this research spans regular-season NFL data for all 32 teams from 2013 to 2022. This dataset is acquired using the nflreadr package, which conveniently extracts play-by-play data from NFL games and imports it into the R environment as structured datasets. In this study, we employ a recently developed machine learning algorithm, XGBoost. The final predictive model achieves an impressive lift of 2.61, signifying that the presented model is 2.61 times more effective than random guessing. Such a model has the potential to markedly enhance defensive coaches' ability to formulate game plans and adequately prepare their players, thus mitigating the opposing offense's yardage and point gains.
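The lift figure can be illustrated with a small helper, under one common definition (model accuracy divided by the accuracy of uniform random guessing over the joint play-type/direction classes); the exact definition used in the study may differ, and the toy labels below are invented:

```python
def lift(y_true, y_pred, n_classes):
    # Lift = model accuracy / accuracy of uniform random guessing.
    hits = sum(t == p for t, p in zip(y_true, y_pred))
    return hits * n_classes / len(y_true)

# Toy check: 9 joint classes (3 play types x 3 directions); if 29 of
# 100 predictions are correct, the lift is 2.61, the scale reported above.
classes = [f"{pt}-{d}" for pt in ("deep", "short", "run")
           for d in ("left", "center", "right")]
y_true = [classes[i % 9] for i in range(100)]
y_pred = [y_true[i] if i < 29 else "miss" for i in range(100)]
print(lift(y_true, y_pred, n_classes=9))  # 2.61
```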

Keywords: lift, NFL, sports analytics, XGBoost

Procedia PDF Downloads 47
1318 MXene-Based Self-Sensing of Damage in Fiber Composites

Authors: Latha Nataraj, Todd Henry, Micheal Wallock, Asha Hall, Christine Hatter, Babak Anasori, Yury Gogotsi

Abstract:

Multifunctional composites with enhanced strength and toughness for superior damage tolerance are essential for advanced aerospace and military applications. Detection of structural changes prior to visible damage may be achieved by incorporating fillers with tunable properties, such as two-dimensional (2D) nanomaterials with high aspect ratios and abundant surface-active sites. While 2D graphene, with its large surface area, good mechanical properties, and high electrical conductivity, seems ideal as a filler, its single-atom thickness can lead to bending and rolling during processing, requiring post-processing to bond it to polymer matrices. Lately, an emerging family of 2D transition metal carbides and nitrides, MXenes, has attracted much attention since their discovery in 2011. Metallic electronic conductivity and good mechanical properties, even with increased polymer content, coupled with hydrophilicity, make MXenes good candidate filler materials in polymer composites and exceptional multifunctional damage indicators in composites. Here, we systematically study MXene (Ti₃C₂) coatings on glass fibers for self-sensing in fiber-reinforced polymer composites, using microscopy and micromechanical testing. Further testing is in progress to investigate local variations in the optical, acoustic, and thermal properties within the damage sites in response to strain caused by mechanical loading.

Keywords: damage sensing, fiber composites, MXene, self-sensing

Procedia PDF Downloads 112
1317 Perforation Analysis of the Aluminum Alloy Sheets Subjected to High Rate of Loading and Heated Using Thermal Chamber: Experimental and Numerical Approach

Authors: A. Bendarma, T. Jankowiak, A. Rusinek, T. Lodygowski, M. Klósak, S. Bouslikhane

Abstract:

An analysis of the mechanical characteristics and dynamic behavior of aluminum alloy sheet in perforation tests, based on experiments coupled with numerical simulation, is presented. Impact problems (penetration and perforation) of metallic plates have been of interest for a long time, and experimental, analytical, and numerical studies have been carried out to analyze the perforation process in detail. Based on these approaches, the ballistic properties of the material have been studied. A laser sensor is used during the experiments to measure the initial and residual velocities, from which the ballistic curve and the ballistic limit are obtained. The energy balance is also reported, together with the energy absorbed by the aluminum, alongside the ballistic curve and ballistic limit. A high-speed camera helps to estimate the failure time and to calculate the impact force. A wide range of initial impact velocities, from 40 up to 180 m/s, has been covered during the tests. The mass of the conical-nose projectile is 28 g, its diameter is 12 mm, and the thickness of the aluminum sheet is 1.0 mm. The ABAQUS/Explicit finite element code has been used to simulate the perforation process. The numerically obtained ballistic curve is compared with and verified against the experimental one, and the failure patterns are presented using the optimal mesh densities, which provide stability of the results. Good agreement between the numerical and experimental results is observed.
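The energy balance mentioned above rests on the projectile's kinetic-energy loss between impact and exit. A minimal sketch, using the 28 g projectile mass quoted in the abstract and an otherwise hypothetical data point:

```python
M_PROJ = 0.028  # projectile mass [kg], as quoted above

def absorbed_energy(v_impact, v_residual, m=M_PROJ):
    # Energy absorbed by the plate = kinetic-energy loss of the projectile [J].
    # At the ballistic limit the residual velocity is zero, so the plate
    # absorbs the full impact energy 0.5*m*v^2.
    return 0.5 * m * (v_impact**2 - v_residual**2)

# Hypothetical data point: 120 m/s impact, 95 m/s residual velocity.
print(f"{absorbed_energy(120.0, 95.0):.2f} J")  # 75.25 J
```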

Keywords: aluminum alloy, ballistic behavior, failure criterion, numerical simulation

Procedia PDF Downloads 302
1316 Ghost Frequency Noise Reduction through Displacement Deviation Analysis

Authors: Paua Ketan, Bhagate Rajkumar, Adiga Ganesh, M. Kiran

Abstract:

Low gear noise is an important sound quality feature in modern passenger cars. Annoying gear noise from the gearbox is influenced by the gear design, the gearbox shaft layout, manufacturing deviations in the components, assembly errors, and the mounting arrangement of the complete gearbox. Geometrical deviations in the form of profile and lead errors are often present on the flanks of the inspected gears. Ghost frequencies of a gear are very challenging to identify in the standard gear measurement and analysis process due to the small wavelengths involved. In this paper, gear whine noise occurring at non-integral multiples of the gear mesh frequency of a passenger car gearbox is investigated, and the root cause is identified using the displacement deviation analysis (DDA) method. The DDA method is applied to identify ghost frequency excitations on the flanks of gears arising from generation grinding. The frequency identified through DDA correlated with the frequency of vibration and noise on the end-of-line machine as well as in vehicle-level measurements. By applying the DDA method along with standard lead and profile measurement, gears with ghost frequency geometry deviations were identified on the production line, allowing defective parts to be removed and thereby eliminating ghost frequency noise from the vehicle. Further, displacement deviation analysis can be used in conjunction with manufacturing process simulation to arrive at suitable countermeasures for arresting the ghost frequencies.
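The distinguishing signature of a ghost frequency, an excitation order that is not an integer multiple of the gear-mesh order, can be illustrated with a synthetic per-revolution signal. The tooth count, ghost order, and amplitudes below are invented for the example:

```python
import numpy as np

teeth = 37                       # gear-mesh order = number of teeth (illustrative)
samples_per_rev, n_revs = 512, 8
t = np.arange(samples_per_rev * n_revs) / samples_per_rev  # time in revolutions

signal = (1.0 * np.sin(2 * np.pi * teeth * t)     # gear-mesh order content
          + 0.3 * np.sin(2 * np.pi * 51.0 * t))   # ghost order at 51 cycles/rev

# Order spectrum: amplitude vs. cycles per revolution.
spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
orders = np.fft.rfftfreq(len(signal), d=1 / samples_per_rev)

# Peaks that are NOT integer multiples of the mesh order are ghost orders.
peaks = orders[spectrum > 0.05]
ghost = [float(o) for o in peaks
         if abs(o / teeth - round(o / teeth)) > 1e-6]
print(ghost)  # [51.0]
```

In practice the same order-domain view is computed from end-of-line vibration data, and DDA traces the non-integer peak back to the flank waviness left by grinding.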

Keywords: displacement deviation analysis, gear whine, ghost frequency, sound quality

Procedia PDF Downloads 133
1315 A Novel Machining Method and Tool-Path Generation for Bent Mandrel

Authors: Hong Lu, Yongquan Zhang, Wei Fan, Xiangang Su

Abstract:

Bent mandrels have been widely used as precision moulds in the automobile, shipping, and aviation industries. To improve the versatility and efficiency of turning bent mandrels with a fixed rotational center, an instantaneous machining model based on the cutting parameters and machine dimensions is proposed in this paper. A spiral-like tool-path generation approach for the non-axisymmetric turning of bent mandrels is also developed, to address the part-to-part repeatability error of the existing turning model. The actual cutter-location points are calculated from cutter-contact points, which are obtained by sweeping the spiral using an equal-arc-length segmentation principle in a polar coordinate system. A tool offset to avoid interference between the tool and the workpiece is also considered in the machining model. Depending on the spindle rotational angle, synchronized control of the X-axis, Z-axis, and C-axis is adopted to generate the tool path of the turning process. A simulation method is developed to generate the NC program according to the presented model, including the calculation of cutter-location points and the generation of the cutting tool path. For the example bent mandrel considered, the maximum offset of the center axis is 4 mm in 3D space. Experimental results verify that the machining model and turning method are appropriate for the characteristics of bent mandrels.
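The equal-arc-length segmentation in polar coordinates can be sketched for an Archimedean spiral r = a + bθ: stepping the angle by Δθ = Δs / √(r² + b²) yields segments of approximately equal arc length Δs. All numeric values here are illustrative, not taken from the machining model:

```python
import math

def spiral_points(a, b, delta_s, n_points):
    # Archimedean spiral r = a + b*theta, stepped so each segment has
    # (approximately) equal arc length delta_s.
    pts, theta = [], 0.0
    for _ in range(n_points):
        r = a + b * theta
        pts.append((r * math.cos(theta), r * math.sin(theta)))
        # ds = sqrt(r^2 + (dr/dtheta)^2) dtheta -> dtheta = ds / sqrt(r^2 + b^2)
        theta += delta_s / math.hypot(r, b)
    return pts

pts = spiral_points(a=5.0, b=0.5, delta_s=0.2, n_points=50)
# Consecutive chord lengths approximate the target arc length of 0.2.
d01 = math.dist(pts[0], pts[1])
```

In the actual process these points would be cutter-contact points, later converted to cutter-location points by applying the tool offset.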

Keywords: bent mandrel, instantaneous machining model, simulation method, tool-path generation

Procedia PDF Downloads 327
1314 Thermally Stable Nanocrystalline Aluminum Alloys Processed by Mechanical Alloying and High Frequency Induction Heat Sintering

Authors: Hany R. Ammar, Khalil A. Khalil, El-Sayed M. Sherif

Abstract:

As-received metal powders were used to synthesize bulk nanocrystalline Al, Al-10%Cu, and Al-10%Cu-5%Ti alloys by mechanical alloying and high frequency induction heat sintering (HFIHS). The current study investigated the influence of milling time and ball-to-powder (BPR) weight ratio on the microstructural constituents and mechanical properties of the processed materials. Powder consolidation was carried out by high frequency induction heat sintering, in which the processed metal powders were sintered into a dense and strong bulk material. The sintering conditions were as follows: heating rate of 350°C/min, sintering time of 4 minutes, sintering temperature of 400°C, applied pressure of 750 kgf/cm² (100 MPa), and cooling rate of 400°C/min; the process was carried out under a vacuum of 10⁻³ Torr. The powders and the bulk samples were characterized using XRD and FEGSEM techniques. The mechanical properties were evaluated at temperatures of 25°C, 100°C, 200°C, 300°C, and 400°C to study the thermal stability of the processed alloys. The bulk nanocrystalline Al, Al-10%Cu, and Al-10%Cu-5%Ti alloys displayed extremely high hardness values even at elevated temperatures. The Al-10%Cu-5%Ti alloy displayed the highest hardness values at room and elevated temperatures, which is related to the presence of Ti-containing phases such as Al₃Ti and AlCu₂Ti; these phases are thermally stable and retain high hardness at elevated temperatures up to 400°C.

Keywords: nanocrystalline aluminum alloys, mechanical alloying, hardness, elevated temperatures

Procedia PDF Downloads 444
1313 Early Diagnosis and Treatment of Cancer Using Synthetic Cationic Peptide

Authors: D. J. Kalita

Abstract:

Cancer is one of the prime causes of early death worldwide. Mutations of genes involved in DNA repair and damage response, such as the BRCA2 (breast cancer gene 2) gene, can be detected efficiently by PCR-RFLP, enabling early breast cancer diagnosis and the adoption of a suitable method of treatment. Host defense peptides can be used as blueprints for the design and synthesis of novel anticancer drugs that avoid the side effects of conventional chemotherapy and chemoresistance. In a canine mammary tumour sample, the change at nucleotide position 392 of a → c in the BRCA2 gene (exon 7) leads to the creation of a new restriction site for the SsiI restriction enzyme; this SNP may serve as a marker for the detection of canine mammary tumours. A support vector machine (SVM) algorithm was used to design and predict the anticancer peptide from the mature functional peptide. An MTT assay of the MCF-7 cell line 48 hours post treatment showed an increase in the number of rounded cells compared with untreated control cells. The ability of the synthesized peptide to induce apoptosis in MCF-7 cells was further investigated by staining the cells with the fluorescent dye Hoechst stain solution, which allows evaluation of the nuclear morphology. Numerous cells with dense, pyknotic nuclei (brighter fluorescence) were observed in treated, but not in control, MCF-7 cells when viewed under an inverted phase-contrast microscope. Thus, PCR-RFLP is an attractive approach for early diagnosis, and synthetic cationic peptides can be used for the treatment of canine mammary tumours.
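The kind of sequence descriptors an SVM-based anticancer-peptide predictor typically relies on, such as net cationic charge and hydrophobic fraction, can be sketched as follows. The residue groupings are standard, while the example sequence is a hypothetical toy peptide, not the peptide from this study:

```python
POSITIVE = set("KR")          # cationic residues (His ignored at neutral pH)
NEGATIVE = set("DE")          # anionic residues
HYDROPHOBIC = set("AVILMFWC")

def peptide_features(seq):
    # Two simple descriptors often fed to an SVM classifier:
    # net charge and fraction of hydrophobic residues.
    charge = sum(aa in POSITIVE for aa in seq) - sum(aa in NEGATIVE for aa in seq)
    hydrophobic_frac = sum(aa in HYDROPHOBIC for aa in seq) / len(seq)
    return charge, hydrophobic_frac

charge, frac = peptide_features("KWKLFKKIGAVLKVL")  # hypothetical toy sequence
print(charge, round(frac, 2))  # 5 0.6
```

A trained SVM would take a vector of such features per candidate peptide and return a predicted anticancer/non-anticancer label.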

Keywords: cancer, cationic peptide, host defense peptides, breast cancer genes

Procedia PDF Downloads 78
1312 A Collective Intelligence Approach to Safe Artificial General Intelligence

Authors: Craig A. Kaplan

Abstract:

If AGI proves to be a “winner-take-all” scenario where the first company or country to develop AGI dominates, then the first AGI must also be the safest. The safest, and fastest, path to Artificial General Intelligence (AGI) may be to harness the collective intelligence of multiple AI and human agents in an AGI network. This approach has roots in seminal ideas from four of the scientists who founded the field of Artificial Intelligence: Allen Newell, Marvin Minsky, Claude Shannon, and Herbert Simon. Extrapolating key insights from these founders of AI, and combining them with the work of modern researchers, results in a fast and safe path to AGI. The seminal ideas discussed are: 1) Society of Mind (Minsky), 2) Information Theory (Shannon), 3) Problem Solving Theory (Newell & Simon), and 4) Bounded Rationality (Simon). Society of Mind describes a collective intelligence approach that can be used with AI and human agents to create an AGI network. Information theory helps address the critical issue of how an AGI system will increase its intelligence over time. Problem Solving Theory provides a universal framework that AI and human agents can use to communicate efficiently, effectively, and safely. Bounded Rationality helps us better understand not only the capabilities of SuperIntelligent AGI but also how humans can remain relevant in a world where the intelligence of AGI vastly exceeds that of its human creators. Each key idea can be combined with recent work in the fields of Artificial Intelligence, Machine Learning, and Large Language Models to accelerate the development of a working, safe, AGI system.

Keywords: AI agents, collective intelligence, Minsky, Newell, Shannon, Simon, AGI, AGI safety

Procedia PDF Downloads 74
1311 Identification, Isolation and Characterization of Unknown Degradation Products of Cefprozil Monohydrate by HPTLC

Authors: Vandana T. Gawande, Kailash G. Bothara, Chandani O. Satija

Abstract:

The present research work aimed to determine the stability of cefprozil monohydrate (CEFZ) under the stress degradation conditions recommended by the International Conference on Harmonization (ICH) guideline Q1A (R2). Forced degradation studies were carried out under hydrolytic, oxidative, photolytic, and thermal stress conditions, and the drug was found to be susceptible to degradation under all of them. Separation was carried out using a high-performance thin-layer chromatography (HPTLC) system. Aluminium plates pre-coated with silica gel 60 F254 were used as the stationary phase. The mobile phase consisted of ethyl acetate:acetone:methanol:water:glacial acetic acid (7.5:2.5:2.5:1.5:0.5 v/v). Densitometric analysis was carried out at 280 nm. The system gave a compact spot for cefprozil monohydrate (Rf 0.45). Linear regression analysis showed a good linear relationship over the concentration range of 200-5,000 ng/band for cefprozil monohydrate. Percent recovery of the drug was in the range of 98.78-101.24%. The method was found to be reproducible, with a percent relative standard deviation (%RSD) for intra- and inter-day precision of < 1.5% over the said concentration range. The method was validated for precision, accuracy, specificity, and robustness, and has been successfully applied to the analysis of the drug in tablet dosage form. Three unknown degradation products formed under the various stress conditions were isolated by preparative HPTLC and characterized by mass spectroscopic studies.

Keywords: cefprozil monohydrate, degradation products, HPTLC, stress study, stability indicating method

Procedia PDF Downloads 291
1310 Can the Intervention of SCAMPER Bring about Changes of Neural Activation While Taking Creativity Tasks?

Authors: Yu-Chu Yeh, WeiChin Hsu, Chih-Yen Chang

Abstract:

Substitution, combination, modification, putting to other uses, elimination, and rearrangement (SCAMPER) is regarded as an effective technique that provides a structured way to help people produce creative ideas and solutions. Although some neuroscience studies of creativity training have been conducted, none has focused on SCAMPER. This study therefore examined whether learning SCAMPER through video tutorials would result in alterations of neural activation. Thirty college students were randomly assigned to an experimental group or a control group. The experimental group watched SCAMPER videos, whereas the control group watched natural-scene videos, which were regarded as neutral stimulation material. Each participant was scanned in a functional magnetic resonance imaging (fMRI) machine while undertaking a creativity test before and after watching the videos. A two-way ANOVA was used to analyze the interaction between group (experimental; control) and task (C task; M task; X task). The results revealed that the left precuneus was significantly activated in the group-by-task interaction as well as in the main effect of group. Furthermore, compared with the control group, the experimental group showed greater activation in the default mode network (left precuneus and left inferior parietal cortex) and the motor network (left postcentral gyrus and left supplementary motor area). The findings suggest that SCAMPER training may facilitate creativity through stimulation of the default mode network and the motor network.

Keywords: creativity, default mode network, neural activation, SCAMPER

Procedia PDF Downloads 94
1309 Control of Airborne Aromatic Hydrocarbons over TiO2-Carbon Nanotube Composites

Authors: Joon Y. Lee, Seung H. Shin, Ho H. Chun, Wan K. Jo

Abstract:

Polyvinyl acetate (PVA)-based titania (TiO₂)-carbon nanotube composite nanofibers (PVA-TCCNs) with various PVA-to-solvent ratios, and PVA-based TiO₂ composite nanofibers (PVA-TNs), were synthesized using an electrospinning process followed by thermal treatment. The photocatalytic activities of these nanofibers for the degradation of airborne monocyclic aromatics under visible-light irradiation were examined. This study focuses on applying these photocatalysts to the degradation of the target compounds at sub-part-per-million indoor air concentrations. The photocatalysts were characterized using scanning electron microscopy, X-ray diffraction, ultraviolet-visible spectroscopy, and Fourier-transform infrared spectroscopy. For all target compounds, the PVA-TCCNs showed photocatalytic degradation efficiencies superior to those of the reference PVA-TN. Specifically, the average photocatalytic degradation efficiencies for benzene, toluene, ethylbenzene, and o-xylene (BTEX) obtained using the PVA-TCCNs with a PVA-to-solvent ratio of 0.3 (PVA-TCCN-0.3) were 11%, 59%, 89%, and 92%, respectively, whereas those observed using the PVA-TNs were 5%, 9%, 28%, and 32%, respectively. PVA-TCCN-0.3 displayed the highest photocatalytic degradation efficiency for BTEX, suggesting the existence of an optimal PVA-to-solvent ratio for the synthesis of PVA-TCCNs. The average photocatalytic efficiencies for BTEX decreased from 11% to 4%, 59% to 18%, 89% to 37%, and 92% to 53%, respectively, when the flow rate was increased from 1.0 to 4.0 L min⁻¹. In addition, the average photocatalytic efficiencies for BTEX decreased from 11% to ~0%, 59% to 3%, 89% to 7%, and 92% to 13%, respectively, when the input concentration was increased from 0.1 to 1.0 ppm. The prepared PVA-TCCNs were effective for purifying airborne aromatics at indoor concentration levels, particularly when the operating conditions were optimized.
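The degradation-efficiency figures quoted above follow from the standard inlet/outlet concentration ratio. A one-line sketch with an illustrative data point (the concentrations are invented, chosen only to land on the scale of the toluene figure):

```python
def degradation_efficiency(c_in, c_out):
    # Photocatalytic degradation efficiency [%] from inlet and outlet
    # concentrations (same units, e.g. ppm).
    return 100.0 * (c_in - c_out) / c_in

# Illustrative: 0.100 ppm toluene in, 0.041 ppm out -> 59% removal.
print(f"{degradation_efficiency(0.100, 0.041):.0f}%")  # 59%
```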

Keywords: mixing ratio, nanofiber, polymer, reference photocatalyst

Procedia PDF Downloads 364
1308 Fluid-Structure Interaction Study of Fluid Flow past Marine Turbine Blade Designed by Using Blade Element Theory and Momentum Theory

Authors: Abu Afree Andalib, M. Mezbah Uddin, M. Rafiur Rahman, M. Abir Hossain, Rajia Sultana Kamol

Abstract:

This paper deals with the analysis of flow past a marine turbine blade designed using blade element theory and momentum theory for use in the field of renewable energy. The designed blade is analyzed for various parameters using the FSI module of Ansys. Computational fluid dynamics (CFD) is used to study the fluid flow past the blade and other fluidic phenomena such as lift, drag, pressure differentials, and energy dissipation in water. The finite element analysis (FEA) module of Ansys is used to analyze structural parameters such as stress and stress density, localization points, deflection, and force propagation. A fine mesh is used in every case for greater accuracy in the results, within the limits of the available computational power. The relevance of design, search, and optimization with respect to complex fluid flow and structural modeling is considered and analyzed, as is design optimization for minimum drag force using the Ansys Adjoint Solver module. A graphical comparison of the above-mentioned parameters using CFD and FEA, and subsequently the FSI technique, shows significant conformity between the two sets of results.
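The momentum-theory side of the design can be sketched with the one-dimensional actuator-disk power relation; the rotor radius, current speed, seawater density, and Betz-optimal induction factor below are illustrative assumptions, not values from the paper:

```python
import math

def actuator_disk_power(rho, area, v, a):
    """Ideal turbine power from 1D momentum (actuator disk) theory
    with axial induction factor a: P = 2 * rho * A * v^3 * a * (1 - a)^2."""
    return 2.0 * rho * area * v**3 * a * (1.0 - a) ** 2

# Hypothetical marine-turbine numbers: seawater density (kg/m^3),
# 1 m rotor radius, 2 m/s current, Betz-optimal induction a = 1/3.
rho, r, v = 1025.0, 1.0, 2.0
P = actuator_disk_power(rho, math.pi * r**2, v, 1.0 / 3.0)
print(round(P), "W")  # at a = 1/3 this recovers the Betz limit Cp = 16/27
```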

Keywords: blade element theory, computational fluid dynamics, finite element analysis, fluid-structure interaction, momentum theory

Procedia PDF Downloads 286
1307 Enhancement of Natural Convection Heat Transfer within Closed Enclosure Using Parallel Fins

Authors: F. A. Gdhaidh, K. Hussain, H. S. Qi

Abstract:

A numerical study of natural convection heat transfer in a water-filled cavity was carried out in 3D for a single-phase liquid cooling system using an array of parallel plate fins mounted on one wall of the cavity. The heat source, representing a computer CPU with dimensions of 37.5×37.5 mm, is mounted on a substrate. A cold plate serving as a heat sink is installed on the opposite vertical wall of the enclosure. The air flow inside the computer case is created by an exhaust fan; a turbulent air flow is assumed, and the k-ε model is applied. The fins are installed on the substrate to enhance heat transfer. The applied power ranges between 15 and 40 W. To determine the thermal behaviour of the cooling system, the effects of the heat input and the number of parallel plate fins are investigated. The results show that as the number of fins increases, the maximum heat source temperature decreases. However, beyond a critical fin number the temperature starts to increase, because the fins become too closely spaced and obstruct the water flow. The introduction of parallel plate fins reduces the maximum heat source temperature by 10% compared with the case without fins. The cooling system maintains the maximum chip temperature at 64.68 ℃ at a heat input of 40 W, which is much lower than the recommended chip limit temperature of 85 ℃, and hence the performance of the CPU is maintained.

Keywords: chips limit temperature, closed enclosure, natural convection, parallel plate, single phase liquid

Procedia PDF Downloads 256
1306 Bulk/Hull Cavitation Induced by Underwater Explosion: Effect of Material Elasticity and Surface Curvature

Authors: Wenfeng Xie

Abstract:

Bulk/hull cavitation evolution induced by an underwater explosion (UNDEX) near a free surface (bulk) or a deformable structure (hull) is numerically investigated using a multiphase compressible fluid solver coupled with a one-fluid cavitation model. A series of two-dimensional computations is conducted with varying material elasticity and surface curvature. Results suggest that material elasticity and surface curvature influence the peak pressures generated by the UNDEX shock and cavitation collapse, as well as the bulk/hull cavitation regions near the surface. Results also show that such effects can differ between bulk cavitation generated by UNDEX-free surface interaction and hull cavitation generated by UNDEX-structure interaction. More importantly, results demonstrate that shock wave focusing caused by a concave solid surface can lead to a larger cavitation region and thus intensify the cavitation reload. These findings can be linked to the strength and direction of the waves reflected from the structural surface and from the expanding bubble surface, which are functions of material elasticity and surface curvature. Shock wave focusing effects are also observed in axisymmetric simulations, but the pressure contours there are weaker than in the 2D simulations because of the difference in initial shock energy. The current method is limited to two-dimensional or axisymmetric applications. Moreover, thermal effects are neglected, and the liquid is not allowed to sustain tension in the cavitation model.

Keywords: cavitation, UNDEX, fluid-structure interaction, multiphase

Procedia PDF Downloads 172
1305 Change Detection Analysis on Support Vector Machine Classifier of Land Use and Land Cover Changes: Case Study on Yangon

Authors: Khin Mar Yee, Mu Mu Than, Kyi Lint, Aye Aye Oo, Chan Mya Hmway, Khin Zar Chi Winn

Abstract:

The dynamic changes of Land Use and Land Cover (LULC) in Yangon have generally resulted in improved human welfare and economic development over the last twenty years. Mapping LULC is crucially important for the sustainable development of the environment. However, it is difficult to obtain exact data on how environmental factors influence LULC at various scales, because the natural environment is composed of non-homogeneous surface features, so the satellite data also contain mixed pixels. The main objective of this study is to calculate the accuracy of change detection of LULC changes using Support Vector Machines (SVMs). The main data for this research work were satellite images from 1996, 2006, and 2015. Change detection statistics were computed to compile a detailed tabulation of changes between two classification images, and the SVM process was applied with a soft approach at the allocation stage as well as at the testing stage to achieve higher accuracy. The results showed that vegetation and cultivated area decreased (by a total of about 29% from 1996 to 2015) because of conversion to built-up area, which more than doubled (increasing by a total of about 30% from 1996 to 2015). The error matrix and confidence limits validated the resulting LULC mapping.
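A minimal sketch of the SVM allocation and error-matrix accuracy step, using synthetic per-pixel features in place of the actual satellite bands (the band values and class labels below are invented for illustration):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for per-pixel spectral features; real inputs would be
# the 1996/2006/2015 satellite bands.
n = 300
X = rng.normal(size=(n, 4))                    # 4 spectral bands per pixel
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 0 = vegetation, 1 = built-up

clf = SVC(kernel="rbf", C=1.0)                 # soft-margin allocation
clf.fit(X[:200], y[:200])
pred = clf.predict(X[200:])

# The error (confusion) matrix underpins the accuracy assessment.
print(confusion_matrix(y[200:], pred))
print("accuracy:", accuracy_score(y[200:], pred))
```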

Keywords: land use and land cover change, change detection, image processing, support vector machines

Procedia PDF Downloads 120
1304 Global Low Carbon Transitions in the Power Sector: A Machine Learning Archetypical Clustering Approach

Authors: Abdullah Alotaiq, David Wallom, Malcolm McCulloch

Abstract:

This study presents an archetype-based approach to designing effective strategies for low-carbon transitions in the power sector. To achieve global energy transition goals, a renewable energy transition is critical, and understanding diverse energy landscapes across different countries is essential to design effective renewable energy policies and strategies. Using a clustering approach, this study identifies 12 energy archetypes based on the electricity mix, socio-economic indicators, and renewable energy contribution potential of 187 UN countries. Each archetype is characterized by distinct challenges and opportunities, ranging from high dependence on fossil fuels to low electricity access, low economic growth, and insufficient contribution potential of renewables. Archetype A, for instance, consists of countries with low electricity access, high poverty rates, and limited power infrastructure, while Archetype J comprises developed countries with high electricity demand and installed renewables. The study findings have significant implications for renewable energy policymaking and investment decisions, with policymakers and investors able to use the archetype approach to identify suitable renewable energy policies and measures and assess renewable energy potential and risks. Overall, the archetype approach provides a comprehensive framework for understanding diverse energy landscapes and accelerating decarbonisation of the power sector.
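The clustering step can be sketched as follows; the country indicators here are random stand-ins (the study's actual electricity-mix and socio-economic data are not given in the abstract), and only the 187-country, 12-archetype shape is taken from the text:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Hypothetical country-level indicators, e.g. [fossil share of mix,
# GDP per capita, electricity access %, renewable potential index].
X = rng.random((187, 4))

# Standardise so no indicator dominates, then cluster into 12 archetypes.
Xs = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(Xs)

print(sorted(set(labels)))  # archetype ids 0..11
```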

Keywords: fossil fuels, power plants, energy transition, renewable energy, archetypes

Procedia PDF Downloads 41
1303 Effect of Rice Husk Ash and Metakaolin on the Compressive Strengths of Ternary Cement Mortars

Authors: Olubajo Olumide Olu

Abstract:

This paper studies the effect of metakaolin (MK) and rice husk ash (RHA) on the compressive strength of ternary cement mortars at replacement levels up to 30%. The compressive strength tests of the blended cement mortars were conducted using a Tonic Technic compression testing machine. Nineteen ternary cement mortars were prepared, comprising ordinary Portland cement (OPC), rice husk ash (RHA), and metakaolin (MK) in different proportions. Ternary mortar prisms in which Portland cement was replaced by up to 30% were tested at various ages: 2, 7, 28, and 60 days. Results showed that the compressive strength of the cement mortars increased as the curing period lengthened, for both the OPC and the blended cement samples. The ternary cements' compressive strengths showed significant improvement over the control, especially beyond 28 days. This can be attributed to the slow pozzolanic reaction resulting from the formation of additional C-S-H from the interaction of the residual CH content with the silica available in the metakaolin and rice husk ash, providing significant strength gain at later ages. Results indicated that increasing the metakaolin content while the rice husk ash content was held constant led to an increment in compressive strength, attributable either to the high silica/alumina contribution to the matrix or to the C/S ratio in the cement matrix. Increasing the rice husk ash content while the metakaolin content was held constant led to an increment in compressive strength, attributable to the reactivity of the rice husk ash, followed by a decrement owing to the presence of unburnt carbon in the RHA matrix. The best compressive strength results were obtained at 10% cement replacement (5% RHA, 5% MK), 15% replacement (10% MK and 5% RHA), 20% replacement (15% MK and 5% RHA), 25% replacement (20% MK and 5% RHA), and 30% replacement (10%/20% MK and 20%/10% RHA), with the optimal combinations of either 15% or 20% MK with 5% RHA giving the best compressive strength of 40.5 MPa.
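The compressive strength values follow from peak load over the loaded face area. A one-line sketch, assuming the standard 40×40 mm loaded face of a mortar prism; the peak load is hypothetical, back-calculated from the reported 40.5 MPa:

```python
def mortar_compressive_strength(max_load_n, side_mm=40.0):
    """Compressive strength in MPa: peak load (N) over loaded area (mm^2),
    since 1 N/mm^2 == 1 MPa."""
    return max_load_n / (side_mm * side_mm)

# Hypothetical peak load of 64 800 N on a 40 x 40 mm face.
print(mortar_compressive_strength(64800))  # → 40.5
```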

Keywords: metakaolin, rice husk ash, compressive strength, ternary mortar, curing days

Procedia PDF Downloads 329
1302 Biodegradable Polymer Film Incorporated with Polyphenols for Active Packaging

Authors: Shubham Sharma, Swarna Jaiswal, Brendan Duffy, Amit Jaiswal

Abstract:

The key features of any active packaging film are its biodegradability and antimicrobial properties. Biological macromolecules such as the polyphenols ferulic acid (FA) and tannic acid (TA) are naturally found in plants such as grapes, berries, and tea. In this study, antimicrobial activity screening of several polyphenols was carried out using the minimal inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) against two gram-negative strains, Salmonella typhimurium and Escherichia coli, and two gram-positive strains, Staphylococcus aureus and Listeria monocytogenes. FA and TA showed strong antibacterial activity at low concentrations against both gram-positive and gram-negative bacteria. The selected polyphenols FA and TA were incorporated at various concentrations (1%, 5%, and 10% w/w) into a poly(lactide)-poly(butylene adipate-co-terephthalate) (PLA-PBAT) composite film using the solvent casting method. The effect of TA and FA incorporation was characterized in terms of morphological, optical, color, mechanical, thermal, and antimicrobial properties. The thickness of the FA composite films increased by 1.5-7.2%, while that of the TA composite films increased by 0.018-1.6%. The FA and TA (10 wt%) composite films showed an approximately 65-66% increase in UV barrier property. As the FA and TA concentration increased from 1% to 10% (w/w), the tensile strength (TS) increased by 1.98 and 1.80 times, respectively. The water contact angle of the films decreased significantly with increasing FA and TA content. FA showed a greater increase in antimicrobial activity than TA in the composite film against Listeria monocytogenes and E. coli. The FA and TA composite films thus have potential for application in active food packaging.

Keywords: active packaging, biodegradable film, polyphenols, UV barrier, tensile strength

Procedia PDF Downloads 142
1301 Design of SAE J2716 Single Edge Nibble Transmission Digital Sensor Interface for Automotive Applications

Authors: Jongbae Lee, Seongsoo Lee

Abstract:

Modern sensors often embed a small digital controller for sensor control, value calibration, and signal processing. These sensors require digital data communication with host microprocessors, but conventional digital communication protocols are too heavy for low-cost applications. The SAE J2716 SENT (single edge nibble transmission) protocol transmits digital waveforms directly instead of complicated analog modulated signals. In this paper, a SENT interface is designed in Verilog HDL (hardware description language) and implemented on an FPGA (field-programmable gate array) evaluation board. The designed SENT interface consists of a frame encoder/decoder, a configuration register, a tick period generator, a CRC (cyclic redundancy code) generator/checker, and TX/RX (transmission/reception) buffers. The frame encoder/decoder is implemented as a finite state machine and controls the whole SENT interface. The configuration register contains various parameters such as operation mode, tick length, CRC option, pause pulse option, and number of data nibbles. The tick period generator derives tick signals from the input clock. The CRC generator/checker generates or checks the CRC in the SENT data frame. The TX/RX buffers store transmitted and received data. The designed SENT interface can send or receive digital data at 25-65 kbps with a 3 µs tick. Synthesized in a 0.18 µm fabrication technology, it occupies about 2,500 gates.
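As a rough software illustration of the CRC generator/checker, a bitwise 4-bit CRC over the data nibbles can be sketched in Python. The polynomial x⁴+x³+x²+1 and seed 0101 are the values commonly cited for SENT's legacy checksum; the frame nibbles below are hypothetical, and the standard's augmented "recommended" variant is not shown:

```python
def sent_crc4(nibbles, poly=0b11101, seed=0b0101):
    """Bitwise 4-bit CRC over 4-bit data nibbles, MSB first.

    poly/seed follow the values commonly cited for SAE J2716's legacy CRC
    (x^4 + x^3 + x^2 + 1, seed 0101); this is a sketch, not the normative
    algorithm from the standard.
    """
    reg = seed
    for nib in nibbles:
        for i in (3, 2, 1, 0):
            bit = (nib >> i) & 1
            reg = (reg << 1) | bit
            if reg & 0x10:       # x^4 term set: reduce modulo the polynomial
                reg ^= poly
    return reg & 0xF

frame = [0x3, 0xA, 0x5, 0x1, 0x2, 0x6]   # six hypothetical data nibbles
print(hex(sent_crc4(frame)))
```

In the hardware design, the same update would be one XOR network clocked per nibble rather than a bit loop.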

Keywords: digital sensor interface, SAE J2716, SENT, verilog HDL

Procedia PDF Downloads 287
1300 Using Wearable Device with Neuron Network to Classify Severity of Sleep Disorder

Authors: Ru-Yin Yang, Chi Wu, Cheng-Yu Tsai, Yin-Tzu Lin, Wen-Te Liu

Abstract:

Background: Sleep-disordered breathing (SDB) is a condition characterized by recurrent episodes of airway obstruction leading to intermittent hypoxia and sleep fragmentation. However, the procedures for examining SDB severity remain complicated and costly. Objective: The objective of this study is to establish a simplified examination method for SDB using a respiratory impedance pattern sensor combined with signal processing and a machine learning model. Methodologies: We recorded heart rate variability by electrocardiogram and the respiratory pattern by impedance. After polysomnography (PSG) was performed, with SDB diagnosed by the apnea-hypopnea index (AHI), we calculated the episodes with absence of flow and the arousal index (AI) from the device records. Subjects were divided into training and testing groups. A neural network was used to build a prediction model classifying SDB severity from the AI, episodes, and body profiles. Performance was evaluated by classification in the testing group compared with PSG. Results: We enrolled 66 subjects (male/female: 37/29; age: 49.9±13.2) diagnosed with SDB at a sleep center in Taipei City, Taiwan, from 2015 to 2016. The accuracy from the confusion matrix on the test group by the neural network is 71.94%. Conclusion: Based on these models, we established a prediction model for SDB by means of the wearable sensor. With more cases incoming and further training, this system may be used to rapidly and automatically screen the risk of SDB in the future.
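A minimal sketch of the classification step, using synthetic stand-ins for the study's inputs (arousal index, apnea episodes, body profile) and scikit-learn's multilayer perceptron in place of the authors' unspecified network architecture:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-ins for the study's features: arousal index, apnea
# episode count, BMI, age; the real values come from the device records.
n = 400
X = rng.normal(size=(n, 4))
severity = np.digitize(X[:, 0] + 0.8 * X[:, 1], [-1.0, 0.0, 1.0])  # 4 classes

X_tr, X_te, y_tr, y_te = train_test_split(X, severity, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```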

Keywords: sleep breathing disorder, apnea and hypopnea index, body parameters, neural network

Procedia PDF Downloads 132
1299 A Controlled Natural Language Assisted Approach for the Design and Automated Processing of Service Level Agreements

Authors: Christopher Schwarz, Katrin Riegler, Erwin Zinser

Abstract:

The management of outsourcing relationships between IT service providers and their customers proves to be a critical issue that has to be stipulated by means of Service Level Agreements (SLAs). Since service requirements differ from customer to customer, SLA content and language structures vary largely, standardized SLA templates cannot be used, and automated processing of SLA content is not possible. Hence, SLA management is usually a time-consuming and inefficient manual process. To overcome these challenges, this paper presents an innovative, ITIL V3-conformant approach for automated SLA design and management using controlled natural language in enterprise collaboration portals. The proposed concept is based on a self-developed controlled natural language that follows a subject-predicate-object approach to specify well-defined SLA content structures that act as templates for customized contracts and support automated SLA processing. The derived results eventually enable IT service providers to automate several SLA request, approval, and negotiation processes by means of workflows and business rules within an enterprise collaboration portal. The illustrated prototypical realization gives evidence of the practical relevance in service-oriented scenarios as well as the high flexibility and adaptability of the presented model. Thus, the prototype enables the automated creation of well-defined, customized SLA documents, providing a knowledge representation that is both human-understandable and machine-processable.
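To illustrate the subject-predicate-object idea, a toy parser for controlled SLA sentences can be sketched as follows; the sentence templates and vocabulary here are hypothetical, not the authors' actual controlled grammar:

```python
import re

# Toy controlled-language grammar: a fixed set of subjects and predicates,
# with the object left free. Invented for illustration only.
PATTERN = re.compile(
    r"^(?P<subject>The provider|The customer)\s+"
    r"(?P<predicate>guarantees|delivers|reports)\s+"
    r"(?P<object>.+)\.$"
)

def parse_sla_sentence(sentence):
    """Return the subject-predicate-object triple, or fail loudly if the
    sentence is outside the controlled language."""
    m = PATTERN.match(sentence)
    if m is None:
        raise ValueError(f"not a valid controlled sentence: {sentence!r}")
    return m.groupdict()

triple = parse_sla_sentence("The provider guarantees 99.9% availability.")
print(triple)
```

Because every accepted sentence decomposes into such a triple, downstream workflows and business rules can match on the predicate and object fields mechanically.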

Keywords: automated processing, controlled natural language, knowledge representation, information technology outsourcing, service level management

Procedia PDF Downloads 419
1298 Status of Production, Distribution and Determinants of Biomass Briquette Acceptability in Kampala, Uganda

Authors: David B. Kisakye, Paul Mugabi

Abstract:

Biomass briquettes have been identified as a plausible and close alternative to commonly used energy fuels such as charcoal and firewood, whose prices are escalating due to a dwindling natural resource base. However, briquettes do not seem to be as popular as would be expected. This study assessed the production, distribution, and acceptability of briquettes in Kampala district. A total of 60 respondents, 50 of whom were briquette users and 10 briquette producers, were sampled from five divisions of Kampala district to evaluate consumer acceptability and preference for briquette type and shape. Households and institutions were identified as the major consumers of briquettes, while community-based organizations were the major distributors. The chi-square test of independence showed a significant association between briquette acceptability and the briquette attributes of substitutability and low cost (p < 0.05). The Kruskal-Wallis test showed that low-income earners preferred non-carbonized briquettes. Gender, marital status, and income level also caused variation in preference for spherical, stick, and honeycomb briquettes (p < 0.05). The major challenges faced by briquette users in Kampala were excessive ash production, frequent crushing, and limited access to briquettes. The producers were mainly challenged by regular machine breakdown, raw material scarcity, and poor carbonizing units. It was concluded that briquettes have a market and are generally accepted in Kampala. However, user preferences need to be taken into account by briquette producers, suitable cookstoves should be made available to users, and standards are needed to ensure the quality of briquettes.
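The chi-square test of independence used above can be reproduced on a contingency table of acceptability against an attribute; the 2×2 counts below are invented for illustration, not the survey data:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: briquette acceptability vs. perceived low cost.
table = [[28, 4],    # accepted:     low-cost yes / no
         [6, 12]]    # not accepted: low-cost yes / no
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
```

A p-value below 0.05, as in the study, rejects independence between acceptability and the attribute.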

Keywords: consumer acceptability, biomass residues, briquettes, briquette producers, distribution, fuel, marketability, wood fuel

Procedia PDF Downloads 128
1297 Hydrogen Production Through Thermocatalytic Decomposition of Methane Over Biochar

Authors: Seyed Mohamad Rasool Mirkarimi, David Chiaramonti, Samir Bensaid

Abstract:

Catalytic methane decomposition (CMD) is a one-step process for hydrogen production in which the carbon in the methane molecule is sequestered in the form of stable, higher-value carbon materials. Metallic catalysts and carbon-based catalysts are the two major types of catalysts used for the CMD process. Although carbon-based catalysts have lower activity than metallic ones, they are less expensive and offer high thermal stability and strong resistance to chemical impurities such as sulfur. They may also require less costly separation methods, since some carbon-based catalysts contain no active metal component. Because the regeneration of metallic catalysts requires burning off the carbon on their surfaces, which emits CO/CO2, using carbon-based catalysts can be preferable in some cases: regeneration can be avoided entirely, and the catalyst can be used directly in other processes. This work focuses on the effect of biochar as a carbon-based catalyst for the conversion of methane into hydrogen and carbon. Biochar produced from the pyrolysis of poplar wood and activated biochar are used as catalysts. To observe the impact of the carbon-based catalysts on methane conversion, methane cracking was performed in the absence and presence of the catalysts for gas streams with different methane concentrations. The results show that methane conversion in the absence of a catalyst at 900 °C is negligible, whereas in the presence of biochar and activated biochar significant conversion is observed. Comparing the tests with char and activated char shows that the enhancement in the BET surface area of the catalyst achieved through activation leads to more than 10 vol.% methane conversion.
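Methane conversion is computed from the inlet and outlet CH4 fractions. A minimal sketch; the outlet readings below are hypothetical, chosen only to mirror the reported trend (negligible conversion without a catalyst, more than 10 vol.% over activated biochar):

```python
def methane_conversion(ch4_in, ch4_out):
    """Methane conversion as a percentage of the inlet CH4 fraction."""
    return 100.0 * (ch4_in - ch4_out) / ch4_in

# Hypothetical outlet CH4 readings (vol.%) for a 5 vol.% feed at 900 °C.
print(round(methane_conversion(5.0, 4.95), 1), "% (no catalyst)")
print(round(methane_conversion(5.0, 4.35), 1), "% (activated biochar)")
```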

Keywords: hydrogen production, catalytic methane decomposition, biochar, activated biochar, carbon-based catalysts

Procedia PDF Downloads 69