Search results for: ordinal response models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11302

7432 A CMOS Capacitor Array for ESPAR with Fast Switching Time

Authors: Jin-Sup Kim, Se-Hwan Choi, Jae-Young Lee

Abstract:

An 8-bit CMOS capacitor array is designed for use in an electrically steerable passive array radiator (ESPAR). The proposed capacitor array shows fast response times in both rising and falling characteristics. Compared to other works in silicon-on-insulator (SOI) or silicon-on-sapphire (SOS) technologies, it achieves a comparable tuning range and switching time with low power consumption. Implemented in a 0.18 µm CMOS process, the capacitor array features a tuning range of 1.5 to 12.9 pF at 2.4 GHz. Including the 2×4 decoder for the control interface, the chip size is 350 µm × 145 µm. Current consumption is about 80 nA at 1.8 V operation.

Keywords: CMOS capacitor array, ESPAR, SOI, SOS, switching time

Procedia PDF Downloads 585
7431 Comprehensive Longitudinal Multi-omic Profiling in Weight Gain and Insulin Resistance

Authors: Christine Y. Yeh, Brian D. Piening, Sarah M. Totten, Kimberly Kukurba, Wenyu Zhou, Kevin P. F. Contrepois, Gucci J. Gu, Sharon Pitteri, Michael Snyder

Abstract:

Three million deaths worldwide are attributed to obesity. However, the biomolecular mechanisms that describe the link between adiposity and subsequent disease states are poorly understood. Insulin resistance characterizes approximately half of obese individuals and is a major cause of obesity-mediated diseases such as Type II diabetes, hypertension and other cardiovascular diseases. This study makes use of longitudinal quantitative and high-throughput multi-omics (genomics, epigenomics, transcriptomics, glycoproteomics etc.) methodologies on blood samples to develop multigenic and multi-analyte signatures associated with weight gain and insulin resistance. Participants of this study underwent a 30-day period of weight gain via excessive caloric intake followed by a 60-day period of restricted dieting and return to baseline weight. Blood samples were taken at three different time points per patient: baseline, peak-weight and post weight loss. Patients were characterized as either insulin resistant (IR) or insulin sensitive (IS) before having their samples processed via longitudinal multi-omic technologies. This comparative study revealed a wealth of biomolecular changes associated with weight gain after using methods in machine learning, clustering, network analysis etc. Pathways of interest included those involved in lipid remodeling, acute inflammatory response and glucose metabolism. Some of these biomolecules returned to baseline levels as the patient returned to normal weight whilst some remained elevated. IR patients exhibited key differences in inflammatory response regulation in comparison to IS patients at all time points. These signatures suggest differential metabolism and inflammatory pathways between IR and IS patients. Biomolecular differences associated with weight gain and insulin resistance were identified on various levels: in gene expression, epigenetic change, transcriptional regulation and glycosylation. 
This study not only contributed new biology that could be of use in preventing or predicting obesity-mediated diseases, but also matured novel biomedical informatics technologies for producing and processing data across many comprehensive omics levels.

Keywords: insulin resistance, multi-omics, next generation sequencing, proteogenomics, type ii diabetes

Procedia PDF Downloads 424
7430 Implementation of Statistical Parameters to Form Entropic Mathematical Models

Authors: Gurcharan Singh Buttar

Abstract:

Although the two fields of statistics and information theory are independent in nature, they can be combined to create applications in multidisciplinary mathematics. In statistics, statistical parameters (measures) play an essential role with reference to the population (distribution) under investigation, while information measures are crucial in the study of the ambiguity, assortment, and unpredictability present in an array of phenomena. The following communication is a link between the two, and it is demonstrated that the well-known conventional statistical measures can be used as measures of information.
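The link the abstract describes can be illustrated with a minimal sketch: Shannon entropy computed alongside a conventional statistical measure (variance) for discrete distributions. The distributions and the comparison below are illustrative assumptions, not taken from the paper.

```python
from math import log2

def shannon_entropy(p):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    assert abs(sum(p) - 1.0) < 1e-9
    return -sum(pi * log2(pi) for pi in p if pi > 0)

def variance(values, p):
    """Variance of a discrete random variable with outcomes `values`
    and probabilities `p` -- a conventional statistical measure."""
    mean = sum(v * pi for v, pi in zip(values, p))
    return sum(pi * (v - mean) ** 2 for v, pi in zip(values, p))

uniform = [0.25, 0.25, 0.25, 0.25]   # maximal uncertainty
skewed  = [0.70, 0.10, 0.10, 0.10]   # more concentrated, less "surprise"

print(shannon_entropy(uniform))  # 2.0 bits
print(shannon_entropy(skewed))   # strictly less than 2.0 bits
print(variance([0, 1], [0.5, 0.5]))
```

The uniform distribution maximizes entropy (a consequence of the concavity and symmetry mentioned in the keywords), which is the sense in which a spread measure tracks informational content.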

Keywords: probability distribution, entropy, concavity, symmetry, variance, central tendency

Procedia PDF Downloads 150
7429 Developing Metaverse Initiatives: Insights from a University Case Study

Authors: Jiongbin Liu, William Yeoh, Shang Gao, Xiaoliang Meng, Yuhan Zhu

Abstract:

The metaverse concept has sparked significant interest in both academic and industrial spheres. As educational institutions increasingly adopt this technology, understanding its implementation becomes crucial. In response, we conducted a comprehensive case study at a large university, systematically analyzing the nine stages of metaverse development initiatives. Our study unveiled critical insights into the planning, assessment, and execution processes, offering invaluable guidance for stakeholders. The findings highlight both the opportunities for enhanced learning experiences and the challenges related to technological integration and social interaction in higher education.

Keywords: metaverse, metaverse development framework, higher education, case study

Procedia PDF Downloads 24
7428 Digital Architectural Practice as a Challenge for Digital Architectural Technology Elements in the Era of Digital Design

Authors: Ling Liyun

Abstract:

In the field of contemporary architecture, architectural works of complex form continue to emerge around the world, along with new terminology: digital architecture, parametric design, algorithmic generation, building information modeling, CNC construction, and so on. Architects have gradually mastered new skills of mathematical logic in formal exploration, in virtual simulation, and in coordinating the entire design and construction process. Digital construction technology affords a greater degree of control over construction, ensures its accuracy, and creates a series of new construction techniques. As a result, the use of digital technology represents an improvement and expansion of practice within the digital architectural design revolution. We worked by reading and analyzing information about the development process of digital architecture, a large number of cases, and architectural design and construction as a whole process. Current developments are thus introduced and discussed in our paper, including architectural discourse, design theory, digital design models and techniques, material selection, and artificial intelligence in space design. Our paper also examines three representative cases of digital design and construction experiments in detail, to expound how high informatization, highly reliable intelligence, and advanced technique can construct a humane space that copes with the rapid development of urbanization. We concluded that the opportunities and challenges of the shift in architectural paradigms lie in the cooperation methods, theories, models, technologies, and techniques currently employed in digital design research and digital praxis. We also found that the innovative use of space can gradually change the way people learn, talk, and control information.
Over the past two decades, digital technology has radically broken through the technological constraints of industrial products and moved beyond the promotion of any one particular architectural style (era doctrine). People should not have to adapt to the machine; rather, the machine should be made to work for its users.

Keywords: artificial intelligence, collaboration, digital architecture, digital design theory, material selection, space construction

Procedia PDF Downloads 131
7427 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data

Authors: Kai Warsoenke, Maik Mackiewicz

Abstract:

To achieve a high quality of assembled car body structures, tolerancing is used to ensure the geometric accuracy of the individual car body parts. There are two main techniques to determine the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances needed to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example in the manufacturing of forming tools as operating equipment or at the higher level of car body assembly. As part of metrological process monitoring, manufactured individual parts and assemblies are recorded, and the measurement results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes to the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not yet used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data. They offer the potential to extend tolerancing methods through data analysis and machine learning models.
The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. To this end, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. Subsequently, a database suitable for developing machine learning models is created. The objective is an intelligent way to determine the position and number of measurement points as well as the local tolerance range. A number of different model types are compared and evaluated, and the models with the best results are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, there are areas of the car body parts that behave more sensitively than the overall part, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently.

Keywords: automotive production, machine learning, process optimization, smart tolerancing

Procedia PDF Downloads 104
7426 Unsupervised Learning and Similarity Comparison of Water Mass Characteristics with Gaussian Mixture Model for Visualizing Ocean Data

Authors: Jian-Heng Wu, Bor-Shen Lin

Abstract:

The temperature-salinity relationship is one of the most important characteristics used for identifying water masses in marine research. Temperature-salinity characteristics, however, may change dynamically with geographic location and are quite sensitive to depth at the same location. When depth is taken into consideration, it is not easy to compare the characteristics of different water masses efficiently over a wide range of ocean areas. In this paper, the Gaussian mixture model is proposed for analyzing the temperature-salinity-depth characteristics of water masses, based on which comparisons between water masses may be conducted. A Gaussian mixture model describes the distribution of a random vector and is formulated as a weighted sum of a set of multivariate normal distributions. The temperature-salinity-depth data for different locations are first used to train a set of Gaussian mixture models individually. The distance between two Gaussian mixture models can then be defined as the weighted sum of pairwise Bhattacharyya distances among their Gaussian components. Consequently, the distance between two water masses may be measured quickly, which allows automatic and efficient comparison of water masses over a wide area. The proposed approach not only approximates the distribution of temperature, salinity, and depth directly, without prior assumptions about the regression family, but can also restrict model complexity by controlling the number of mixture components when the samples are unevenly distributed. In addition, it is critical for knowledge discovery in marine research to represent, manage, and share temperature-salinity-depth characteristics flexibly and responsively. The proposed approach has been applied to a real-time visualization system for ocean data, which facilitates the comparison of water masses by aggregating the data without degrading discriminating capability.
This system provides an interface for interactively querying geographic locations with similar temperature-salinity-depth characteristics and for tracking specific patterns of water masses, such as the Kuroshio near Taiwan or those in the South China Sea.
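As a sketch of the distance just described, the following assumes the mixture-level distance is the weight-product sum of component-pair Bhattacharyya distances (the abstract does not fix the exact combination rule), with illustrative single-component "water masses" in (temperature, salinity, depth) space:

```python
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

def gmm_distance(weights1, comps1, weights2, comps2):
    """Weighted sum of pairwise Bhattacharyya distances between components."""
    return sum(w1 * w2 * bhattacharyya(m1, c1, m2, c2)
               for w1, (m1, c1) in zip(weights1, comps1)
               for w2, (m2, c2) in zip(weights2, comps2))

# Illustrative single-component mixtures: (temperature °C, salinity psu, depth m)
eye = np.eye(3)
massA = ([1.0], [(np.array([20.0, 34.5, 100.0]), eye)])
massB = ([1.0], [(np.array([20.0, 34.5, 100.0]), eye)])
massC = ([1.0], [(np.array([5.0, 35.0, 800.0]), eye)])

print(gmm_distance(*massA, *massB))  # 0.0: identical water masses
print(gmm_distance(*massA, *massC))  # large: clearly different water masses
```

Since the distance only touches the fitted mixture parameters, not the raw profiles, comparing many locations stays cheap, which is what enables the wide-area comparison described above.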

Keywords: water mass, Gaussian mixture model, data visualization, system framework

Procedia PDF Downloads 135
7425 A Review of Geotextile Tube with the Evaluation of Dewatering of High Water Content Sludge

Authors: Rajul Dwivedi, Mahesh Patel

Abstract:

Because conventional river and coastal protection structures are expensive to build and maintain under conditions of scarce natural resources, alternative methods are needed. One such method is geotextile tube technology, used to build marine protection structures such as dams, canals, jetties, and freestanding breakwaters. Geotextile tube technology has evolved from other construction technologies and has improved into a more efficient solution. Coastal erosion problems have been exacerbated by the development of infrastructure associated with the expansion of urban and industrial activities, and the removal of sea sand from harbours and coastal areas, driven by the depletion of beach sand, further accelerates this erosion.

Keywords: geotextile tubes, slurry, dewatering, response surface

Procedia PDF Downloads 131
7424 Relevancy Measures of Errors in Displacements of Finite Elements Analysis Results

Authors: A. B. Bolkhir, A. Elshafie, T. K. Yousif

Abstract:

This paper highlights methods of error estimation in finite element analysis (FEA) results. It indicates that the discretization error can be reduced by performing finite element analyses with successively finer meshes, or by extrapolating response predictions from an orderly sequence of relatively low-degree-of-freedom analysis results (Richardson extrapolation). In addition, the paper shows that round-off error can be reduced by running the code at a higher precision. Applications to finite element analysis results are provided, and conclusions are drawn from the results of applying these error-estimation methods.
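The mesh-sequence extrapolation mentioned above can be sketched as classical Richardson extrapolation; the error model, convergence order, and refinement ratio below are illustrative assumptions, not the paper's specific cases.

```python
def richardson(u_h, u_h2, p=2, r=2):
    """Extrapolate two solutions on meshes with spacing h and h/r,
    assuming the error model u(h) = u_exact + C * h**p."""
    return u_h2 + (u_h2 - u_h) / (r**p - 1)

# Synthetic displacements from a hypothetical quadratically convergent FE model:
u_exact, C, h = 1.000, 0.5, 0.1
u_coarse = u_exact + C * h**2          # 1.005
u_fine   = u_exact + C * (h / 2)**2    # 1.00125

print(richardson(u_coarse, u_fine))    # recovers 1.000 for this error model
```

Monotonic convergence of the mesh sequence (one of the paper's keywords) is exactly what justifies assuming a single dominant error term of known order.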

Keywords: finite element analysis (FEA), discretization error, round-off error, mesh refinement, richardson extrapolation, monotonic convergence

Procedia PDF Downloads 488
7423 The Buccal Fat Pad for Closure of Oroantral Communication

Authors: Stefano A. Denes, Riccardo Tieghi, Giovanni Elia

Abstract:

The buccal fat pad is a well-established tool in oral and maxillofacial surgery, and its use has proved valuable for the closure of oroantral communications. Oroantral communication may be a common complication after sequestrectomy in bisphosphonate-related osteonecrosis of the jaws. We report the clinical case of a 70-year-old female patient on bisphosphonate therapy who presented with right maxillary sinusitis and an oroantral communication after implant insertion. The buccal fat pad was used to close the defect. The case had an uneventful postoperative healing, without dehiscence, infection, or necrosis. We postulate that primary closure of the site with the buccal fat pad may ensure a sufficient blood supply and adequate protection for an effective bone-healing response to occur.

Keywords: buccal fat pad, oroantral communication, oral surgery, dehiscence

Procedia PDF Downloads 343
7422 A Broadband Tri-Cantilever Vibration Energy Harvester with Magnetic Oscillator

Authors: Xiaobo Rui, Zhoumo Zeng, Yibo Li

Abstract:

A novel tri-cantilever energy harvester with a magnetic oscillator is presented, which converts ambient vibration into electrical energy to power low-power devices such as wireless sensor networks. The most common way to harvest vibration energy is based on linear resonant devices such as cantilever beams, since this structure creates the highest strain for a given force. The highest efficiency is achieved when the resonance frequency of the harvester matches the vibration frequency; the limitation of this structure is its narrow effective bandwidth. To overcome this limitation, this article introduces a broadband tri-cantilever harvester with nonlinear stiffness. The energy harvester consists of three thin cantilever beams arranged vertically, each with a neodymium (NdFeB) magnet at its free end and a fixed base at the other end. The three cantilevers have different resonant frequencies, achieved by designing them with different thicknesses, so an advantage similar to that of a piezoelectric cantilever array with multiple resonant frequencies is obtained. To achieve broadband energy harvesting, magnetic interaction is used to introduce nonlinear system stiffness that tunes the resonant frequency to match the excitation. Since the three cantilever tips are all free and the magnetic force is distance-dependent, the resonant frequencies change in a complex way with the vertical vibration of the free ends. Both a model and an experiment were built. The electromechanically coupled lumped-parameter model is presented, along with an electromechanical formulation and analytical expressions for the coupled nonlinear vibration response and voltage response. The entire structure was fabricated and mechanically attached, via the fixed base, to an electromagnetic shaker serving as the vibrating body, in order to couple the vibrations to the cantilevers. The cantilevers are bonded with piezoelectric macro-fiber composite (MFC) material (model M8514P2).
The cantilevers measure 120 × 20 mm², with thicknesses of 1 mm, 0.8 mm, and 0.6 mm, respectively. The prototype generator has a measured performance of 160.98 mW effective electrical power and 7.93 V DC output voltage at an excitation level of 10 m/s². A 130% increase in the operating bandwidth is achieved. This device is promising for supporting low-power devices, peer-to-peer wireless nodes, and small-scale wireless sensor networks in ambient vibration environments.
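To illustrate why different thicknesses yield different resonant frequencies, here is a sketch of the first natural frequency of a uniform rectangular cantilever (Euler-Bernoulli beam, tip magnet mass ignored); the substrate material properties are assumed, not taken from the paper:

```python
from math import pi, sqrt

def cantilever_f1(E, rho, L, t):
    """First natural frequency (Hz) of a uniform rectangular cantilever:
    f1 = (lambda1^2 / (2*pi*L^2)) * t * sqrt(E / (12*rho)).
    The beam width cancels out; tip mass is ignored in this sketch."""
    lam1 = 1.8751  # first root of the cantilever frequency equation
    return (lam1**2 / (2 * pi * L**2)) * t * sqrt(E / (12 * rho))

# Assumed spring-steel substrate: E in Pa, rho in kg/m^3, length in m
E, rho, L = 200e9, 7850.0, 0.120
for t in (1.0e-3, 0.8e-3, 0.6e-3):
    print(f"t = {t*1e3:.1f} mm -> f1 = {cantilever_f1(E, rho, L, t):.1f} Hz")
```

Since f1 scales linearly with thickness, the 1 mm, 0.8 mm, and 0.6 mm beams space their resonances in a 1 : 0.8 : 0.6 ratio, which is what spreads the harvester's response over a band.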

Keywords: tri-cantilever, ambient vibration, energy harvesting, magnetic oscillator

Procedia PDF Downloads 150
7421 Functioning of a Temporarily Single Parent Family System Due to Migration from the Perspective of Adolescents with Cerebral Palsy

Authors: A. Gagat-Matuła

Abstract:

There is a definite lack, in Poland as well as around the world, of empirical studies of families raising a handicapped child in which one parent migrates. In diagnosing the functioning of such families, emphasis should be placed not only on the difficulties but, above all, on what possibilities are available to the family and how it overcomes the difficulties. Migration of a parent is, on the one hand, a chance to improve the family's material situation; in certain circumstances, however, it may only be an "escape" into work from the issues associated with the upbringing and rehabilitation of a handicapped child. The aim of the study was to learn how a temporarily single-parent family system, resulting from the migration of a parent, functions from the perspective of adolescents with cerebral palsy. The study was conducted in 2013 in Eastern Poland. It involved an analysis of 70 persons with cerebral palsy (within the intellectual norm) from families in which at least one of the parents migrates. The study incorporated the diagnostic survey method, using the Family Evaluation Scales (SOR) adapted for Poland by Andrzej Margasiński. The explorations in this study indicate that 47% of the studied temporarily single-parent families are balanced models. This is evidence of the resources at the disposal of the family, which, despite the disability of the child and the temporary separation, is able to function properly. The studies also show that 37% of the temporarily single-parent families are imbalanced models in the perception of adolescents with cerebral palsy. These families experience functional difficulties and require psychological and pedagogical support; there is a need to build skills for coping effectively with family stress, especially considering that families of the imbalanced type do not use the internal and external resources of the family system.
Such a situation may deepen the disorganization of family life. In intermediate families (16%) there are also temporary difficulties in functioning. Separation anxiety experienced by mothers may disrupt relations and introduce additional stress factors. For that reason, it is important to provide support for women who have difficulties coping with the emotions associated with raising handicapped adolescents and with migratory separation.

Keywords: child with cerebral palsy, family, migration, parents

Procedia PDF Downloads 413
7420 Fatigue of Multiscale Nanoreinforced Composites: 3D Modelling

Authors: Leon Mishnaevsky Jr., Gaoming Dai

Abstract:

3D numerical simulations of fatigue damage in multiscale fiber-reinforced polymer composites with secondary nanoclay reinforcement are carried out. Macro-micro FE models of the multiscale composites are generated automatically using Python-based software. The effect of the nanoclay reinforcement, localized in the fiber/matrix interface (fiber sizing) or distributed throughout the matrix, on the crack path, damage mechanisms, and fatigue behavior is investigated in numerical experiments.

Keywords: computational mechanics, fatigue, nanocomposites, composites

Procedia PDF Downloads 601
7419 Modification of Carbon-Based Gas Sensors for Boosting Selectivity

Authors: D. Zhao, Y. Wang, G. Chen

Abstract:

Gas sensors that utilize carbonaceous materials as sensing media offer numerous advantages, making them the preferred choice for constructing chemical sensors over those using other sensing materials. Carbonaceous materials, particularly nano-sized ones like carbon nanotubes (CNTs), provide these sensors with high sensitivity. Additionally, carbon-based sensors possess other advantageous properties that enhance their performance, including high stability, low operating power consumption, and cost-effective construction. These properties make carbon-based sensors ideal for a wide range of applications, especially in miniaturized devices created through MEMS or NEMS technologies. To capitalize on these properties, a group of chemoresistance-type carbon-based gas sensors was developed and tested against various volatile organic compounds (VOCs) and volatile inorganic compounds (VICs). The results demonstrated exceptional sensitivity to both VOCs and VICs, along with long-term stability. However, this broad sensitivity also led to poor selectivity towards specific gases. This project aims to address the selectivity issue by modifying the carbon-based sensing materials and enhancing the sensors' specificity to individual gases. Multiple groups of sensors were manufactured and modified using proprietary techniques. To assess their performance, we conducted experiments on representative sensors from each group to detect a range of VOCs and VICs. The VOCs tested included acetone, dimethyl ether, ethanol, formaldehyde, methane, and propane. The VICs comprised carbon monoxide (CO), carbon dioxide (CO2), hydrogen (H2), nitric oxide (NO), and nitrogen dioxide (NO2). The concentrations of the sample gases were all set at 50 parts per million (ppm), with nitrogen (N2) used as the carrier gas throughout the experiments. The results of the gas sensing experiments are as follows.
In Group 1, the sensors exhibited selectivity toward CO2, acetone, NO, and NO2, with NO2 showing the highest response. Group 2 primarily responded to NO2. Group 3 displayed responses to the nitrogen oxides, i.e., both NO and NO2, with NO2 slightly surpassing NO in sensitivity. Group 4 demonstrated the highest sensitivity among all the groups toward NO and NO2, with NO2 being more sensitive than NO. In conclusion, by incorporating several modifications using carbon nanotubes (CNTs), sensors can be designed to respond well to NOx gases with great selectivity and without interference from other gases. Because the response levels to NO and NO2 differ from group to group, the individual concentrations of NO and NO2 can be deduced.
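The concluding deduction can be sketched as a two-sensor linear system: two groups with different NO/NO2 sensitivities give two equations in the two unknown concentrations. The sensitivity coefficients below are hypothetical illustration values, not data from the paper.

```python
def deduce_no_no2(r1, r2, a1, b1, a2, b2):
    """Solve  r1 = a1*NO + b1*NO2  and  r2 = a2*NO + b2*NO2
    for the concentrations NO and NO2, by Cramer's rule."""
    det = a1 * b2 - b1 * a2
    if abs(det) < 1e-12:
        raise ValueError("sensor sensitivities are not independent")
    no = (r1 * b2 - b1 * r2) / det
    no2 = (a1 * r2 - r1 * a2) / det
    return no, no2

# Hypothetical sensitivities (response per ppm); NO2 > NO in each group,
# mirroring the qualitative ordering reported for Groups 3 and 4
a1, b1 = 0.8, 1.6
a2, b2 = 1.2, 2.9

# Simulate responses for known concentrations, then recover them
no_true, no2_true = 20.0, 35.0
r1 = a1 * no_true + b1 * no2_true
r2 = a2 * no_true + b2 * no2_true
print(deduce_no_no2(r1, r2, a1, b1, a2, b2))  # recovers ~ (20.0, 35.0)
```

The deduction only works when the sensitivity ratios of the two groups differ, which is exactly the between-group difference the experiments establish.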

Keywords: gas sensors, carbon, CNT, MEMS/NEMS, VOC, VIC, high selectivity, modification of sensing materials

Procedia PDF Downloads 118
7418 Homelessness and Disaster Mitigation: An Exploratory Study into How Casualties Can Be Reduced with the Homeless

Authors: Blythe Maltby

Abstract:

Homeless populations are among the sections of society most vulnerable to the effects of natural disasters. Channels of communication to these populations are limited, as they lack access to mainstream modes of emergency notification and are often the last to know about state emergencies. This study aims to determine whether cities and policies can be designed to help reduce casualty rates among the homeless during state emergencies, such as earthquake and tsunami events. The study used a qualitative research approach, speaking with various levels of government, homelessness charities and workers, and others about preparations and their experiences with the response to state emergencies. The proposed paper may help countries identify gaps in their preparations and facilitate better resources to look after these vulnerable populations.

Keywords: accessibility, disaster mitigation, homeless, Vancouver

Procedia PDF Downloads 221
7417 Solitons and Universes with Acceleration Driven by Bulk Particles

Authors: A. C. Amaro de Faria Jr, A. M. Canone

Abstract:

Considering a scenario in which our universe is taken as a 3D domain wall embedded in a 5D Minkowski space-time, we explore the existence of a richer class of solitonic solutions and their consequences for accelerating universes driven by collisions of bulk particle excitations with the walls. In particular, it is shown that some of these solutions should play a fundamental role at the beginning of the expansion process. We present some of these solutions in cosmological scenarios that can be applied to models describing the inflationary period of the Universe.

Keywords: solitons, topological defects, branes, kinks, accelerating universes in brane scenarios

Procedia PDF Downloads 132
7416 Approaches to Reduce the Complexity of Mathematical Models for the Operational Optimization of Large-Scale Virtual Power Plants in Public Energy Supply

Authors: Thomas Weber, Nina Strobel, Thomas Kohne, Eberhard Abele

Abstract:

In the context of the energy transition in Germany, the importance of so-called virtual power plants in the energy supply continues to increase. The progressive dismantling of large power plants and the ongoing construction of many new decentralized plants result in great potential for optimization through synergies between the individual plants. These potentials can be exploited by mathematical optimization algorithms, including linear and mixed-integer linear programming, to calculate the optimal operation schedule of decentralized power and heat generators and storage systems. In this paper, procedures for reducing the number of decision variables to be calculated are explained and validated. The first procedure combines n similar installations into one aggregated unit, described by the same constraints and objective-function terms as a single plant. This reduces the number of decision variables per time step, and thus the complexity of the problem to be solved, by a factor of n. The exact operating mode of the individual plants can then be calculated in a second optimization, such that the combined output of the individual plants matches the calculated output of the aggregated unit. The second way to reduce the number of decision variables is to reduce the number of time steps to be calculated. This is useful if a high temporal resolution is not necessary for all time steps; for example, the volatility or forecast quality of environmental parameters may justify a higher or lower temporal resolution. Both approaches are examined with regard to the resulting calculation time as well as optimality, using several optimization models for virtual power plants (combined heat and power plants, heat storage, power storage, gas turbines) with different numbers of plants as references.
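The aggregation idea (treat n similar plants as one unit, then disaggregate) can be sketched for a single time step; the unit data are illustrative, and a real model would use a MILP solver rather than this toy dispatch:

```python
def dispatch_aggregated(demand, n_units, p_max):
    """First optimization: treat n identical units as one aggregated unit
    with capacity n * p_max (one decision variable instead of n)."""
    return min(demand, n_units * p_max)

def disaggregate(total_output, n_units, p_max):
    """Second optimization: split the aggregated output back onto the
    individual units (here filled greedily; any split respecting
    0 <= p <= p_max whose sum equals total_output is feasible)."""
    outputs, remaining = [], total_output
    for _ in range(n_units):
        p = min(remaining, p_max)
        outputs.append(p)
        remaining -= p
    return outputs

# Ten identical 2 MW CHP units, 13 MW demand in one time step (illustrative)
agg = dispatch_aggregated(13.0, 10, 2.0)
per_unit = disaggregate(agg, 10, 2.0)
print(agg, per_unit)
```

Because the aggregated unit shares the constraints of a single plant, the first problem is n times smaller, and the second step only has to reproduce a known total output.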

Keywords: CHP, Energy 4.0, energy storage, MILP, optimization, virtual power plant

Procedia PDF Downloads 169
7415 Education for Sustainable Development and the Eco School Initiative in Two Primary Schools in The North East of England

Authors: Athanasia Chatzifotiou, Karen Tait

Abstract:

Eco-Schools is an international initiative that offers schools the opportunity to develop practices in education for sustainable development (EfSD). Such practices need to focus on nine areas, namely: energy, water, biodiversity, school grounds, healthy living, transport, litter, waste, and global citizenship. Acquiring green flag status is the ultimate stage (silver and bronze are the other two); it is awarded by a committee external to the school and lasts for two years. Our project focused on two such primary schools that had acquired green flag status. The aim of our project is to describe the schools' approaches to becoming an eco-school and the practitioners' role in promoting the values and principles of such endeavours, thus identifying the impact of EfSD. We chose the Eco-Schools initiative as it gives a clear and straightforward way to identify a school with an interest in EfSD. The project is important because, even though EfSD attracts high attention in rhetoric, there is evidence indicating that it may be neglected in practice. This paper presents part of a bigger project that aims to compare how primary schools and early years settings in the North East of England have approached EfSD via the Eco-Schools initiative. This is a qualitative project that used a case study design to focus on the practices of two particular primary schools in gaining green flag status. A semi-structured interview was conducted with the lead teachers/practitioners of each school; an audit was also conducted as part of a tour of the schools' premises, highlighting the initiatives, curriculum work, and projects undertaken, as well as the resources available to each school.
A content analysis of the interview transcripts was conducted, with the creation of response categories and response narratives by the two researchers, first working individually and then collaboratively. The findings of the project reflected issues concerning: a) pupils' cognitive, physical, and socio-emotional development; b) the wider community; and c) the lead practitioners' role and status in school. In relation to EfSD, our findings indicated that its impact upon these two eco-schools was rather minimal; a mismatch was identified between the eco-school practices and a holistic understanding of the issues that EfSD aims to address. This mismatch between eco-school practices and EfSD is discussed with regard to: a) pupils' understanding of the sustainability dimension in the topics they addressed; and b) teachers' knowledge of sustainability and willingness to continue such work in schools.

Keywords: eco-schools, environment, primary schools, sustainability education

Procedia PDF Downloads 242
7414 Role of Total Neoadjuvant Therapy in Sphincter Preservation in Locally Advanced Rectal Cancer: A Case Series

Authors: Arpit Gite

Abstract:

Purpose: We evaluated the role of total neoadjuvant therapy in patients with locally advanced rectal cancer, giving chemoradiotherapy followed by consolidation chemotherapy (CRT-CNCT) and, after that, adopting a watch-and-wait strategy. Methods: In this prospective case series, we evaluated the outcomes of three locally advanced rectal cancers: two cases of Stage II (cT3N0) and one case of Stage III (cT4aN2). In all three patients, the growth was 4-6 cm from the anal verge. Patients were treated with chemoradiotherapy to a dose of 45 Gy in 25 fractions to the elective nodal regions (inguinal nodes in case of anal canal involvement), primary and mesorectum (Phase I), followed by 14.4 Gy in 8 fractions to the primary and mesorectum (Phase II), for a total dose of 59.4 Gy in 33 fractions, with concurrent chemotherapy (Tab. capecitabine 825 mg/m2 PO BD during radiation therapy). Six weeks after completion of chemoradiotherapy, six cycles of consolidative chemotherapy were advised on the CAPEOX regimen: oxaliplatin 130 mg/m2 on day 1 and capecitabine 1000 mg/m2 PO BD on days 1-14, repeated on a 21-day cycle for a total of six cycles. The primary endpoint is disease-free survival (DFS); the secondary endpoint is adverse events related to chemoradiotherapy. Radiation toxicity was assessed by RTOG criteria, and chemotherapy toxicity by the Common Terminology Criteria for Adverse Events (CTCAE) Version 5.0. Results: Six weeks after completion of chemoradiotherapy, PET-CT was performed for all three patients; all three had a clinical complete response, and six cycles of consolidative chemotherapy were advised. After completion of consolidative chemotherapy, PET-CT and sigmoidoscopy were repeated: all three patients had a complete response on PET-CT and no lesions on sigmoidoscopy, and all three were kept on watch and wait. Two patients had Grade 2 skin toxicities and one patient Grade 1 skin toxicity; two patients had Grade 2 lower GI toxicities and one patient a lower-grade GI toxicity, all according to RTOG criteria. Three patients had Grade 2 diarrhea due to capecitabine, and one patient had Grade 1 thrombocytopenia due to oxaliplatin, assessed by the Common Terminology Criteria for Adverse Events (CTCAE) Version 5.0. Conclusion: Sphincter preservation is possible with this regimen in patients who do not want to opt for surgery, or in cases of low-lying rectal cancer.
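The protocol's dose arithmetic can be checked with a short script. This is a minimal sketch; the body-surface area used in the capecitabine example is a hypothetical illustration, not patient data from the series.

```python
# Verify the cumulative radiotherapy dose and illustrate BSA-based dosing.
# The BSA value below is an assumed example, not a value from this study.

def cumulative_dose(phases):
    """Sum (dose_gy, fractions) pairs into a total dose and fraction count."""
    return sum(d for d, _ in phases), sum(f for _, f in phases)

phases = [(45.0, 25),  # Phase I: elective nodes, primary and mesorectum
          (14.4, 8)]   # Phase II: boost to primary and mesorectum
total_gy, total_fx = cumulative_dose(phases)
print(round(total_gy, 1), total_fx)  # 59.4 Gy in 33 fractions, per protocol

bsa_m2 = 1.7                         # hypothetical body-surface area
print(825 * bsa_m2)                  # capecitabine mg per dose at 825 mg/m2
```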

Keywords: locally advanced rectal cancer, sphincter preservation, chemoradiotherapy, consolidative chemotherapy

Procedia PDF Downloads 32
7413 Experimental Verification of Similarity Criteria for Sound Absorption of Perforated Panels

Authors: Aleksandra Majchrzak, Katarzyna Baruch, Monika Sobolewska, Bartlomiej Chojnacki, Adam Pilch

Abstract:

Scaled modeling is very common in areas of science such as aerodynamics or fluid mechanics, since defining characteristic numbers makes it possible to determine relations between objects under test and their models. In acoustics, scaled modeling is aimed mainly at the investigation of room acoustics, sound insulation and sound absorption phenomena. Despite such a range of application, no method has been developed that would enable acoustic perforated panels to be scaled freely while maintaining their sound absorption coefficient in a desired frequency range. Moreover, theoretical and numerical analyses have proven that it is not physically possible to obtain a given sound absorption coefficient in a desired frequency range by directly scaling all of the physical dimensions of a perforated panel according to a defined characteristic number. This paper is a continuation of the research mentioned above and presents a practical evaluation of the theoretical and numerical analyses. Measurements of the sound absorption coefficient of perforated panels were performed in order to verify the previous analyses and, as a result, to find the relations between full-scale perforated panels and their models that will enable them to be scaled properly. The measurements were conducted in a one-to-eight model of the reverberation chamber of the Technical Acoustics Laboratory, AGH. The obtained results verify the theses proposed after the theoretical and numerical analyses. Finding the relations between full-scale and modeled perforated panels will allow the production of measurement samples equivalent to the original ones. As a consequence, it will make the process of designing acoustic perforated panels easier and will also lower the cost of prototype production. With this knowledge, it will be possible to emulate more precisely in a constructed model the panels used, or to be used, in a full-scale room and, as a result, to imitate or predict the acoustics of a modeled space more accurately.
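As a rough illustration of why direct geometric scaling shifts, rather than preserves, the absorption behaviour, the classic Helmholtz-resonator approximation of a perforated panel absorber (a textbook first-order model, not the analysis used in this paper) shows the resonance frequency rising by exactly the scale factor when every dimension is divided by eight. The panel dimensions below are assumed for illustration only.

```python
import math

def panel_resonance_hz(sigma, t_m, d_m, cavity_m, c=343.0):
    """Helmholtz-resonator estimate of a perforated panel's resonance.
    sigma: perforation ratio, t_m: panel thickness, d_m: hole diameter,
    cavity_m: air-cavity depth (all lengths in metres)."""
    t_eff = t_m + 0.85 * d_m  # effective neck length with end correction
    return c / (2 * math.pi) * math.sqrt(sigma / (t_eff * cavity_m))

full = panel_resonance_hz(0.05, 0.012, 0.008, 0.10)           # full scale
model = panel_resonance_hz(0.05, 0.012/8, 0.008/8, 0.10/8)    # 1:8 model
print(round(full, 1), round(model, 1), round(model / full, 6))
# the frequency ratio equals the geometric scale factor, 8
```

Because the perforation ratio is dimensionless, it survives scaling, but both lengths in the denominator shrink, so the resonance moves up by the scale factor; viscous losses in the holes do not scale the same way, which is one reason direct scaling cannot preserve the absorption coefficient.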

Keywords: characteristic numbers, dimensional analysis, model study, scaled modeling, sound absorption coefficient

Procedia PDF Downloads 194
7412 Survey on Big Data Stream Classification by Decision Tree

Authors: Mansoureh Ghiasabadi Farahani, Samira Kalantary, Sara Taghi-Pour, Mahboubeh Shamsi

Abstract:

Nowadays, the development of computer technology and its recent applications provide access to new types of data which have not been considered by traditional data analysts. Two particularly interesting characteristics of such data sets are their huge size and streaming nature. Incremental learning techniques have been used extensively to address the data stream classification problem. This paper presents a concise survey on the obstacles and requirements involved in classifying data streams using decision trees. The most important issue is to maintain a balance between accuracy and efficiency: the algorithm should provide good classification performance with a reasonable response time.
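The accuracy-efficiency balance is exactly what Hoeffding-tree algorithms such as VFDT manage: a node splits only once enough stream examples have been seen that the best attribute is, with high probability, truly the best. A minimal sketch of the bound (the standard VFDT formulation, not code from this survey):

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Hoeffding bound used by streaming decision trees (e.g. VFDT):
    with probability 1 - delta, the observed mean of n samples lies
    within epsilon of the true mean."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

# Split when the gain difference between the two best attributes exceeds
# epsilon; for information gain, value_range = log2(number of classes).
R = math.log2(2)  # binary classification -> R = 1
for n in (100, 1000, 10000):
    print(n, hoeffding_bound(R, 1e-6, n))  # epsilon shrinks as n grows
```

The shrinking epsilon is what lets the tree commit to splits early on easy attributes while deferring hard decisions, trading a bounded loss of accuracy for single-pass efficiency.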

Keywords: big data, data streams, classification, decision tree

Procedia PDF Downloads 513
7411 Research on the Impact of Spatial Layout Design on College Students’ Learning and Mental Health: Analysis Based on a Smart Classroom Renovation Project in Shanghai, China

Authors: Zhang Dongqing

Abstract:

Concern for students' mental health and the application of advanced intelligent technologies are driving changes in teaching models. The traditional teacher-centered classroom is beginning to transform into a student-centered smart interactive learning environment. Nowadays, smart classrooms are compatible with constructivist learning. This theory emphasizes the role of teachers in the teaching process as helpers and facilitators of knowledge construction, with students learning through interaction. The spatial design of classrooms is closely related to the teaching model and should also be developed in the direction of smart classroom design. The goal is to explore the impact of smart classroom layout on the student-centered teaching environment and on teacher-student interaction under the guidance of constructivist learning theory, by combining the design process and feedback analysis of the smart renovation project on the campus of Tongji University in Shanghai. During the research process, the theoretical basis of constructivist learning was consolidated through literature research and case analysis. Integration and visual field analyses of the traditional and renovated indoor floor plans were conducted using space syntax tools. Finally, questionnaire surveys and interviews were used to collect data. The main conclusions are as follows: flexible spatial layouts can promote students' learning outcomes and mental health; the interactivity of smart classroom layouts differs and needs to be combined with different teaching models; and the public areas of teaching buildings can also improve the interactive learning atmosphere by adding discussion space. This article provides a data-based research basis for improving students' learning outcomes and mental health, and provides a reference for future smart classroom design.
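The integration analysis mentioned above rests on standard space syntax measures. The sketch below computes mean depth and relative asymmetry for a small hypothetical plan graph; the rooms and connections are invented for illustration and are not the paper's floor plans.

```python
from collections import deque

def mean_depth(adjacency, root):
    """Mean topological depth of all spaces reachable from `root`."""
    depth = {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for nbr in adjacency[node]:
            if nbr not in depth:
                depth[nbr] = depth[node] + 1
                queue.append(nbr)
    k = len(depth)
    return sum(depth.values()) / (k - 1)

def relative_asymmetry(adjacency, root):
    """RA = 2(MD - 1)/(k - 2); lower RA means a more integrated space."""
    k = len(adjacency)
    return 2 * (mean_depth(adjacency, root) - 1) / (k - 2)

# Hypothetical convex map: classroom (C) linked to a corridor (H), which
# links to a discussion area (D) and stairs (S).
plan = {"C": ["H"], "H": ["C", "D", "S"], "D": ["H"], "S": ["H"]}
print(relative_asymmetry(plan, "H"), relative_asymmetry(plan, "C"))
# the corridor (RA = 0.0) is more integrated than the classroom
```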

Keywords: spatial layout, smart classroom, space syntax, renovation, educational environment

Procedia PDF Downloads 65
7410 Processes and Application of Casting Simulation and Its Software’s

Authors: Surinder Pal, Ajay Gupta, Johny Khajuria

Abstract:

Casting simulation helps visualize mold filling and casting solidification; predict related defects like cold shuts, shrinkage porosity and hard spots; and optimize the casting design to achieve the desired quality with high yield. The flow and solidification of molten metals are, however, very complex phenomena that are difficult to simulate correctly by conventional computational techniques, especially when the part geometry is intricate and the required inputs (like thermo-physical properties and heat transfer coefficients) are not available. Simulation software is based on the process of modeling a real phenomenon with a set of mathematical formulas. It is, essentially, a program that allows the user to observe an operation through simulation without actually performing that operation. Simulation software is used widely to design equipment so that the final product will be as close to design specs as possible without expensive in-process modification. Simulation software with real-time response is often used in gaming, but it also has important industrial applications. When the penalty for improper operation is costly, as for airplane pilots, nuclear power plant operators, or chemical plant operators, a mockup of the actual control panel is connected to a real-time simulation of the physical response, giving valuable training experience without fear of a disastrous outcome. Each casting simulation package has its own strengths; MAGMA cast, for instance, is best suited to crack simulation. The latest-generation software AutoCAST, developed at IIT Bombay, provides a host of functions to support method engineers, including part thickness visualization, core design, multi-cavity mold design with common gating and feeding, application of various feed aids (feeder sleeves, chills, padding, etc.), simulation of mold filling and casting solidification, automatic optimization of feeders and gating driven by the desired quality level, and what-if cost analysis. 
IIT Bombay has developed a set of applications for the foundry industry to improve casting yield and quality. Casting simulation is a fast and efficient solution for the process, an advanced tool that is the result of more than 20 years of collaboration with major industrial partners and academic institutions around the world. In this paper, the process of casting simulation is studied.
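Solidification prediction in such tools ultimately rests on relations like Chvorinov's rule, t = B(V/A)^n. A minimal sketch, with an assumed mold constant and illustrative casting/riser geometries; these are not values from any particular package:

```python
def chvorinov_time(volume_cm3, area_cm2, mold_const=2.0, exponent=2.0):
    """Chvorinov's rule: solidification time t = B * (V/A)^n.
    The mold constant B (min/cm^2) and exponent n depend on the metal
    and mould; the defaults here are illustrative assumptions."""
    return mold_const * (volume_cm3 / area_cm2) ** exponent

# A feeder (riser) must solidify later than the casting it feeds, i.e.
# it needs a larger modulus V/A:
casting = chvorinov_time(1000.0, 600.0)    # plate-like casting
riser = chvorinov_time(1357.2, 678.6)      # cylindrical riser, h = d
print(casting, riser, riser > casting)
```

This is the kind of back-of-the-envelope check that feeder-optimization routines automate across the whole casting.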

Keywords: casting simulation software’s, simulation technique’s, casting simulation, processes

Procedia PDF Downloads 472
7409 The Usefulness of Premature Chromosome Condensation Scoring Module in Cell Response to Ionizing Radiation

Authors: K. Rawojć, J. Miszczyk, A. Możdżeń, A. Panek, J. Swakoń, M. Rydygier

Abstract:

Due to mitotic delay, a poor mitotic index and the disappearance of lymphocytes from peripheral blood circulation, assessing DNA damage after high-dose exposure is less effective. Conventional chromosome aberration analysis and the cytokinesis-blocked micronucleus assay do not provide accurate dose estimation or radiosensitivity prediction at doses higher than 6.0 Gy. For this reason, there is a need to establish reliable methods allowing analysis of biological effects after exposure in the high dose range, i.e., during particle radiotherapy. Lately, premature chromosome condensation (PCC) has become an important method in high-dose biodosimetry and a promising treatment modality for cancer patients. The aim of the study was to evaluate the usefulness of the drug-induced PCC scoring procedure in an experimental mode in which 100 G2/M cells were analyzed in different dose ranges. To test the consistency of the obtained results, scoring was performed by three independent persons in the same mode and following identical scoring criteria. Whole-body exposure was simulated in an in vitro experiment by irradiating whole blood collected from healthy donors with 60 MeV protons and 250 keV X-rays, in the range of 4.0 – 20.0 Gy. The drug-induced PCC assay was performed on human peripheral blood lymphocytes (HPBL) isolated after in vitro exposure. Cells were cultured for 48 hours with PHA; then, to achieve premature condensation, calyculin A was added. After Giemsa staining, chromosome spreads were photographed and manually analyzed by the scorers. The dose-effect curves were derived by counting the excess chromosome fragments. The results indicated adequate dose estimates for the whole-body exposure scenario in the high dose range for both studied types of radiation. Moreover, the compared results revealed no significant differences between scores, which has important implications for reducing the analysis time. 
These investigations were conducted as part of an extended examination of 60 MeV protons from the AIC-144 isochronous cyclotron at the Institute of Nuclear Physics in Kraków, Poland (IFJ PAN) by cytogenetic and molecular methods, and were partially supported by grant DEC-2013/09/D/NZ7/00324 from the National Science Centre, Poland.
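In the high dose range, PCC dose-effect data of this kind are often fitted with a simple linear model, Y = a + bD, which can then be inverted to estimate an unknown dose. A sketch with invented excess-fragment yields; the numbers are not the study's calibration data:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a + b*x (no libraries needed)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical excess-fragment yields per cell at the studied doses (Gy);
# a real calibration curve would come from the scored PCC spreads.
doses = [4.0, 8.0, 12.0, 16.0, 20.0]
yields = [1.9, 4.1, 6.0, 7.8, 10.2]
a, b = linear_fit(doses, yields)
estimated_dose = (5.0 - a) / b   # invert the curve for an unknown sample
print(round(b, 4), round(estimated_dose, 2))
```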

Keywords: cell response to radiation exposure, drug induced premature chromosome condensation, premature chromosome condensation procedure, proton therapy

Procedia PDF Downloads 348
7408 A Study on the Application of Machine Learning and Deep Learning Techniques for Skin Cancer Detection

Authors: Hritwik Ghosh, Irfan Sadiq Rahat, Sachi Nandan Mohanty, J. V. R. Ravindra

Abstract:

In the rapidly evolving landscape of medical diagnostics, the early detection and accurate classification of skin cancer remain paramount for effective treatment outcomes. This research delves into the transformative potential of Artificial Intelligence (AI), specifically Deep Learning (DL), as a tool for discerning and categorizing various skin conditions. Utilizing a diverse dataset of 3,000 images representing nine distinct skin conditions, we confront the inherent challenge of class imbalance. This imbalance, where conditions like melanomas are over-represented, is addressed by incorporating class weights during the model training phase, ensuring an equitable representation of all conditions in the learning process. Our pioneering approach introduces a hybrid model, amalgamating the strengths of two renowned Convolutional Neural Networks (CNNs), VGG16 and ResNet50. These networks, pre-trained on the ImageNet dataset, are adept at extracting intricate features from images. By synergizing these models, our research aims to capture a holistic set of features, thereby bolstering classification performance. Preliminary findings underscore the hybrid model's superiority over individual models, showcasing its prowess in feature extraction and classification. Moreover, the research emphasizes the significance of rigorous data pre-processing, including image resizing, color normalization, and segmentation, in ensuring data quality and model reliability. In essence, this study illuminates the promising role of AI and DL in revolutionizing skin cancer diagnostics, offering insights into its potential applications in broader medical domains.
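One common way to realize the class weighting described above is inverse-frequency weighting, w_c = N/(K·n_c), where N is the dataset size, K the number of classes and n_c the count for class c. This is a standard scheme rather than necessarily the authors' exact formula, and the per-class counts below are hypothetical:

```python
def class_weights(counts):
    """Inverse-frequency class weights, w_c = N / (K * n_c): rarer
    conditions receive proportionally larger weight in the loss."""
    total, k = sum(counts.values()), len(counts)
    return {c: total / (k * n) for c, n in counts.items()}

# Hypothetical per-class image counts for an imbalanced dermatology set
counts = {"melanoma": 900, "nevus": 300, "basal_cell": 150}
print(class_weights(counts))  # the over-represented class is down-weighted
```

Passing such a dictionary to the training loop scales each sample's loss, so the gradient signal from under-represented conditions is not drowned out.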

Keywords: artificial intelligence, machine learning, deep learning, skin cancer, dermatology, convolutional neural networks, image classification, computer vision, healthcare technology, cancer detection, medical imaging

Procedia PDF Downloads 76
7407 A Methodological Approach to Digital Engineering Adoption and Implementation for Organizations

Authors: Sadia H. Syeda, Zain H. Malik

Abstract:

As systems continue to become more complex and the interdependencies of processes and sub-systems continue to grow and transform, the need for a comprehensive method of tracking and linking the lifecycle of systems in a digital form becomes ever more critical. Digital engineering (DE) provides an approach to managing an authoritative data source that links, tracks, and updates system data as it evolves and grows throughout the system development lifecycle. DE enables the development, tracking, and sharing of system data, models, and other related artifacts in a digital environment accessible to all necessary stakeholders. The DE environment provides an integrated electronic repository that enables traceability between design, engineering, and sustainment artifacts. The primary objective of DE activities is to develop a set of integrated, coherent, and consistent system models for the program. It is envisioned to provide a collaborative information-sharing environment for various stakeholders, including operational users, acquisition personnel, engineering personnel, and logistics and sustainment personnel. Examining the processes that DE can support in the systems engineering life cycle (SELC) is a primary step in the DE adoption and implementation journey. Through an analysis of the U.S. Department of Defense’s (DoD) Office of the Secretary of Defense (OSD) Digital Engineering Strategy and its implementation, together with examples of DE implementation by industry and technical organizations, this paper provides descriptions of current DE processes and best practices for implementing DE across an enterprise. This will help identify the capabilities, environment, and infrastructure needed to develop a potential roadmap for implementing DE practices consistent with an organization’s business strategy. A capability maturity matrix will be provided to assess the organization’s DE maturity, emphasizing how all the SELC elements interlink to form a cohesive ecosystem. 
If implemented, DE can increase efficiency and improve the quality and outcomes of systems engineering processes.

Keywords: digital engineering, digital environment, digital maturity model, single source of truth, systems engineering life-cycle

Procedia PDF Downloads 87
7406 The Layout Analysis of Handwriting Characters and the Fusion of Multi-style Ancient Books’ Background

Authors: Yaolin Tian, Shanxiong Chen, Fujia Zhao, Xiaoyu Lin, Hailing Xiong

Abstract:

Ancient books are significant inheritors of culture, and their background textures convey potential historical information. However, multi-style texture recovery of ancient books has received little attention. Restricted by insufficient ancient textures and a complex handling process, the generation of ancient textures confronts new challenges. For instance, training without sufficient data usually brings about overfitting or mode collapse, so some of the outputs are prone to be fake. Recently, image generation and style transfer based on deep learning have been widely applied in computer vision, and breakthroughs within the field make it possible to conduct research on multi-style texture recovery of ancient books. Under these circumstances, we proposed a layout analysis and image fusion system. Firstly, we trained models using Deep Convolutional Generative Adversarial Networks (DCGAN) to synthesize multi-style ancient textures; then, we analyzed layouts based on the Position Rearrangement (PR) algorithm that we proposed to adjust the layout structure of the foreground content; at last, we achieved our goal by fusing the rearranged foreground texts and the generated background. In the experiments, diversified samples such as ancient Yi, Jurchen and Seal scripts were selected as our training sets. The performance of different fine-tuned models was then gradually improved by adjusting the DCGAN model in both parameters and structure. In order to evaluate the results scientifically, the cross-entropy loss function and the Fréchet Inception Distance (FID) were selected as our assessment criteria. Eventually, we obtained model M8 with the lowest FID score. Compared with the DCGAN model proposed by Radford et al., the FID score of M8 improved by 19.26%, profoundly enhancing the quality of the synthetic images.
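The FID used here compares the Gaussian statistics of real and generated image features. In one dimension the formula reduces to a closed form that makes its behaviour easy to see; the full metric uses mean vectors, covariance matrices and a matrix square root, so the sketch below is illustrative only:

```python
def fid_1d(mu1, sigma1, mu2, sigma2):
    """Frechet distance between two 1-D Gaussians:
    (mu1 - mu2)^2 + (sigma1 - sigma2)^2. The same quantity, computed
    over multivariate feature statistics, is the FID used to score
    generated textures (lower is better)."""
    return (mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2

print(fid_1d(0.0, 1.0, 0.0, 1.0))   # identical distributions score 0.0
print(fid_1d(0.0, 1.0, 0.5, 1.5))   # distance grows with mismatch: 0.5
```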

Keywords: deep learning, image fusion, image generation, layout analysis

Procedia PDF Downloads 148
7405 Indirect Genotoxicity of Diesel Engine Emission: An in vivo Study Under Controlled Conditions

Authors: Y. Landkocz, P. Gosset, A. Héliot, C. Corbière, C. Vendeville, V. Keravec, S. Billet, A. Verdin, C. Monteil, D. Préterre, J-P. Morin, F. Sichel, T. Douki, P. J. Martin

Abstract:

Air pollution produced by automobile traffic is one of the main sources of pollutants in the urban atmosphere and is largely due to the exhausts of diesel-engine-powered vehicles. The International Agency for Research on Cancer, which is part of the World Health Organization, classified diesel engine exhaust as carcinogenic to humans (Group 1) in 2012, based on sufficient evidence that exposure is associated with an increased risk of lung cancer. Among the strategies aimed at limiting exhausts in order to take into account the health impact of automobile pollution, filtration of the emissions and the use of biofuels are being developed, but their toxicological impact is largely unknown. Diesel exhausts are indeed complex mixtures of toxic substances that are difficult to study from a toxicological point of view, due to the necessary characterization of the pollutants, sampling difficulties, potential synergy between the compounds and the wide variety of biological effects. Here, we studied the potential indirect genotoxicity of diesel engine emissions through on-line exposure of rats in inhalation chambers to a subchronic, high but realistic dose. Following exposure to standard gasoil +/- rapeseed methyl ester, either upstream or downstream of a particle filter, or control treatment, rats were sacrificed and their lungs collected. The following indirect genotoxic parameters were measured: (i) telomerase activity and telomere length, associated with rTERT and rTERC gene expression by RT-qPCR, on frozen lungs; (ii) γH2AX quantification, representing double-strand DNA breaks, by immunohistochemistry on formalin-fixed paraffin-embedded (FFPE) lung samples. 
These preliminary results will then be associated with the global cellular response analyzed by pan-genomic microarrays, monitoring of oxidative stress and the quantification of primary DNA lesions, in order to identify biological markers associated with a potential pro-carcinogenic response to diesel or biodiesel, with or without filters, in a relevant system of in vivo exposure.
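Relative gene expression from RT-qPCR data, such as the rTERT and rTERC measurements here, is conventionally computed with the 2^-ΔΔCt (Livak) method. A sketch with invented Ct values, not data from this study:

```python
def fold_change(ct_target_exp, ct_ref_exp, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt (Livak) method: Ct values for
    a target gene (e.g. rTERT) normalised to a reference gene, in an
    exposed sample versus a control sample."""
    ddct = (ct_target_exp - ct_ref_exp) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Illustrative Ct values: target amplifies 2 cycles earlier (relative to
# the reference gene) in the exposed sample than in the control.
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0: four-fold up-regulation
```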

Keywords: diesel exhaust exposed rats, γH2AX, indirect genotoxicity, lung carcinogenicity, telomerase activity, telomeres length

Procedia PDF Downloads 387
7404 Immuno-Modulatory Role of Weeds in Feeds of Cyprinus Carpio

Authors: Vipin Kumar Verma, Neeta Sehgal, Om Prakash

Abstract:

Cyprinus carpio has a widespread occurrence in the lakes and rivers of Europe and Asia. Heavy losses in the natural environment due to anthropogenic activities, including pollution as well as pathogenic diseases, have landed this fish on the IUCN Red List of vulnerable species. The significance of a suitable diet in preserving the health status of fish is widely recognized. In the present study, artificial feed supplemented with leaves of two weed plants, Eichhornia crassipes and Ricinus communis, was evaluated for its role on the fish immune system. To achieve this objective, fish were acclimatized to laboratory conditions (25 ± 1 °C; 12L:12D) for 10 days prior to the start of the experiment and divided into 4 groups: non-challenged (negative control = A) and challenged [positive control (B) and experimental (C and D)]. Groups A and B were fed non-supplemented feed, while groups C and D were fed feed supplemented with 5% Eichhornia crassipes and 5% Ricinus communis, respectively. The supplemented feeds were evaluated for their effect on growth, health, the immune system and disease resistance in fish challenged with Vibrio harveyi. Fingerlings of C. carpio (weight 2.0 ± 0.5 g) were exposed to a fresh overnight culture of V. harveyi through bath immunization (concentration 2 × 10⁵) for 2 hours at 10-day intervals for 40 days. Growth was monitored through the increase in relative weight. The mortality due to bacterial infection as well as due to the effect of feed was recorded accordingly. The immune response of fish was analyzed through differential leucocyte counts, percentage phagocytosis and phagocytic index. The effects of V. harveyi on fish organs were examined through histopathological examination of internal organs such as the spleen, liver and kidney. The change in immune response was also observed through gene expression analysis. 
The antioxidant potential of the plant extracts was measured through DPPH and FRAP assays, and the amounts of total phenols and flavonoids were calculated through biochemical analysis. The chemical composition of the plants’ methanol extracts was determined by GC-MS analysis, which showed the presence of various secondary metabolites and other compounds. The investigation revealed an immuno-modulatory effect of the plants when supplemented in the artificial feed of fish.
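DPPH radical-scavenging activity, as measured above, is conventionally reported as percent inhibition of absorbance at 517 nm. A sketch with illustrative absorbance readings, not the study's measurements:

```python
def dpph_inhibition(a_control, a_sample):
    """DPPH radical-scavenging activity as percent inhibition:
    (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

# Illustrative absorbance readings at 517 nm
print(dpph_inhibition(0.80, 0.32))  # ~60 % scavenging activity
```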

Keywords: immuno-modulation, gc-ms, Cyprinus carpio, Eichhornia crassipes, Ricinus communis

Procedia PDF Downloads 481
7403 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks

Authors: Mst Shapna Akter, Hossain Shahriar

Abstract:

One of the most important challenges in the field of software code auditing is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited, leading to system compromise, data leakage, or denial of service. Open-source C and C++ code is now available to create a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from source code. The source code is first converted into a minimal intermediate representation to remove irrelevant components and shorten dependencies. Moreover, we keep the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning models such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we proposed a neural network model which can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing a minimal text representation and those containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as features, but require longer execution times, because the word embedding algorithm adds complexity to the overall system.
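The reported metrics follow directly from the confusion counts. A minimal sketch of precision, recall and F1; the counts below are hypothetical, not results from this study:

```python
def prf1(tp, fp, fn):
    """Precision, recall and F1 from raw confusion counts: the metrics
    used to compare embedding/classifier combinations."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical confusion counts for a vulnerability classifier
p, r, f = prf1(tp=80, fp=20, fn=40)
print(p, r, f)  # F1 is the harmonic mean, pulled toward the lower of p, r
```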

Keywords: cyber security, vulnerability detection, neural networks, feature extraction

Procedia PDF Downloads 81