Search results for: size driven magnetic ordering
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8550

660 DNA Hypomethylating Agents Induced Histone Acetylation Changes in Leukemia

Authors: Sridhar A. Malkaram, Tamer E. Fandy

Abstract:

Purpose: 5-Azacytidine (5AC) and decitabine (DC) are DNA hypomethylating agents. We recently demonstrated that both drugs increase the enzymatic activity of the histone deacetylase enzyme SIRT6. Accordingly, we compared the genome-wide H3K9 acetylation changes induced by both drugs in leukemia cells. Description of Methods & Materials: Mononuclear cells from the bone marrow of six de-identified naive acute myeloid leukemia (AML) patients were cultured with either 500 nM of DC or 5AC for 72 h, followed by ChIP-Seq analysis using a ChIP-validated acetylated-H3K9 (H3K9ac) antibody. ChIP-Seq libraries were prepared from treated and untreated cells using the SMARTer ThruPLEX DNA-seq kit (Takara Bio, USA) according to the manufacturer's instructions. Libraries were purified and size-selected with AMPure XP beads at a 1:1 (v/v) ratio. All libraries were pooled prior to sequencing on an Illumina HiSeq 1500. The dual-indexed single-read Rapid Run was performed with 1x120 cycles at a 5 pM final concentration of the library pool. Sequence reads with average Phred quality < 20, with length < 35 bp, PCR duplicates, and reads aligning to blacklisted regions of the genome were filtered out using Trim Galore v0.4.4 and cutadapt v1.18. Reads were aligned to the reference human genome (hg38) using Bowtie v2.3.4.1 in end-to-end alignment mode. H3K9ac-enriched (peak) regions were identified using diffReps v1.55.4, with input samples used for background correction. The statistical significance of differential peak counts was assessed with a negative binomial test, using all individuals as replicates. Data & Results: The data from the six patients showed significant (Padj < 0.05) acetylation changes at 925 loci after 5AC treatment versus 182 loci after DC treatment. Both drugs induced H3K9 acetylation changes at different chromosomal regions, including promoters, coding exons, introns, and distal intergenic regions.
Ten common genes showed H3K9 acetylation changes with both drugs. Approximately 84% of the genes showed an H3K9 acetylation decrease with 5AC, versus only 54% with DC. Figures 1 and 2 show the heatmaps for the top 100 genes and the 99 genes showing an H3K9 acetylation decrease after 5AC treatment and DC treatment, respectively. Conclusion: Despite the similarity in hypomethylating activity and chemical structure, the effects of the two drugs on H3K9 acetylation were significantly different. More changes in H3K9 acetylation were observed after 5AC treatment than after DC treatment. The impact of these changes on gene expression and on the clinical efficacy of these drugs requires further investigation.
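The Padj < 0.05 threshold above implies a multiple-testing correction across loci. Purely as an illustration (diffReps handles this internally), a minimal sketch of Benjamini-Hochberg FDR adjustment, with hypothetical raw p-values:

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR adjustment of raw p-values."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest p-value downwards
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    adj = np.clip(adj, 0.0, 1.0)
    out = np.empty(n)
    out[order] = adj
    return out

# hypothetical raw p-values for differential H3K9ac peaks
raw = [0.001, 0.009, 0.04, 0.20, 0.65]
padj = benjamini_hochberg(raw)
significant = padj < 0.05
```

Loci whose adjusted p-value stays below 0.05 would be reported as differentially acetylated.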

Keywords: DNA methylation, leukemia, decitabine, 5-Azacytidine, epigenetics

Procedia PDF Downloads 147
659 Participatory Cartography for Disaster Reduction in Progreso, Yucatan, Mexico

Authors: Gustavo Cruz-Bello

Abstract:

Progreso is a coastal community in Yucatan, Mexico, highly exposed to floods produced by severe storms and tropical cyclones. A participatory cartography approach was used to help reduce flood disasters and assess social vulnerability within the community. The first step was to engage local authorities in risk management to facilitate the process. Two workshops were then conducted. In the first, a poster-size print of a high-spatial-resolution satellite image of the town was used to gather information from the participants: eight women and seven men, among them construction workers, students, government employees and fishermen, with ages ranging between 23 and 58 years. As a first task, participants were asked to locate emblematic places on the image to familiarize themselves with it. They were then asked to locate areas that get flooded and the buildings they use as refuges, to list actions they usually take to reduce vulnerability, and to collectively come up with other actions that might reduce disasters. The spatial information generated at the workshops was digitized and integrated into a GIS environment. A printed version of the map was reviewed by local risk management experts, who validated the feasibility of the proposed actions. In the second workshop, we brought the information back to the community for feedback. Additionally, a survey was administered to one household per block in the community to obtain socioeconomic, prevention and adaptation data. The information generated in the workshops was contrasted with the survey data through t and chi-squared tests, to test the hypothesis that poorer or less educated people are less prepared to face floods (i.e., more vulnerable) and live in or near areas with a higher presence of floods. The results showed that a great majority of people in the community are aware of the hazard and are prepared to face it.
However, no consistent relationship was found between regularly flooded areas and people's average years of education, household services, or house modifications made against heavy rains. We can say that the participatory cartography intervention made participants aware of their vulnerability and led them to reflect collectively on actions that can reduce flood disasters. Participants also considered that the final map could be used as an instrument of communication and negotiation with NGOs and government authorities. It was not found that poorer and less educated people are located in areas with a higher presence of floods.
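The t and chi-squared comparisons described above can be sketched as follows; the survey figures here are hypothetical stand-ins, not the study's data:

```python
import numpy as np
from scipy import stats

# hypothetical survey data: years of education in flooded vs. non-flooded blocks
edu_flooded = np.array([6, 8, 9, 7, 10, 8, 6, 9])
edu_dry     = np.array([7, 9, 8, 8, 11, 9, 7, 10])

# Welch's t-test: does mean education differ between the two groups?
t_stat, t_p = stats.ttest_ind(edu_flooded, edu_dry, equal_var=False)

# chi-squared test of independence: preparedness (yes/no) vs. flooded (yes/no)
contingency = np.array([[30, 12],   # flooded blocks: prepared / not prepared
                        [28, 14]])  # dry blocks:     prepared / not prepared
chi2, chi_p, dof, expected = stats.chi2_contingency(contingency)
```

A large t_p or chi_p (above the chosen significance level) would correspond to the study's finding of no consistent relationship.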

Keywords: climate change, floods, Mexico, participatory mapping, social vulnerability

Procedia PDF Downloads 113
658 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends

Authors: Zheng Yuxun

Abstract:

This review critically assesses the advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspections to the adoption of automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing. It discusses various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices and therefore demand precise identification. The narrative then turns to the technological evolution in defect detection, depicting a shift from rudimentary methods like optical microscopy and basic electronic tests to more sophisticated techniques, including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advance towards more adaptive, accurate, and rapid defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs of advanced imaging technologies, and the demand for processing speeds that align with mass production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing speed.
Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This thorough analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.
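One of the basic automated-inspection techniques surveyed above is die-to-die comparison: a die image is subtracted from a defect-free reference, and outlier pixels are flagged. A minimal sketch on synthetic data (the images, noise levels, and threshold are all hypothetical):

```python
import numpy as np

def detect_defects(die, reference, threshold=3.0):
    """Flag pixels whose deviation from a defect-free reference die exceeds
    `threshold` robust standard deviations (die-to-die comparison)."""
    diff = die.astype(float) - reference.astype(float)
    # robust scale estimate from the median absolute deviation
    mad = np.median(np.abs(diff - np.median(diff)))
    scale = 1.4826 * mad if mad > 0 else 1.0
    return np.abs(diff) > threshold * scale

# hypothetical 8x8 die image with one injected particle defect
rng = np.random.default_rng(0)
reference = rng.normal(100.0, 1.0, (8, 8))
die = reference + rng.normal(0.0, 1.0, (8, 8))
die[3, 5] += 25.0          # injected bright defect
mask = detect_defects(die, reference)
```

Real inspection systems add alignment, illumination correction, and learned classifiers on top of this kind of residual thresholding.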

Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis

Procedia PDF Downloads 51
657 Heat Transfer Performance of a Small Cold Plate with Uni-Directional Porous Copper for Cooling Power Electronics

Authors: K. Yuki, R. Tsuji, K. Takai, S. Aramaki, R. Kibushi, N. Unno, K. Suzuki

Abstract:

A small cold plate with uni-directional porous copper is proposed for cooling power electronics, such as an on-vehicle inverter with heat generation of approximately 500 W/cm². The uni-directional porous copper, with its pores oriented perpendicular to the heat transfer surface, is soldered to a grooved heat transfer surface. This structure enables the cooling liquid to evaporate in the pores of the porous copper and the vapor to then discharge through the grooves. To minimize the size of the cold plate, a double flow channel concept is introduced in its design. The cold plate consists of a base plate, a spacer, and a vapor discharging plate, 12 mm in total thickness. The base plate has multiple nozzles of 1.0 mm in diameter for the liquid supply and 4 slits of 2.0 mm in width for vapor discharge, and is attached onto the top surface of the porous copper plate of 20 mm in diameter and 5.0 mm in thickness. The pore size is 0.36 mm and the porosity is 36%. The cooling liquid flows into the porous copper as an impinging jet from the multiple nozzles, and the vapor generated in the pores is then discharged through the grooves and the vapor slits outside the cold plate. The heated test section consists of the cold plate described above and a heat transfer copper block with 6 cartridge heaters. The cross section of the heat transfer block is reduced in order to increase the heat flux. The top surface of the block is the grooved heat transfer surface of 10 mm in diameter, to which the porous copper is soldered. The grooves are fabricated like latticework; their width and depth are 1.0 mm and 0.5 mm, respectively. By embedding three thermocouples in the cylindrical part of the heat transfer block, the temperature of the heat transfer surface and the heat flux are extrapolated in a steady state. In this experiment, the flow rate is 0.5 L/min and the flow velocity at each nozzle is 0.27 m/s. The liquid inlet temperature is 60 °C.
The experimental results prove that, in the single-phase heat transfer regime, the heat transfer performance of the cold plate with the uni-directional porous copper is 2.1 times higher than that without the porous copper, although the pressure loss with the porous copper also becomes higher. In the two-phase regime, the critical heat flux increases by approximately 35% with the uni-directional porous copper, compared with the CHF of the multiple impinging jet flow alone. In addition, we confirmed that these heat transfer data were much higher than those of an ordinary single impinging jet flow. These data demonstrate the high potential of the cold plate with uni-directional porous copper from the viewpoint of not only heat transfer performance but also energy saving.
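The surface temperature and heat flux extrapolation from the three embedded thermocouples is a standard one-dimensional Fourier-conduction fit. A sketch with hypothetical thermocouple positions and readings (the actual positions and conductivity would come from the experiment):

```python
import numpy as np

# hypothetical thermocouple readings in the copper block (1-D steady conduction)
x = np.array([2.0e-3, 7.0e-3, 12.0e-3])   # depth below the heated surface [m]
T = np.array([96.0, 108.0, 120.0])        # steady-state temperatures [degC]
k_cu = 390.0                              # thermal conductivity of copper [W/(m K)]

# linear fit T(x) = a*x + b over the three thermocouples
a, b = np.polyfit(x, T, 1)
q = k_cu * a          # heat flux toward the surface [W/m^2], Fourier's law
T_surface = b         # temperature extrapolated to the surface (x = 0) [degC]
```

With these hypothetical numbers the fit gives a flux of about 94 W/cm², in the range of the heat loads discussed in the abstract.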

Keywords: cooling, cold plate, uni-directional porous media, heat transfer

Procedia PDF Downloads 295
656 Flow Field Optimization for Proton Exchange Membrane Fuel Cells

Authors: Xiao-Dong Wang, Wei-Mon Yan

Abstract:

The flow field design in the bipolar plates affects the performance of the proton exchange membrane (PEM) fuel cell. This work adopted a combined optimization procedure, including a simplified conjugate-gradient method and a completely three-dimensional, two-phase, non-isothermal fuel cell model, to look for the optimal flow field design for a single serpentine fuel cell of size 9×9 mm with five channels. For the direct solution, the two-fluid method was adopted to incorporate heat effects using energy equations for the entire cell. The model assumes that the system is steady, the inlet reactants are ideal gases, the flow is laminar, and the porous layers such as the diffusion layer, catalyst layer and PEM are isotropic. The model includes continuity, momentum and species equations for gaseous species; liquid water transport equations in the channels, gas diffusion layers, and catalyst layers; a water transport equation in the membrane; and electron and proton transport equations. The Butler-Volmer equation was used to describe electrochemical reactions in the catalyst layers. The cell output power density Pcell is maximized subject to an optimal set of channel heights, H1-H5, and channel widths, W2-W5. The basic case with all channel heights and widths set at 1 mm yields Pcell = 7260 W m⁻². The optimal design displays a tapered characteristic for channels 1, 3 and 4, and a diverging characteristic in height for channels 2 and 5, producing Pcell = 8894 W m⁻², an increase of about 22.5%. The reduced heights of channels 2-4 significantly increase sub-rib convection, effectively removing liquid water and enhancing oxygen transport in the gas diffusion layer. The final diverging channel minimizes the leakage of fuel to the outlet via sub-rib convection from channel 4 to channel 5. A near-optimal design that is easily manufactured yet incurs little loss in cell performance is also tested.
The use of a straight final channel of 0.1 mm height leads to a 7.37% power loss, while the design with all channel widths set to 1 mm and the optimal channel heights obtained above yields only a 1.68% loss of current density. The presence of a final diverging channel has a greater impact on cell performance than the fine adjustment of channel width under the simulation conditions studied herein.

Keywords: optimization, flow field design, simplified conjugate-gradient method, serpentine flow field, sub-rib convection

Procedia PDF Downloads 296
655 Study of Phase Separation Behavior in Flexible Polyurethane Foam

Authors: El Hatka Hicham, Hafidi Youssef, Saghiri Khalid, Ittobane Najim

Abstract:

Flexible polyurethane foam (FPUF) is a low-density cellular material generally used as a cushioning material in many applications such as furniture, bedding, packaging, etc. It is commercially produced in a continuous process, where a reactive mixture of foam chemicals is poured onto a moving conveyor. FPUFs are produced by catalytically balancing the two reactions involved: the blowing reaction (isocyanate-water) and the gelation reaction (isocyanate-polyol). The microstructure of FPUF is generally composed of soft phases (polyol phases) and rigid domains that separate into two domains of different sizes: the rigid polyurea microdomains and the macrodomains (larger aggregates). The morphological features of FPUF are strongly influenced by the phase separation morphology, which plays a key role in determining the global FPUF properties. This phase-separated morphology results from a thermodynamic incompatibility between soft segments derived from aliphatic polyether and hard segments derived from the commonly used aromatic isocyanate. In order to improve the properties of FPUF against the different stresses this material faces during use, we report in this work a study of the phase separation phenomenon in FPUF, examined using SAXS, WAXS, and FTIR. With these techniques, we studied the effect of water, isocyanates, and alkaline chlorides on the phase separation behavior: SAXS was used to study the microphase-separated morphology, WAXS to examine the nature of the hard segment packing, and FTIR to investigate the hydrogen bonding characteristics of the materials studied. The prepared foams were shown to have different levels of urea phase connectivity; increasing the water content in the FPUF formulation increases the amount of urea formed and, consequently, the size of the urea aggregates.
Alkali chlorides (NaCl, KCl, and LiCl) incorporated into FPUF formulations were shown to prevent hydrogen bond formation and subsequently alter the rigid domains. FPUFs prepared with different isocyanate structures showed that urea aggregates form with difficulty in foams prepared with an asymmetric diisocyanate, while they form more easily in foams prepared with symmetric and aliphatic diisocyanates.
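In SAXS studies of this kind, the average hard-domain spacing is commonly obtained from the first-order scattering peak via the Bragg relation d = 2π/q. A one-line sketch (the peak position below is hypothetical, not a measured value from this work):

```python
import math

def bragg_spacing(q_peak_inv_nm):
    """Interdomain spacing d = 2*pi/q from a SAXS peak position (q in nm^-1)."""
    return 2.0 * math.pi / q_peak_inv_nm

# hypothetical first-order SAXS peak for hard-domain spacing in a polyurethane
d = bragg_spacing(0.7)   # interdomain spacing in nm
```

Shifts of the peak toward lower q (larger d) with increasing water content would be consistent with the growth of urea aggregates reported above.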

Keywords: flexible polyurethane foam, hard segments, phase separation, soft segments

Procedia PDF Downloads 162
654 Study of Open Spaces in Urban Residential Clusters in India

Authors: Renuka G. Oka

Abstract:

From chowks to streets to verandahs to courtyards, residential open spaces hold a very significant place in the traditional urban neighborhoods of India. At various levels of intersection, the open spaces, with attributes like juxtaposition with the built fabric, scale, climate sensitivity and response, multi-functionality, etc., reflect and respond to the patterns of human interaction. These spaces also tend to be quite well utilized. On the other hand, it is common to see imbalanced utilization of open spaces in recently planned residential clusters. This may be due to a lack of nearby activity generators, poor locations, excess provision, or improper incorporation of the aforementioned design attributes. These casual observations suggest the necessity of a systematic study of current residential open spaces. This exploratory study thus attempts to draw lessons through a structured inspection of residential open spaces, to understand the effective environment as revealed through their use patterns. Here, residential open spaces are considered in a wide sense, incorporating all the un-built fabric around; they thus include both use spaces and access spaces. For the study, open spaces in ten exemplary housing clusters/societies built during the last ten years across India are examined. A threefold inquiry is attempted. The first relates to identifying and determining the effects of various physical factors, such as space organization, size, hierarchy, and thermal and optical comfort, on the performance of residential open spaces. The second sets out to understand socio-cultural variations in values, lifestyles, and beliefs, which determine the activity choices and behavioral preferences of users of the respective residential open spaces. The third further examines the application of these research findings to the design process, to derive meaningful and qualitative design advice.
However, the study also emphasizes developing a suitable framework of analysis and carving out appropriate methods and approaches to probe these aspects of the inquiry. Given this emphasis, a considerable portion of the research details the conceptual framework for the study, supported by an in-depth search of the available literature. The findings are translated into design solutions that integrate open space systems with the overall design process for residential clusters. Open spaces in residential areas present great complexities, both in their use patterns and in the determinants of their functional responses. The broad aim of the study is, therefore, to arrive at a reconsideration of the standards and qualitative parameters used by designers, on the basis of a more substantial inquiry into the use patterns of open spaces in residential areas.

Keywords: open spaces, physical and social determinants, residential clusters, use patterns

Procedia PDF Downloads 148
653 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Skew detection and correction form an important part of digital document analysis, because uncompensated skew can degrade document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed, even at a small angle. Once a document has been digitized through the scanning system and binarized, skew correction is required before further image analysis. Considerable research effort has gone into this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared on several performance criteria; the most important are the accuracy of skew angle detection, the detectable range of skew angles, processing speed, computational complexity and, consequently, the memory space used. The standard Hough Transform has been successfully applied to text document skew angle estimation. However, its accuracy depends largely on how fine the angular step size is; higher accuracy therefore costs more time and memory, especially when the number of pixels is considerably large. Whenever the Hough Transform is used, there is always a trade-off between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough Transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm resolves the tension between memory space, running time and accuracy. Our algorithm starts by estimating the angle to zero decimal places using the standard Hough Transform, achieving minimal running time and memory use, but with limited accuracy.
Then, to increase accuracy, suppose the angle estimated by the basic Hough algorithm is x degrees; the basic algorithm is then run again over a narrow range around x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. Our skew estimation and correction procedure for text images is implemented in MATLAB. Memory usage and processing time are also tabulated, assuming skew angles between 0° and 45°. The MATLAB simulation results show the high performance of our algorithm, with less computational time and memory space used in detecting document skew, for a variety of documents with different levels of complexity.
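The coarse-to-fine idea above can be sketched as follows: a Hough-style score is computed over integer-degree candidates first, then over a 0.1-degree grid around the coarse winner. This is an illustrative reimplementation on synthetic point data, not the authors' MATLAB code:

```python
import numpy as np

def hough_score(pts, thetas_deg, rho_res=1.0):
    """For each candidate skew angle, the height of the strongest
    peak in the projection (rho) histogram."""
    scores = []
    for t in np.deg2rad(thetas_deg):
        # project points perpendicular to a line of slope tan(t):
        # points on one text line share the same rho at the true skew angle
        rho = pts[:, 1] * np.cos(t) - pts[:, 0] * np.sin(t)
        counts = np.bincount(((rho - rho.min()) / rho_res).astype(int))
        scores.append(counts.max())
    return np.array(scores)

def estimate_skew(pts, coarse_step=1.0, fine_step=0.1):
    """Coarse pass at integer degrees, then a fine pass around the winner."""
    coarse = np.arange(-45.0, 45.0 + coarse_step, coarse_step)
    t0 = coarse[np.argmax(hough_score(pts, coarse))]
    fine = np.arange(t0 - 1.0, t0 + 1.0 + fine_step, fine_step)
    return fine[np.argmax(hough_score(pts, fine))]

# synthetic "text lines": points along five baselines skewed by 3.4 degrees
rng = np.random.default_rng(1)
x = rng.uniform(0, 400, 3000)
baseline = rng.integers(0, 5, 3000) * 40.0
y = baseline + x * np.tan(np.deg2rad(3.4)) + rng.normal(0, 0.2, 3000)
pts = np.column_stack([x, y])
skew = estimate_skew(pts)
```

The fine pass searches only 21 candidate angles instead of the 901 a uniform 0.1-degree grid over ±45° would need, which is the space/time saving the paper describes.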

Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document

Procedia PDF Downloads 158
652 Analysis of Sea Waves Characteristics and Assessment of Potential Wave Power in Egyptian Mediterranean Waters

Authors: Ahmed A. El-Gindy, Elham S. El-Nashar, Abdallah Nafaa, Sameh El-Kafrawy

Abstract:

The generation of energy from marine sources has become one of the most attractive options, since it is clean and environmentally friendly. Egypt has long shores along the Mediterranean, with important cities that need energy resources, and significant wave energy is available; however, no detailed studies have been done on the wave energy distribution in Egyptian waters. The objective of this paper is to assess the wave power available in Egyptian waters, to support the choice of the most suitable devices for this area. This paper deals with the characteristics and power of the offshore waves in Egyptian waters. Since field observations of waves are infrequent and require much technical work, European Centre for Medium-Range Weather Forecasts (ECMWF) interim reanalysis data for the Mediterranean, with a relatively coarse grid size of 0.75 degrees, are used in the present study for a preliminary assessment of sea wave characteristics and power. The data cover the period from 2012 to 2014 and comprise significant wave height (swh), mean wave period (mwp) and wave direction at six-hourly intervals, at seven chosen stations and at grid points covering Egyptian waters. The wave power (wp) formula was used to calculate the energy flux. Descriptive statistics, including monthly means and standard deviations of swh, mwp, and wp, were computed, and percentiles of wave heights and their corresponding power were derived as a tool for choosing the technology best suited to the site. Surfer software is used to map the spatial distributions of wp. The analysis of data at the seven chosen stations determined the potential wp off important Egyptian cities. Offshore of Al Saloum and Marsa Matruh, the highest wp occurred in January and February (16.93-18.05) ± (18.08-22.12) kW/m, while the lowest occurred in June and October (1.49-1.69) ± (1.45-1.74) kW/m.
In front of Alexandria and Rashid, the highest wp occurred in January and February (16.93-18.05) ± (18.08-22.12) kW/m, while the lowest occurred in June and September (1.29-2.01) ± (1.31-1.83) kW/m. In front of Damietta and Port Said, the highest wp occurred in February (14.29-17.61) ± (21.61-27.10) kW/m and the lowest occurred in June (0.94-0.96) ± (0.71-0.72) kW/m. In winter, the percentage probabilities of waves higher than 0.8 m were, at Al Saloum and Marsa Matruh, (76.56-80.33) ± (11.62-12.05); at Alexandria and Rashid, (73.67-74.79) ± (16.21-18.59); and at Damietta and Port Said, (66.28-68.69) ± (17.88-17.90). In spring, the probabilities were, at Al Saloum and Marsa Matruh, (48.17-50.92) ± (5.79-6.56); at Alexandria and Rashid, (39.38-43.59) ± (9.06-9.34); and at Damietta and Port Said, (31.59-33.61) ± (10.72-11.25). In summer, the probabilities were, at Al Saloum and Marsa Matruh, (57.70-66.67) ± (4.87-6.83); at Alexandria and Rashid, (59.96-65.13) ± (9.14-9.35); and at Damietta and Port Said, (46.38-49.28) ± (10.89-11.47). In autumn, the probabilities were, at Al Saloum and Marsa Matruh, (58.75-59.56) ± (2.55-5.84); at Alexandria and Rashid, (47.78-52.13) ± (3.11-7.08); and at Damietta and Port Said, (41.16-42.52) ± (7.52-8.34).
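The "wave power (wp) formula" referred to above is, for deep water, the standard energy flux P = ρg²H²T/(64π) per metre of wave crest. A sketch, in which the mean wave period stands in for the energy period (an approximation, since the abstract's exact formulation is not given):

```python
import math

def wave_power_kw_per_m(swh_m, period_s, rho=1025.0, g=9.81):
    """Deep-water wave energy flux P = rho*g^2*H^2*T / (64*pi), in kW/m.
    The mean wave period is used here as a proxy for the energy period."""
    return rho * g**2 * swh_m**2 * period_s / (64.0 * math.pi) / 1000.0

# hypothetical winter sea state off the Egyptian coast
p = wave_power_kw_per_m(2.0, 6.0)   # kW per metre of wave crest
```

Because the flux scales with the square of the significant wave height, the winter months dominate the monthly means reported above.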

Keywords: distribution of sea waves energy, Egyptian Mediterranean waters, waves characteristics, waves power

Procedia PDF Downloads 191
651 Unveiling Comorbidities in Irritable Bowel Syndrome: A UK BioBank Study utilizing Supervised Machine Learning

Authors: Uswah Ahmad Khan, Muhammad Moazam Fraz, Humayoon Shafique Satti, Qasim Aziz

Abstract:

Approximately 10-14% of the global population experiences a functional disorder known as irritable bowel syndrome (IBS). The disorder is defined by persistent abdominal pain and an irregular bowel pattern, and it significantly impairs work productivity and disrupts patients' daily lives and activities. Although IBS is widespread, its underlying pathophysiology is still incompletely understood. This study aims to help characterize the phenotype of IBS patients by differentiating the comorbidities found in IBS patients from those in non-IBS patients using machine learning algorithms. We extracted samples coding for IBS from the UK BioBank cohort and randomly selected patients without a code for IBS, for a total sample size of 18,000. We selected the comorbidity codes for these cases from the 2 years before and after their IBS diagnosis and compared them to the comorbidities in the non-IBS cohort. Machine learning models, including Decision Trees, Gradient Boosting, Support Vector Machine (SVM), AdaBoost, Logistic Regression, and XGBoost, were employed to assess their accuracy in predicting IBS. The most accurate model was then chosen to identify the features associated with IBS; in our case, we used XGBoost feature importance as the feature selection method. We applied the different models to the top 10% of features, which numbered 50. The Gradient Boosting, Logistic Regression, and XGBoost algorithms predicted IBS with optimal accuracies of 71.08%, 71.427%, and 71.53%, respectively. The comorbidities most closely associated with IBS included gut diseases (haemorrhoids, diverticular disease), atopic conditions (asthma), and psychiatric comorbidities (depressive episodes or disorder, anxiety).
This finding emphasizes the need for a comprehensive approach when evaluating the phenotype of IBS, suggesting the possibility of identifying new subsets of IBS rather than relying solely on the conventional classification based on stool type. Additionally, our study demonstrates the potential of machine learning algorithms in predicting the development of IBS based on comorbidities, which may enhance diagnosis and facilitate better management of modifiable risk factors for IBS. Further research is necessary to confirm our findings and establish cause and effect. Alternative feature selection methods and even larger and more diverse datasets may lead to more accurate classification models. Despite these limitations, our findings highlight the effectiveness of Logistic Regression and XGBoost in predicting IBS diagnosis.
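The pipeline described above, importance-based feature selection followed by a classifier, can be sketched on synthetic data. The cohort size, comorbidity prevalences, and effect sizes below are entirely hypothetical, and plain gradient-descent logistic regression stands in for the library implementations used in the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic cohort: 2000 patients x 20 binary comorbidity indicators;
# only the first three (e.g. haemorrhoids, asthma, anxiety) raise IBS odds
X = rng.random((2000, 20)) < 0.2
w_true = np.zeros(20)
w_true[:3] = 1.5
logits = X @ w_true - 1.0
y = rng.random(2000) < 1.0 / (1.0 + np.exp(-logits))

def fit_logreg(X, y, lr=0.5, steps=3000):
    """Plain logistic regression trained by full-batch gradient descent."""
    X = X.astype(float)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

w, b = fit_logreg(X, y)
top = np.argsort(-np.abs(w))[:3]   # comorbidities most predictive of IBS
```

On this toy cohort, the three informative comorbidities surface at the top of the weight ranking, mirroring how the study's importance ranking isolates the comorbidities most associated with IBS.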

Keywords: comorbidities, disease association, irritable bowel syndrome (IBS), predictive analytics

Procedia PDF Downloads 118
650 The Development of Noctiluca scintillans Algal Bloom in Coastal Waters of Muscat, Sultanate of Oman

Authors: Aysha Al Sha'aibi

Abstract:

Algal blooms of the dinoflagellate species Noctiluca scintillans have become frequent events in Omani waters. The current study aims to elucidate the abundance, size variation, and feeding behavior of this species during the winter bloom. An attempt was made to relate the observed biological parameters of the Noctiluca population to environmental factors. Field studies spanned the period from December 2014 to April 2015. Samples were collected from Bandar Rawdah (Muscat region) with Bongo nets, twice per week, from the surface and from the integrated upper mixed layer. The measured environmental variables were temperature, salinity, dissolved oxygen, chlorophyll a, turbidity, nitrite, phosphate, wind speed and rainfall. During the winter bloom (from December 2014 through February 2015), the abundance exhibited its highest concentration on 17 February (640.24×10⁶ cells L⁻¹ in oblique samples and 83.9×10³ cells L⁻¹ in surface samples), with a subsequent decline up to the end of April. The average number of food vacuoles inside Noctiluca cells was 1.5 per cell; the percentage of feeding Noctiluca in the entire population varied from 0.01% to 0.03%. Both the surface area of the Noctiluca symbionts (Pedinomonas noctilucae) and the cell diameter were maximal in December. In oblique samples, the highest average cell diameter and symbiont surface area were 751.7 µm and 179.2×10³ µm², respectively; in surface samples, they were 760 µm and 284.05×10³ µm², respectively. No significant correlations were detected between Noctiluca's biological parameters and environmental variables, except between cell diameter and chlorophyll a and between symbiont surface area and chlorophyll a. The strong correlation with chlorophyll a is attributed to the endosymbiotic alga Pedinomonas noctilucae, since green Noctiluca enhances chlorophyll levels during the bloom.
All correlations among the biological parameters were significant; these relationships are perhaps among the major factors mediating the high growth rates that generate millions of cells per liter in a short time. The results of this study provide useful background for a deeper understanding of the development of coastal Noctiluca scintillans algal blooms. Moreover, the results could be used in various applications related to the marine environment.
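The reported relationships, such as cell diameter versus chlorophyll a, are simple pairwise (Pearson) correlations. A sketch with hypothetical weekly means, not the study's measurements:

```python
import numpy as np

# hypothetical weekly means: Noctiluca cell diameter (um) and chlorophyll a (mg/m^3)
diameter = np.array([760.0, 745.0, 730.0, 700.0, 660.0, 640.0, 610.0])
chl_a    = np.array([5.2, 4.9, 4.6, 4.0, 3.1, 2.8, 2.2])

# Pearson correlation coefficient between the two series
r = np.corrcoef(diameter, chl_a)[0, 1]
```

A coefficient near 1, as these hypothetical values give, would correspond to the strong diameter-chlorophyll a association the study attributes to the endosymbiont.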

Keywords: abundance, feeding activities, Noctiluca scintillans, Oman

Procedia PDF Downloads 435
649 Consumer Reactions to Hospitality Social Robots Across Cultures

Authors: Lisa C. Wan

Abstract:

To address customers’ safety concerns, more and more hospitality companies are using service robots to provide contactless services. For many companies, the switch from human employees to service robots to lower the contagion risk during and after the pandemic may be permanent. The market size for hospitality service robots is estimated to reach US$3,083 million by 2030, registering a CAGR of 25.5% from 2021 to 2030. While service robots may effectively reduce interpersonal contact and health risk, they also eliminate the social interactions desired by customers. A recent survey revealed that more than 60% of Americans felt lonely during the pandemic. People who are traveling can also feel isolated when they are at a hotel far away from home. It is therefore important for hospitality companies to understand whether and how social robots can remedy the deprivation of social connection, not only during a pandemic but also on trips away from home in the post-pandemic future. This study complements the extant hospitality literature on service robots by examining how service robots can forge social connections with customers. The service robots we are concerned with are those that can interact and communicate with humans; we broadly refer to them as social robots. We define a social robot as one that is equipped with interaction capabilities – it can either be one that directly interacts with the consumer or one through which the consumer can interact with other humans. Drawing on theories of mind perception, we propose that service robots can foster social connectedness and increase the perceived social competence of the robot, but that these effects will vary across cultures. By applying theories of mind perception and cultural dimensions to the hospitality setting, this study shows that service robots equipped with a social connection function receive a more favorable evaluation from consumers and enhance their intention to visit a hotel.
The more favorable reaction to social robots is stronger for collectivists (i.e., Asians) than for individualists (i.e., Westerners). To our knowledge, this is among the first studies to investigate the impact of culture on consumer reactions to social robots in the hospitality and tourism context. Moreover, this research extends the literature by examining whether people imbue non-human entities (i.e., telepresence social robots) with social competence. Because social robots that foster social connection with humans are still rare in hospitality and tourism, this is an underexplored research area. Our study is the first to propose that, just as with human counterparts who possess relevant social skills, social robots’ interaction capabilities (e.g., telepresence robots) are used to infer social competence. Further studies will be conducted to examine consumer reactions to humanoid (vs. non-humanoid) robots in hospitality settings to generalize our research findings.

Keywords: service robots, COVID-19, social connection, cultures

Procedia PDF Downloads 103
648 A Model of the Universe without Expansion of Space

Authors: Jia-Chao Wang

Abstract:

A model of the universe without invoking space expansion is proposed to explain the observed redshift-distance relation and the cosmic microwave background radiation (CMB). The main hypothesized feature of the model is that photons traveling in space interact with the CMB photon gas. This interaction causes the photons to gradually lose energy through dissipation and, therefore, experience redshift. The interaction also causes some of the photons to be scattered off their track toward an observer and, therefore, results in beam intensity attenuation. As observed, the CMB exists everywhere in space and its photon density is relatively high (about 410 per cm³). The small average energy of the CMB photons (about 6.3×10⁻⁴ eV) can reduce the energies of traveling photons gradually and will not alter their momenta drastically as in, for example, Compton scattering, so it does not totally blur the images of distant objects. An object moving through a thermalized photon gas, such as the CMB, experiences a drag. The cause is that the object sees a blueshifted photon gas along the direction of motion and a redshifted one in the opposite direction. An example of this effect is the observed CMB dipole: the earth travels at about 368 km/s relative to the CMB (the Local Group at about 600 km/s). In the all-sky map from the COBE satellite, radiation in the Earth's direction of motion appears 3.35 mK hotter than the average temperature, 2.725 K, while radiation on the opposite side of the sky is 3.35 mK colder. The pressure of a thermalized photon gas is given by Pγ = Eγ/3 = αT⁴/3, where Eγ is the energy density of the photon gas and α is the radiation constant (α = 4σ/c, with σ the Stefan-Boltzmann constant). The observed CMB dipole, therefore, implies a pressure difference between the two sides of the earth and results in a CMB drag on the earth. By plugging in suitable estimates of the quantities involved, such as the cross section of the earth and the temperatures on the two sides, this drag can be estimated to be tiny.
But for a photon traveling at the speed of light, 300,000 km/s, the drag can be significant. In the present model, for the dissipation part, it is assumed that a photon traveling from a distant object toward an observer has an effective interaction cross section pushing against the pressure of the CMB photon gas. For the attenuation part, the coefficient of the typical attenuation equation is used as a parameter. The values of these two parameters are determined by fitting the 748 µ vs. z data points compiled from 643 supernova and 105 γ-ray burst observations with z values up to 8.1. The fit is as good as that obtained from the lambda cold dark matter (ΛCDM) model using online cosmological calculators and Planck 2015 results. The model can be used to interpret Hubble's constant, Olbers' paradox, the origin and blackbody nature of the CMB radiation, the broadening of supernova light curves, and the size of the observable universe.
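The blackbody figures quoted above (about 410 photons per cm³, a mean photon energy of about 6.3×10⁻⁴ eV, and the dipole pressure difference at T = 2.725 K) follow directly from textbook photon-gas formulas. A minimal sketch using only standard physical constants, independent of the proposed model's parameters:

```python
import math

# Physical constants (SI)
k_B = 1.380649e-23      # Boltzmann constant, J/K
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

# Radiation constant a = pi^2 k_B^4 / (15 hbar^3 c^3); energy density u = a T^4
a_rad = math.pi**2 * k_B**4 / (15 * hbar**3 * c**3)

T = 2.725                   # CMB temperature, K
u = a_rad * T**4            # energy density, J/m^3
P = u / 3                   # photon gas pressure, Pa

# Photon number density: n = (2 zeta(3) / pi^2) * (k_B T / (hbar c))^3
zeta3 = 1.2020569
n = (2 * zeta3 / math.pi**2) * (k_B * T / (hbar * c))**3

mean_E_eV = u / n / 1.602176634e-19   # average photon energy, eV

print(f"photon number density:      {n * 1e-6:.0f} per cm^3")   # ~410
print(f"mean photon energy:         {mean_E_eV:.2e} eV")        # ~6.3e-4
print(f"photon gas pressure:        {P:.2e} Pa")

# Pressure difference between a side 3.35 mK hotter and one 3.35 mK colder
dT = 3.35e-3
dP = (a_rad * (T + dT)**4 - a_rad * (T - dT)**4) / 3
print(f"dipole pressure difference: {dP:.2e} Pa")
```

The tiny dipole pressure difference (about 10⁻¹⁶ Pa) illustrates why the resulting drag on the earth is negligible.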

Keywords: CMB as the lowest energy state, model of the universe, origin of CMB in a static universe, photon-CMB photon gas interaction

Procedia PDF Downloads 133
647 Emotions Evoked by Robots - Comparison of Older Adults and Students

Authors: Stephanie Lehmann, Esther Ruf, Sabina Misoch

Abstract:

Background: Due to demographic change and a shortage of skilled nursing staff, assistive robots are being built to support older adults at home and nursing staff in care institutions. When assistive robots facilitate tasks that are usually performed by humans, user acceptance is essential. Even though they are an important aspect of acceptance, emotions towards different assistive robots and different situations of robot use have so far not been examined in detail. The appearance of assistive robots can trigger emotions that affect their acceptance. Acceptance of robots is assumed to be greater when they look more human-like; however, too much human similarity can be counterproductive. Regarding different groups, it is assumed that older adults have a more negative attitude towards robots than younger adults. Within the framework of a simulated robot study, the aim was to investigate the emotions of older adults compared to students towards robots of different appearances and in different situations, and thus contribute to a deeper view of the emotions influencing acceptance. Methods: In a questionnaire study, vignettes were used to assess emotions toward robots in different situations and of different appearance. The vignettes were composed of two situations (service and care) shown by video and four pictures of robots varying in human similarity (machine-like to android). The combinations of the vignettes were randomly distributed to the participants. One hundred forty-two older adults and 35 bachelor students of nursing participated. They filled out a questionnaire that surveyed 30 positive and 30 negative emotions. For each group, older adults and students, a sum score of “positive emotions” and a sum score of “negative emotions” was calculated. Depending on the scale level, mean and standard deviation, or n (sample size) and % (frequencies), were calculated.
For differences in the scores of positive and negative emotions between situations, t-tests were calculated. Results: Overall, older adults reported significantly more positive emotions towards robots in general than students, and students reported significantly more negative emotions than older adults. Regarding the two situations, the results were similar for the care situation, with older adults reporting more positive and fewer negative emotions than students. In the service situation, older adults reported significantly more positive emotions; negative emotions did not differ significantly from those of the students. Regarding the appearance of the robot, there were no significant differences in the emotions reported towards the machine-like, mechanical-human-like and human-like appearances. Regarding the android robot, students reported significantly more negative emotions than older adults. Conclusion: There were differences in the emotions reported by older adults compared to students. Older adults reported more positive emotions, and students reported more negative emotions, towards robots in different situations and with different appearances. It can be assumed that older adults have a different attitude towards the use of robots than younger people, especially young adults in the health sector. Therefore, the use of robots in the service or care sector should not be rejected rashly based on the attitudes of younger persons, without equally considering the attitudes of older adults.
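The group comparisons above rest on independent-samples t-tests over the emotion sum scores. As a minimal illustration with invented scores (not the study's data), Welch's t statistic, which does not assume equal group variances, can be computed as follows:

```python
import math

def welch_t(x, y):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)   # sample variances
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    se2 = vx / nx + vy / ny                          # squared standard error
    t = (mx - my) / math.sqrt(se2)
    # Welch-Satterthwaite degrees of freedom
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

# Hypothetical "positive emotions" sum scores: older adults vs. students
older = [22, 25, 19, 24, 21, 23, 26, 20]
students = [15, 17, 14, 18, 16, 13]
t, df = welch_t(older, students)
print(round(t, 2), round(df, 1))
```

A positive t here reflects the reported pattern of older adults scoring higher on positive emotions; in practice a statistics package would also supply the p-value.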

Keywords: emotions, robots, seniors, young adults

Procedia PDF Downloads 465
646 Experimental Investigation on the Effect of Prestress on the Dynamic Mechanical Properties of Conglomerate Based on 3D-SHPB System

Authors: Wei Jun, Liao Hualin, Wang Huajian, Chen Jingkai, Liang Hongjun, Liu Chuanfu

Abstract:

Kuqa Piedmont is rich in oil and gas resources and has great development potential in the Tarim Basin, China. However, a very thick gravel layer has developed there, with high gravel content, wide distribution and strong variation in gravel size, resulting in strong heterogeneity. Consequently, the drill string vibrates severely and the drill bit wears rapidly while drilling, which greatly reduces rock-breaking efficiency, and a complex load state of impact combined with three-dimensional in-situ stress acts on the rock at the bottom of the hole. The dynamic mechanical properties of conglomerate, the main component of the gravel layer, and their influencing factors are the basis of engineering design, efficient rock-breaking methods and theoretical research. Limited by previous experimental techniques, few works have been published on conglomerate, especially under dynamic load. On this basis, a 3D split Hopkinson pressure bar (SHPB) system, in which three-dimensional prestress can be applied to simulate in-situ stress conditions, is adopted for dynamic testing of the conglomerate. The results show that the dynamic strength is obviously higher than the static strength: with zero three-dimensional prestress and loading strain rates of 81.25~228.42 s⁻¹, the true triaxial equivalent strength is 167.17~199.87 MPa and the dynamic increase factor (ratio of dynamic to static strength) is 1.61~1.92. The higher the impact velocity, the greater the loading strain rate, the higher the dynamic strength and the greater the failure strain, all of which increase linearly. There is a critical prestress in the impact direction and in the direction perpendicular to it. In the impact direction, while the prestress is less than the critical value, the dynamic strength and the loading strain rate increase linearly; beyond it, the strength decreases slightly and the strain rate decreases rapidly.
In the direction perpendicular to the impact load, the strength increases and the strain rate decreases linearly before the critical prestress; beyond it, the trends reverse. The dynamic strength of the conglomerate can be properly reduced by reducing the amplitude of the impact load, so that the service life of rock-breaking tools can be prolonged while drilling in strata rich in gravel. The research provides an important reference for rate-of-penetration improvement and for theoretical research on drilling in gravel layers.
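The dynamic increase factor quoted above is simply the ratio of dynamic to static strength. A small sketch; note that the static strength value is back-calculated from the reported ranges for illustration, not stated in the abstract:

```python
# Static strength back-calculated from the reported ranges (an assumption
# for illustration; the abstract gives only the DIF of 1.61~1.92).
static_strength = 104.0  # MPa (assumed)

# Reported true triaxial equivalent dynamic strengths at 81.25~228.42 s^-1
dynamic_strengths = [167.17, 199.87]  # MPa

# Dynamic increase factor (DIF) = dynamic strength / static strength
difs = [s / static_strength for s in dynamic_strengths]
print([round(d, 2) for d in difs])
```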

Keywords: thick gravel layer, conglomerate, 3D SHPB, dynamic strength, deformation characteristics, prestress

Procedia PDF Downloads 209
645 A Comprehensive Study on Freshwater Aquatic Life Health Quality Assessment Using Physicochemical Parameters and Planktons as Bio Indicator in a Selected Region of Mahaweli River in Kandy District, Sri Lanka

Authors: S. M. D. Y. S. A. Wijayarathna, A. C. A. Jayasundera

Abstract:

Mahaweli River is the longest and largest river in Sri Lanka, and it is the major drinking water source for a large portion of the 2.5 million inhabitants of the Central Province. The aim of this study was to determine the water quality and aquatic life health quality in a selected region of the Mahaweli River. Six sampling locations (Site 1: 7° 16' 50" N, 80° 40' 00" E; Site 2: 7° 16' 34" N, 80° 40' 27" E; Site 3: 7° 16' 15" N, 80° 41' 28" E; Site 4: 7° 14' 06" N, 80° 44' 36" E; Site 5: 7° 14' 18" N, 80° 44' 39" E; Site 6: 7° 13' 32" N, 80° 46' 11" E) with various anthropogenic activities on the bank of the river were selected, from Tennekumbura Bridge to Victoria Reservoir, for a period of three months. Temperature, pH, Electrical Conductivity (EC), Total Dissolved Solids (TDS), Dissolved Oxygen (DO), 5-day Biological Oxygen Demand (BOD5), Total Suspended Solids (TSS), hardness, anion concentrations, and metal concentrations were measured as physicochemical parameters according to standard methods. Planktons were considered as biological parameters. Using a plankton net (20 µm mesh size), surface water samples were collected into acid-washed dried vials and stored in an ice box during transportation. The diversity and abundance of planktons were identified under the light microscope within 4 days of sample collection, using standard manuals of plankton identification. Almost all the measured physicochemical parameters were within the CEA standard limits for aquatic life, the Sri Lanka Standards (SLS), or the World Health Organization's guidelines for drinking water. The concentration of orthophosphate ranged between 0.232 and 0.708 mg L⁻¹ and exceeded the CEA standard limit for aquatic life (0.400 mg L⁻¹) at Site 1 and Site 2, where there is high disturbance from cultivation and nearby households.
According to the Pearson correlation (significant at p < 0.05), some physicochemical parameters (temperature, DO, TDS, TSS, phosphate, sulphate, chloride, fluoride, and sodium) were significantly correlated with the distribution of some plankton species, such as Aulocoseira, Navicula, Synedra, Pediastrum, Fragilaria, Selenastrum, Oscillatoria, Tribonema and Microcystis. Furthermore, species that appear in blooms (Aulocoseira), indicate organic pollutants (Navicula), or thrive in phosphate-rich eutrophic water (Microcystis) were found, indicating deteriorated water quality in the Mahaweli River due to agricultural activities, solid waste disposal, and the release of domestic effluents. Therefore, it is necessary to improve environmental monitoring and management to control further deterioration of the water quality of the river.
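The parameter-species associations above rest on Pearson product-moment correlations. A self-contained sketch with invented values (the phosphate range matches the abstract; the species counts are hypothetical):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical example: orthophosphate (mg/L, within the reported 0.232-0.708
# range) vs. invented relative Microcystis abundance at six sites
phosphate = [0.232, 0.310, 0.415, 0.520, 0.610, 0.708]
microcystis = [3, 5, 9, 12, 15, 19]
r = pearson_r(phosphate, microcystis)
print(round(r, 3))
```

In practice the significance of r would also be tested (e.g., against a t distribution with n − 2 degrees of freedom) before claiming a correlation at p < 0.05.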

Keywords: bio indicator, environmental variables, planktons, physicochemical parameters, water quality

Procedia PDF Downloads 106
644 Cognitive Control Moderates the Concurrent Effect of Autistic and Schizotypal Traits on Divergent Thinking

Authors: Julie Ramain, Christine Mohr, Ahmad Abu-Akel

Abstract:

Divergent thinking—a cognitive component of creativity—and particularly the ability to generate unique and novel ideas, has been linked to both autistic and schizotypal traits. However, to our knowledge, the concurrent effect of these trait dimensions on divergent thinking has not been investigated. Moreover, it has been suggested that creativity is associated with different types of attention and cognitive control, and consequently with how information is processed in a given context. Intriguingly, consistent with the diametric model, autistic and schizotypal traits have been associated with contrasting attentional and cognitive control styles. Positive schizotypal traits have been associated with reactive cognitive control and attentional flexibility, while autistic traits have been associated with proactive cognitive control and an increased focus of attention. The current study investigated the relationship between divergent thinking, autistic and schizotypal traits, and cognitive control in a non-clinical sample of 83 individuals (males = 42%; mean age = 22.37, SD = 2.93), sufficient to detect a medium effect size. Divergent thinking was evaluated in an adapted version of the Figural Torrance Test of Creative Thinking. Crucially, since we were interested in testing divergent thinking productivity across contexts, participants were asked to generate items from basic shapes in four different contexts. The variance of the proportion of unique to total responses across contexts represented a measure of context adaptability, with lower variance indicating increased context adaptability. Cognitive control was estimated with the Behavioral Proactive Index of the AX-CPT task, with higher scores representing the ability to actively maintain goal-relevant information in a sustained/anticipatory manner. Autistic and schizotypal traits were assessed with the Autism Quotient (AQ) and the Community Assessment of Psychic Experiences (CAPE-42).
Generalized linear models revealed a 3-way interaction of autistic traits, positive schizotypal traits, and proactive cognitive control, associated with increased context adaptability. Specifically, the concurrent effect of autistic and positive schizotypal traits on increased context adaptability was moderated by the level of proactive control and was only significant when proactive cognitive control was high. Our study reveals that autistic and positive schizotypal traits interactively facilitate the capacity to generate unique ideas across various contexts. However, this effect depends on cognitive control mechanisms indicative of the ability to proactively maintain attention when needed. The current results point to a unique profile of divergent thinkers who are able to tap systematic and flexible processing modes, respectively, within and across contexts. This is particularly intriguing, as such a combination of phenotypes has been proposed to explain the genius of Beethoven, Nash, and Newton.
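The context-adaptability measure described above can be sketched concretely: it is the variance, across the four contexts, of the proportion of unique to total responses, with lower variance indicating higher adaptability. The data below are invented for illustration:

```python
# Hypothetical sketch of the "context adaptability" measure: variance of the
# unique/total response proportion across the four contexts (invented data).
def context_adaptability(unique_counts, total_counts):
    props = [u / t for u, t in zip(unique_counts, total_counts)]
    mean = sum(props) / len(props)
    # population variance; lower value = higher context adaptability
    return sum((p - mean) ** 2 for p in props) / len(props)

# Participant A: consistently original across all four contexts -> low variance
a = context_adaptability([4, 5, 4, 5], [10, 10, 10, 10])
# Participant B: originality fluctuates strongly by context -> high variance
b = context_adaptability([1, 8, 2, 7], [10, 10, 10, 10])
print(a < b)  # True: A shows greater context adaptability
```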

Keywords: autism, schizotypy, creativity, cognitive control

Procedia PDF Downloads 137
643 Rapid Soil Classification Using Computer Vision with Electrical Resistivity and Soil Strength

Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, P. L. Goh, Grace H. B. Foo, M. L. Leong

Abstract:

This paper presents the evaluation of soil testing methods, such as the four-probe soil electrical resistivity method and the cone penetration test (CPT), that can complement a newly developed rapid soil classification scheme using computer vision, to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from the local construction industry are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, so that proper treatment and usage can be exercised. However, this process is time-consuming and labor-intensive; thus, a rapid classification method is needed at the SGs. Four-probe soil electrical resistivity and CPT were evaluated for their feasibility as additions to the computer vision system, to further develop this non-destructive and instantaneous classification method. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). A previous study found that the ANN model, coupled with the apparent electrical resistivity of soil (ρ), can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% on selected representative soil images. To further improve the technique, the following three measurements were targeted for addition to the computer vision scheme: ρ, measured using a set of four probes arranged in Wenner's array; the soil strength, measured using a modified mini cone penetrometer; and w, measured using a set of time-domain reflectometry (TDR) probes.
Laboratory proof-of-concept was conducted through a series of seven tests with three types of soils – “Good Earth”, “Soft Clay,” and a mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay” and are feasible as complementing methods to the computer vision system.
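The GLCM textural parameters at the core of the image-analysis step can be illustrated compactly. The sketch below builds a normalized co-occurrence matrix for a single horizontal pixel offset and computes the contrast statistic; the tiny 4-level "soil patch" is invented, and a production pipeline would typically use a library such as scikit-image instead:

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Normalized grey-level co-occurrence matrix for one pixel offset (dx, dy)."""
    m = [[0.0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    pairs = 0
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y][x]][image[y + dy][x + dx]] += 1
            pairs += 1
    return [[v / pairs for v in row] for row in m]

def contrast(m):
    """GLCM contrast: sum over i, j of (i - j)^2 * p(i, j)."""
    return sum((i - j) ** 2 * m[i][j]
               for i in range(len(m)) for j in range(len(m)))

# Tiny hypothetical 4-grey-level image patch
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
print(round(contrast(glcm(img)), 3))
```

Statistics such as contrast, energy, and homogeneity computed this way form the feature vector fed to the ANN classifier.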

Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification

Procedia PDF Downloads 239
642 Characterization of Dota-Girentuximab Conjugates for Radioimmunotherapy

Authors: Tais Basaco, Stefanie Pektor, Josue A. Moreno, Matthias Miederer, Andreas Türler

Abstract:

Radiopharmaceuticals based on monoclonal antibodies (mAbs) linked to radiometals via chemical linkers have become a potential tool in nuclear medicine because of their specificity and the large variability and availability of therapeutic radiometals. It is important to identify the conjugation sites and the number of chelators attached per mAb in order to obtain radioimmunoconjugates with the required immunoreactivity and radiostability. The girentuximab antibody (G250) is a potential candidate for radioimmunotherapy of clear cell renal cell carcinomas (RCCs) because it is reactive with the CAIX antigen, a transmembrane glycoprotein overexpressed on the cell surface of most (>90%) RCCs. G250 was conjugated with the bifunctional chelating agent DOTA (1,4,7,10-tetraazacyclododecane-N,N’,N’’,N’’’-tetraacetic acid) via a benzyl-thiocyano group as a linker (p-SCN-Bn-DOTA). DOTA-G250 conjugates were analyzed by size exclusion chromatography (SE-HPLC) and by electrophoresis (SDS-PAGE). The potential site-specific conjugation was identified by liquid chromatography–mass spectrometry (LC/MS-MS), and the number of linkers per molecule of mAb was calculated from the molecular weight (MW) measured by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). The average number obtained for the conjugates under non-reducing conditions was 8-10 molecules of DOTA per molecule of mAb; under reducing conditions, it was 1-2 and 3-4 molecules of DOTA per light chain (LC) and heavy chain (HC), respectively. Potential DOTA modification sites were identified at lysine residues. The biological activity of the conjugates was evaluated by flow cytometry (FACS) using CAIX-negative (SKRC-18) and CAIX-positive (SKRC-52) cells. The DOTA-G250 conjugates were labelled with 177Lu with a radiochemical yield > 95%, reaching specific activities of 12 MBq/µg.
The in vitro stability of different types of radioconstructs was analyzed in human serum albumin (HSA). The radiostability of 177Lu-DOTA-G250 at high specific activity was increased by the addition of sodium ascorbate after labelling. The immunoreactivity was evaluated in vitro and in vivo. Binding to CAIX-positive cells (SK-RC-52) at different specific activities was higher for conjugates with lower DOTA content. The protein dose was optimized in mice with subcutaneously growing SK-RC-52 tumors using different amounts of 177Lu-DOTA-G250.
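The chelator-to-antibody ratio derived from MALDI-TOF data is essentially a mass-difference calculation: the mass gained by the conjugate, divided by the mass added per chelator. A minimal sketch; the masses below are illustrative assumptions, not measured values from the study:

```python
# Assumed masses (Da) for illustration only -- not values from the study.
MW_MAB = 148_000.0       # typical intact IgG mass
MW_DOTA_LINKER = 551.6   # approx. added mass per p-SCN-Bn-DOTA conjugation

def chelators_per_mab(mw_conjugate, mw_mab=MW_MAB, mw_added=MW_DOTA_LINKER):
    """Average number of chelators per antibody from the conjugate's measured MW."""
    return (mw_conjugate - mw_mab) / mw_added

# A hypothetical conjugate MW yielding a ratio inside the reported 8-10 range
print(round(chelators_per_mab(152_965), 1))
```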

Keywords: mass spectrometry, monoclonal antibody, radiopharmaceuticals, radioimmunotherapy, renal cancer

Procedia PDF Downloads 307
641 Phytoremediation of Hydrocarbon-Polluted Soils: Assess the Potentialities of Six Tropical Plant Species

Authors: Pulcherie Matsodoum Nguemte, Adrien Wanko Ngnien, Guy Valerie Djumyom Wafo, Ives Magloire Kengne Noumsi, Pierre Francois Djocgoue

Abstract:

The identification of plant species with the capacity to grow on hydrocarbon-polluted soils is an essential step for phytoremediation. In view of developing phytoremediation in Cameroon, floristic surveys were conducted in 4 cities (Douala, Yaounde, Limbe, and Kribi). In each city, 13 hydrocarbon-polluted sites, as well as unpolluted control sites, were investigated using the quadrat method. 106 species belonging to 76 genera and 30 families were identified on hydrocarbon-polluted sites, unlike the control sites, where floristic diversity was much higher (166 species in 125 genera and 50 families). Poaceae, Cyperaceae, Asteraceae and Amaranthaceae had the highest taxonomic richness on polluted sites (16, 15, 10 and 8 taxa, respectively). Shannon diversity indices of the hydrocarbon-polluted sites (1.6 to 2.7 bits/ind.) were significantly lower than those of the control sites (2.7 to 3.2 bits/ind.). Based on a relative frequency > 10% and abundance > 7%, this study highlights more than ten plants predisposed to be effective in the clean-up of soils contaminated by hydrocarbons. Based on the floristic indicators, 6 species (Eleusine indica (L.) Gaertn., Cynodon dactylon (L.) Pers., Alternanthera sessilis (L.) R. Br. ex DC., Commelina benghalensis L., Cleome ciliata Schum. & Thonn. and Asystasia gangetica (L.) T. Anderson) were selected for a study to determine their capacity to remediate a soil contaminated with fuel oil (82.5 ml/kg of soil). The 150-day experiment comprised three randomly arranged modalities: Tn, uncontaminated planted soils (6); To, contaminated unplanted soils (3); and Tp, contaminated planted soils (18). Three of the six species (Eleusine indica, Cynodon dactylon, and Alternanthera sessilis) survived the climatic and soil conditions. E. indica presented a significantly higher growth rate for density and leaf area, while C. dactylon had a significantly higher growth rate for stem size and leaf number. A. sessilis showed stunted growth and development throughout the experimental period. The species Eleusine indica (L.) Gaertn. and Cynodon dactylon (L.) Pers. can be qualified as polluo-tolerant plant species, polluo-tolerance being the ability of a species to survive and develop in environments subject to extreme physical and chemical disturbance.
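The Shannon diversity index reported above in bits per individual uses a base-2 logarithm over species proportions. A short sketch with invented species counts chosen so that the polluted community falls in the reported 1.6-2.7 bits/ind. range:

```python
import math

def shannon_index_bits(counts):
    """Shannon diversity index H' in bits per individual (log base 2)."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

# Hypothetical counts: a polluted site dominated by a few tolerant species...
polluted = [40, 25, 10, 5, 3, 2]
# ...versus a richer, more even control community
control = [15, 14, 12, 11, 10, 9, 8, 8, 7, 6]

print(round(shannon_index_bits(polluted), 2))
print(round(shannon_index_bits(control), 2))
```

Dominance by a few species lowers H', which is why the polluted sites score significantly below the controls.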

Keywords: Cameroon, cleaning-up, floristic surveys, phytoremediation

Procedia PDF Downloads 243
640 Periurban Landscape as an Opportunity Field to Solve Ecological Urban Conflicts

Authors: Cristina Galiana Carballo, Ibon Doval Martínez

Abstract:

Urban boundaries often result in a controversial limit between countryside and city in Europe. This territory is normally defined by very limited land uses and an abundance of open space. The dimension and dynamics of peri-urbanization in recent decades have increased this land stock, which has had an impact on several factors in terms of economic costs (maintenance, transport), ecological disturbances of the territory and changes in inhabitants' behaviour. In an increasingly urbanised world with a growing urban population, cities also face challenges such as climate change. In this context, new near-future corrective trends, including circular economies for local food supply or decentralised waste management, became key strategies towards more sustainable urban models. Those new solutions need to be planned and implemented considering the potential conflict with current land uses. The city of Vitoria-Gasteiz (Basque Country, Spain) has tripled land consumption per inhabitant in 10 years, resulting in a vast extension of low-density urban fabric confronting rural land and threatening agricultural uses, landscape and urban sustainability. Urban planning allows managing and allocating optimum uses based on soil vocation and socio-ecosystem needs, while peri-urban space arises as an opportunity for developing uses which fit neither within the compact city nor in open agricultural land, such as medium-size agrocomposting systems or biomass plants. Therefore, a qualitative multi-criteria methodology has been developed for Vitoria-Gasteiz to assess the spatial definition of peri-urban land.
Climate change and the circular economy were identified as frameworks within which to determine future land use, soil vocation and urban planning requirements, which were eventually translated into estimates of the required local food and renewable energy supply, along with the implementation of alternative waste management systems. On this basis, an urban planning proposal was developed that overcomes the urban/non-urban dichotomy in Vitoria-Gasteiz. The proposal aims to enhance the rural system and improve urban sustainability performance through the normative recognition of an agricultural peri-urban belt.

Keywords: landscape ecology, land-use management, periurban, urban planning

Procedia PDF Downloads 163
639 A Bottleneck-Aware Power Management Scheme in Heterogeneous Processors for Web Apps

Authors: Inyoung Park, Youngjoo Woo, Euiseong Seo

Abstract:

With the advent of WebGL, Web apps are now able to provide high-quality graphics by utilizing the underlying graphics processing units (GPUs). Although Web apps are becoming common and popular, the current power management schemes, which were devised for conventional native applications, are suboptimal for Web apps because of the additional layer, the Web browser, between the OS and the application. The Web browser, running on a CPU, issues GL commands, which render the images to be displayed by the currently running Web app, to the GPU, and the GPU processes them. The size and number of issued GL commands determine the processing load of the GPU. While the GPU is processing the GL commands, the CPU simultaneously executes the other compute-intensive threads. The actual user experience is determined by either CPU processing or GPU processing, depending on which of the two is the more demanded resource. For example, when the GPU work queue is saturated by outstanding commands, lowering the performance level of the CPU does not affect the user experience, because it is already degraded by the delayed execution of GPU commands. Consequently, it is desirable to lower the CPU or GPU performance level to save energy when the other resource is saturated and becomes a bottleneck in the execution flow. Based on this observation, we propose a power management scheme that is specialized for the Web app runtime environment. This approach incurs two technical challenges: identification of the bottleneck resource and determination of the appropriate performance level for the unsaturated resource. The proposed power management scheme uses the CPU utilization level of the Window Manager to tell which one, if any, is the bottleneck. The Window Manager draws the final screen using the processed results delivered from the GPU. Thus, the Window Manager is on the critical path that determines the quality of user experience and is executed purely by the CPU.
The proposed scheme uses a weighted average of the Window Manager utilization to prevent excessive sensitivity and fluctuation. We classified Web apps into three categories based on analysis results measuring frames-per-second (FPS) changes under diverse CPU/GPU clock combinations. The results showed that CPU capability decides the user experience when the Window Manager utilization is above 90%; consequently, the proposed scheme decreases the performance level of the CPU by one step. On the contrary, when its utilization is less than 60%, the bottleneck usually lies in the GPU, and it is desirable to decrease GPU performance. Even for the processing unit that is not on the critical path, an excessive performance drop can occur and may adversely affect the user experience. Therefore, our scheme lowers the frequency gradually until it finds an appropriate level by periodically checking the CPU utilization. The proposed scheme reduced energy consumption by 10.34% on average in comparison to the conventional Linux kernel, while worsening FPS by only 1.07% on average.
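The bottleneck test described above can be sketched as a simple per-period control step. The 90% and 60% thresholds come from the abstract; the smoothing weight `alpha` and the action labels are illustrative assumptions, not part of the published scheme:

```python
def power_step(wm_util, ema_prev, alpha=0.3):
    """One control period: smooth the Window Manager CPU utilization with a
    weighted (exponential) moving average, then choose a frequency action.
    Thresholds follow the abstract; alpha is an assumed smoothing weight."""
    ema = alpha * wm_util + (1.0 - alpha) * ema_prev
    if ema > 0.90:        # CPU is the bottleneck: drop CPU frequency one step
        action = "cpu_down"
    elif ema < 0.60:      # GPU is likely the bottleneck: lower GPU gradually
        action = "gpu_down"
    else:                 # neither resource clearly saturated: hold levels
        action = "hold"
    return ema, action

print(power_step(0.95, 0.95))
print(power_step(0.40, 0.40))
```

The gradual GPU frequency reduction the authors describe would then be realized by repeating this step each period until the utilization re-enters the middle band.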

Keywords: interactive applications, power management, QoS, Web apps, WebGL

Procedia PDF Downloads 192
638 Prospects for the Development of e-Commerce in Georgia

Authors: Nino Damenia

Abstract:

E-commerce opens a new horizon for business development, which is why the presence of e-commerce is a necessary condition for the formation, growth, and development of a country's economy. Worldwide, e-commerce turnover grows at a high rate every year, as the electronic environment provides great opportunities for product promotion. E-commerce in Georgia is developing at a fast pace, but it is still a relatively young direction in the country's economy. Movement restrictions and other public health measures caused by the COVID-19 pandemic reduced economic activity in most economic sectors and countries, significantly affecting production, distribution, and consumption. The pandemic accelerated digital transformation: digital solutions enable people and businesses to continue part of their economic and social activities remotely, and this has also driven the growth of e-commerce. According to the National Statistics Service of Georgia, the share of online trade is higher in cities (27.4%) than in rural areas (9.1%). The COVID-19 pandemic forced local businesses to expand their digital offerings: the size of the local market increased 3.2 times in 2020, to 138 million GEL, and in 2018-2020 the share of local e-commerce increased from 11% to 23%. In Georgia, the state is actively engaged in promoting activities based on information technologies. Many measures have been taken for this purpose, but compared to other countries, the process is slow. The purpose of the study is to determine development prospects for the economy of Georgia based on an analysis of electronic commerce. The research draws on articles and works by Georgian and foreign scholars, reports of international organizations, proceedings of scientific conferences, and scientific electronic databases. 
The empirical base of the research comprises the data and annual reports of the National Statistical Service of Georgia, internet resources of world statistical materials, and others. While working on the article, a questionnaire was developed, on the basis of which an electronic survey of selected respondents was conducted. The survey examined how intensively Georgian citizens use online shopping, including which age categories use electronic commerce, for what purposes, and how satisfied they are. Various theoretical and methodological research tools, including analysis, synthesis, comparison, and other methods, are used to achieve the stated goal. The research results and recommendations will contribute to the development of e-commerce in Georgia and to economic growth based on it.

Keywords: e-commerce, information technology, pandemic, digital transformation

Procedia PDF Downloads 75
637 Building Information Modelling: A Solution to the Limitations of Prefabricated Construction

Authors: Lucas Peries, Rolla Monib

Abstract:

The construction industry plays a vital role in the global economy, contributing billions of dollars annually. However, the industry has been struggling with persistently low productivity levels for years, unlike other sectors that have shown significant improvements. Modular and prefabricated construction methods have been identified as potential solutions to boost productivity in the construction industry. These methods offer time advantages over traditional construction methods. Despite their potential benefits, modular and prefabricated construction face hindrances and limitations that are not present in traditional building systems. Building information modelling (BIM) has the potential to address some of these hindrances, but barriers are preventing its widespread adoption in the construction industry. This research aims to enhance understanding of the shortcomings of modular and prefabricated building systems and develop BIM-based solutions to alleviate or eliminate these hindrances. The research objectives include identifying and analysing key issues hindering the use of modular and prefabricated building systems, investigating the current state of BIM adoption in the construction industry and factors affecting its successful implementation, proposing BIM-based solutions to address the issues associated with modular and prefabricated building systems, and assessing the effectiveness of the developed solutions in removing barriers to their use. The research methodology involves conducting a critical literature review to identify the key issues and challenges in modular and prefabricated construction and BIM adoption. Additionally, an online questionnaire will be used to collect primary data from construction industry professionals, allowing for feedback and evaluation of the proposed BIM-based solutions. 
The data collected will be analysed to evaluate the effectiveness of the solutions and their potential impact on the adoption of modular and prefabricated building systems. The main findings of the research indicate that the identified issues from the literature review align with the opinions of industry professionals, and the proposed BIM-based solutions are considered effective in addressing the challenges associated with modular and prefabricated construction. However, the research has limitations, such as a small sample size and the need to assess the feasibility of implementing the proposed solutions. In conclusion, this research contributes to enhancing the understanding of modular and prefabricated building systems' limitations and proposes BIM-based solutions to overcome these limitations. The findings are valuable to construction industry professionals and BIM software developers, providing insights into the challenges and potential solutions for implementing modular and prefabricated construction systems in future projects. Further research should focus on addressing the limitations and assessing the feasibility of implementing the proposed solutions from technical and legal perspectives.

Keywords: building information modelling, modularisation, prefabrication, technology

Procedia PDF Downloads 98
636 Evaluation of an Integrated Supersonic System for Inertial Extraction of CO₂ in Post-Combustion Streams of Fossil Fuel Operating Power Plants

Authors: Zarina Chokparova, Ighor Uzhinsky

Abstract:

Carbon dioxide emissions resulting from burning fossil fuels at large scales, such as in the oil industry or power plants, lead to severe consequences, including global temperature rise, air pollution, and other adverse impacts on the environment. Besides some precarious and costly approaches to mitigating the harm of CO₂ emissions at industrial scales (such as liquefaction of CO₂ and its deep-water treatment, or the application of adsorbents and membranes, which require careful consideration of their drawbacks and mitigation), one physically and commercially viable technology for its capture and disposal is a supersonic system for inertial extraction of CO₂ from post-combustion streams. The flue gas emitted from the combustion system has a carbon dioxide concentration of 10-15 volume percent, so the waste stream is rather dilute and at low pressure. The supersonic system expands the flue gas mixture through a converging-diverging nozzle; the flow velocity increases to the supersonic range, resulting in a rapid drop of temperature and pressure. This conversion of potential energy into kinetic energy causes desublimation of CO₂. The solidified carbon dioxide can be sent to a separate vessel for disposal. The major advantages of the current solution are its economic efficiency, physical stability, and the compactness of the system, as well as the fact that no chemical media need to be added. However, several challenges remain in optimizing the system: increasing the size of the separated CO₂ particles (whose effective diameters are on the micrometer scale), reducing the concomitant gas separated together with the carbon dioxide, and ensuring the purity of the CO₂ downstream flow. Moreover, determining the thermodynamic conditions of the vapor-solid mixture, including the specification of a valid and accurate equation of state, remains an essential goal. 
Due to the high speeds and temperature changes reached during the process, the influence of the emitted heat should be considered, and an applicable solution model for the compressible flow needs to be determined. In this report, a brief overview of the current technology status is presented, and a program for further evaluation of this approach is proposed.
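As a rough illustration of how the nozzle expansion drives the temperature down, one can apply the ideal-gas isentropic flow relations. The specific heat ratio γ ≈ 1.3 and the 320 K stagnation temperature below are assumed placeholder values for a cooled flue stream, not figures from the study:

```python
def isentropic_ratios(mach, gamma=1.3):
    """Static-to-stagnation temperature and pressure ratios for an ideal gas
    expanded isentropically to a given Mach number."""
    t_ratio = 1.0 / (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)
    p_ratio = t_ratio ** (gamma / (gamma - 1.0))
    return t_ratio, p_ratio

T0 = 320.0  # assumed stagnation temperature of the flue gas, K
for mach in (1.0, 2.0, 3.0):
    t, p = isentropic_ratios(mach)
    print(f"M={mach}: T = {T0 * t:.0f} K, p/p0 = {p:.3f}")
```

Under these assumptions, the static temperature at Mach 3 falls well below the desublimation range of CO₂ at flue-gas partial pressures, which is the effect the system exploits.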

Keywords: CO₂ sequestration, converging diverging nozzle, fossil fuel power plant emissions, inertial CO₂ extraction, supersonic post-combustion carbon dioxide capture

Procedia PDF Downloads 141
635 Green Synthesis of Nanosilver-Loaded Hydrogel Nanocomposites for Antibacterial Application

Authors: D. Berdous, H. Ferfera-Harrar

Abstract:

Superabsorbent polymers (SAPs), or hydrogels with a three-dimensional hydrophilic network structure, are high-performance water absorption and retention materials. The in situ synthesis of metal nanoparticles within a polymeric network as antibacterial agents for bio-applications is an approach that takes advantage of the free space existing within the network, which not only acts as a template for the nucleation of nanoparticles but also provides long-term stability and reduces their toxicity by delaying their oxidation and release. In this work, SAP/nanosilver nanocomposites were successfully developed by a unique green process at room temperature, which involves the in situ formation of silver nanoparticles (AgNPs) within hydrogels serving as a template. The aim of this study is to investigate whether these AgNPs-loaded hydrogels are potential candidates for antimicrobial applications. Firstly, the superabsorbents were prepared through radical copolymerization via grafting and crosslinking of acrylamide (AAm) onto a chitosan backbone (Cs) using potassium persulfate as initiator and N,N'-methylenebisacrylamide as the crosslinker. Then, they were hydrolyzed to achieve superabsorbents with ampholytic properties and the highest swelling capacity. Lastly, the AgNPs were biosynthesized and entrapped into the hydrogels through a simple, eco-friendly, and cost-effective method using aqueous silver nitrate as the silver precursor and Curcuma longa tuber-powder extract as both reducing and stabilizing agent. The formed superabsorbent nanocomposites (Cs-g-PAAm)/AgNPs were characterized by X-ray Diffraction (XRD), UV-visible Spectroscopy, Attenuated Total Reflectance Fourier Transform Infrared Spectroscopy (ATR-FTIR), Inductively Coupled Plasma (ICP), and Thermogravimetric Analysis (TGA). Microscopic surface structure analysis by Transmission Electron Microscopy (TEM) showed spherical AgNPs with sizes in the range of 3-15 nm. 
The extent of nanosilver loading decreased with increasing Cs content in the network. The silver-loaded hydrogel was thermally more stable than its unloaded dry counterpart. The swelling equilibrium degree (Q) and centrifuge retention capacity (CRC) in deionized water were affected by both the Cs content and the entrapped AgNPs. The nanosilver-embedded hydrogels exhibited antibacterial activity against Escherichia coli and Staphylococcus aureus. These comprehensive results suggest that the elaborated AgNPs-loaded nanomaterials could be used to produce valuable wound dressings.

Keywords: antibacterial activity, nanocomposites, silver nanoparticles, superabsorbent hydrogel

Procedia PDF Downloads 246
634 Development of a Bioprocess Technology for the Production of Vibrio midae, a Probiotic for Use in Abalone Aquaculture

Authors: Ghaneshree Moonsamy, Nodumo N. Zulu, Rajesh Lalloo, Suren Singh, Santosh O. Ramchuran

Abstract:

The abalone industry of South Africa is under severe pressure due to the illegal harvesting and poaching of this seafood delicacy. Abalones are harvested excessively; as a result, the animals do not have a chance to replace themselves in their habitats, resulting in a drastic decrease in natural stocks. Abalone has an extremely slow growth rate and takes approximately four years to reach a market-acceptable size; therefore, it was imperative to investigate methods to boost the overall growth rate and immunity of the animal. Research at the University of Cape Town (UCT) resulted in the isolation of two microorganisms from the gut of the abalone, a yeast isolate Debaryomyces hansenii and a bacterial isolate Vibrio midae, and their characterisation for probiotic abilities. This work resulted in an internationally competitive concept technology that was patented. The next stage of research was to develop a suitable bioprocess to enable commercial production. Numerous steps were taken to develop an efficient production process for V. midae, one of the isolates found by UCT. The initial stages of research involved the development of a stable and robust inoculum and the optimization of physiological growth parameters such as temperature and pH. A range of temperature and pH conditions was evaluated, and the data obtained revealed an optimum growth temperature of 30°C and a pH of 6.5. Once these critical growth parameters were established, further media optimization studies were performed. Corn steep liquor (CSL) and high-test molasses (HTM) were selected as suitable alternatives to more expensive, conventionally used growth medium additives. 
The optimization of CSL (6.4 g.l⁻¹) and HTM (24 g.l⁻¹) concentrations in the growth medium resulted in a 180% increase in cell concentration, a 5716-fold increase in cell productivity and a 97.2% decrease in the material cost of production in comparison to conventional growth conditions and parameters used at the onset of the study. In addition, a stable market-ready liquid probiotic product, encompassing the viable but not culturable (VBNC) state of Vibrio midae cells, was developed during the downstream processing aspect of the study. The demonstration of this technology at a full manufacturing scale has further enhanced the attractiveness and commercial feasibility of this production process.

Keywords: probiotics, abalone aquaculture, bioprocess technology, manufacturing scale technology development

Procedia PDF Downloads 152
633 Characterization of Aerosol Particles in Ilorin, Nigeria: Ground-Based Measurement Approach

Authors: Razaq A. Olaitan, Ayansina Ayanlade

Abstract:

Understanding aerosol properties is a central goal of global research aimed at reducing the uncertainty that aerosol particles contribute to the trends and magnitude of climate change. To identify aerosol particle types, optical properties, and the relationship between aerosol properties and particle concentration between 2019 and 2021, this study examined data from the Aerosol Robotic Network (AERONET) ground-based sun/sky scanning radiometer in Ilorin, Nigeria. The AERONET version 2 algorithm was utilized to retrieve monthly data on aerosol optical depth and Angstrom exponent. The version 3 algorithm, an almucantar level 2 inversion, was employed to retrieve daily data on single scattering albedo and aerosol size distribution. Excel 2016 was used to compute monthly, seasonal, and annual mean averages. The distribution of different aerosol types was analyzed using scatterplots, and the optical properties of the aerosols were investigated using pertinent mathematical relations. Correlation statistics were employed to examine the relationships between particle concentration and particle properties. On the premise that aerosol characteristics should remain consistent in magnitude and trend across time and space, the study's findings indicate that the aerosol types identified between 2019 and 2021 are as follows: 29.22% urban industrial (UI), 37.08% desert (D), 10.67% biomass burning (BB), and 23.03% urban mix (Um). The peak columnar aerosol loadings, observed in August of the study period, are attributed to convective wind systems, which frequently carry particles over long distances in the atmosphere. The study showed that while coarse-mode particles dominate, fine particles are increasing in both seasonal and annual trends; these trends are linked to biomass burning and human activities in the city. 
The study found that the majority of particles are highly absorbing black carbon, with the fine mode having a volume median radius of 0.08 to 0.12 μm. The investigation also revealed a positive correlation (r = 0.57) between changes in aerosol particle concentration and changes in aerosol properties. Human activity is increasing rapidly in Ilorin and is changing aerosol properties, indicating potential health risks from climate change and from human influence on geological and environmental systems.
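For reference, the Angstrom exponent retrieved by AERONET relates aerosol optical depth (AOD) measured at two wavelengths. A minimal sketch of the standard formula follows; the wavelengths and AOD values are illustrative, not data from the study:

```python
import math

def angstrom_exponent(aod1, aod2, wl1_nm, wl2_nm):
    """Angstrom exponent from AODs at two wavelengths:
    alpha = -ln(aod1 / aod2) / ln(wl1 / wl2).
    Larger alpha indicates finer particles; values near zero indicate
    coarse-mode dominance (e.g. desert dust)."""
    return -math.log(aod1 / aod2) / math.log(wl1_nm / wl2_nm)

# Illustrative values: AOD 0.5 at 440 nm and 0.2 at 870 nm
alpha = angstrom_exponent(0.5, 0.2, 440.0, 870.0)
print(f"alpha = {alpha:.2f}")
```

Scatterplots of this exponent against AOD are the usual basis for the aerosol-type classification (urban industrial, desert, biomass burning, urban mix) used in the study.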

Keywords: aerosol loading, aerosol types, health risks, optical properties

Procedia PDF Downloads 62
632 Retrieving Iconometric Proportions of South Indian Sculptures Based on Statistical Analysis

Authors: M. Bagavandas

Abstract:

Introduction: South Indian stone sculptures are known for their elegance and history. They are available in large numbers in monuments situated in different parts of South India. These art pieces have been studied through iconographic details, but this pioneering study introduces a novel method known as iconometry, a quantitative approach that deals with measurements of different parts of icons to answer important open questions. The main aim of this paper is to compare iconometric measurements of the sculptures with canonical proportions to determine whether the sculptors of the past followed any of the proportions prescribed in the ancient texts. If not, this study recovers the proportions used for carving the sculptures, which are not otherwise available to us now. It will also be interesting to see how the sculptural proportions of different monuments belonging to different dynasties differ from one another. Methods and Materials: As Indian sculptures are depicted in different postures, one way of making measurements independent of size is to decide on a suitable reference measurement and convert the other measurements into proportions with respect to it. Since in all canonical texts of Indian art the measurements are given in terms of face length, it was chosen as the reference measurement. In order to compare these facial measurements with those prescribed in the Indian canons of iconography, the ten facial measurements (face length, morphological face length, nose length, nose-to-chin length, eye length, lip length, face breadth, nose breadth, eye breadth, and lip breadth) were standardized using the face length, reducing the number of measurements to nine. Each measurement was divided by the corresponding face length and multiplied by twelve, expressing it in the angula unit used in the canonical texts. 
The reason for multiplying by twelve is that the face length is given as twelve angulas in the canonical texts for all figures. Clustering techniques were used to determine whether the sculptors of the past followed any of the proportions prescribed in the canonical texts and to compare the proportions of sculptures from different monuments. About one hundred twenty-seven stone sculptures from four monuments belonging to the Pallava, Chola, Pandya, and Vijayanagar dynasties were taken up for this study. These art pieces belong to a period ranging from the eighth to the sixteenth century A.D., and all of them adorn monuments situated in different parts of Tamil Nadu State, South India. Anthropometric instruments were used for taking the measurements, and the author himself measured all the sample pieces of this study. Result: Statistical analysis of sculptures from different centres of art and different dynasties shows considerable differences in facial proportions, and many of these proportions differ widely from the canonical ones. The retrieved facial proportions indicate that the definition of beauty has changed from period to period and region to region.
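The standardization step described above can be sketched as follows; the measurement names and centimetre values are hypothetical examples, not data from the study:

```python
def to_angula(measurements_cm, face_length_cm):
    """Convert facial measurements to angula units: the canonical face length
    is twelve angulas, so each measurement m becomes 12 * m / face_length."""
    return {name: 12.0 * m / face_length_cm
            for name, m in measurements_cm.items()}

# Hypothetical measurements for one sculpture (cm)
sample = {"nose_length": 4.2, "eye_length": 3.1, "lip_breadth": 4.8}
print(to_angula(sample, face_length_cm=12.6))
```

Because every sculpture's values are rescaled so that its own face length equals twelve angulas, figures of different sizes and postures become directly comparable to one another and to the canonical prescriptions.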

Keywords: iconometry, proportions, sculptures, statistics

Procedia PDF Downloads 154
631 Cardiothoracic Ratio in Postmortem Computed Tomography: A Tool for the Diagnosis of Cardiomegaly

Authors: Alex Eldo Simon, Abhishek Yadav

Abstract:

This study aimed to evaluate the utility of postmortem computed tomography (PMCT) and heart weight measurements in the assessment of cardiomegaly in cases of sudden death of cardiac origin by comparing the results of these two diagnostic methods. The study retrospectively analyzed PMCT data from 54 cases of sudden natural death and compared the findings with those of the autopsy. The cardiothoracic ratio (CTR) was measured from coronal CT images, and the actual cardiac weight was determined by weighing the heart during the autopsy. The inclusion criteria were cases of sudden death suspected to be caused by cardiac pathology; the exclusion criteria included death due to unnatural causes such as trauma or poisoning, diagnosed natural causes of death related to organs other than the heart, and cases of decomposition. Sensitivity, specificity, and diagnostic accuracy were calculated, and receiver operating characteristic (ROC) curves were generated to evaluate the accuracy of using the CTR to detect an enlarged heart. The CTR is a radiological tool used to assess cardiomegaly by measuring the maximum cardiac diameter relative to the maximum transverse diameter of the chest wall. The clinically used CTR threshold has been modified from 0.50 to 0.57 for the postmortem setting, where abnormalities can be detected by comparing CTR values to this threshold. A CTR value of 0.57 or higher is suggestive of hypertrophy but not conclusive. Similarly, heart weight is measured during the traditional autopsy, and a cardiac weight greater than 450 grams is defined as hypertrophy. Of the 54 cases evaluated, 22 (40.7%) had a CTR above 0.50 and up to 0.57, and 12 cases (22.2%) had a CTR greater than 0.57, which was defined as hypertrophy. The mean CTR was 0.52 ± 0.06. 
Among the 54 cases evaluated, the mean heart weight was 369.4 ± 99.9 grams. Twelve cases were found to have hypertrophy as defined by PMCT, while only 9 cases were identified with hypertrophy at traditional autopsy. The sensitivity of the hypertrophy test was 55.56% (95% CI: 26.66-81.12), the specificity was 84.44% (95% CI: 71.22-92.25), and the diagnostic accuracy was 79.63% (95% CI: 67.1-88.23). A limitation of the study was the small sample size of only 54 cases, which may limit the generalizability of the findings. The comparison of the cardiothoracic ratio with heart weight suggests that PMCT may serve as a screening tool for medico-legal autopsies when performed by forensic pathologists. However, the low sensitivity of the test (55.56%) may limit its diagnostic accuracy, and further studies with larger sample sizes and more diverse populations are needed to validate these findings.
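The reported metrics can be reproduced from a 2x2 confusion matrix. The four cell counts below are inferred from the abstract's figures (9 autopsy-confirmed hypertrophy cases, 12 PMCT-positive cases, sensitivity 55.56%), so they are a reconstruction rather than published counts:

```python
# Inferred counts: 5 true positives, 7 false positives,
# 4 false negatives, 38 true negatives (54 cases in total)
tp, fp, fn, tn = 5, 7, 4, 38

sensitivity = tp / (tp + fn)                 # 5 / 9
specificity = tn / (tn + fp)                 # 38 / 45
accuracy = (tp + tn) / (tp + fp + fn + tn)   # 43 / 54

print(f"sensitivity = {sensitivity:.2%}")  # 55.56%
print(f"specificity = {specificity:.2%}")  # 84.44%
print(f"accuracy    = {accuracy:.2%}")     # 79.63%
```

These values match the sensitivity, specificity, and diagnostic accuracy reported in the abstract.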

Keywords: PMCT, virtopsy, CTR, cardiothoracic ratio

Procedia PDF Downloads 81