Search results for: weighted Hardy spaces
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1744


274 Spatial Structure of First-Order Voronoi for the Future of Roundabout Cairo Since 1867

Authors: Ali Essam El Shazly

Abstract:

The Haussmannization plan of Cairo in 1867 formed a regular network of roundabout spaces, though these have deteriorated at present. The method of identifying the spatial structure of roundabout Cairo for conservation matches the Voronoi diagram with space syntax through their shared geometrical property of spatial convexity. In this initiative, the primary convex hull of the first-order Voronoi adopts the integration and control measurements of space syntax on Cairo's roundabout generators. The functional essence of royal palaces optimizes the roundabout structure in terms of spatial measurements and the symbolic Voronoi projection of 'Tahrir Roundabout' over the Giza Nile and Pyramids. Some roundabouts of major public and commercial landmarks surround the pole of 'Ezbekia Garden' with higher control than integration measurements, which filters the new spatial structure from the adjacent traditional town. Nevertheless, the lowest integration and control measures correspond to the Voronoi contents of pollutant workshops and the plateau of the old Cairo Citadel, with the visual compensation of new royal landmarks on top. Meanwhile, the extended suburbs of infinite Voronoi polygons arrange high-control generators of chateaux housing in 'garden city' environs. The point pattern of roundabouts determines the geometrical characteristics of the Voronoi polygons. The measured lengths of Voronoi edges alternate between a zoned short range at the new poles of Cairo and a distributed structure of longer range. The shortest range of generator-vertex geometry concentrates at 'Ezbekia Garden', where the crossways of vast Cairo intersect, which maximizes the variety of choice at different spatial resolutions. However, the symbolic 'Hippodrome', the largest public landmark, forms exclusive geometrical measurements while structuring a most integrative roundabout to parallel the royal syntax.
An overview of the symbolic convex hull of the Voronoi with space syntax interconnects Parisian Cairo with the spatial chronology of scattered monuments to conceive one universal Cairo structure. Accordingly, the proposed 'Voronoi-syntax' methodology informs the future conservation of roundabout Cairo at the inferred city-level concept.
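The geometric core of the method, a first-order Voronoi diagram over roundabout generator points with edge lengths and generator-vertex distances as measurements, can be sketched as follows. This is a minimal sketch: the coordinates are hypothetical stand-ins rather than the study's Cairo data, and SciPy's `scipy.spatial.Voronoi` is assumed available.

```python
import numpy as np
from scipy.spatial import Voronoi

# Hypothetical planar coordinates (km) for a few roundabout generators;
# the actual study uses Cairo's 1867 Haussmannian roundabouts.
generators = np.array([
    [0.0, 0.0],   # e.g. the 'Ezbekia Garden' pole
    [1.2, 0.4],
    [0.8, 1.5],
    [2.1, 0.9],
    [1.6, 2.2],
])

vor = Voronoi(generators)

# Bounded Voronoi edge lengths: ridge_vertices entries without -1
# connect two finite Voronoi vertices.
edge_lengths = [
    np.linalg.norm(vor.vertices[i] - vor.vertices[j])
    for i, j in vor.ridge_vertices
    if i != -1 and j != -1
]

# Generator-to-vertex distances, the 'generator-vertex geometry'
# measured in the abstract (infinite vertices filtered out).
gen_vertex = [
    np.linalg.norm(vor.vertices[v] - generators[p])
    for p, region_idx in enumerate(vor.point_region)
    for v in vor.regions[region_idx]
    if v != -1
]

print(f"{len(edge_lengths)} bounded edges, "
      f"mean length {np.mean(edge_lengths):.2f} km")
print(f"mean generator-vertex distance {np.mean(gen_vertex):.2f} km")
```

Short versus long edge ranges, and the concentration of short generator-vertex distances at a pole, could then be compared across sub-areas exactly as the abstract describes.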

Keywords: roundabout Cairo, first-order Voronoi, space syntax, spatial structure

Procedia PDF Downloads 482
273 Sponge Urbanism as a Resilient City Design to Overcome Urban Flood Risk, for the Case of Aluva, Kerala, India

Authors: Gayathri Pramod, Sheeja K. P.

Abstract:

Urban flooding has been rising in cities for the past few years, as a result of increasing urbanization and climate change. A resilient city design focuses on 'living with water': the city is capable of accommodating floodwaters without risking any loss of lives or property. The resilient city design incorporates green infrastructure, river edge treatment, open space design, etc., to form a city that functions as a whole for resilience. Sponge urbanism is a recent method for building resilient cities, pioneered by China in 2014, and is an apt approach to resilience building for a tropical town like Aluva, Kerala. Aluva experiences rainfall of about 783 mm per month during the rainy season, and being an urbanized town, it faces the risk of urban and riverine flooding every year due to the presence of the Periyar River. Impervious surfaces and hard construction contribute to flood risk by interfering with the natural flow and natural infiltration of water into the ground; this type of development is seen in Aluva as well. In this research, Aluva is designed as a town that applies the resilient strategies of the sponge city and focuses on natural methods of construction. The flood susceptibility of Aluva is taken into account to design spaces for sponge urbanism and, in turn, reduce the town's flood susceptibility. Aluva is analyzed, and high-risk zones for development are identified through studies; these zones are designed to withstand the risk of flooding. Various catchment areas are identified according to the natural flow of water, and these catchment areas are designed to act as public open spaces and as detention ponds in case of heavy rainfall. Development guidelines according to land use are also prescribed, which help increase the green cover of the town.
Aluva is then designed to be a completely flood-adapted city, or sponge city, according to these guidelines and interventions.

Keywords: climate change, flooding, resilient city, sponge city, sponge urbanism, urbanization

Procedia PDF Downloads 137
272 Meaning Interpretation of Persian Noun-Noun Compounds: A Conceptual Blending Approach

Authors: Bahareh Yousefian, Laurel Smith Stvan

Abstract:

Linguistic structures have two facets: form and meaning. A structure can have either a literal or a figurative meaning (which may also depend on the context in which it appears). Literal meaning is understood more easily, while in figurative meaning one word or concept is understood through a different word or concept. In linguistic structures with figurative meaning, it is more difficult to relate forms to meanings than in structures with literal meaning, and the relationship between form and figurative meaning can be studied from different perspectives. Various linguists have asked what happens in the mind when figurative meaning is understood through form, using different perspectives and theories to explain this process. It has also been studied through cognitive linguistics, in which the mind and mental activities are central; in this view, meaning (in other words, conceptualization) is considered a mental process. In this descriptive-analytic study, 20 Persian compound nouns with figurative meanings were collected from the Persian-language Moeen Encyclopedic Dictionary and other sources. Examples include "Sofreh Xaneh" (traditional restaurant) and "Dast Yar" (assistant). These were studied in a cognitive semantics framework using Conceptual Blending Theory, which has not been tested on Persian compound nouns before. It was noted that Conceptual Blending Theory can account for the process of understanding the figurative meanings of Persian compound nouns. Many cognitive linguists believe that conceptual blending is not only a linguistic theory but also a basic human cognitive ability that plays important roles in thought, imagination, and even everyday life (though unconsciously).
The ability to use mental spaces and conceptual blending (which is exclusive to humankind) is such a basic but unconscious ability that we are unaware of its existence and importance. What differentiates Conceptual Blending Theory from other accounts of figurative meaning is the emergence of new semantic aspects (the emergent structure) that lead to a more comprehensive and precise meaning. In this study, it was found that Conceptual Blending Theory can explain how the figurative meanings of Persian compound nouns are reached from their forms, such as 'talkative' for the compound "Bolbol + Zabani" (nightingale + tongue) and 'wage' for the compound "Dast + Ranj" (hand + suffering).

Keywords: cognitive linguistics, conceptual blending, figurative meaning, Persian compound nouns

Procedia PDF Downloads 57
271 EQMamba - Method Suggestion for Earthquake Detection and Phase Picking

Authors: Noga Bregman

Abstract:

Accurate and efficient earthquake detection and phase picking are crucial for seismic hazard assessment and emergency response. This study introduces EQMamba, a deep-learning method that combines the strengths of the Earthquake Transformer and the Mamba model for simultaneous earthquake detection and phase picking. EQMamba leverages the computational efficiency of Mamba layers to process longer seismic sequences while maintaining a manageable model size. The proposed architecture integrates convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM) networks, and Mamba blocks. The model employs an encoder composed of convolutional layers and max pooling operations, followed by residual CNN blocks for feature extraction. Mamba blocks are applied to the outputs of BiLSTM blocks, efficiently capturing long-range dependencies in seismic data. Separate decoders are used for earthquake detection, P-wave picking, and S-wave picking. We trained and evaluated EQMamba using a subset of the STEAD dataset, a comprehensive collection of labeled seismic waveforms. The model was trained using a weighted combination of binary cross-entropy loss functions for each task, with the Adam optimizer and a scheduled learning rate. Data augmentation techniques were employed to enhance the model's robustness. Performance comparisons were conducted between EQMamba and the EQTransformer over 20 epochs on this modest-sized STEAD subset. Results demonstrate that EQMamba achieves superior performance, with higher F1 scores and faster convergence compared to EQTransformer. EQMamba reached F1 scores of 0.8 by epoch 5 and maintained higher scores throughout training. The model also exhibited more stable validation performance, indicating good generalization capabilities. While both models showed lower accuracy in phase-picking tasks compared to detection, EQMamba's overall performance suggests significant potential for improving seismic data analysis. 
The rapid convergence and superior F1 scores of EQMamba, even on a modest-sized dataset, indicate promising scalability for larger datasets. This study contributes to the field of earthquake engineering by presenting a computationally efficient and accurate method for simultaneous earthquake detection and phase picking. Future work will focus on incorporating Mamba layers into the P and S pickers and further optimizing the architecture for seismic data specifics. The EQMamba method holds the potential for enhancing real-time earthquake monitoring systems and improving our understanding of seismic events.
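The training objective described above, a weighted combination of binary cross-entropy losses for the detection, P-picking, and S-picking heads, can be sketched as follows. The task weights and toy probability traces are illustrative assumptions; the abstract does not report the exact values used.

```python
import numpy as np

def bce(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy averaged over samples."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return float(np.mean(-(y_true * np.log(y_pred)
                           + (1 - y_true) * np.log(1 - y_pred))))

def multitask_loss(targets, preds, weights):
    """Weighted sum of per-task BCE losses, one per decoder head
    (detection, P-wave picking, S-wave picking)."""
    return sum(weights[k] * bce(targets[k], preds[k]) for k in targets)

# Toy probability traces for one short window (hypothetical values).
targets = {
    "detection": np.array([0., 1., 1., 1., 0.]),
    "p_pick":    np.array([0., 1., 0., 0., 0.]),
    "s_pick":    np.array([0., 0., 0., 1., 0.]),
}
preds = {
    "detection": np.array([0.1, 0.8, 0.9, 0.7, 0.2]),
    "p_pick":    np.array([0.1, 0.6, 0.2, 0.1, 0.1]),
    "s_pick":    np.array([0.1, 0.1, 0.2, 0.5, 0.1]),
}
weights = {"detection": 0.5, "p_pick": 0.25, "s_pick": 0.25}

loss = multitask_loss(targets, preds, weights)
print(f"weighted multi-task loss: {loss:.4f}")
```

In the full model the same scalar loss would be backpropagated through the shared encoder (CNN, BiLSTM, and Mamba blocks) and the three decoders jointly.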

Keywords: earthquake, detection, phase picking, s waves, p waves, transformer, deep learning, seismic waves

Procedia PDF Downloads 15
270 Comparison of Illuminance Levels in Old Omani and Portuguese Forts in Oman

Authors: Maatouk Khoukhi

Abstract:

Nowadays, reducing the energy consumed by buildings, mainly to achieve thermal comfort for occupants, is the main concern of architects and building designers. The common, traditional solution is to design a highly insulated envelope and to reduce openings and transparent elements such as windows. However, this leads the artificial lighting system to consume more energy to compensate for the lack of natural light coming through the glazed parts of the building envelope. Therefore, a good balance between sufficient daylight and controlled heat gain through the building envelope should be sought for energy-saving purposes. To achieve a better indoor environment, window size and spacing, interior finishes, and the location of partitions must be assessed accurately. Daylighting is the controlled admission of natural light into a space through windows and transparent elements of the building envelope, which helps create a visually stimulating and productive environment for building occupants. The main concern is not simply to provide enough daylight to an occupied space, but to do so without undesirable side effects. Indeed, glare is a major problem in glazed-facade buildings, and it can be reduced by using tinted windows. The main target of this research is to investigate how well daylight in old Omani forts meets functional needs, and how the forts were designed and built to avoid glare and overheating with an appropriate window-to-floor ratio, since more windows do not automatically result in more daylight; what matters is whether natural light is controlled and distributed properly throughout the space. Spaces from different Omani and Portuguese forts under the same climate conditions are compared in terms of daylight illuminance levels, examining the similarities and differences in visual attributes between them.
The results of this study indicate that lighting preference is not universal: people from different geographical locations are adapted to certain illuminance levels, so standards cannot be generalized for the entire world. This would be useful to practitioners designing to address the diversity of users' lighting-level preferences in our globally connected society.

Keywords: daylighting, energy, forts, thermal comfort

Procedia PDF Downloads 148
269 Robots for City Life: Design Guidelines and Strategy Recommendations for Introducing Robots in Cities

Authors: Akshay Rege, Lara Gomaa, Maneesh Kumar Verma, Sem Carree

Abstract:

The aim of this paper is to articulate design strategies and recommendations for introducing robots into city life, based on experiments conducted with robots and semi-autonomous systems in three Dutch cities. This research was carried out by the Spot robotics team of Impact Lab, housed within YES!Delft, a start-up accelerator located in Delft, The Netherlands. The premise of this research is to inform the development of the 'region of the future' by the Metropolitan Region Rotterdam-Den Haag (MRDH). The paper starts by reporting the desktop research carried out to find and develop multiple use cases for robots supporting humans in various activities. It then reports the user research carried out by crowdsourcing responses in public spaces of the Rotterdam-Den Haag region and on the internet. Based on the knowledge gathered in this initial research, practical experiments with robots and semi-autonomous systems were carried out to test and validate it. These experiments were conducted in Rotterdam, The Hague, and Delft, using a custom sensor box, a drone, and Boston Dynamics' Spot robot. Out of thirty use cases, five were tested in experiments: skyscraper emergency evacuation, human transportation and security, bike-lane delivery, mobility tracking, and robot drama. The learnings from these experiments provided insights into human-robot interaction and symbiosis in cities, which can be used to introduce robots that support human activities, ultimately enabling the transition from a human-only city life towards a blended one in which robots play a role. Based on this understanding, we formulated design guidelines and strategy recommendations for incorporating robots into the Rotterdam-Den Haag region of the future.
Lastly, we discuss how our insights from the Rotterdam-Den Haag region can inspire and inform the incorporation of robots in other cities of the world.

Keywords: city life, design guidelines, human-robot Interaction, robot use cases, robotic experiments, strategy recommendations, user research

Procedia PDF Downloads 79
268 Entrepreneurial Dynamism and Socio-Cultural Context

Authors: Shailaja Thakur

Abstract:

Managerial literature abounds with discussions of business strategies, success stories, and cases of failure, which indicate the parameters that should be considered in gauging the dynamism of an entrepreneur. Neoclassical economics has reduced entrepreneurship to a mere factor of production, driven solely by the profit motive, thus stripping the entrepreneur of all creativity and restricting his decision-making to mechanical calculations. His 'dynamism' is gauged simply by the amount of profit he earns, marginalizing any discussion of the means he employs to attain this objective. With theoretical backing, we have developed an Index of Entrepreneurial Dynamism (IED), giving weights to the different moves an entrepreneur makes during his business journey. Strategies such as changes in product lines, markets, and technology are gauged as very important (weight of 4), while adaptations in technology and raw materials used, and upgrades in skill set, are given a slightly lesser weight of 3. Use of formal market analysis and diversification into related products are considered moderately important (weight of 2), and being a first-generation entrepreneur, employing managers, and having plans to diversify are taken to be only slightly important business strategies (weight of 1). The maximum that an entrepreneur can score on this index is 53. A semi-structured questionnaire is employed to solicit responses from entrepreneurs on the various strategies they have employed during the course of their business. Binary as well as graded responses are obtained, weighted, and summed up to give the IED. This index was tested on about 150 tribal entrepreneurs in Mizoram, a state of India, and was found to be highly effective in gauging their dynamism. The index has universal applicability but is devoid of the socio-cultural context, which is central to the success and performance of entrepreneurs.
We hypothesize that a society that respects risk-taking, takes failures in its stride, glorifies entrepreneurial role models, and promotes merit and achievement is one with a conducive socio-cultural environment for entrepreneurship. To obtain an idea of social acceptability, we put questions related to the social acceptability of business to another set of respondents from different walks of life: bureaucracy, academia, and other professional fields. A similar weighting technique is employed, and an index is generated, which is used to discount the IED of the respondent entrepreneurs from that region or society. This methodology is being tested on a sample of entrepreneurs from two very different socio-cultural milieus, a tribal society and a 'mainstream' society, with the hypothesis that entrepreneurs in the tribal milieu may show a higher level of dynamism than their counterparts in other regions. An entrepreneur who scores high on the IED and belongs to a society and culture that holds entrepreneurship in high esteem might not in reality be as dynamic as a person who shows similar dynamism in a relatively discouraging or even outright hostile environment.
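The weighted-sum construction of the IED can be sketched as follows. The item names below are paraphrased examples keyed to the 4/3/2/1 weights described in the abstract, not the study's full survey instrument (whose maximum score is 53).

```python
# Illustrative sketch of the Index of Entrepreneurial Dynamism (IED):
# binary/graded responses are multiplied by strategy weights
# (4 = very important ... 1 = slightly important) and summed.
IED_WEIGHTS = {
    "changed_product_lines": 4,
    "entered_new_markets": 4,
    "changed_technology": 4,
    "adapted_raw_materials": 3,
    "upgraded_skills": 3,
    "formal_market_analysis": 2,
    "related_diversification": 2,
    "first_generation": 1,
    "employs_managers": 1,
    "plans_to_diversify": 1,
}

def ied_score(responses):
    """responses: item -> 0/1 (binary) or a graded value in [0, 1]."""
    return sum(IED_WEIGHTS[item] * value
               for item, value in responses.items())

entrepreneur = {
    "changed_product_lines": 1,
    "entered_new_markets": 0,
    "changed_technology": 1,
    "adapted_raw_materials": 1,
    "upgraded_skills": 1,
    "formal_market_analysis": 0,
    "related_diversification": 1,
    "first_generation": 1,
    "employs_managers": 0,
    "plans_to_diversify": 1,
}

print("IED =", ied_score(entrepreneur))  # sums weighted responses
```

The socio-cultural index described above would be built the same way from a second respondent pool and then used to discount the IED for a given region.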

Keywords: index of entrepreneurial dynamism, India, social acceptability, tribal entrepreneurs

Procedia PDF Downloads 233
267 Analysis of Anti-Tuberculosis Immune Response Induced in Lungs by Intranasal Immunization with Mycobacterium indicus pranii

Authors: Ananya Gupta, Sangeeta Bhaskar

Abstract:

Mycobacterium indicus pranii (MIP) is a saprophytic mycobacterium and a predecessor of the M. avium complex (MAC). Whole-genome analysis and growth-kinetics studies have placed MIP between pathogenic and non-pathogenic species. It shares a significant antigenic repertoire with M. tuberculosis and has unique immunomodulatory properties. MIP provides better protection than BCG against pulmonary tuberculosis in animal models, and immunization with MIP by the aerosol route provides significantly higher protection than immunization by the subcutaneous (s.c.) route. However, the mechanism behind this differential protection has not been studied. In this study, using a mouse model, we evaluated and compared the M.tb-specific immune response in lung compartments (airway lumen / lung interstitium) as well as the spleen following MIP immunization via the intranasal (i.n.) and s.c. routes. MIP i.n. vaccination resulted in increased seeding of memory T-cells (CD4+ and CD8+) in the airway lumen. The frequency of CD4+ T-cells expressing the Th1 migratory marker CXCR3 and the activation marker CD69 was also high in the airway lumen of the MIP i.n. group. Significantly higher ex vivo secretion of the cytokines IFN-γ, IL-12, IL-17, and TNF-α from cells of the airway luminal spaces provides evidence of an antigen-specific lung immune response, besides systemic immunity comparable to the MIP s.c. group. Analysis of the T-cell response on a per-cell basis revealed that antigen-specific T-cells of the MIP i.n. group were functionally superior, as a higher percentage of these cells simultaneously secreted IFN-γ, IL-2, and TNF-α compared to the MIP s.c. group. T-cells secreting more than one cytokine simultaneously are believed to mount a more robust effector response and to be crucial for protection, compared with single-cytokine-secreting T-cells. Adoptive transfer of airway luminal T-cells from the MIP i.n.
group into the trachea of naive B6 mice revealed that MIP-induced CD8 T-cells play a crucial role in providing long-term protection. Thus, the study demonstrates that MIP intranasal vaccination induces M.tb-specific memory T-cells in the airway lumen that result in an early and robust recall response against M.tb infection.

Keywords: airway lumen, Mycobacterium indicus pranii, Th1 migratory markers, vaccination

Procedia PDF Downloads 174
266 Ibrutinib and the Potential Risk of Cardiac Failure: A Review of Pharmacovigilance Data

Authors: Abdulaziz Alakeel, Roaa Alamri, Abdulrahman Alomair, Mohammed Fouda

Abstract:

Introduction: Ibrutinib is a selective, potent, and irreversible small-molecule inhibitor of Bruton's tyrosine kinase (BTK). It forms a covalent bond with a cysteine residue (CYS-481) at the active site of BTK, leading to inhibition of BTK enzymatic activity. The drug is indicated for certain types of cancer such as mantle cell lymphoma (MCL), chronic lymphocytic leukaemia, and Waldenström's macroglobulinaemia (WM). Cardiac failure is the inability of the heart muscle to pump adequate blood to the body's organs; there are multiple types, including left- and right-sided heart failure and systolic and diastolic heart failure. The aim of this review is to evaluate the risk of cardiac failure associated with the use of ibrutinib and to suggest regulatory recommendations if required. Methodology: The Signal Detection team at the National Pharmacovigilance Center (NPC) of the Saudi Food and Drug Authority (SFDA) performed a comprehensive signal review using its national database as well as the World Health Organization (WHO) database, VigiBase, to retrieve information for assessing the causality between cardiac failure and ibrutinib. We used the WHO-Uppsala Monitoring Centre (UMC) criteria as the standard for assessing the causality of the reported cases. Results: Case Review: There were 212 global ICSRs for the combined drug/adverse drug reaction pair as of July 2020. The reviewers selected and assessed causality for the well-documented ICSRs with completeness scores of 0.9 and above (35 ICSRs), where a value of 1.0 represents the best-documented ICSRs. Among the reviewed cases, more than half support an association (four probable and 15 possible cases). Data Mining: The disproportionality between the observed and expected reporting rates for the drug/adverse drug reaction pair is estimated using the information component (IC), a tool developed by the WHO-UMC to measure the reporting ratio.
A positive IC reflects a higher-than-expected reporting association, while negative values indicate a lower one, with the null value equal to zero. The result (IC = 1.5) revealed a positive statistical association for the drug/ADR combination, meaning that 'ibrutinib' with 'cardiac failure' has been observed more often than expected compared to other medications in the WHO database. Conclusion: Health regulators and healthcare professionals must be aware of the potential risk of cardiac failure associated with ibrutinib, and monitoring for any signs or symptoms in treated patients is essential. The weighted cumulative evidence identified from the causality assessment of the reported cases and from data mining is sufficient to support a causal association between ibrutinib and cardiac failure.
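The information component used in the data-mining step is commonly computed in the WHO-UMC shrinkage form, IC = log2((observed + 0.5) / (expected + 0.5)), with the expected count taken under independence of drug and reaction. The sketch below uses hypothetical database denominators; only the 212 pair reports and the IC of 1.5 are given in the abstract.

```python
import math

def information_component(n_combination, n_drug, n_reaction, n_total):
    """WHO-UMC style Information Component (shrinkage form):
    IC = log2((observed + 0.5) / (expected + 0.5)),
    where expected = n_drug * n_reaction / n_total.
    IC > 0 means the pair is reported more often than expected."""
    expected = n_drug * n_reaction / n_total
    return math.log2((n_combination + 0.5) / (expected + 0.5))

# Hypothetical counts chosen for illustration; the underlying
# VigiBase denominators are not reported in the abstract.
ic = information_component(
    n_combination=212,      # ibrutinib + cardiac failure reports
    n_drug=20_000,          # all ibrutinib reports
    n_reaction=150_000,     # all cardiac-failure reports
    n_total=40_000_000,     # all reports in the database
)
print(f"IC = {ic:.2f}")
```

With these illustrative denominators the IC comes out close to the reported value of 1.5; the point of the sketch is only the structure of the calculation, not the actual database counts.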

Keywords: cardiac failure, drug safety, ibrutinib, pharmacovigilance, signal detection

Procedia PDF Downloads 108
265 Technology Management for Early Stage Technologies

Authors: Ming Zhou, Taeho Park

Abstract:

Early-stage technologies are particularly challenging to manage due to their numerous, high-degree uncertainties. Most results coming directly out of a research lab are at an early, if not infant, stage, and a long and uncertain commercialization process awaits them. The majority of such lab technologies go nowhere and never get commercialized for various reasons, so any effort or financial resources put into managing them turn fruitless. The high stakes naturally call for better results, which makes the patenting decision harder. A good, well-protected patent goes a long way toward commercialization of a technology. Our preliminary research showed that there was no simple yet productive procedure for such valuation: most studies to date have been theoretical and overly comprehensive, with practical suggestions non-existent. Hence, we attempted to develop a simple and highly implementable procedure for efficient and scalable valuation. We thoroughly reviewed existing research, interviewed practitioners in the Silicon Valley area, and surveyed university technology offices. Instead of presenting another theoretical and exhaustive study, we aimed at developing practical guidance that a government agency and/or university office could easily deploy to move to the later steps of managing early-stage technologies. We provide a procedure to thriftily value a technology and make the patenting decision. A patenting index was developed using survey data and expert opinions. We identified the most important factors to use in the patenting decision from survey ratings, which then helped us generate relative weights for the subsequent scoring and weighted-averaging step. More importantly, we validated our procedure by testing it with our practitioner contacts, whose inputs produced a general yet highly practical cut schedule.
Such a schedule of realistic practices has yet to be witnessed in current research. Although a technology office may choose to deviate from our cuts, what we offer here at least provides a simple and meaningful starting point. This procedure was welcomed by practitioners in our expert panel and by university officers in our interview group. This research contributes to the understanding and practice of managing early-stage technologies by instating a heuristically simple yet theoretically solid method for the patenting decision. Our findings generated the top decision factors, decision processes, and decision thresholds of key parameters, offering a more practical perspective that complements extant knowledge. Our results could be affected by our sample size and somewhat biased by our focus on the Silicon Valley area. Future research, blessed with bigger data and more insights, may want to further train and validate our parameter values in order to obtain more consistent results and to analyze the decision factors for different industries.
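The scoring-and-cut mechanism described above can be sketched as a weighted average compared against a threshold. Every factor name, weight, rating scale, and cut value below is an illustrative assumption; the study's actual factors, weights, and cut schedule come from its survey data and expert panel.

```python
# Minimal sketch of a weighted-average patenting index with a cut
# threshold. All names, weights, and the threshold are hypothetical.
FACTOR_WEIGHTS = {          # relative weights, summing to 1.0
    "market_potential": 0.35,
    "technical_maturity": 0.25,
    "claim_strength": 0.25,
    "licensing_interest": 0.15,
}
PATENT_CUT = 3.0            # hypothetical cut on a 1-5 rating scale

def patenting_index(ratings):
    """ratings: factor -> expert score on a 1-5 scale."""
    return sum(FACTOR_WEIGHTS[f] * r for f, r in ratings.items())

def recommend_patenting(ratings):
    """Apply the cut: patent only if the index clears the threshold."""
    return patenting_index(ratings) >= PATENT_CUT

candidate = {"market_potential": 4, "technical_maturity": 2,
             "claim_strength": 3, "licensing_interest": 3}
score = patenting_index(candidate)
print(f"index = {score:.2f}, patent: {recommend_patenting(candidate)}")
```

A technology office deviating from the cut, as the abstract allows, would simply substitute its own `PATENT_CUT` and weights.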

Keywords: technology management, early stage technology, patent, decision

Procedia PDF Downloads 332
264 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study

Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming

Abstract:

Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and a control group with dichotomous outcomes. Its popularity is primarily due to its stability and robustness to model misspecification. However, the situation is different for the relative risk and the risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach for estimating an adjusted relative risk or risk difference in clinical trials, partly due to the lack of a comprehensive evaluation of the available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing both conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least squares (IWLS) and model-based standard errors (SEs); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10,000 times for each scenario across all combinations of sample sizes (200, 1,000, and 5,000), outcome event rates (10%, 50%, and 80%), and covariate effects (ranging from -0.05 to 0.7) representing weak, moderate, or strong relationships.
Treatment effects (0, -0.5, and 1 on the log scale) cover the null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strengths, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks, but it is unclear which method(s) is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimates may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the log-binomial GLM with IWLS.
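The simulation loop behind such performance measures can be sketched in miniature as follows. A minimal sketch under stated assumptions: the simple unadjusted relative-risk estimator (with a continuity correction) stands in for the candidate methods listed above, and the scenario settings and replication count are illustrative, not the study's.

```python
import math
import random

def simulate_rr_bias(p_control, true_rr, n_per_arm, n_sims, seed=1):
    """Monte-Carlo check of the unadjusted relative-risk estimator:
    simulate two-arm binary trials and summarise bias and MSE of
    log(RR-hat) against log(true RR). A 0.5 continuity correction
    avoids zero cells."""
    rng = random.Random(seed)
    p_treat = p_control * true_rr
    log_true = math.log(true_rr)
    errors = []
    for _ in range(n_sims):
        x_t = sum(rng.random() < p_treat for _ in range(n_per_arm))
        x_c = sum(rng.random() < p_control for _ in range(n_per_arm))
        rr_hat = ((x_t + 0.5) / (n_per_arm + 0.5)) / \
                 ((x_c + 0.5) / (n_per_arm + 0.5))
        errors.append(math.log(rr_hat) - log_true)
    bias = sum(errors) / n_sims
    mse = sum(e * e for e in errors) / n_sims
    return bias, mse

# Illustrative scenario: 10% control event rate, true RR = 1.5.
bias, mse = simulate_rr_bias(p_control=0.10, true_rr=1.5,
                             n_per_arm=1000, n_sims=2000)
print(f"bias = {bias:+.4f}, MSE = {mse:.4f}")
```

The full study replaces the estimator inside the loop with each candidate method (log-binomial GLM, modified Poisson, GEE, marginal standardisation) and additionally records coverage, relative efficiency, and convergence rates per scenario.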

Keywords: binary outcomes, statistical methods, clinical trials, simulation study

Procedia PDF Downloads 97
263 Histological and Morphometric Studies of the Liver of Aborted Goats

Authors: Toumi Farah, Charallah Salima

Abstract:

In the Algerian Sahara, goat farming is predominant; it is associated with other types of breeding, particularly camel and sheep, and exclusively goat herds also constitute a significant proportion of breeding. The Saharan goat is a small ruminant with a black coat with white spots, hanging ears, and more or less long hair. It is known for its hardiness, its resistance to the adverse conditions of arid zones, and its ecophysiological adaptation to harsh environments. However, pregnancy alterations, particularly abortion, degrade its productivity and cause economic losses, with both direct and indirect effects on animal production, such as the costs of veterinary interventions and of reconstituting livestock. The purpose of this work is to study the histological aspect of the liver of aborted goats living in nomadic herds in the region of Béni-Abbès (30°7' N, 2°10' W). The organs were collected in physiological serum, rinsed, and then fixed with formaldehyde (37%, diluted to 10%). These samples were then processed for a topographic study. The morphometric study of the liver was performed using the image analysis and processing software ImageJ; the various measurements obtained are intended to specify the supposed stage of development according to body weight. The histological structure of the liver shows that the hepatic parenchyma consists of vascular connective spaces surrounded by Glisson's capsule. The sinusoids and hepatic portal vein are full of red blood cells, indicating sinusoidal congestion and a thrombosed vein. At high magnification, the blood vessels show vascular thrombosis and haemorrhage in some areas of the hepatic parenchyma. Morphometric analysis shows that the number of liver parenchymal cells and the diameter of liver vessels vary according to the stage of development.
The results obtained will provide details of the anatomical and cellular elements that can be used in the diagnosis of early or late abortion and of late embryonic death. It would be interesting to identify, by immunohistochemistry, inflammatory markers useful for monitoring the progress of pregnancy and bioindicators of fetomaternal distress.

Keywords: aborting goat, arid zone, liver, histopathology

Procedia PDF Downloads 80
262 Shedding Light on Colorism: Exploring Stereotypes, Influential Factors, and Consequences in African American Communities

Authors: India Sanders, Jeffrey Sherman

Abstract:

Colorism has been a persistent and ingrained issue in the history of the United States, with far-reaching consequences that continue to affect various aspects of daily life, institutional policies, public spaces, economic structures, and social norms. This complex problem has had a particularly profound impact on the African-American community, shaping how they are perceived and treated within society at large. The prevalence of negative stereotypes surrounding African Americans can lead to severe repercussions such as discrimination and mental health disparities. The effects of such biases can also materialize in diverse forms, impacting the well-being and livelihoods of individuals within this community. Current research has examined how people from different racial groups perceive different skin tones of Black people, looking at the cognitive processes that manifest through categorization and stereotypes. Additionally, studies have observed consequences related to colorism and how it directly affects those with darker versus lighter skin tones. However, little research has been conducted on the influence of the stereotypes associated with various skin tones. In the present study, it is hypothesized that participants in Group A will rate positive stereotypes associated with lighter skin tones significantly higher than positive stereotypes associated with darker skin tones. It is also hypothesized that participants in Group B will rate negative stereotypes associated with darker skin tones significantly higher than negative stereotypes associated with lighter skin tones. To test these hypotheses, a quantitative study of stereotypes of skin tone representation within the African-American community will be conducted. Participants will rate the accuracy of various visual representations in mass media of African Americans with light skin tones and dark skin tones using a Likert scale. 
Participants will also be provided a questionnaire further examining the perception of stereotypes and how this affects their interactions with African Americans with lighter versus darker skin tones. The purpose of this study is to investigate the impact of skin tone portrayals on African Americans, including associated stereotypes and societal perceptions. It is expected that participants will more likely associate negative stereotypes with African Americans who have darker skin tones, as this is a common and reinforced viewpoint in the cultural and social system.
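One plausible way to analyse the Group A ratings described above is a paired comparison of mean Likert scores across the two skin-tone conditions. The sketch below is illustrative only: the simulated ratings, the sample size, and the effect size are hypothetical assumptions, not study data, and the paired t-test is one of several tests the authors might use.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_raters = 40

# Hypothetical 1-5 Likert ratings of positive stereotypes, paired by
# participant across the lighter- and darker-skin-tone conditions.
light = np.clip(np.round(rng.normal(3.8, 0.8, n_raters)), 1, 5)
dark = np.clip(np.round(rng.normal(3.1, 0.8, n_raters)), 1, 5)

# Paired test, since the same raters score both conditions.
t_stat, p_value = stats.ttest_rel(light, dark)
```

A between-group comparison of Group A and Group B would instead use an independent-samples test, since those ratings come from different participants.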

Keywords: colorism, discrimination, racism, stereotype

Procedia PDF Downloads 49
261 The Properties of Risk-based Approaches to Asset Allocation Using Combined Metrics of Portfolio Volatility and Kurtosis: Theoretical and Empirical Analysis

Authors: Maria Debora Braga, Luigi Riso, Maria Grazia Zoia

Abstract:

Risk-based approaches to asset allocation are portfolio construction methods that do not rely on expected returns as inputs for the asset classes in the investment universe and use risk information only. They include the Minimum Variance strategy (MV strategy), the traditional volatility-based Risk Parity strategy (SRP strategy), the Most Diversified Portfolio strategy (MDP strategy) and, for many, the Equally Weighted strategy (EW strategy). All of these approaches are based on portfolio volatility as the reference risk measure, but in 2023 the Kurtosis-based Risk Parity strategy (KRP strategy) and the Minimum Kurtosis strategy (MK strategy) were introduced. Understandably, they used the fourth root of the portfolio fourth moment as a proxy for portfolio kurtosis, in order to work with a homogeneous function of degree one. This paper contributes theoretically and methodologically to the framework of risk-based asset allocation approaches with two steps forward. First, a new and more flexible objective function is proposed, considering a linear combination (with positive coefficients that sum to one) of portfolio volatility and portfolio kurtosis, which can alternatively serve a risk-minimization goal or a homogeneous risk-distribution goal. The new basic idea thus consists of extending the goals of typical risk-based approaches to a combined risk measure. To motivate such a risk measure, it is worth remembering that volatility and kurtosis are both expressions of uncertainty, read as dispersion of returns around the mean; both preserve adherence to a symmetric framework and consider the entire returns distribution, but they differ in that the former captures the "normal", ordinary dispersion of returns, while the latter captures extreme dispersion. 
Therefore, a combined risk metric built from two individual metrics focused on the same phenomenon but differently sensitive to its intensity allows the asset manager, by varying the "relevance coefficient" associated with the individual metrics in the objective function, to express a wide set of plausible investment goals for the portfolio construction process, while serving investors differently concerned with tail risk and traditional risk. Since this is the first study to implement risk-based approaches using a combined risk measure, it becomes of fundamental importance to investigate the portfolio effects triggered by this innovation. The paper also offers a second contribution. Until the recent advent of the MK strategy and the KRP strategy, efforts to highlight interesting properties of risk-based approaches were inevitably directed towards the traditional MV strategy and SRP strategy. Previous literature established an increasing order in terms of portfolio volatility, starting from the MV strategy, through the SRP strategy, and arriving at the EW strategy, and provided the mathematical proof for the "equalization effect" concerning marginal risks when the MV strategy is considered, and concerning risk contributions when the SRP strategy is considered. Regarding the validity of similar conclusions for the MK strategy and the KRP strategy, a theoretical demonstration is still pending. This paper fills this gap.
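The risk-minimization use of the combined measure can be sketched numerically. The snippet below minimises a convex combination of portfolio volatility and the fourth root of the sample portfolio fourth moment over long-only, fully invested weights. The mixing coefficient, the simulated heavy-tailed returns, and the use of sample moments (rather than a full co-moment tensor) are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def combined_risk(w, returns, lam):
    """lam * volatility + (1 - lam) * kurtosis proxy, where the proxy is
    the fourth root of the sample portfolio fourth moment, so that the
    whole measure is homogeneous of degree one (as in the abstract)."""
    p = returns @ w
    vol = p.std()
    fourth_root = np.mean((p - p.mean()) ** 4) ** 0.25
    return lam * vol + (1.0 - lam) * fourth_root

def minimum_combined_risk(returns, lam):
    """Long-only, fully invested weights minimising the combined measure."""
    n = returns.shape[1]
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(combined_risk, np.full(n, 1.0 / n),
                   args=(returns, lam), bounds=[(0.0, 1.0)] * n,
                   constraints=cons)
    return res.x

# Toy heavy-tailed returns: four assets with increasing risk scales.
rng = np.random.default_rng(0)
rets = rng.standard_t(df=5, size=(1000, 4)) * np.array([0.01, 0.02, 0.03, 0.04])
w = minimum_combined_risk(rets, lam=0.5)
```

Setting lam=1 recovers a sample-based minimum-variance portfolio, while lam=0 recovers a minimum-kurtosis-proxy portfolio, so the single coefficient spans the family of goals the abstract describes.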

Keywords: risk parity, portfolio kurtosis, risk diversification, asset allocation

Procedia PDF Downloads 48
260 Feasibility of Two Positive-Energy Schools in a Hot-Humid Tropical Climate: A Methodological Approach

Authors: Shashwat, Sandra G. L. Persiani, Yew Wah Wong, Pramod S. Kamath, Avinash H. Anantharam, Hui Ling Aw, Yann Grynberg

Abstract:

Achieving zero-energy targets in existing buildings is known to be a difficult task; hence, such targets are addressed almost exclusively at new buildings. Although these ultra-efficient case studies remain essential to develop future technologies and drive the concepts of zero energy, the immediate need to cut the consumption of the existing building stock remains unaddressed. This work aims to present a reliable and straightforward methodology for assessing the potential of energy-efficient upgrading in existing buildings. Public Singaporean school buildings, characterized by low energy use intensity and large roof areas, were identified as potential objects for conversion to highly efficient buildings with a positive energy balance. A first study phase included the development of a detailed energy model for two case studies (a primary and a secondary school), based on the architectural drawings provided and on site visits, and calibrated using measured end-use power consumption of different spaces. The energy model was used to demonstrate compliance or predict the energy consumption of proposed changes in the two buildings. As complete energy monitoring is difficult and substantially time-consuming, short-term energy data was collected in the schools by taking spot measurements of power, voltage, and current for all blocks of both schools. The figures revealed that the bulk of the consumption is attributed, in decreasing order of magnitude, to air-conditioning, plug loads, and lighting. In a second study phase, a number of energy-efficient technologies and strategies were evaluated through energy modeling to identify the alternatives giving the highest energy-saving potential, achieving a reduction in energy use intensity down to 19.71 kWh/m²/y and 28.46 kWh/m²/y for the primary and the secondary school respectively. 
This exercise of field evaluation and computer simulation of energy saving potential aims at a preliminary assessment of the positive-energy feasibility enabling future implementation of the technologies on the buildings studied, in anticipation of a broader and more widespread adoption in Singaporean schools.
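The spot-measurement approach described above reduces to simple arithmetic once a reading is assumed representative of the average operating load. The voltage, current, power-factor, and operating-hours figures below are illustrative assumptions, not values from the study.

```python
def spot_power_kw(volts, amps, power_factor=0.9):
    """Real power from a single-phase spot reading of voltage and
    current, derated by an assumed power factor, in kW."""
    return volts * amps * power_factor / 1000.0

def annual_eui(avg_load_kw, operating_hours, floor_area_m2):
    """Energy use intensity (kWh/m2/y), assuming the spot reading is
    representative of the average load over the operating hours."""
    return avg_load_kw * operating_hours / floor_area_m2

# Hypothetical block: one 230 V / 20 A circuit, and a whole-block load of
# 45 kW over ~2000 operating hours/year on 4600 m2 of gross floor area.
per_circuit_kw = spot_power_kw(230, 20)
eui = annual_eui(45.0, 2000, 4600)
```

The main source of error in this shortcut is the representativeness assumption: loads such as air-conditioning vary strongly with occupancy and weather, which is why the study calibrates a full energy model rather than relying on spot readings alone.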

Keywords: energy simulation, school building, tropical climate, zero energy buildings, positive energy

Procedia PDF Downloads 128
259 A Failure to Strike a Balance: The Use of Parental Mediation Strategies by Foster Carers and Social Workers

Authors: Jennifer E Simpson

Abstract:

Background and purpose: The ubiquitous use of the Internet and social media by children and young people has had a dual effect. The first is to open a world of possibilities and promise that is characterized by the ability to consume and create content, connect with friends, explore and experiment. The second relates to risks such as unsolicited requests, sexual exploitation, cyberbullying and commercial exploitation. This duality poses significant difficulties for a generation of foster carers and social workers who have no childhood experience to draw on in terms of growing up using the Internet, social media and digital devices. This presentation is concerned with the findings of a small qualitative study about the use of digital devices and the Internet by care-experienced young people to stay in touch with their families and the way this was managed by foster carers and social workers using specific parental mediation strategies. The findings highlight that restrictive strategies were used by foster carers and endorsed by social workers. An argument is made for an approach that develops a series of balanced solutions that move foster carers from such restrictive approaches to those that are grounded in co-use and are interpretive in nature. Methods: Using a purposive sampling strategy, 12 triads consisting of care-experienced young people (aged 13-18 years), their foster carers and allocated social workers were recruited. All respondents undertook a semi-structured interview, with the young people detailing what social media apps and other devices they used to contact their families via an Ecomap. The foster carers and social workers shared details of the methods and approaches they used to manage digital devices and the Internet in general. Data analysis was performed using a Framework analytic method to explore the various attitudes, as well as complementary and contradictory perspectives of the young people, their foster carers and allocated social workers. 
Findings: The majority of foster carers made use of parental mediation strategies that erred on the side of typologies that included setting rules and regulations (restrictive), ad-hoc checking of a young person's behavior and device (monitoring), and software used to limit or block access to inappropriate websites (technical). It was noted that minimal use was made by foster carers of parental mediation strategies that included talking about content (active/interpretive) or sharing Internet activities (co-use). The majority of social workers also had a strong preference for restrictive approaches. Conclusions and implications: Trepidation on the part of both foster carers and social workers about the use of digital devices and the Internet meant that the parental strategies used were weighted towards restriction, with little use made of approaches such as co-use and interpretation. This lack of balance calls for solutions that are grounded in co-use and an interpretive approach, both of which can be achieved through training and support, as well as wider policy change.

Keywords: parental mediation strategies, risk, children in state care, online safety

Procedia PDF Downloads 53
258 Women’s Colours in Digital Innovation

Authors: Daniel J. Patricio Jiménez

Abstract:

Digital reality demands new ways of thinking, flexibility in learning, the acquisition of new competencies, visualizing reality under new approaches, generating open spaces, understanding dimensions in continuous change, etc. We need inclusive growth, where colors are not lacking, where lights do not give a distorted reality, where science is not half-truth. This study draws on documentary and bibliographic sources, offering a reflective and analytical view of current reality. In this context, deductive and inductive methods have been applied to different multidisciplinary information sources. Women today and tomorrow are a strategic element in science and the arts, which, under the umbrella of sustainability, implies 'meeting current needs without detriment to future generations'. We must build new scenarios that treat 'the feminine and the masculine' as an inseparable whole, encouraging cooperative behavior; nothing is exclusive or excluding, and that is where true respect for diversity must be based. We are all part of an ecosystem, which we will make better as long as there is a real balance in terms of gender. It is the time of 'the lifting of the veil'; in other words, it is the time to discover the pseudonyms, the women who painted, wrote, investigated, recorded advances, etc. However, the current reality demands much more; we must remove doors where they are not needed. Mass processing of data, big data, needs to incorporate algorithms under the perspective of 'the feminine'. However, most STEM (science, technology, engineering, and math) students are men. Our way of doing science is biased, focused on honors and short-term results to the detriment of sustainability. Historically, the canons of beauty and the ways of looking, perceiving, and feeling depended on the circumstances and interests of each moment, and women had no voice in this. 
Parallel to science, there is an under-representation of women in the arts: less pronounced in the universities, but evident when we look at galleries, museums, art dealers, etc., where colours impoverish the gaze and once again highlight the gender gap and the silence of the feminine. Art registers sensations by divining the future; science will turn them into reality. The uniqueness of the so-called new normality requires women to be protagonists both in new forms of emotion and thought and in the experimentation and development of new models. This will result in women playing a decisive role in the so-called '5.0 society' or, in other words, in a more sustainable, more humane world.

Keywords: art, digitalization, gender, science

Procedia PDF Downloads 152
257 Evaluation of the Effect of Learning Disabilities and Accommodations on the Prediction of the Exam Performance: Ordinal Decision-Tree Algorithm

Authors: G. Singer, M. Golan

Abstract:

Providing students with learning disabilities (LD) with extra time to grant them equal access to the exam is a necessary but insufficient condition to compensate for their LD; there should also be a clear indication that the additional time was actually used. For example, if students with LD use more time than students without LD and yet receive lower grades, this may indicate that a different accommodation is required. If they achieve higher grades but use the same amount of time, then the effectiveness of the accommodation has not been demonstrated. The main goal of this study is to evaluate the effect of including parameters related to LD and extended exam time, along with other commonly used characteristics (e.g., student background and ability measures such as high-school grades), on the ability of ordinal decision-tree algorithms to predict exam performance. We use naturally occurring data collected from hundreds of undergraduate engineering students. The sub-goals are i) to examine the improvement in prediction accuracy when the indicator of exam performance includes 'actual time used' in addition to the conventional indicator (exam grade) employed in most research; ii) to explore the effectiveness of extended exam time on exam performance for different courses and for LD students with different profiles (i.e., sets of characteristics). This is achieved by using the patterns (i.e., subgroups) generated by the algorithms to identify pairs of subgroups that differ in just one characteristic (e.g., course or type of LD) but have different outcomes in terms of exam performance (grade and time used). Since grade and time used exhibit an ordinal form, we propose a method based on ordinal decision trees, which applies a weighted information-gain ratio (WIGR) measure for selecting the classifying attributes. Unlike other known ordinal algorithms, our method does not assume monotonicity in the data. 
The proposed WIGR is an extension of an information-theoretic measure, in the sense that it adjusts to the case of an ordinal target and takes into account the error severity between two different target classes. Specifically, we use ordinal C4.5, random-forest, and AdaBoost algorithms, as well as an ensemble technique composed of ordinal and non-ordinal classifiers. Firstly, we find that the inclusion of LD and extended exam-time parameters improves prediction of exam performance (compared to specifications of the algorithms that do not include these variables). Secondly, when the indicator of exam performance includes 'actual time used' together with grade (as opposed to grade only), the prediction accuracy improves. Thirdly, our subgroup analyses show clear differences in the effect of extended exam time on exam performance among different courses and different student profiles. From a methodological perspective, we find that the ordinal decision-tree based algorithms outperform their conventional, non-ordinal counterparts. Further, we demonstrate that the ensemble-based approach leverages the strengths of each type of classifier (ordinal and non-ordinal) and yields better performance than each classifier individually.
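The abstract does not give the WIGR formula, but its ingredients (a C4.5-style gain-ratio normalisation plus a term that penalises error severity between distant ordinal classes) can be sketched as follows. The impurity blend and the mixing weight `alpha` are illustrative assumptions of this sketch, not the authors' measure.

```python
from collections import Counter
import math

def ordinal_impurity(labels, alpha=0.5):
    """Blend of Shannon entropy and an ordinal-severity term.

    The severity term sum_ij p_i * p_j * |i - j| penalises probability
    mass spread across distant ordinal classes more heavily than mass
    spread across adjacent ones. alpha is an assumed mixing weight."""
    n = len(labels)
    counts = Counter(labels)
    probs = [c / n for c in counts.values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    severity = sum((ci / n) * (cj / n) * abs(i - j)
                   for i, ci in counts.items()
                   for j, cj in counts.items())
    return alpha * entropy + (1.0 - alpha) * severity

def weighted_gain_ratio(parent, splits):
    """Gain of a candidate split under the ordinal impurity, normalised
    by split information as in C4.5's gain ratio."""
    n = len(parent)
    child = sum(len(s) / n * ordinal_impurity(s) for s in splits)
    gain = ordinal_impurity(parent) - child
    split_info = -sum(len(s) / n * math.log2(len(s) / n)
                      for s in splits if s)
    return gain / split_info if split_info else 0.0
```

Under this criterion, a split that cleanly separates distant ordinal classes (e.g. the lowest and highest grade bands) scores strictly higher than one that leaves both classes mixed in each child, which is the behaviour an error-severity-aware measure is meant to produce.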

Keywords: actual exam time usage, ensemble learning, learning disabilities, ordinal classification, time extension

Procedia PDF Downloads 91
256 Negotiating Autonomy in Women’s Political Participation: The Case of Elected Women’s Representatives from Jharkhand

Authors: Rajeshwari Balasubramanian, Margit Van Wessel, Nandini Deo

Abstract:

The participation of women in local bodies witnessed a rise after the implementation of the 73rd and 74th Amendments to the Indian Constitution, which created quotas for women representatives. However, even when participation increased, it did not translate into meaningful contributions by women in local bodies. This led some civil society organisations (CSOs) to begin working with women panchayat representatives in various states to build their capacity for political participation. The focus of this paper is to study capacity building training by CSOs in Jharkhand. The paper maps how the training helps women elected representatives to negotiate their autonomy at multiple levels. The paper describes the capacity building program conducted by an international feminist organisation along with its seven local partners in Jharkhand. The central question that the study asks is: How does capacity building training by CSOs in Jharkhand impact the autonomy of elected women representatives? It uses a qualitative research methodology based on empirical data gathered through field visits in four districts of Jharkhand (Chatra, Hazaribagh, East Singhbhum and Ranchi) where the program was implemented for three years. The study found that women elected representatives had to develop strategies to negotiate their choice to move out of their homes and attend the training conducted by CSOs. The ability to participate in the training programs was itself a significant achievement of personal autonomy for many women. The training provided them a platform to voice their opinions and appreciate their own value as panchayat leaders. This realization allowed them to negotiate their presence and a space for themselves in Gram panchayats. A Foucauldian approach to analyzing capacity building workshops might lead us to see them as systems in which CSOs impose a form of governmentality on rural elected representatives. 
Instead, what we see here is a much more complex negotiation of agency in which the CSO creates spaces and practices that allow women to achieve their own forms of autonomy. The study concludes that the impact of the training on the autonomy of these women is based on their everyday negotiations of time, space and mobility. Autonomy for these elected women representatives is also contextual and relative, as they seem to realize it during the training process. The training allows the women to not only negotiate their participation in panchayats but also challenge everyday practices that are rooted in patriarchy.

Keywords: autonomy, feminist organization, local bodies, political participation

Procedia PDF Downloads 131
255 Offloading Knowledge-Keeping to Digital Technology and the Attrition of Socio-Cultural Life

Authors: Sophia Melanson Ricciardone

Abstract:

Common vexations concerning the impact of contemporary media technology on our daily lives tend to conjure mental representations of digital specters that surreptitiously invade the privacy of our most intimate spaces. While these concerns are assuredly legitimate, examining them in isolation from the other phenomena attributable to the problems created by our hyper-mediated condition does not supply a complete account of the deleterious cost of integrating digital affordances into the banal cadence of our shared socio-cultural realities. As we continue to subconsciously delegate facets of our social and cognitive lives to digital technology, the very faculties that have enabled our species to thrive and invent technology in the first place are at risk of attrition – namely our capacity to sustain attention while synthesizing information in working memory to produce creative and inventive constructions for our shared social existence. Though the offloading of knowledge-keeping to fellow social agents belonging to our family and community circles is an enduring intuitive phenomenon across human societies – what social psychologists refer to as transactive memory – in offloading our various socio-cognitive faculties to digital technology, we may plausibly be supplanting the visceral social connections forged by transactive memory. This paper will present related research and literature produced across the disciplines of sociobiology, socio-cultural anthropology, social psychology, cognitive semiotics, and communication and media studies that directly and indirectly address the social precarity cultivated by digital technologies. This body of scholarly work will then be situated within common areas of interest belonging to digital anthropology, including the groundbreaking work of Pavel Curtis, Christopher Kelty, Lynn Cherny, Vincent Duclos, Nick Seaver, and Sherry Turkle. 
It is anticipated that, in harmonizing these overlapping areas of interdisciplinary interest, this paper can weave together the disparate connections across spheres of knowledge that help delineate the conditions of our contemporary digital existence.

Keywords: cognition, digital media, knowledge keeping, transactive memory

Procedia PDF Downloads 116
254 Traditional and New Residential Architecture in the Approach of Sustainability in the Countryside after the Earthquake

Authors: Zeynep Tanriverdi

Abstract:

Sustainable architecture is a design approach that provides healthy, comfortable, safe, and clean space production while using minimum resources, for the efficient and economical use of natural resources and energy. Traditional houses located in rural areas are sustainable structures, built from the design and implementation stage onwards in accordance with the climatic and environmental data of their region and making effective use of natural energy resources. The fact that these structures are located in an earthquake-prone geography like Türkiye brings their earthquake resistance to the agenda. Since the construction of these structures, which embody the architectural and technological knowledge of the past, is shaped by the characteristics of the regions where they are located, their resistance to earthquakes also differs. Analyses in rural areas after earthquakes show that there are lightly damaged structures that can survive, severely damaged structures, and completely destroyed structures. In these cases, experts can implement repair, consolidation, and reconstruction, respectively. While simple repair interventions are carried out in accordance with the original data in traditional houses that have shown great resistance to earthquakes, reinforcement work blended with new technologies can be applied in damaged structures. In reconstruction work, a wide variety of applications can be seen with the possibilities of modern technologies. In rural areas around the world that experience earthquakes, there are experimental new housing applications that are renewable, environmentally friendly, and sustainable, using modern construction techniques in the light of scientific data. With these new residences, the aim is to create earthquake-resistant, economical, and healthy spaces that offer therapeutic relief for people whose daily lives have been interrupted by disasters. 
In this study, the preservation of highly earthquake-prone rural areas will be discussed through the transfer of traditional architectural knowledge and through permanent housing practices that use new sustainable technologies to improve these areas. In this way, it will be possible to keep losses to a minimum with sustainable, reliable applications prepared for the worst aspects of a disaster and to establish a link between the knowledge of the past and the new technologies of the future.

Keywords: sustainability, conservation, traditional construction systems and materials, new technologies, earthquake resistance

Procedia PDF Downloads 46
253 The Human Process of Trust in Automated Decisions and Algorithmic Explainability as a Fundamental Right in the Exercise of Brazilian Citizenship

Authors: Paloma Mendes Saldanha

Abstract:

Access to information is a prerequisite for democracy while also guiding the material construction of fundamental rights. The exercise of citizenship requires knowing, understanding, questioning, advocating for, and securing rights and responsibilities. In other words, it goes beyond mere active electoral participation and materializes through awareness and the struggle for rights and responsibilities in the various spaces occupied by the population in their daily lives. In times of hyper-cultural connectivity, active citizenship is shaped through ethical trust processes, most often established between humans and algorithms. Automated decisions, so prevalent in various everyday situations, such as purchase preference predictions, virtual voice assistants, reduction of accidents in autonomous vehicles, content removal, resume selection, etc., have already found their place as a normalized discourse that sometimes does not reveal or make clear what violations of fundamental rights may occur when algorithmic explainability is lacking. In other words, technological and market development promotes a normalization for the use of automated decisions while silencing possible restrictions and/or breaches of rights through a culturally modeled, unethical, and unexplained trust process, which hinders the possibility of the right to a healthy, transparent, and complete exercise of citizenship. In this context, the article aims to identify the violations caused by the absence of algorithmic explainability in the exercise of citizenship through the construction of an unethical and silent trust process between humans and algorithms in automated decisions. 
As a result, it is expected to find violations of constitutionally protected rights such as privacy, data protection, and transparency, as well as the stipulation of algorithmic explainability as a fundamental right in the exercise of Brazilian citizenship in the era of virtualization, facing a threefold foundation called trust: culture, rules, and systems. To do so, the author will use a bibliographic review in the legal and information technology fields, as well as the analysis of legal and official documents, including national documents such as the Brazilian Federal Constitution, as well as international guidelines and resolutions that address the topic in a specific and necessary manner for appropriate regulation based on a sustainable trust process for a hyperconnected world.

Keywords: artificial intelligence, ethics, citizenship, trust

Procedia PDF Downloads 46
252 Evidence of Behavioural Thermoregulation by Dugongs (Dugong dugon) at the High Latitude Limit to Their Range in Eastern Australia

Authors: Daniel R. Zeh, Michelle R. Heupel, Mark Hamann, Rhondda Jones, Colin J. Limpus, Helene Marsh

Abstract:

Marine mammals live in an environment with water temperatures nearly always lower than the mammalian core body temperature of 35-38°C. Marine mammals can lose heat at high rates and have evolved a range of adaptations to minimise heat loss. Our project tracked dugongs to examine whether there was a discoverable relationship between the animals' movements and the temperature of their environment that might suggest behavioural thermoregulation. Twenty-nine dugongs were fitted with acoustic and satellite/GPS transmitters on 30 occasions (one animal was tagged twice) in 2012, 2013 and 2014 in Moreton Bay, Queensland, at the high-latitude limit of the species' winter range in eastern Australia. All 22 animals that stayed in the area and had functional transmitters made at least one (and up to 66) return trip(s) to the warmer oceanic waters outside the bay, where seagrass is unavailable. Individual dugongs went in and out of the bay in synchrony with the tides and typically spent about 6 hours in the oceanic water. There was a diel pattern in the movements: 85% of outgoing trips occurred between midnight and noon. There were significant individual differences, but the likelihood of a dugong leaving the bay was independent of body length or sex. In Quarter 2 (April-June), the odds of a dugong making a trip increased by about 40% for each 1°C increase in the temperature difference between the bay and the warmer adjacent oceanic waters. In Quarter 3, the odds of making a trip were lower when the outside-inside bay temperature differences were small or negative, but increased by a factor of up to 2.12 for each 1°C difference in outside-inside temperatures. In Quarter 4, the odds of making a trip were higher when it was cooler outside the bay and decreased by a factor of nearly 0.5 for each 1°C difference in outside-inside bay temperatures. 
The activity spaces of the dugongs generally declined as winter progressed, suggesting a change in the cost-effectiveness of moving outside the bay. Our analysis suggests that dugongs can regulate their core temperature by moving to waters with more favourable temperatures.
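The quarterly odds factors reported above compound multiplicatively per degree, as in a logistic model. A minimal sketch of how such a factor translates into odds and trip probabilities over a larger temperature difference (the function names and example values are ours, not the study's):

```python
def odds_multiplier(per_degree_factor, delta_t):
    """Multiplicative change in the odds of a trip when the outside-inside
    temperature difference changes by delta_t degrees C, given the
    per-degree odds factor reported for a quarter (e.g. ~1.40 in Q2)."""
    return per_degree_factor ** delta_t

def odds_to_probability(odds):
    """Convert odds of making a trip into a probability."""
    return odds / (1.0 + odds)

# e.g. a 2 degree C difference under a 1.40 per-degree factor roughly
# doubles the odds of a trip (1.40 ** 2 = 1.96).
doubled = odds_multiplier(1.40, 2)
```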

Keywords: acoustic, behavioral thermoregulation, dugongs, movements, satellite, telemetry, quick fix GPS

Procedia PDF Downloads 160
251 R&D Diffusion and Productivity in a Globalized World: Country Capabilities in an MRIO Framework

Authors: S. Jimenez, R. Duarte, J. Sanchez-Choliz, I. Villanua

Abstract:

There is a certain consensus in the economic literature about the factors that have influenced the historical differences in growth rates observed between developed and developing countries. However, it is less clear which elements have marked the different growth paths of developed economies in recent decades. R&D has always been seen as one of the major sources of technological progress and of productivity growth, which is directly influenced by technological developments. Following recent literature, we can say that ‘innovation pushes the technological frontier forward’ as well as encourages future innovation through the creation of externalities. In other words, the productivity benefits of innovation are not fully appropriated by innovators but also spread through the rest of the economy, encouraging absorptive capacities, which have become especially important in a context of increasing fragmentation of production. This paper aims to contribute to this literature in two ways: first, by exploring alternative indexes of R&D flows embodied in inter-country, inter-sectoral flows of goods and services (as an approximation to technology spillovers) that capture the structural and technological characteristics of countries and, second, by analyzing the impact of direct and embodied R&D on the evolution of labor productivity at the country/sector level in recent decades. The traditional calculation through a multiregional input-output framework assumes that all countries have the same capability to absorb technology, but this is not the case: each country has different structural features and, consequently, different capabilities, as part of the literature claims. In order to capture these differences, we propose to use weights based on specialization-structure indexes: one related to the specialization of countries in high-tech sectors and the other based on a dispersion index.
We propose these two measures because, in our view, country capabilities can be captured in different ways: through a country's specialization in knowledge-intensive sectors, such as Chemicals or Electrical Equipment, or through an intermediate technology effort spread across different sectors. Results suggest that country capabilities become increasingly important as trade openness increases. Moreover, focusing on the country rankings, we observe that with high-tech-weighted embodied R&D, countries such as China, Taiwan and Germany rise into the top five despite not having the highest R&D expenditure intensities, showing the importance of country capabilities. Additionally, through a fixed-effects panel data model we show that embodied R&D is indeed important for explaining labor productivity increases, even more so than direct R&D investments. This reflects that globalization is more important than has been acknowledged until now. It is true that almost all related analyses consider the effect of t-1 direct R&D intensity on economic growth. Nevertheless, from our point of view, R&D evolves as a delayed flow, and some time is needed before its effects on the economy become visible, as some authors have already claimed. Our estimations tend to corroborate this hypothesis, yielding a lag of 4-5 years.
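A minimal sketch of how embodied R&D can be computed and then capability-weighted in an input-output setting (the 3-sector technical-coefficient matrix, direct R&D intensities and capability weight below are illustrative assumptions, not data from the study):

```python
import numpy as np

# Illustrative technical-coefficient matrix A (a_ij = inputs from sector i
# per unit of output of sector j) and direct R&D intensities r
# (R&D expenditure per unit of output).
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.10, 0.20],
              [0.05, 0.10, 0.10]])
r_direct = np.array([0.02, 0.05, 0.01])

# Leontief inverse L = (I - A)^-1: total (direct + indirect) output
# requirements per unit of final demand.
L = np.linalg.inv(np.eye(3) - A)

# Embodied R&D intensity: the direct R&D of all sectors carried through
# the production chain into each sector's final output.
r_embodied = r_direct @ L

# Capability weighting (the paper's proposal): scale embodied flows by a
# country-specific index, e.g. specialization in high-tech sectors.
w_hightech = 1.3  # hypothetical capability weight
r_weighted = w_hightech * r_embodied

print(np.round(r_embodied, 4))
```

Because L = I + A + A² + …, each embodied intensity is at least as large as the corresponding direct one, which is the spillover effect the indexes try to capture.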

Keywords: economic growth, embodied, input-output, technology

Procedia PDF Downloads 110
250 Monitoring the Effect of Doxorubicin Liposomal in VX2 Tumor Using Magnetic Resonance Imaging

Authors: Ren-Jy Ben, Jo-Chi Jao, Chiu-Ya Liao, Ya-Ru Tsai, Lain-Chyr Hwang, Po-Chou Chen

Abstract:

Cancer is still one of the serious diseases threatening human lives, so early diagnosis and effective treatment of tumors are very important issues. Animal carcinoma models provide a simulation tool for the study of pathogenesis, biological characteristics and therapeutic effects. Recently, drug delivery systems have been rapidly developed to improve therapeutic effects. Liposomes play an increasingly important role in clinical diagnosis and therapy, delivering a pharmaceutical or contrast agent to targeted sites; they can be absorbed and excreted by the human body and are well known to cause it no harm. This study aimed to compare the therapeutic effects of encapsulated (doxorubicin liposomal, LipoDox) and un-encapsulated (doxorubicin, Dox) anti-tumor drugs using magnetic resonance imaging (MRI). Twenty-four New Zealand rabbits implanted with VX2 carcinoma in the left thigh were divided into three groups of 8 rabbits each: a control group (untreated), a Dox-treated group and a LipoDox-treated group. MRI scans were performed three days after tumor implantation. A 1.5 T GE Signa HDxt whole-body MRI scanner with a high-resolution knee coil was used in this study. After a 3-plane localizer scan, three-dimensional (3D) fast spin echo (FSE) T2-weighted imaging (T2WI) was used for tumor volumetric quantification, and two-dimensional (2D) spoiled gradient recalled echo (SPGR) dynamic contrast-enhanced (DCE) MRI was used for tumor perfusion evaluation. DCE-MRI was designed to acquire four baseline images, followed by injection of the contrast agent Gd-DOTA through the ear vein of the rabbits. Afterwards, a series of 32 images was acquired to observe the signal change over time in the tumor and muscle. The MRI scanning was scheduled on a weekly basis for a period of four weeks to observe the tumor progression longitudinally.
The Dox and LipoDox treatments were administered 3 times in the first week immediately after VX2 tumor implantation. ImageJ was used to quantify tumor volume and the time-course signal enhancement on DCE images. The changes in tumor size showed that the growth of VX2 tumors was effectively inhibited in both the LipoDox-treated and Dox-treated groups. Furthermore, the tumor volume of the LipoDox-treated group was significantly lower than that of the Dox-treated group, which implies that LipoDox has a better therapeutic effect than Dox. The signal intensity of the LipoDox-treated group was significantly lower than that of the other two groups, which implies that the targeted therapeutic drug remained in the tumor tissue. This study provides a radiation-free and non-invasive MRI method for therapeutic monitoring of targeted liposomes in an animal tumor model.
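The time-course analysis described above (four baseline images followed by post-contrast frames) can be sketched as a relative-enhancement calculation; the ROI intensity values below are purely hypothetical:

```python
import numpy as np

def relative_enhancement(signal, n_baseline=4):
    """Percent signal enhancement over the pre-contrast baseline for a
    DCE-MRI time course (mean ROI intensity per time point)."""
    s0 = signal[:n_baseline].mean()
    return 100.0 * (signal - s0) / s0

# Hypothetical ROI mean intensities: 4 baseline points, then contrast
# uptake and washout (values are illustrative only).
curve = np.array([100, 101, 99, 100, 150, 180, 175, 160, 150, 140],
                 dtype=float)
enh = relative_enhancement(curve)
print(enh[:4].round(1))   # near zero before contrast arrival
print(enh.max())          # peak enhancement, % -> 80.0
```

A lower, flatter curve for the LipoDox group would correspond to the reduced signal intensity the study reports.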

Keywords: doxorubicin, dynamic contrast-enhanced MRI, lipodox, magnetic resonance imaging, VX2 tumor model

Procedia PDF Downloads 445
249 Functionalization of the Surface of Porous Titanium Nickel Alloy

Authors: Gulsharat A. Baigonakova, Ekaterina S. Marchenko, Venera R. Luchsheva

Abstract:

The preferred materials for bone grafting are titanium-nickel alloys. They have a porous, permeable structure similar to that of bone tissue, can withstand long-term physiological stress in the body, and retain a scaffolding function for bone tissue ingrowth. Despite the excellent functional properties of these alloys, post-operative infectious complications are possible; they prevent newly formed bone tissue from filling the created spaces in a timely manner and prolong the rehabilitation period of patients. To minimise such consequences, it is necessary to use biocompatible materials capable of simultaneously serving as a long-term functioning implant and as an osteoreplacement carrier saturated with drugs. Methods of modifying the surface by saturating it with bioactive substances, in particular macrocyclic compounds, for the controlled release of drugs, biologically active substances, and cells are becoming increasingly important. This work is dedicated to the functionalisation of the surface of porous titanium nickelide by the deposition of macrocyclic compounds in order to give titanium nickelide antibacterial activity and accelerated osteogenesis. The paper evaluates the effect of macrocyclic compound deposition methods on the continuity, structure, and cytocompatibility of the porous titanium nickelide surface. Macrocyclic compounds were deposited on the porous surface of titanium nickelide under various physical influences. Structural research methods allowed evaluation of the surface morphology of titanium nickelide and the nature of the distribution of these compounds. The surface functionalisation method influences the size of the deposited bioactive molecules and the nature of their distribution.
The surface functionalisation method developed enabled the macrocyclic compounds to be deposited uniformly on the inner and outer surfaces of the pores, which will subsequently allow the material to be uniformly saturated with various drugs, including antibiotics and inhibitors. The surface-modified porous titanium nickelide showed high biocompatibility and low cytotoxicity in in vitro studies. The research was carried out with financial support from the Russian Science Foundation under Grant No. 22-72-10037.

Keywords: biocompatibility, NiTi, surface, porous structure

Procedia PDF Downloads 64
248 Using Daily Light Integral Concept to Construct the Ecological Plant Design Strategy of Urban Landscape

Authors: Chuang-Hung Lin, Cheng-Yuan Hsu, Jia-Yan Lin

Abstract:

Adopting a greenery approach on architectural bases is an indispensable strategy for improving ecological habitats, decreasing the heat-island effect, purifying air quality, and relieving surface runoff as well as noise pollution, all in an attempt to achieve a sustainable environment. Whether plant design attains the best visual quality and ideal carbon dioxide fixation depends on whether greenery is used appropriately according to the nature of the architectural base. To achieve this goal, architects and landscape architects need sufficient local references. Current greenery studies focus mainly on the urban heat-island effect at a large scale, and most architects still rely on practitioners with years of expertise for the selection and disposition of plantings at the microclimate scale. Therefore, environmental design, which integrates science and aesthetics, requires fundamental research on landscape environment technology as distinct from building environment technology. By doing so, we can create mutual benefits between green buildings and the environment. This issue is extremely important for the greening design of green building bases in cities and various open spaces. The purpose of this study is to establish plant selection and allocation strategies under different levels of building shading. Initially, with the shading of sunshine on the greening bases as the starting point, the effects of the shade produced by different building types on greening strategies were analyzed. Then, by measuring PAR (photosynthetically active radiation), the relative DLI (daily light integral) was calculated, and a DLI map was established in order to evaluate the effects of building shading on the existing environmental greening, thereby serving as a reference for plant selection and allocation.
The results are to be applied in evaluating the environmental greening of green buildings and in establishing a “right plant, right place” design strategy of multi-level ecological greening for urban design and landscape development, as well as greening criteria to feed back into eco-city green buildings.
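As a sketch of the DLI calculation underpinning the DLI map: DLI integrates PAR over the day, converting micromoles per square metre per second into moles per square metre per day (the constant PAR value and sampling interval below are illustrative assumptions):

```python
def daily_light_integral(par_readings, interval_s):
    """Daily light integral (mol m^-2 day^-1) from a day's PAR readings
    (photosynthetic photon flux density, umol m^-2 s^-1) sampled at a
    fixed interval in seconds."""
    return sum(par_readings) * interval_s / 1e6

# A constant 500 umol m^-2 s^-1 over 12 h gives 500 * 43200 / 1e6 = 21.6.
readings = [500] * (12 * 60)               # one reading per minute for 12 h
print(daily_light_integral(readings, 60))  # -> 21.6 mol m^-2 day^-1
```

Mapping this value across a shaded base, cell by cell, yields the DLI map used for plant selection.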

Keywords: daily light integral, plant design, urban open space

Procedia PDF Downloads 497
247 High Efficiency Solar Thermal Collectors Utilization in Process Heat: A Case Study of Textile Finishing Industry

Authors: Gökçen A. Çiftçioğlu, M. A. Neşet Kadırgan, Figen Kadırgan

Abstract:

Solar energy, since it is available every day, is seen as one of the most valuable renewable energy resources, and the sun’s energy should therefore be used efficiently in various applications. The best-known applications of solar energy are heating water and spaces. High-efficiency solar collectors need appropriate selective surfaces to absorb the heat. The selective surfaces (Selektif-Sera) used in this study are applied to flat collectors and produced by cost-effective roll-to-roll coating of nano nickel layers, developed at Selektif Teknoloji Co. Inc. The efficiency of flat collectors using Selektif-Sera absorbers was calculated in collaboration with the Institute for Solar Technik, Rapperswil, Switzerland. High energy consumption in industry stems mostly from low-temperature processes, and there is considerable research effort to minimise energy use through renewable sources such as solar energy. A feasibility study is presented on the potential of solar thermal energy utilization in the textile industry using these solar collectors. For the feasibility calculations in this study, a textile dyeing and finishing factory located in Kahramanmaras was selected, since geographic location is an important factor: Kahramanmaras lies in the southeastern part of Turkey and thus enjoys longer solar illumination. Because the collector area is limited by the space available at the factory, a hybrid heat-generation system (lignite/solar thermal) was assumed in the calculations to be more realistic. The calculations took into account the preheating process, in which well water is heated from 15 °C to 30-40 °C using hot process water in heat exchangers; the preheated water is then heated further by the high-efficiency solar collectors.
An economic comparison between lignite use and solar thermal collector use was made to determine the system that can be used most efficiently. The optimum design of the solar thermal system was studied as a function of the optimum collector area. It was found that the solar thermal system is more economical and efficient than lignite use alone. The return-on-investment time is calculated as 5.15 years.
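A simple, undiscounted payback sketch consistent with the reported 5.15-year figure (the capital cost and annual savings values are hypothetical, chosen only to reproduce that ratio; the abstract does not give the underlying costs):

```python
def simple_payback_years(capital_cost, annual_fuel_savings, annual_om_cost=0.0):
    """Simple (undiscounted) payback period for a solar thermal retrofit:
    years until cumulative fuel savings, net of operating costs, recover
    the capital investment."""
    net_annual = annual_fuel_savings - annual_om_cost
    if net_annual <= 0:
        raise ValueError("savings never recover the investment")
    return capital_cost / net_annual

# Hypothetical figures: a 103,000-unit investment saving 20,000 per year
# in lignite costs pays back in 103000 / 20000 = 5.15 years.
print(round(simple_payback_years(103_000, 20_000), 2))  # -> 5.15
```

A full appraisal would discount future savings, but the simple payback is the figure the feasibility study reports.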

Keywords: energy, renewable energy, selective surface, solar collector

Procedia PDF Downloads 188
246 A Complex Network Approach to Structural Inequality of Educational Deprivation

Authors: Harvey Sanchez-Restrepo, Jorge Louca

Abstract:

Equity and education are a major focus of government policies around the world due to their relevance for addressing the sustainable development goals launched by UNESCO. In this research, we developed a primary analysis of a data set of more than one hundred educational and non-educational factors associated with learning, coming from a census-based large-scale assessment carried out in Ecuador on 1,038,328 students, their families, teachers, and school directors throughout 2014-2018. Each participating student was assessed by a standardized computer-based test. Learning outcomes were calibrated through item response theory with a two-parameter logistic model to obtain raw scores, which were rescaled and synthesized into a learning index (LI). Our objective was to develop a network for modelling educational deprivation and to analyze the structure of inequality gaps, as well as their relationship with socioeconomic status, school financing, and students’ ethnicity. Results from the model show that 348,270 students did not develop the minimum skills (prevalence rate = 0.215) and that Afro-Ecuadorian, Montuvio and Indigenous students exhibited the highest prevalence, with 0.312, 0.278 and 0.226, respectively. Regarding the socioeconomic status (SES) of students, the modularity classes show clearly that the system is out of equilibrium: the first decile (the poorest) exhibits a prevalence rate of 0.386 while the rate for decile ten (the richest) is 0.080, an intense negative relationship between learning and SES given by R = -0.58 (p < 0.001). Another interesting and unexpected result is the average weighted degree (426.9) for both private and public schools attended by Afro-Ecuadorian students, groups that obtained the highest PageRank (0.426), pointing out that they suffer the highest educational deprivation due to discrimination, even when belonging to the richest decile.
The model also identified the factors that explain deprivation, through the highest PageRank and the greatest degree of connectivity for the first decile: financial bonus for attending school, computer access, internet access, number of children, living with at least one parent, access to books, reading books, phone access, time for homework, teachers arriving late, paid work, positive expectations about schooling, and mother’s education. These results provide accurate and clear knowledge about the variables affecting the poorest students and the inequalities they produce, from which needs profiles can be defined, as well as actions on the factors that can be influenced. Finally, these results confirm that network analysis is fundamental for educational policy, especially when linking reliable microdata with social macro-parameters, because it allows us to infer how gaps in educational achievement are driven by students’ context at the time resources are assigned.
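The calibration step above uses the two-parameter logistic IRT model; a minimal sketch of the item response function (the item parameters below are illustrative, not from the assessment):

```python
import math

def p_correct_2pl(theta, a, b):
    """Two-parameter logistic IRT model: probability that a student with
    ability theta answers correctly an item with discrimination a and
    difficulty b (the calibration model described in the abstract)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Illustrative item (a=1.2, b=0.0): probability rises with ability.
for theta in (-1.0, 0.0, 1.0):
    print(round(p_correct_2pl(theta, 1.2, 0.0), 3))
```

Fitting a and b per item over the whole census yields the raw scores that are then rescaled into the learning index.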

Keywords: complex network, educational deprivation, evidence-based policy, large-scale assessments, policy informatics

Procedia PDF Downloads 106
245 A Quality Index Optimization Method for Non-Invasive Fetal ECG Extraction

Authors: Lucia Billeci, Gennaro Tartarisco, Maurizio Varanini

Abstract:

Fetal cardiac monitoring by fetal electrocardiogram (fECG) can provide significant clinical information about the health of the fetus. Despite this potential, the use of fECG in clinical practice has so far been quite limited due to the difficulty of measuring it. Recovering the fECG from signals acquired non-invasively with electrodes placed on the maternal abdomen is a challenging task because abdominal signals are a mixture of several components, and the fetal one is very weak. This paper presents an approach for fECG extraction from abdominal maternal recordings, which exploits the pseudo-periodicity of the fetal ECG. It consists of devising a quality index (fQI) for the fECG and finding the linear combinations of preprocessed abdominal signals that maximize this fQI (quality index optimization, QIO). It aims at improving the performance of the most commonly adopted methods for fECG extraction, usually based on maternal ECG (mECG) estimation and cancellation. The procedure for fECG extraction and fetal QRS (fQRS) detection is completely unsupervised and based on the following steps: signal pre-processing; maternal ECG (mECG) extraction and maternal QRS detection; mECG component approximation and cancellation by weighted principal component analysis; fECG extraction by fQI maximization and fetal QRS detection. The proposed method was compared with our previously developed procedure, which obtained the highest score at the PhysioNet/Computing in Cardiology Challenge 2013. That procedure was based on removing the mECG, estimated by principal component analysis (PCA), from the abdominal signals and applying independent component analysis (ICA) to the residual signals. Both methods were developed and tuned using 69 one-minute abdominal measurements with fetal QRS annotations from dataset A of the PhysioNet/Computing in Cardiology Challenge 2013.
The QIO-based and ICA-based methods were compared on two databases of abdominal maternal ECG available on the PhysioNet site. The first is the Abdominal and Direct Fetal Electrocardiogram Database (ADdb), which contains fetal QRS annotations and thus allows a quantitative performance comparison; the second is the Non-Invasive Fetal Electrocardiogram Database (NIdb), which does not contain fetal QRS annotations, so that the comparison between the two methods can only be qualitative. On the annotated ADdb database, the QIO method provided the performance indexes Sens = 0.9988, PPA = 0.9991, F1 = 0.9989, surpassing the ICA-based one, which provided Sens = 0.9966, PPA = 0.9972, F1 = 0.9969. The comparison on NIdb was performed by defining an index of quality for the fetal RR series, which was higher for the QIO-based method than for the ICA-based one in 35 out of 55 records of the NIdb. The QIO-based method gave very high performances on both databases. These results support applying the algorithm in a fully unsupervised way in wearable devices for self-monitoring of fetal health.
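The Sens/PPA/F1 indexes reported above follow the standard QRS-detection definitions; a small sketch (the raw beat counts below are hypothetical, chosen to reproduce values close to those reported for the QIO method):

```python
def detection_scores(true_positives, false_positives, false_negatives):
    """Sensitivity (Sens), positive predictive accuracy (PPA) and their
    harmonic mean F1, as used for fetal QRS detection benchmarking."""
    sens = true_positives / (true_positives + false_negatives)
    ppa = true_positives / (true_positives + false_positives)
    f1 = 2 * sens * ppa / (sens + ppa)
    return sens, ppa, f1

# Hypothetical counts: the abstract reports the resulting indexes
# (Sens=0.9988, PPA=0.9991, F1=0.9989) but not the raw beat counts.
sens, ppa, f1 = detection_scores(9988, 9, 12)
print(round(sens, 4), round(ppa, 4), round(f1, 4))  # -> 0.9988 0.9991 0.9989
```

Detected fQRS beats are matched to annotated beats within a tolerance window before counting true and false positives.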

Keywords: fetal electrocardiography, fetal QRS detection, independent component analysis (ICA), optimization, wearable

Procedia PDF Downloads 266